Volume Rendering

Techniques for visualizing 3D scalar fields like CT scans, fluid simulations, and participating media.

What Is Volume Rendering?

Unlike surface-based rendering techniques like rasterization or path tracing, which draw the outside of objects, volume rendering visualizes the interior of 3D data. Think of a CT scan revealing the bones, organs, and tissues inside a human body, or a visual effects simulation showing smoke and fire swirling through a scene. These are problems where there is no clear "surface" to draw — the interesting information exists throughout a 3D space.

The input to a volume renderer is typically a 3D grid of scalar values — a "volume" or "voxel grid" — rather than a triangle mesh. Each voxel (a 3D pixel) stores a quantity like density, temperature, absorption coefficient, or some other physical measurement. A 512 x 512 x 512 CT scan, for example, contains over 134 million data points, each representing tissue density at that location in the body.
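
To make the data layout concrete, here is a minimal sketch of a voxel grid as a NumPy array. The synthetic volume and all names are illustrative; a real CT volume would typically be loaded from DICOM slices or a raw binary file instead.

```python
import numpy as np

# A voxel grid is just a 3D array of scalars. Here, a synthetic 128^3
# density field stands in for a real CT scan.
volume = np.zeros((128, 128, 128), dtype=np.float32)
volume[32:96, 32:96, 32:96] = 1.0        # a dense cube in the middle

print(volume.shape)                       # (128, 128, 128)
print(volume[64, 64, 64])                 # density at one voxel: 1.0
print(volume.nbytes / 2**20, "MiB")       # 8.0 MiB at 4 bytes per voxel
```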

Volume rendering answers a deceptively simple question: "If I shine light through this 3D field of varying density, what do I see?" The answer involves casting rays through the volume, sampling values along the way, mapping those values to visual properties like color and opacity, and compositing the results into a final image. The technique is indispensable in medical imaging, scientific visualization, fluid dynamics, and cinematic visual effects.

How It Works

Volume rendering operates per pixel, similar in spirit to ray tracing but with a fundamentally different intersection model (a minimal code sketch of steps 1-4 follows the list):

  1. Ray casting — For each pixel in the output image, a ray is cast from the camera through the volume. Unlike surface rendering, these rays don't stop at the first intersection — they pass through the entire volume.

  2. Sampling — At regular intervals along the ray, the volume's scalar field is sampled at the ray's current 3D position. Since the data is stored on a discrete grid, trilinear interpolation is used to estimate values between grid points. Typical step sizes are fractions of a voxel width — smaller steps yield more accurate results but cost more computation.

  3. Transfer function — Each sampled value is mapped to a color and opacity using a transfer function. This is the creative and scientific heart of volume rendering: the transfer function determines what the viewer sees. For a CT scan, high density values might map to opaque white (bone), medium values to translucent red (muscle), and low values to transparent (air). Designing effective transfer functions is both an art and a science — the same dataset can reveal completely different structures depending on how values are mapped.

  4. Compositing — The color and opacity samples along each ray are accumulated using front-to-back (or back-to-front) alpha compositing. This is mathematically equivalent to the volume rendering integral, discretized into a Riemann sum. Front-to-back compositing allows early ray termination — once accumulated opacity approaches 1.0, the ray can stop, since nothing behind will be visible.

  5. Display — The accumulated color for each pixel forms the final image. Post-processing steps like tone mapping and gamma correction prepare the image for display.
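
The sketch below puts steps 1-4 together for a single ray. It is a minimal, unoptimized illustration, not a production renderer: all names are invented, `scipy.ndimage.map_coordinates` with `order=1` supplies the trilinear interpolation from step 2, and opacity is not corrected for step size (a more careful version derives alpha as 1 − exp(−σ·Δt)).

```python
import numpy as np
from scipy.ndimage import map_coordinates

def transfer_function(density):
    """Map a normalized density to (color, opacity). Purely illustrative."""
    alpha = 0.1 * float(np.clip(density, 0.0, 1.0))
    color = np.array([density, 0.5 * density, 1.0 - density])
    return color, alpha

def march_ray(volume, origin, direction, n_steps=512, step=0.5):
    """Steps 1-4 for a single ray, in voxel-index coordinates.

    A minimal sketch: no empty-space skipping, and opacity is not
    corrected for step size.
    """
    accum_color = np.zeros(3)
    accum_alpha = 0.0
    bounds = np.array(volume.shape) - 1.0
    for i in range(n_steps):
        p = origin + i * step * direction              # step 2: sample position
        if np.any(p < 0.0) or np.any(p > bounds):      # skip points outside grid
            continue
        # order=1 => trilinear interpolation between grid points
        density = float(map_coordinates(volume, p.reshape(3, 1), order=1)[0])
        color, alpha = transfer_function(density)      # step 3: classification
        # step 4: front-to-back "over" compositing
        accum_color += (1.0 - accum_alpha) * alpha * color
        accum_alpha += (1.0 - accum_alpha) * alpha
        if accum_alpha >= 0.99:                        # early ray termination
            break
    return accum_color, accum_alpha

# One ray through a synthetic 64^3 sphere volume (step 1 would normally
# generate one such ray per output pixel).
c = np.linspace(-1.0, 1.0, 64)
x, y, z = np.meshgrid(c, c, c, indexing="ij")
volume = np.clip(1.0 - np.sqrt(x**2 + y**2 + z**2), 0.0, 1.0).astype(np.float32)
color, opacity = march_ray(volume, np.array([32.0, 32.0, -1.0]),
                           np.array([0.0, 0.0, 1.0]))
```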

The mathematical foundation is the emission-absorption model, where each point in the volume can both emit light (contributing color) and absorb light passing through it (reducing transmission). Other optical models exist — pure absorption, single scattering, full multiple scattering with phase functions — but emission-absorption is the standard for direct volume rendering and strikes the best balance between visual quality and computational cost.
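
Written out, a standard form of the volume rendering integral under this model is (notation varies across sources; here σ is the absorption coefficient, c the emitted color, and T the transmittance along the ray):

$$C \;=\; \int_0^{D} c(t)\,\sigma(t)\,T(t)\,dt, \qquad T(t) \;=\; \exp\!\Big(-\!\int_0^{t} \sigma(s)\,ds\Big)$$

Discretized with step size Δt, this becomes the Riemann sum that front-to-back compositing (step 4 above) computes incrementally:

$$C \;\approx\; \sum_{i=1}^{N} c_i\,\alpha_i \prod_{j=1}^{i-1} \big(1-\alpha_j\big), \qquad \alpha_i = 1 - e^{-\sigma_i\,\Delta t}$$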

Key Concepts

Voxel — A 3D pixel: a single data point in a volumetric grid. The word is a portmanteau of "volume" and "pixel." Voxel grids are the most common input format for volume rendering, with typical resolutions ranging from 256 cubed for simple datasets to 2048 cubed or larger for high-resolution medical or simulation data.

Transfer Function — The mapping from raw scalar data values to visual properties (color and opacity). Transfer functions can be as simple as a 1D lookup table indexed by data value, or as complex as multi-dimensional functions incorporating gradient magnitude, spatial context, or temporal information. Designing a good transfer function is often the key to useful volume visualization — it determines which structures are visible and how they appear.
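
A simple 1D lookup-table transfer function might look like the sketch below. The control points are invented, loosely mimicking the CT mapping described above (air transparent, soft tissue translucent red, bone opaque white).

```python
import numpy as np

# Hypothetical 1D transfer function as a 256-entry RGBA lookup table.
control_density = np.array([0.0, 0.3, 0.6, 1.0])
control_rgba = np.array([
    [0.0, 0.0, 0.0, 0.00],   # air: fully transparent
    [0.8, 0.2, 0.2, 0.05],   # soft tissue: translucent red
    [0.9, 0.8, 0.7, 0.30],   # denser tissue
    [1.0, 1.0, 1.0, 0.95],   # bone: nearly opaque white
])

# Piecewise-linear interpolation between control points builds the LUT.
lut_x = np.linspace(0.0, 1.0, 256)
lut = np.stack(
    [np.interp(lut_x, control_density, control_rgba[:, ch]) for ch in range(4)],
    axis=-1,
)

def classify(density):
    """Look up RGBA for normalized density values in [0, 1]."""
    idx = np.clip(np.asarray(density) * 255.0, 0, 255).astype(int)
    return lut[idx]
```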

Participating Media — The general term for materials that light passes through rather than bouncing off — fog, smoke, clouds, fire, skin with subsurface scattering. Volume rendering handles these naturally. In physically based renderers like PBRT and Mitsuba 3, participating media are simulated with Monte Carlo sampling of scattering events within volumes.

Isosurface — A surface within a volume where the data has a constant value, analogous to a contour line on a topographic map. Extracting isosurfaces using algorithms like marching cubes converts volumetric data to a triangle mesh at a chosen threshold, providing an alternative to direct volume rendering that can be displayed with standard surface rendering techniques.
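
For illustration, here is a short sketch of isosurface extraction using scikit-image's marching cubes implementation (assuming scikit-image is installed; the synthetic volume and the threshold of 0.5 are arbitrary choices for this example).

```python
import numpy as np
from skimage import measure  # scikit-image

# Synthesize a small volume (a soft sphere) so there is something to contour.
c = np.linspace(-1.0, 1.0, 64)
x, y, z = np.meshgrid(c, c, c, indexing="ij")
volume = np.clip(1.0 - np.sqrt(x**2 + y**2 + z**2), 0.0, 1.0)

# Extract the triangle mesh where the field crosses the chosen threshold.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(verts.shape, faces.shape)  # (n_vertices, 3), (n_triangles, 3)
```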

Emission-Absorption Model — The optical model where each point in the volume can both emit light and absorb light passing through it. This is the physical basis for volume rendering compositing. The volume rendering integral, which this model describes, has no closed-form solution for general data and must be approximated numerically through the sampling and compositing process.

Strengths

Volume rendering is unmatched at revealing the interior structure of 3D data. A CT scan viewed as a volume rendering can simultaneously show bone, soft tissue, and air passages in a single image, with the transfer function controlling which structures are emphasized. No other rendering technique can do this without first extracting explicit surface geometry — a process that discards information and introduces artifacts.

The technique is equally powerful for scientific visualization of simulation data (fluid dynamics, astrophysics, weather modeling), where the phenomena of interest — temperature fields, pressure distributions, particle densities — are inherently volumetric. And in cinematic visual effects, volume rendering of participating media produces the realistic fog, smoke, fire, and cloud effects that audiences expect.

Modern GPU implementations achieve interactive frame rates (30+ FPS) for moderate-resolution volumes, making volume rendering practical for real-time exploration of medical and scientific data. Hardware improvements and algorithmic advances like empty-space skipping, adaptive sampling, and multi-resolution representations continue to push performance boundaries.

Tradeoffs

The primary tradeoff is memory and computational cost. A 3D grid is inherently larger than a 2D image or a surface mesh of the same scene: a 1024 cubed float volume requires 4 GB of storage. Every pixel in the output image requires dozens to hundreds of samples along its ray, each involving trilinear interpolation and transfer function evaluation. This makes volume rendering significantly more expensive per pixel than surface-based rendering.
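
The 4 GB figure is simple arithmetic:

$$1024^3 \text{ voxels} \times 4 \text{ bytes per 32-bit float} = 2^{32} \text{ bytes} = 4\ \text{GiB}$$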

Transfer function design requires expertise. While automatic transfer function generation is an active research area, in practice creating an effective transfer function often requires domain knowledge (understanding what density ranges correspond to what anatomical structures, for example) and manual tuning. Poor transfer function choices can hide important features or create misleading visualizations.

Volume rendering is also not well-suited for hard-surface objects. A triangle mesh of a building, a car, or a character is far more efficiently rendered with rasterization or ray tracing than by converting it to a volume. Volume rendering shines specifically for data that is inherently volumetric — where the information exists throughout 3D space rather than on surfaces.

History

Volume rendering emerged in the late 1980s with independent contributions from Levoy ("Display of Surfaces from Volume Data," 1988) and Drebin, Carpenter, and Hanrahan ("Volume Rendering," 1988). These foundational papers introduced the ray casting approach and the concept of transfer functions that remain central to the technique today. Through the 1990s, volume rendering was computationally expensive and limited to specialized workstations.

The arrival of programmable GPU shaders in the early 2000s transformed the field — texture-based and ray-casting implementations on commodity graphics hardware made interactive volume rendering accessible. Frameworks like VTK (1993) and ParaView (2002) brought volume rendering to scientists and engineers.

Today, production renderers like PBRT, Mitsuba 3, and Blender Cycles support physically based volumetric light transport for participating media effects, while medical visualization tools like 3D Slicer and Voreen provide specialized volume rendering for clinical and research use.
