Path Tracing

Physically based light simulation that traces rays through a scene to produce photorealistic images.

What Is Path Tracing?

Path tracing is a rendering algorithm that simulates how light travels in the physical world. For every pixel in the final image, the algorithm traces a ray from the camera into the scene, follows that ray as it bounces off surfaces — reflecting, refracting, scattering — and records how much light accumulates along the way. By repeating this process thousands of times per pixel, each time with slightly randomized bounce directions, and averaging the results, path tracing converges toward a physically accurate image.

This approach is called a Monte Carlo method because it relies on random sampling to approximate a complex integral: the rendering equation, first formalized by James Kajiya in 1986. The rendering equation describes the total light leaving every point on every surface in a scene, accounting for all possible light paths. Path tracing solves it by sampling — instead of computing all paths analytically, which is impossible for complex scenes, it randomly generates paths and averages their contributions.
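In symbols, the rendering equation can be written as (one common form; notation varies by text):

```latex
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o)
  + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\,
    L_i(\mathbf{x}, \omega_i)\, (\omega_i \cdot \mathbf{n})\, \mathrm{d}\omega_i
```

Here the outgoing radiance at point x in direction ω_o is the surface's own emission plus the integral, over the hemisphere Ω around the normal n, of incoming radiance weighted by the BSDF f_r and the cosine foreshortening term. Path tracing estimates that integral by random sampling.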

The result is photorealistic images with physically correct global illumination, soft shadows, caustics, color bleeding, and depth of field — effects that emerge naturally from the simulation rather than being faked with tricks.

How It Works

The algorithm operates per pixel:

  1. Ray generation — A primary ray is cast from the camera through the pixel into the scene.
  2. Intersection testing — The ray is tested against scene geometry (triangles, curves, volumes) to find the closest surface hit. This is accelerated by spatial data structures like BVH (Bounding Volume Hierarchy) trees.
  3. Shading — At the hit point, the surface's material properties (BSDF — Bidirectional Scattering Distribution Function) determine how light interacts with the surface. The material could be diffuse, specular, glossy, transmissive, or any combination.
  4. Bouncing — A new ray direction is sampled according to the BSDF. The ray continues into the scene, hitting another surface. This continues recursively up to a maximum bounce depth.
  5. Light contribution — Whenever a ray hits a light source or the environment, the light's energy is carried back along the path and multiplied by each surface's reflectance along the way.
  6. Accumulation — Steps 1 through 5 are repeated many times per pixel (each repetition is a "sample" or SPP — samples per pixel). The final pixel color is the average of all samples.
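The six steps above can be sketched in a few dozen lines of Python for a toy scene: a single diffuse sphere under a constant white sky. All names, constants, and the uniform-hemisphere sampling strategy are illustrative assumptions, not the API of any particular renderer.

```python
import random, math

MAX_DEPTH = 4                            # step 4: maximum bounce depth
SKY = 1.0                                # constant environment radiance (assumption)
SPHERE_C, SPHERE_R = (0.0, 0.0, -3.0), 1.0
ALBEDO = 0.7                             # diffuse reflectance of the sphere

def sub(a, b): return tuple(a[i] - b[i] for i in range(3))
def add(a, b): return tuple(a[i] + b[i] for i in range(3))
def mul(a, s): return tuple(a[i] * s for i in range(3))
def dot(a, b): return sum(a[i] * b[i] for i in range(3))
def norm(a):
    l = math.sqrt(dot(a, a))
    return mul(a, 1.0 / l)

def hit_sphere(origin, direction):
    """Step 2: closest intersection with the one sphere, or None."""
    oc = sub(origin, SPHERE_C)
    b = dot(oc, direction)
    c = dot(oc, oc) - SPHERE_R * SPHERE_R
    disc = b * b - c
    if disc < 0.0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 1e-4 else None       # epsilon avoids self-intersection

def sample_hemisphere(n):
    """Step 4: uniform random bounce direction in the hemisphere around n."""
    while True:
        d = (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
        if dot(d, d) <= 1.0 and dot(d, d) > 0.0:
            d = norm(d)
            return d if dot(d, n) > 0.0 else mul(d, -1.0)

def trace(origin, direction, depth=0):
    """Steps 2-5: follow one light path and return its radiance."""
    if depth >= MAX_DEPTH:
        return 0.0
    t = hit_sphere(origin, direction)
    if t is None:
        return SKY                       # step 5: escaped rays pick up sky light
    p = add(origin, mul(direction, t))
    n = norm(sub(p, SPHERE_C))
    d = sample_hemisphere(n)
    # Monte Carlo estimator for a diffuse BSDF (rho/pi) with uniform
    # hemisphere sampling (pdf = 1/2pi): weight = rho * 2 * cos(theta).
    return ALBEDO * 2.0 * dot(n, d) * trace(p, d, depth + 1)

def render_pixel(spp=64):
    """Steps 1 and 6: average many camera-ray samples for one pixel."""
    total = 0.0
    for _ in range(spp):
        total += trace((0.0, 0.0, 0.0), norm((0.0, 0.0, -1.0)))
    return total / spp
```

A real renderer adds camera models, many primitives behind a BVH, full spectral or RGB throughput, and Russian roulette path termination, but the control flow is the same recursive loop shown here.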

Modern path tracers use importance sampling (biasing random directions toward where light is likely coming from) and next-event estimation (directly sampling light sources at each bounce rather than hoping a random bounce finds one) to converge much faster. Without these optimizations, a naive path tracer might need millions of samples per pixel; with them, 256 to 4096 SPP often suffices for production-quality images.
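Importance sampling can be illustrated with a tiny experiment: estimating the irradiance from a uniform unit-radiance sky, whose exact value is 1. When the sampling density is proportional to the cosine-weighted integrand, every sample contributes the same amount and the variance collapses. The function names below are illustrative.

```python
import random, math

def uniform_sample():
    """Uniform over the hemisphere: pdf = 1 / (2*pi).

    For uniform solid-angle sampling, cos(theta) is uniform in [0, 1]."""
    return random.random(), 1.0 / (2.0 * math.pi)

def cosine_sample():
    """Cosine-weighted sampling: pdf = cos(theta) / pi.

    Favours directions near the surface normal, where the
    cosine term makes light contribute most."""
    cos_t = math.sqrt(random.random())
    return cos_t, cos_t / math.pi

def estimate(sampler, n=2000):
    """Monte Carlo estimate of the irradiance integral
    of cos(theta)/pi over the hemisphere (exact answer: 1)."""
    total = 0.0
    for _ in range(n):
        cos_t, pdf = sampler()
        total += (cos_t / math.pi) / pdf     # integrand / pdf
    return total / n
```

With `uniform_sample` the estimate is noisy; with `cosine_sample` the pdf exactly matches the integrand, so every sample equals 1 and the estimator has zero variance. Real BSDFs and lighting are never matched this perfectly, but the closer the sampling density tracks the integrand, the faster the render converges.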

Strengths

Path tracing's core strength is physical correctness by default. Because it simulates actual light transport, effects like soft shadows, indirect illumination (color bleeding between nearby surfaces), caustics (concentrated light patterns through glass or water), and participating media (fog, smoke, subsurface scattering in skin) all emerge naturally. There is no need to implement each effect as a separate algorithm — they are all consequences of solving the rendering equation.

This makes path tracing the preferred technique for visual effects, architectural visualization, product rendering, and any context where the image must be indistinguishable from a photograph. It is also the reference standard against which other techniques are benchmarked for correctness.

Tradeoffs

The primary tradeoff is computational cost. Each pixel requires hundreds to thousands of light paths to converge to a noise-free result. A single frame at 1920 by 1080 with 1024 SPP means tracing over two billion light paths. Production renders of complex scenes can take minutes to hours per frame.
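The two-billion figure is just resolution times samples, and it counts only primary paths:

```python
width, height, spp = 1920, 1080, 1024
primary_paths = width * height * spp     # 2,073,600 pixels * 1024 samples
# = 2,123,366,400 primary paths; each one may bounce several times,
# so the number of ray-scene intersection tests is a multiple of this.
```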

Noise is the visible artifact of insufficient samples — the image appears grainy, especially in dark regions or areas illuminated primarily by indirect light. Denoising algorithms (both traditional and ML-based) can mitigate this but cannot fully replace converged sampling for all use cases.
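The statistical root of this graininess is that Monte Carlo error shrinks only as 1/sqrt(N): halving the noise costs four times the samples. A toy estimator (unrelated to any specific renderer) shows the effect:

```python
import random, math

def mc_mean(n):
    """Average n samples of a noisy quantity: uniform in [0, 2], true mean 1."""
    return sum(2.0 * random.random() for _ in range(n)) / n

def rms_error(n, trials=400):
    """Root-mean-square error of the n-sample estimate across many trials."""
    return math.sqrt(sum((mc_mean(n) - 1.0) ** 2 for _ in range(trials)) / trials)

# Quadrupling the sample count (64 -> 256) roughly halves the RMS error,
# which is why the last stretch of convergence is so expensive.
```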

Path tracing is also not real-time in most scenarios. While GPU-accelerated path tracers (using NVIDIA OptiX, AMD HIP, or Intel oneAPI) have dramatically reduced render times, and denoised interactive previews are now common, full-quality path-traced frames at real-time rates remain limited to relatively simple scenes or require specialized hardware with dedicated ray tracing cores.

History

The foundations of ray tracing were laid by Arthur Appel in 1968, with Turner Whitted introducing recursive ray tracing in 1979. Kajiya's 1986 rendering equation paper established the theoretical basis for path tracing as a general solution to light transport. Eric Veach's PhD thesis in 1997 introduced multiple importance sampling and bidirectional path tracing, dramatically improving practical convergence. The technique became viable for production use through progressive improvements in acceleration structures (particularly BVH), material models, and GPU hardware throughout the 2000s and 2010s. Today, hardware-accelerated ray tracing cores in NVIDIA RTX and AMD RDNA GPUs have made real-time path-traced previews a reality, while offline path tracers like PBRT, Mitsuba, and Cycles remain the backbone of photorealistic rendering.
