Rendering Glossary

Quick definitions for common rendering terms, acronyms, and concepts.

A

Albedo

The base color of a surface, representing the fraction of light reflected at each wavelength. Independent of lighting or viewing angle, albedo is the intrinsic color of a material before any shading is applied.

Ambient Occlusion

A shading technique that darkens creases, holes, and surfaces close to other surfaces, approximating how ambient light is occluded in concavities. Can be screen-space (SSAO), ray-traced, or estimated from SDF evaluations.

B

Bidirectional Path Tracing

An extension of path tracing that traces light paths from both the camera and light sources, then connects them. Handles difficult light transport scenarios like small light sources and caustics better than unidirectional path tracing.

BRDF

Bidirectional Reflectance Distribution Function. Describes how light reflects off a surface depending on the incoming and outgoing light directions. The reflection-only component of the more general BSDF.

BSDF

Bidirectional Scattering Distribution Function. Combines BRDF (reflection) and BTDF (transmission) to describe the complete light-surface interaction. Most physically based renderers use BSDFs to define material appearance.

BTDF

Bidirectional Transmittance Distribution Function. Describes how light transmits through a surface, such as glass or water. Combined with the BRDF, it forms the complete BSDF.

BVH

Bounding Volume Hierarchy. A tree data structure that spatially organizes scene geometry into nested bounding boxes to accelerate ray intersection tests. Used by nearly every modern ray tracer.

C

Caustics

Focused light patterns created by reflection or refraction through curved surfaces, such as bright patterns on the bottom of a swimming pool. Notoriously difficult for unidirectional path tracers to render efficiently.

Convergence

In Monte Carlo rendering, the progressive reduction of noise as more samples are taken. A converged image has negligible visible noise. The rate of convergence depends on the variance of the estimator and the sampling strategies used.

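The characteristic 1/√N convergence of Monte Carlo estimators can be observed empirically. A minimal sketch in plain Python, using a π estimate as a stand-in for a rendering integral: quadrupling the sample count roughly halves the noise.

```python
import random
import statistics

def estimate_pi(n, rng):
    # One Monte Carlo estimate of pi from n random points in the unit square.
    hits = sum(1 for _ in range(n) if rng.random() ** 2 + rng.random() ** 2 < 1.0)
    return 4.0 * hits / n

rng = random.Random(42)
# Spread of the estimator over repeated runs, at two sample counts:
err_n = statistics.stdev([estimate_pi(1_000, rng) for _ in range(200)])
err_4n = statistics.stdev([estimate_pi(4_000, rng) for _ in range(200)])
print(err_n / err_4n)  # close to 2: quadrupling samples halves the noise
```

This is why brute-force path tracing gets expensive quickly: each halving of noise costs four times the render time, motivating variance reduction techniques like importance sampling.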

CSG

Constructive Solid Geometry. A technique for building complex 3D shapes by combining simpler primitives using boolean operations: union, intersection, and subtraction. With SDFs, CSG is trivially implemented as min, max, and negation of distance values.

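A minimal sketch of SDF-based CSG in plain Python; the sphere SDF and operator names are illustrative, not from any particular renderer:

```python
import math

def sphere_sdf(p, center, radius):
    # Signed distance from point p to a sphere surface.
    return math.dist(p, center) - radius

def union(a, b):        return min(a, b)
def intersection(a, b): return max(a, b)
def subtraction(a, b):  return max(a, -b)  # a minus b

# Evaluate "sphere A minus sphere B" at the origin:
a = sphere_sdf((0, 0, 0), (0.0, 0, 0), 1.0)  # inside A: -1.0
b = sphere_sdf((0, 0, 0), (0.5, 0, 0), 1.0)  # inside B: -0.5
print(subtraction(a, b))  # 0.5 -> the origin was carved away by B
```

Because the operators work on distance values, the same three functions compose arbitrarily deep CSG trees with no mesh booleans required.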

CUDA

NVIDIA's parallel computing platform for GPU programming. Provides a C/C++-like model for writing GPU kernels. Many renderers use CUDA for acceleration, including PBRT v4, Mitsuba 3, and Gaussian Splatting implementations.

D

Denoising

Algorithms that remove Monte Carlo noise from rendered images, allowing acceptable quality at lower sample counts. Can be AI-based (NVIDIA OptiX AI Denoiser, Intel Open Image Denoise) or traditional (non-local means, bilateral filtering).

Differentiable Rendering

Rendering where gradients of the output image can be computed with respect to scene parameters (geometry, materials, lighting), enabling optimization-based inverse problems through gradient descent.

Displacement Mapping

A technique that physically moves surface vertices based on a texture map, creating real geometric detail. Unlike normal maps, which only fake the appearance of detail, displacement maps modify actual geometry and produce correct silhouettes and shadows.

E

Embree

Intel's high-performance CPU ray tracing library providing optimized BVH construction and traversal using SIMD instructions. Used as a ray intersection backend by many renderers including OSPRay, PBRT, Mitsuba, LuxCoreRender, and appleseed.

EXR

OpenEXR. A high dynamic range image format supporting 16-bit and 32-bit floating point values per channel. The standard output format for physically based renderers, preserving the full range of computed light values for post-processing.

F

Fresnel Effect

The phenomenon where a surface's reflectivity increases at grazing angles. All real materials exhibit this — a lake appears mirror-like at a distance but transparent when looking straight down. Physically based renderers model this using Fresnel equations.

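Renderers frequently replace the full Fresnel equations with Schlick's approximation. A minimal sketch in Python, where `f0` is the reflectance at normal incidence (roughly 0.04 for common dielectrics like water or plastic):

```python
def schlick_fresnel(cos_theta, f0):
    """Schlick's approximation to Fresnel reflectance.

    cos_theta: cosine of the angle between the view direction and the
               surface normal (1.0 = head-on, 0.0 = grazing).
    f0:        reflectance at normal incidence.
    """
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

print(schlick_fresnel(1.0, 0.04))  # head-on: ~0.04, mostly transparent
print(schlick_fresnel(0.0, 0.04))  # grazing: ~1.0, mirror-like
```

The two extremes match the lake example above: looking straight down reflects only a few percent of the light, while a grazing view reflects nearly all of it.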

G

Gaussian Splatting

A rendering technique representing scenes as collections of 3D Gaussian ellipsoids with learned color, opacity, and shape parameters. Rendered by projecting each Gaussian onto the image plane and alpha-compositing. Achieves real-time novel view synthesis.

GI

Global Illumination. The simulation of all light transport in a scene, including indirect bounces where light reflects between surfaces. The gold standard for realism, producing effects like color bleeding, soft indirect shadows, and caustics.

glTF

GL Transmission Format. A royalty-free 3D scene format designed for efficient transmission. Supports meshes, materials, textures, animations, and scene hierarchy. Often called 'the JPEG of 3D,' supported by most real-time and many offline renderers.

Gradient

A vector of partial derivatives indicating how much and in which direction each scene parameter should change to reduce a loss function. The fundamental quantity enabling gradient-based optimization in differentiable rendering and neural network training.

H

HDRI

High Dynamic Range Image. A panoramic image with a wide brightness range, commonly used as environment lighting in rendering. Captures real-world illumination from all directions, enabling realistic image-based lighting of 3D scenes.

I

Importance Sampling

A variance reduction technique that concentrates random samples in directions where the integrand is large, reducing noise for the same sample count. In rendering, this means sampling rays toward bright light sources or likely BRDF directions.

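A toy one-dimensional sketch in plain Python, comparing uniform sampling against importance sampling for ∫₀¹ x⁴ dx = 1/5. The pdf p(x) = 3x² is an illustrative choice that roughly follows the shape of the integrand:

```python
import random
import statistics

def estimate_uniform(f, n, rng):
    # Plain Monte Carlo: average f at n uniform points in [0, 1].
    return statistics.mean(f(rng.random()) for _ in range(n))

def estimate_importance(f, n, rng):
    # Draw x with pdf p(x) = 3x^2 via inverse-CDF sampling (x = u^(1/3)),
    # then weight each sample by f(x) / p(x).
    total = 0.0
    for _ in range(n):
        x = rng.random() ** (1.0 / 3.0)
        total += f(x) / (3.0 * x * x)
    return total / n

f = lambda x: x ** 4  # true value of the integral over [0, 1] is 1/5
rng = random.Random(0)
runs_uniform = [estimate_uniform(f, 500, rng) for _ in range(100)]
runs_importance = [estimate_importance(f, 500, rng) for _ in range(100)]
# Same sample budget, noticeably less spread with importance sampling:
print(statistics.stdev(runs_uniform), statistics.stdev(runs_importance))
```

In a renderer the same idea applies in higher dimensions: sampling directions in proportion to the BRDF or toward bright lights plays the role of p(x) here.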

Inverse Rendering

The problem of recovering scene properties (3D shape, materials, lighting) from one or more images. Differentiable rendering enables solving this via gradient-based optimization rather than hand-crafted algorithms.

Isosurface

A surface within a volume where the data has a constant value, analogous to a contour line on a topographic map. Extracting isosurfaces via algorithms like marching cubes converts volumetric data into triangle meshes at a chosen threshold.

L

LPIPS

Learned Perceptual Image Patch Similarity. A neural network-based perceptual quality metric where lower values indicate greater similarity. Generally correlates better with human judgments of image quality than PSNR or SSIM.

M

MIS

Multiple Importance Sampling. A framework for combining samples from multiple strategies (e.g., BRDF sampling and light sampling) in a provably near-optimal way. Introduced by Eric Veach in his 1997 PhD thesis.

MLT

Metropolis Light Transport. A path tracing variant using Markov chain Monte Carlo to explore path space, concentrating effort on high-contribution paths. Effective for extremely difficult lighting like light through a keyhole into a dark room.

Monte Carlo Sampling

A class of algorithms using random sampling to estimate integrals. In rendering, Monte Carlo integration is the foundation of path tracing — rays are sampled randomly to approximate the rendering equation, which has no closed-form solution for complex scenes.

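A minimal sketch in plain Python, estimating ∫₀^π sin x dx (exactly 2) by averaging the integrand at uniform random points and scaling by the interval length:

```python
import math
import random

def mc_integrate(f, a, b, n, rng):
    """Monte Carlo estimate of the integral of f over [a, b]:
    the average of f at n uniform random points, times (b - a)."""
    return (b - a) * sum(f(rng.uniform(a, b)) for _ in range(n)) / n

rng = random.Random(7)
est = mc_integrate(math.sin, 0.0, math.pi, 100_000, rng)
print(est)  # close to the exact answer, 2
```

Path tracing is this idea applied to the rendering equation: the random points become random light paths, and the average of their contributions estimates the pixel's radiance.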

MSE

Mean Squared Error. The average of squared pixel differences between two images. The simplest quality metric, computed as the mean of (reference minus test) squared across all pixels and channels. PSNR is derived directly from MSE.

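A minimal sketch in plain Python, treating images as flat lists of pixel values (illustrative, not a production metric implementation), with PSNR computed directly from the MSE:

```python
import math

def mse(ref, test):
    # Mean squared error between two equal-length pixel arrays.
    return sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)

def psnr(ref, test, peak=1.0):
    # Peak signal-to-noise ratio in dB, derived from MSE.
    return 10.0 * math.log10(peak ** 2 / mse(ref, test))

ref  = [0.0, 0.5, 1.0, 0.25]
test = [0.1, 0.5, 0.9, 0.25]
print(mse(ref, test))   # about 0.005
print(psnr(ref, test))  # about 23.0 dB
```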

N

NEE

Next Event Estimation. A technique where the renderer explicitly samples light sources at each path bounce rather than waiting for a random bounce to find one. Dramatically reduces noise for scenes with small or distant lights.

NeRF

Neural Radiance Field. A method that trains a neural network to represent a 3D scene as a continuous volumetric function mapping spatial coordinates and viewing direction to color and density. Enables photorealistic novel view synthesis from posed photographs.

Normal Map

A texture that stores surface normal perturbations, creating the illusion of fine geometric detail (bumps, dents, scratches) without additional polygons. Each texel encodes a normal direction that modifies surface shading.

Novel View Synthesis

The task of generating new camera views of a scene from a set of input images. The primary application of NeRF and Gaussian Splatting, enabling virtual camera movement through a captured real-world scene.

O

OBJ

Wavefront OBJ. A simple, widely supported text-based 3D geometry format storing vertices, normals, texture coordinates, and polygon faces. One of the oldest and most universal 3D formats, though limited to basic materials (via companion MTL files) and lacking animation support.

Offline Rendering

Rendering where quality is prioritized over speed. A single frame may take seconds, minutes, or hours. Path tracing is the dominant offline technique, used for film VFX, architectural visualization, and product rendering.

OptiX

NVIDIA's ray tracing SDK for CUDA GPUs. Provides hardware-accelerated BVH traversal and ray-triangle intersection testing using RT Cores. Used as a backend by many GPU ray tracers including PBRT v4 and Blender Cycles.

P

Participating Media

Materials that light passes through rather than bouncing off — fog, smoke, clouds, fire, and scattering within translucent solids like skin and wax. Volume rendering and participating media simulation in path tracers handle these effects.

Path Tracing

A Monte Carlo rendering algorithm that traces random light paths from the camera through the scene, accumulating light at each surface bounce. The core algorithm behind most modern physically based renderers.

PBR

Physically Based Rendering. An approach where materials and lighting follow real-world physics — energy conservation, Fresnel equations, measured material data. Produces realistic results because the simulation is grounded in how light actually behaves.

Photon Mapping

A two-pass global illumination algorithm: first, photons are traced from lights and stored where they interact with surfaces; second, stored photons estimate indirect illumination during rendering. Effective for caustics and participating media.

Physically Based

An approach to rendering grounded in the physics of light transport: energy conservation, Fresnel equations, measured material data, and radiometric correctness. Results look realistic because the simulation follows real-world rules.

PLY

Polygon File Format (Stanford Triangle Format). A simple format for 3D scanned data, commonly used for point clouds and Gaussian Splatting data. Supports per-vertex properties like color, normal, and custom attributes.

PSNR

Peak Signal-to-Noise Ratio. A pixel-level image quality metric measured in decibels (dB). Higher values indicate less error, derived from MSE. A common baseline metric, though it does not always correlate well with human perception.

R

Radiance Field

A continuous function mapping 3D coordinates (and optionally viewing direction) to color and density. The scene representation learned by NeRF, implementable as a neural network, hash grid, or other parameterization.

Radiosity

A global illumination method that computes diffuse inter-reflections by subdividing surfaces into patches and solving for energy equilibrium. Produces soft, view-independent lighting. Largely historical, superseded by path tracing.

Rasterization

The process of projecting 3D triangles onto a 2D screen and determining which pixels each triangle covers. The foundation of real-time rendering, implemented in hardware by every modern GPU.

Ray Marching

A rendering technique that advances along rays in discrete steps, evaluating a distance or density function at each step. When combined with signed distance fields, the step size adapts to the distance to the nearest surface (sphere tracing).

Real-Time Rendering

Rendering fast enough for interactive use, typically 30 or more frames per second. Achieved through rasterization, GPU acceleration, and approximations of global illumination. Modern approaches increasingly incorporate selective ray tracing.

Renderer

A software system that takes a 3D scene description as input and produces a 2D image as output. Renderers vary widely in technique (path tracing, rasterization, neural), speed (real-time to hours per frame), and target use case.

Roughness

A material parameter controlling how sharp or blurry reflections appear. Low roughness produces mirror-like reflections; high roughness produces matte, diffuse surfaces. A key parameter in physically based material models.

RT Cores

Dedicated hardware units in NVIDIA RTX GPUs that accelerate BVH traversal and ray-triangle intersection testing in silicon, dramatically speeding up ray tracing by offloading the most expensive operations from shader cores.

S

SDF

Signed Distance Function. A function returning the shortest distance from any point in space to the nearest surface. Positive outside, negative inside, zero on the surface. Used in ray marching, procedural modeling, and collision detection.

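A minimal sketch in plain Python, showing the sign convention for the simplest SDF, a sphere:

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 0.0), radius=1.0):
    """Signed distance from point p to a sphere: positive outside,
    negative inside, zero exactly on the surface."""
    return math.dist(p, center) - radius

print(sphere_sdf((2.0, 0.0, 0.0)))  # 1.0  (outside, one unit away)
print(sphere_sdf((0.0, 0.0, 0.0)))  # -1.0 (inside, at the center)
print(sphere_sdf((1.0, 0.0, 0.0)))  # 0.0  (exactly on the surface)
```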

Spectral Rendering

Rendering that simulates individual wavelengths of light rather than only RGB triplets. Produces more physically accurate results, especially for dispersion (prisms, rainbows), fluorescence, and complex material interactions.

Sphere Tracing

A ray marching algorithm that uses the SDF value as the step size. Since the SDF guarantees no surface is closer than the returned distance, the ray safely advances by that amount — large steps in open space, small steps near surfaces.

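A minimal sketch in plain Python, assuming a scene consisting of a single unit sphere at the origin; the step sizes and iteration limits are illustrative:

```python
import math

def sphere_sdf(p):
    # Scene: a unit sphere centered at the origin.
    return math.dist(p, (0.0, 0.0, 0.0)) - 1.0

def sphere_trace(origin, direction, sdf, max_steps=128, eps=1e-6):
    """March a ray using the SDF value itself as the step size.
    Returns the distance along the ray to the first hit, or None on a miss."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return t   # close enough to the surface: treat as a hit
        t += d         # safe step: no surface can be nearer than d
        if t > 100.0:
            break      # ray has escaped the scene bounds
    return None

# A ray from z = -5 aimed straight at the sphere hits its near side:
print(sphere_trace((0.0, 0.0, -5.0), (0.0, 0.0, 1.0), sphere_sdf))  # 4.0
```

Note how few iterations the hit takes: in empty space the first step covers almost the whole distance, which is exactly the efficiency win over fixed-step ray marching.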

SPP

Samples Per Pixel. The number of random light path samples averaged for each pixel in a Monte Carlo renderer. More samples reduce noise but increase render time. Typical values range from 64 (noisy preview) to 4096+ (final quality).

SSIM

Structural Similarity Index Measure. A perceptual quality metric comparing luminance, contrast, and structural patterns between two images. Ranges from 0 to 1 (1 = identical). Better correlated with human perception than PSNR.

Subsurface Scattering

The phenomenon in which light enters a translucent material, scatters inside, and exits at a different point. Critical for realistic rendering of skin, wax, marble, milk, and other translucent materials where light visibly penetrates the surface.

T

Tone Mapping

The process of converting high dynamic range (HDR) rendered values to the limited range displayable on a screen. Essential for physically based renderers, since real-world light intensities span orders of magnitude that displays cannot reproduce directly.

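One of the simplest operators is Reinhard's x / (1 + x), which compresses the unbounded HDR range into [0, 1). A minimal per-channel sketch in Python; production tone mappers also handle color preservation and display gamma:

```python
def reinhard(hdr, exposure=1.0):
    """Reinhard tone mapping: maps any non-negative HDR value into [0, 1)."""
    v = hdr * exposure
    return v / (1.0 + v)

# Values spanning four orders of magnitude all land in the displayable range:
for value in (0.1, 1.0, 10.0, 1000.0):
    print(f"{value:>7} -> {reinhard(value):.3f}")
```

Bright values are compressed hard (1000 maps to 0.999) while dark values are nearly untouched, which is what preserves detail in both shadows and highlights.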

Transfer Function

In volume rendering, the mapping from raw scalar data values to visual properties (color and opacity). Determines which structures in volumetric data are visible and how they appear, serving as the primary artistic and scientific control.

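A toy sketch in plain Python; the density thresholds, colors, and tissue labels are arbitrary illustrative choices, not values from any real medical pipeline:

```python
def transfer_function(density):
    """Map a scalar density in [0, 1] to an (r, g, b, alpha) tuple.
    Piecewise-constant for clarity; real transfer functions usually
    interpolate smoothly between control points."""
    if density < 0.3:
        return (0.0, 0.0, 0.0, 0.0)  # low density: fully transparent
    if density < 0.7:
        return (0.2, 0.4, 0.9, 0.1)  # mid density: faint translucent blue
    return (1.0, 1.0, 1.0, 0.9)      # high density: nearly opaque white

print(transfer_function(0.1))  # (0.0, 0.0, 0.0, 0.0)
print(transfer_function(0.8))  # (1.0, 1.0, 1.0, 0.9)
```

Shifting the thresholds changes which structures appear at all, which is why interactive transfer-function editing is central to volume visualization tools.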

Triangle Mesh

The most common 3D surface representation: a collection of vertices connected into triangles. Standard input for most renderers, stored in formats like OBJ, glTF, PLY, and FBX.

U

USD

Universal Scene Description. Pixar's scene format for complex production pipelines. Supports layered composition, references, variants, and massive scenes with millions of objects. Increasingly adopted across film, games, and simulation.

UV Mapping

The process of projecting a 3D surface onto a 2D plane for texture application. U and V are the 2D coordinate axes. Good UV layouts minimize distortion and seams, enabling textures to wrap cleanly around 3D geometry.

V

Volume Rendering

Rendering that visualizes the interior of 3D volumetric data by casting rays through it and accumulating color and opacity. Used for medical imaging (CT/MRI), scientific visualization, and participating media effects like fog and fire.

Voxel

A 3D pixel — a single data point in a volumetric grid. A portmanteau of 'volume' and 'pixel.' Voxels store scalar values (density, temperature, color) at discrete grid positions and are the primary data structure for volume rendering.

Vulkan

A low-level, cross-platform GPU API and successor to OpenGL. Provides explicit control over GPU resources for both graphics and compute. Supports ray tracing extensions for hardware-accelerated ray tracing on compatible GPUs.

W

WebGPU

A modern web standard for GPU access in browsers, successor to WebGL. Provides compute shader support and a more efficient API design aligned with native GPU APIs like Vulkan, Metal, and DirectX 12.
