What Is Rasterization?
Rasterization is the rendering technique that powers virtually every real-time 3D application: video games, CAD software, virtual reality, web-based 3D viewers, and interactive simulations. It takes 3D geometry (triangles, typically) and efficiently determines which pixels on the screen each triangle covers, then shades those pixels to produce the final image.
Unlike path tracing, which simulates light physics from first principles, rasterization works "forward" from geometry to pixels. It processes each triangle independently, projecting it onto the screen and filling in the covered pixels. This approach is inherently parallelizable — GPUs are designed specifically to execute this pipeline across thousands of triangles and millions of pixels simultaneously.
The result is speed: modern GPUs can rasterize complex scenes at 60 to 240+ frames per second, enabling interactive experiences that path tracing cannot yet achieve for equivalent scene complexity.
How It Works
Rasterization follows a well-defined pipeline, implemented in hardware by every modern GPU:
- Vertex processing — Each vertex of each triangle is transformed from its local 3D coordinate space through model, view, and projection matrices into clip space (a normalized coordinate system relative to the camera). Vertex shaders — small GPU programs written in GLSL, HLSL, or WGSL — can also compute per-vertex lighting, animation, and other attributes. (A worked transform sketch follows this list.)
- Primitive assembly and clipping — Vertices are grouped into primitives (triangles). Triangles partially or fully outside the camera's view frustum are clipped or culled entirely, avoiding wasted computation on invisible geometry.
- Rasterization — Each triangle is "scan-converted" into fragments: the individual pixel-sized samples that the triangle covers on screen. For each fragment, the GPU interpolates vertex attributes (position, normal, texture coordinates, color) across the triangle's surface using barycentric coordinates. (See the scan-conversion sketch after this list.)
- Fragment shading — A fragment shader runs for each fragment, computing its final color. This is where material appearance is defined: texture sampling, lighting calculations (using models like Blinn-Phong or physically based BRDF models), normal mapping, shadow map lookups, screen-space ambient occlusion, and post-processing effects. (A Blinn-Phong sketch appears below.)
- Per-sample operations — The depth test compares each fragment's depth against the depth buffer to resolve occlusion — determining which surface is in front. Blending handles transparent surfaces. Stencil operations enable masking effects.
- Framebuffer output — The surviving fragments are written to the framebuffer, which is displayed on screen or passed to post-processing stages.
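To make the vertex-processing stage concrete, here is a minimal CPU-side sketch: one vertex pushed through an OpenGL-style perspective projection into clip space, followed by the perspective divide and viewport transform. For brevity only the projection matrix is shown — the model and view transforms are further Mat4 multiplies of the same shape — and the Vec4/Mat4 types and helper names are invented for the example, not taken from any particular API.

```cpp
// Minimal sketch of the vertex transform math, assuming column-major 4x4
// matrices (OpenGL convention). All type and function names are illustrative.
#include <array>
#include <cmath>
#include <cstdio>

struct Vec4 { float x, y, z, w; };
struct Mat4 { std::array<float, 16> m; };  // column-major storage

Vec4 mul(const Mat4& a, const Vec4& v) {
    return {
        a.m[0] * v.x + a.m[4] * v.y + a.m[8]  * v.z + a.m[12] * v.w,
        a.m[1] * v.x + a.m[5] * v.y + a.m[9]  * v.z + a.m[13] * v.w,
        a.m[2] * v.x + a.m[6] * v.y + a.m[10] * v.z + a.m[14] * v.w,
        a.m[3] * v.x + a.m[7] * v.y + a.m[11] * v.z + a.m[15] * v.w,
    };
}

// Standard OpenGL-style perspective projection matrix.
Mat4 perspective(float fovY, float aspect, float zNear, float zFar) {
    float f = 1.0f / std::tan(fovY * 0.5f);
    Mat4 p{};  // all zeros
    p.m[0]  = f / aspect;
    p.m[5]  = f;
    p.m[10] = (zFar + zNear) / (zNear - zFar);
    p.m[11] = -1.0f;
    p.m[14] = (2.0f * zFar * zNear) / (zNear - zFar);
    return p;
}

int main() {
    // A vertex one unit in front of the camera (view space == world space here).
    Vec4 v{0.25f, 0.25f, -1.0f, 1.0f};

    Mat4 proj = perspective(1.0f, 16.0f / 9.0f, 0.1f, 100.0f);
    Vec4 clip = mul(proj, v);          // clip space

    // Perspective divide -> normalized device coordinates in [-1, 1].
    float ndcX = clip.x / clip.w;
    float ndcY = clip.y / clip.w;

    // Viewport transform -> pixel coordinates for a 1920x1080 target.
    float px = (ndcX * 0.5f + 0.5f) * 1920.0f;
    float py = (1.0f - (ndcY * 0.5f + 0.5f)) * 1080.0f;  // y flipped for raster order

    std::printf("clip w=%.3f  ndc=(%.3f, %.3f)  pixel=(%.1f, %.1f)\n",
                clip.w, ndcX, ndcY, px, py);
}
```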
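The scan-conversion and per-sample stages can likewise be sketched on the CPU. The toy example below is not how GPU hardware is implemented, but it shows the core idea: edge functions yield barycentric weights, which drive both the coverage test and attribute interpolation, and a depth compare keeps the nearest fragment. Interpolation here is affine for simplicity (real GPUs do it perspective-correct), and the 8x8 framebuffer and all names are made up for the example.

```cpp
// Toy scan conversion: bounding-box walk, barycentric coverage test,
// attribute interpolation, and depth test. All names are illustrative.
#include <cstdio>
#include <vector>

struct ScreenVertex { float x, y, z; float r, g, b; };  // screen position + color

// Edge function: signed-area test saying which side of edge (a -> b) the
// sample lies on; dividing by the whole triangle's area gives barycentrics.
float edge(const ScreenVertex& a, const ScreenVertex& b, float px, float py) {
    return (px - a.x) * (b.y - a.y) - (py - a.y) * (b.x - a.x);
}

int main() {
    const int W = 8, H = 8;
    std::vector<float> depth(W * H, 1.0f);      // depth buffer, cleared to far plane
    std::vector<float> color(W * H * 3, 0.0f);  // RGB framebuffer

    // One triangle already projected to screen space, with per-vertex colors.
    ScreenVertex v0{1, 1, 0.5f, 1, 0, 0};
    ScreenVertex v1{6, 2, 0.5f, 0, 1, 0};
    ScreenVertex v2{3, 6, 0.5f, 0, 0, 1};

    float area = edge(v0, v1, v2.x, v2.y);
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            float px = x + 0.5f, py = y + 0.5f;        // sample at the pixel center
            float w0 = edge(v1, v2, px, py) / area;
            float w1 = edge(v2, v0, px, py) / area;
            float w2 = edge(v0, v1, px, py) / area;
            if (w0 < 0 || w1 < 0 || w2 < 0) continue;  // coverage test: outside

            // Interpolate attributes with the barycentric weights.
            float z = w0 * v0.z + w1 * v1.z + w2 * v2.z;
            if (z >= depth[y * W + x]) continue;       // depth test: keep the nearest
            depth[y * W + x] = z;
            color[(y * W + x) * 3 + 0] = w0 * v0.r + w1 * v1.r + w2 * v2.r;
            color[(y * W + x) * 3 + 1] = w0 * v0.g + w1 * v1.g + w2 * v2.g;
            color[(y * W + x) * 3 + 2] = w0 * v0.b + w1 * v1.b + w2 * v2.b;
        }
    }

    // Print coverage: '#' where a fragment survived the depth test.
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) std::putchar(depth[y * W + x] < 1.0f ? '#' : '.');
        std::putchar('\n');
    }
}
```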
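Finally, fragment shading is easiest to see as plain lighting math. Below is a sketch of the Blinn-Phong model named in the list, written as ordinary C++ rather than a real shader language; in a renderer this logic would run per fragment in GLSL, HLSL, or WGSL, and the vectors and shininess value here are arbitrary example inputs.

```cpp
// Sketch of Blinn-Phong lighting: diffuse from N.L, specular from N.H
// raised to a shininess exponent. Example values only.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

float blinnPhong(Vec3 n, Vec3 lightDir, Vec3 viewDir, float shininess) {
    float diffuse = std::fmax(dot(n, lightDir), 0.0f);
    Vec3 h = normalize(add(lightDir, viewDir));  // half vector between light and view
    float specular = std::pow(std::fmax(dot(n, h), 0.0f), shininess);
    return diffuse + specular;  // intensity before multiplying by material color
}

int main() {
    Vec3 n        = normalize({0.0f, 1.0f, 0.2f});  // surface normal
    Vec3 lightDir = normalize({0.5f, 1.0f, 0.3f});  // toward the light
    Vec3 viewDir  = normalize({0.0f, 0.5f, 1.0f});  // toward the camera
    std::printf("intensity = %.3f\n", blinnPhong(n, lightDir, viewDir, 32.0f));
}
```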
This entire pipeline must complete in under roughly 16.7 milliseconds to hit a 60 FPS target — often processing millions of triangles per frame and billions of fragment shader invocations per second.
Strengths
Rasterization's defining strength is speed. The pipeline is so well-suited to parallel execution that dedicated GPU hardware can process it orders of magnitude faster than general-purpose computation. This enables interactivity: users can move the camera, change lighting, animate characters, and see results instantly.
Modern rasterization has also become remarkably capable at approximating effects that traditionally required ray tracing. Techniques like screen-space reflections, screen-space ambient occlusion (SSAO), shadow mapping, cascaded shadow maps, light probes, image-based lighting, and temporal anti-aliasing produce results that are visually convincing in motion, even if not physically exact.
The introduction of hardware ray tracing (NVIDIA RTX, AMD RDNA 2+) has enabled hybrid approaches where rasterization handles primary visibility and most shading, while selective ray tracing adds accurate reflections, shadows, or global illumination. This combination is the current state of the art in real-time rendering.
Tradeoffs
Rasterization processes triangles independently — it has no inherent knowledge of the global light distribution in a scene. This means effects that depend on light bouncing between surfaces (global illumination, color bleeding, caustics) do not emerge naturally and must be approximated with specialized techniques, each with its own limitations and cost.
Shadows require shadow maps (which have resolution and bias artifacts), reflections require screen-space or probe-based approximations (which fail for off-screen objects), and global illumination requires pre-baked lightmaps or real-time approximations (which trade accuracy for speed). Each of these approximations adds complexity to the rendering pipeline and introduces its own failure modes.
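To make one of those failure modes concrete, here is a small sketch of the depth comparison at the core of shadow mapping, assuming a fragment's light-space position and depth have already been computed. The tiny 4x4 map, the bias value, and all names are illustrative, not from any real API.

```cpp
// Sketch of the shadow-map test: a fragment is shadowed if something nearer
// to the light was recorded at its texel. The 4x4 "map" is a hypothetical
// stand-in; real maps are large textures rendered from the light's view.
#include <algorithm>
#include <cstdio>

const int MAP = 4;            // tiny shadow-map resolution
float shadowDepth[MAP][MAP];  // nearest depth seen from the light

// u, v in [0,1]: the fragment's position in the light's view;
// fragDepth: its depth from the light, in the same [0,1] range.
bool inShadow(float u, float v, float fragDepth, float bias) {
    int x = std::min(int(u * MAP), MAP - 1);  // resolution limit: many fragments
    int y = std::min(int(v * MAP), MAP - 1);  // share one texel -> blocky edges
    // Without the bias, quantized stored depths cause self-shadowing
    // ("shadow acne"); too much bias visibly detaches shadows instead.
    return fragDepth - bias > shadowDepth[y][x];
}

int main() {
    // The light sees an occluder at depth 0.4 over the right half of the map.
    for (int y = 0; y < MAP; ++y)
        for (int x = 0; x < MAP; ++x)
            shadowDepth[y][x] = (x >= MAP / 2) ? 0.4f : 1.0f;

    // A floor at depth 0.7 is shadowed only where the occluder is in between.
    std::printf("left half:  %s\n", inShadow(0.2f, 0.5f, 0.7f, 0.005f) ? "shadow" : "lit");
    std::printf("right half: %s\n", inShadow(0.8f, 0.5f, 0.7f, 0.005f) ? "shadow" : "lit");
}
```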
For applications where physical accuracy is paramount — visual effects, product visualization, architectural lighting studies — rasterization alone is insufficient. The growing trend is toward hybrid rendering that combines rasterization's speed with selective ray tracing for effects that require global scene knowledge.
History
The z-buffer algorithm was invented by Ed Catmull in 1974. Hardware-accelerated rasterization became mainstream with consumer GPUs in the late 1990s — the 3dfx Voodoo and NVIDIA RIVA TNT brought 3D acceleration to home PCs. The programmable shader revolution in the early 2000s (NVIDIA GeForce 3, ATI Radeon 9700) transformed fixed-function pipelines into the flexible GPU computing platforms we use today. OpenGL and DirectX standardized the API surface, while modern APIs like Vulkan, DirectX 12, Metal, and WebGPU give developers unprecedented low-level control over the rendering pipeline. The 2018 introduction of hardware ray tracing in NVIDIA RTX GPUs marked the beginning of a convergence between rasterization and ray tracing, producing the hybrid rendering architectures that dominate contemporary game engines.
Further Reading
- Real-Time Rendering, 4th Edition (Akenine-Möller et al.)
The comprehensive textbook on real-time rendering techniques. Covers the full rasterization pipeline and modern GPU architectures.
- Learn OpenGL (Joey de Vries)
Excellent free tutorial series that walks through building a rasterization renderer from scratch using OpenGL.
- Vulkan Tutorial
Step-by-step guide to the modern Vulkan graphics API, which gives explicit control over the GPU rasterization pipeline.
- A Trip Through the Graphics Pipeline (Fabian Giesen)
Legendary blog series explaining how the GPU hardware actually executes the rasterization pipeline at the silicon level.
- WebGPU Fundamentals
Introduction to the next-generation web graphics API that brings modern GPU programming to the browser.