Dr.Jit
v1.0.0
Just-in-time compiler for vectorized and differentiable computation — the computational backbone of Mitsuba 3
Overview
Best for
Researchers building custom differentiable rendering, physics simulation, or optimization algorithms who need compiled-code performance from Python — particularly those already using or planning to use Mitsuba 3
Not ideal for
Users who want a ready-to-use differentiable renderer — Dr.Jit is computational infrastructure, not an application. Use Mitsuba 3 (built on Dr.Jit) or PyTorch3D for a higher-level rendering experience
Strengths
- JIT compilation transforms Python numerical code into optimized CUDA or LLVM kernels automatically, achieving compiled-code performance from Python-level programs without manual kernel writing or framework-specific tensor operations
- Supports forward and reverse-mode automatic differentiation of arbitrary programs — not limited to predefined rendering operations, enabling differentiation of custom integrators, BSDFs, sampling strategies, and physics simulations
- Multiple backend targets from a single codebase: scalar (debugging with readable outputs), LLVM (vectorized CPU with SIMD), and CUDA (GPU), selectable at runtime without any source code modifications
- The proven computational engine behind Mitsuba 3, demonstrated at scale for differentiable rendering of complex physically based scenes with spectral light transport, volumetric effects, and gradient-based optimization
- Clean NumPy-like array programming interface that feels natural to Python scientific computing users, with lazy evaluation and kernel fusion for memory-efficient computation of large computational graphs
Limitations
- Not a renderer — Dr.Jit produces no images on its own. Users who want to render must build a complete rendering pipeline on top of Dr.Jit or use Mitsuba 3, which does this work for them
- Incompatible with PyTorch's autograd ecosystem — programs must use Dr.Jit arrays throughout, not PyTorch tensors, and bridging between the two requires explicit conversion with performance overhead and loss of gradient tracking
- Steep learning curve due to JIT tracing semantics — understanding lazy evaluation, kernel compilation boundaries, and Dr.Jit's memory model requires significant investment, especially for users coming from eager-execution frameworks like PyTorch
- Smaller community and ecosystem than PyTorch — fewer tutorials, Stack Overflow answers, and community examples. Most practical usage is through Mitsuba 3 rather than direct Dr.Jit programming
Background
Dr.Jit is a just-in-time (JIT) compiler for vectorized and differentiable numerical computation, developed at EPFL by Wenzel Jakob and collaborators. It is not a renderer — it is the computational infrastructure layer that powers Mitsuba 3's differentiable rendering capabilities. Dr.Jit transforms Python or C++ numerical programs into optimized machine code targeting multiple backends: scalar (for debugging), LLVM (for vectorized CPU execution), and CUDA (for GPU execution), all from a single source without code changes.
The system's core contribution is enabling efficient automatic differentiation of arbitrary numerical programs, not just predefined rendering operations. Where PyTorch's autograd is limited to its own tensor operations and computational graph, Dr.Jit can differentiate custom algorithms — including physically based rendering integrators, custom BSDF evaluation, Monte Carlo sampling strategies, and physics simulations — at compiled-code performance levels. It supports both forward-mode and reverse-mode differentiation, with lazy evaluation and kernel fusion for memory-efficient computation.
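To make the reverse-mode mechanism concrete, here is a minimal sketch of reverse-mode automatic differentiation in plain Python — the same idea Dr.Jit applies to arbitrary traced programs, but at compiled-kernel speed. All class and function names here are illustrative, not Dr.Jit API:

```python
import math

class Var:
    """A value that records its parents and the local partial derivatives."""
    def __init__(self, value, parents=()):
        self.value = value        # primal value
        self.parents = parents    # pairs of (parent Var, local partial)
        self.grad = 0.0           # accumulated adjoint

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

def sin(x):
    return Var(math.sin(x.value), [(x, math.cos(x.value))])

def backward(out):
    # Topologically order the graph, then push adjoints from output to inputs
    order, seen = [], set()
    def visit(v):
        if id(v) not in seen:
            seen.add(id(v))
            for parent, _ in v.parents:
                visit(parent)
            order.append(v)
    visit(out)
    out.grad = 1.0
    for node in reversed(order):
        for parent, partial in node.parents:
            parent.grad += node.grad * partial

x = Var(0.5)
y = sin(x * x)            # y = sin(x^2), so dy/dx = 2x * cos(x^2)
backward(y)
print(x.grad)             # ≈ 2 * 0.5 * cos(0.25)
```

Forward mode propagates derivatives alongside the primal computation instead; Dr.Jit supports both directions over the same traced program.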
Dr.Jit provides a NumPy-like array programming interface in Python, making it familiar to scientific computing users. Its arrays track computational graphs across operations and compile them into fused kernels on demand. While its primary use case is as the engine behind Mitsuba 3, Dr.Jit is a standalone library that can be used for any differentiable computation task — optimization, physics simulation, inverse problems, or custom rendering algorithms. The system requires a conceptual understanding of JIT tracing, lazy evaluation, and kernel compilation that sets it apart from eager-execution frameworks.
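The lazy-evaluation strategy described above can be sketched in a few lines of plain Python: operations build an expression graph, and no arithmetic runs until a result is demanded, at which point the whole graph is evaluated in one pass. This is a toy illustration of the concept, not Dr.Jit's actual API or its kernel-fusion machinery:

```python
class Lazy:
    """A deferred elementwise expression over Python lists."""
    def __init__(self, op, *args):
        self.op, self.args = op, args
        self._cache = None        # result, once evaluated

    def __add__(self, other): return Lazy("add", self, other)
    def __mul__(self, other): return Lazy("mul", self, other)

    def eval(self):
        # Evaluate the whole graph on demand, caching intermediate results
        if self._cache is None:
            if self.op == "leaf":
                self._cache = self.args[0]
            else:
                a, b = (arg.eval() for arg in self.args)
                if self.op == "add":
                    self._cache = [ai + bi for ai, bi in zip(a, b)]
                else:
                    self._cache = [ai * bi for ai, bi in zip(a, b)]
        return self._cache

def leaf(values):
    return Lazy("leaf", values)

x = leaf([1.0, 2.0, 3.0])
y = (x + x) * x           # builds a graph; no arithmetic has run yet
print(y.eval())           # evaluation happens on demand -> [2.0, 8.0, 18.0]
```

Where this sketch walks the graph in interpreted Python, Dr.Jit instead compiles the traced graph into a single fused LLVM or CUDA kernel, avoiding intermediate arrays entirely.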
Quick Start
pip install drjit
Performance Benchmarks
No benchmark data available for Dr.Jit yet.
Benchmarks will be added as more renderers are tested across our standard scene suite.