redner

v0.4.1

Differentiable path tracer with correct edge-sampling gradients for physically based inverse rendering

Differentiable · Path Tracing · Ray Tracing
C++/Python
MIT
Maintenance
GPU: CUDA
CPU
Stars: 1.2k
Latest Release: 0.4.1 (Jan 2023)
Contributors: 18
Forks: 160
At a Glance
Technique
Differentiable, Path Tracing, Ray Tracing
Language
C++/Python
License
MIT
Platforms
Linux
macOS
Windows
GPU Support
Yes (CUDA)
CPU Support
Yes
Scene Formats
OBJ, Mitsuba XML, Programmatic
Output Formats
Tensor, PNG, EXR
First Release
Nov 2018
Latest Release
0.4.1 — Jan 2023


Overview

Best for

Researchers who need physically correct differentiable gradients through full global illumination effects including silhouette edges — particularly for inverse rendering, material estimation, and lighting reconstruction papers

Not ideal for

Large-scale training loops requiring fast rendering throughput or real-time performance — use nvdiffrast or PyTorch3D when rasterization-level accuracy is sufficient and speed is the priority

Strengths

  • The first fully differentiable, physically based path tracer to handle edge and silhouette gradients correctly — where rasterization-based methods produce zero or incorrect gradients at object boundaries, redner's edge-sampling algorithm computes mathematically correct derivatives
  • Full global illumination with differentiable gradients through soft shadows, interreflections, color bleeding, and caustics, enabling physically accurate inverse rendering that accounts for complex light transport phenomena
  • Runs on both CPU and GPU — unlike nvdiffrast and most of Kaolin, redner has a fully functional multi-threaded CPU rendering path, making it accessible on hardware without NVIDIA GPUs including macOS and AMD systems
  • Clean mathematical foundation based on the seminal edge-sampling papers by Tzu-Mao Li, making it a valuable reference implementation for academic work on differentiable rendering theory
  • Permissive MIT license with no commercial restrictions, and a stable, well-understood codebase that continues to be cited as a baseline in differentiable rendering research

Limitations

  • Significantly slower per iteration than rasterization-based differentiable renderers (nvdiffrast, PyTorch3D) — path tracing is inherently more expensive, limiting its use in training loops requiring millions of iterations at high throughput
  • In maintenance mode with infrequent updates — the project is stable but no longer under active feature development, and newer approaches (Mitsuba 3's differentiable mode) have superseded it for many use cases
  • Limited scene complexity in practice — while it supports arbitrary geometry, the per-iteration cost of differentiable path tracing makes it impractical for large scenes with dense geometry and complex multi-bounce lighting
  • Smaller community and ecosystem than PyTorch3D or Kaolin — fewer tutorials, examples, and integration guides, which increases the onboarding effort for new users

Background

redner is a differentiable renderer developed by Tzu-Mao Li (MIT/UC San Diego) that solves a fundamental problem in differentiable rendering: computing correct gradients at object silhouettes and geometric discontinuities. While rasterization-based differentiable renderers (nvdiffrast, PyTorch3D) produce zero or incorrect gradients wherever object silhouettes create visibility discontinuities within a pixel, redner uses an edge-sampling algorithm that analytically accounts for these discontinuities, producing mathematically correct derivatives even at silhouette edges.
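The silhouette problem is easy to reproduce in one dimension. The sketch below is illustrative only — it is not redner's implementation. It treats a pixel as the interval [0, 1] with an occluder edge at position theta, so true pixel coverage is I(theta) = theta. Differentiating through fixed sample points sees only a step function and returns a zero gradient; the boundary term that edge sampling estimates recovers the correct derivative.

```python
def point_sampled_intensity(theta, samples):
    # Average visibility at fixed sample points: a step function of theta,
    # so its derivative is zero almost everywhere.
    return sum(1.0 if x < theta else 0.0 for x in samples) / len(samples)

def edge_sampled_gradient(f_inside, f_outside):
    # 1-D analogue of the boundary term edge sampling estimates:
    # d/dtheta of the pixel integral = (f_inside - f_outside) at the moving edge.
    return f_inside - f_outside

# 100 fixed, stratified sample points in the pixel.
samples = [(i + 0.5) / 100 for i in range(100)]
theta, eps = 0.5, 1e-6

# Finite difference through the point-sampled estimator: zero, because no
# fixed sample point crosses the edge under a tiny perturbation of theta.
fd_grad = (point_sampled_intensity(theta + eps, samples)
           - point_sampled_intensity(theta - eps, samples)) / (2 * eps)

analytic = edge_sampled_gradient(1.0, 0.0)  # true d(coverage)/d(theta) = 1

print(fd_grad)   # 0.0 — differentiating through samples misses the silhouette
print(analytic)  # 1.0 — the edge term gives the correct derivative
```

The same split underlies the method: the derivative of the rendering integral decomposes into an interior term (handled by ordinary autodiff) and a boundary term at visibility discontinuities, which redner estimates by sampling points on the edges themselves.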

Unlike the rasterization-based tools common in the differentiable rendering ecosystem, redner is a full physically based path tracer that computes global illumination — soft shadows, interreflections, color bleeding, and caustics all participate in the gradient computation. This makes redner uniquely suited for inverse rendering tasks where physical accuracy matters: material estimation from photographs, lighting reconstruction, and geometry optimization in scenes with complex light transport.

redner supports both CPU and GPU execution paths. The CPU path uses multi-threaded rendering and is fully functional without an NVIDIA GPU, making redner more accessible than GPU-only alternatives. The renderer integrates with PyTorch through a custom autograd function, accepts OBJ meshes and Mitsuba XML scene files, and supports physically based materials including microfacet BSDFs, depth of field, and HDRI environment lighting. The project is in maintenance mode — while the codebase is stable and functional, active feature development has slowed as the author has moved to newer projects.
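The integration pattern — a forward render paired with a hand-derived backward pass, rather than autodiff straight through the sampler — can be sketched with a toy scalar "renderer". All names below are hypothetical, and the scene is the 1-D edge-coverage example; redner's real binding is a PyTorch custom autograd function, not this class.

```python
class RenderFunction:
    """Toy forward/backward pair. Forward renders pixel coverage
    I(theta) = theta for an edge at theta; backward returns the
    edge-sampling derivative (f_inside - f_outside = 1) instead of
    differentiating through the discontinuous sampling process."""

    @staticmethod
    def forward(theta):
        return theta  # analytic pixel coverage of the 1-D "scene"

    @staticmethod
    def backward(grad_output):
        return grad_output * 1.0  # chain rule through the edge term

# Inverse rendering by gradient descent: recover the edge position
# that matches a target coverage of 0.8.
theta, lr, target = 0.2, 0.5, 0.8
for _ in range(50):
    img = RenderFunction.forward(theta)
    loss_grad = 2.0 * (img - target)          # d/d(img) of (img - target)^2
    theta -= lr * RenderFunction.backward(loss_grad)

print(round(theta, 3))  # 0.8 — the edge position is recovered
```

In the real library the renderer's forward pass runs the path tracer and the backward pass runs the edge-sampling gradient estimator, so gradients flow into ordinary PyTorch optimizers.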

Quick Start

pip install redner-gpu   # CUDA build
pip install redner       # CPU-only build (no NVIDIA GPU required)

Performance Benchmarks

No benchmark data available for redner yet.

Benchmarks will be added as more renderers are tested across our standard scene suite.

Learn about our methodology