3D Gaussian Splatting
v1.0
Original INRIA implementation of real-time radiance field rendering using 3D Gaussian primitives
Overview
Best for
Researchers reproducing original 3DGS results, benchmarking new Gaussian splatting methods, or seeking the canonical reference implementation of the splatting paradigm
Not ideal for
Commercial applications requiring permissive licensing, production pipelines needing robust error handling, or projects targeting non-NVIDIA hardware
Strengths
- Pioneered the 3D Gaussian Splatting technique that fundamentally changed real-time novel view synthesis — the seminal SIGGRAPH 2023 paper in this space with thousands of citations
- Achieves over 100 FPS rendering at 1080p resolution, orders of magnitude faster than NeRF-based volumetric methods while matching or exceeding their visual quality
- Explicit, interpretable 3D representation (point cloud of Gaussians with color, opacity, and covariance) that can be directly inspected, manipulated, and exported
- Training converges in minutes on a single GPU, versus the hours NeRF methods need to reach comparable quality, enabling rapid iteration during research
- The canonical reference implementation that all subsequent Gaussian splatting works build upon — unmatched ecosystem compatibility and reproducibility for benchmarking
Limitations
- Requires NVIDIA GPU with CUDA support — no AMD, Intel, or Apple Silicon compatibility
- Research-grade codebase with minimal error handling, limited documentation, and no formal versioning or release cycle
- Memory usage scales with scene complexity — large outdoor scenes can require 12-24 GB VRAM during training and produce multi-gigabyte PLY files
- No built-in support for dynamic scenes, relighting, or material decomposition — captures appearance only under fixed lighting conditions
- The custom Inria license restricts commercial use, unlike permissively-licensed alternatives such as gsplat (Apache-2.0)
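To put the multi-gigabyte checkpoint figure in perspective, here is a back-of-envelope estimate assuming the stock 3DGS PLY layout (degree-3 spherical harmonics stored as float32 attributes); the attribute counts below reflect the commonly used export format, not an official size specification:

```python
# Attributes per Gaussian in the typical 3DGS PLY export:
# 3 position + 3 normal + 48 SH color coefficients (degree 3, 3 channels)
# + 1 opacity + 3 scale + 4 rotation quaternion = 62 float32 values.
FLOATS_PER_GAUSSIAN = 3 + 3 + 48 + 1 + 3 + 4
BYTES_PER_GAUSSIAN = FLOATS_PER_GAUSSIAN * 4  # 248 bytes

def ply_size_gb(num_gaussians: int) -> float:
    """Approximate on-disk size in GB, ignoring the small PLY header."""
    return num_gaussians * BYTES_PER_GAUSSIAN / 1e9

# A large outdoor scene with several million Gaussians lands in the GB range:
print(f"{ply_size_gb(6_000_000):.2f} GB")  # ~1.49 GB
```

Millions of Gaussians are common for unbounded outdoor captures, which is how multi-gigabyte files arise.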
Background
3D Gaussian Splatting (3DGS) is the reference implementation of the seminal SIGGRAPH 2023 paper by Kerbl et al. from INRIA, which introduced a fundamentally new approach to novel view synthesis. Instead of representing scenes as neural radiance fields queried via volume rendering, 3DGS represents scenes as collections of 3D Gaussian primitives — each defined by a position, covariance matrix, opacity, and spherical harmonic color coefficients — that are rasterized via a custom differentiable tile-based splatting renderer.
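The 2D screen-space footprint of each primitive comes from projecting its 3D covariance through the camera. A minimal NumPy sketch of that step, assuming the Gaussian's mean is already expressed in camera coordinates (the full pipeline also applies the world-to-camera rotation before the Jacobian):

```python
import numpy as np

def covariance_3d(scale, quat):
    """Sigma = R S S^T R^T from a per-Gaussian scale vector and rotation quaternion."""
    w, x, y, z = quat / np.linalg.norm(quat)
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    S = np.diag(scale)
    return R @ S @ S.T @ R.T

def project_covariance(cov3d, mean_cam, fx, fy):
    """EWA splatting: Sigma' = J Sigma J^T, with J the affine approximation
    of the perspective projection at the Gaussian's camera-space mean."""
    tx, ty, tz = mean_cam
    J = np.array([
        [fx / tz, 0.0,     -fx * tx / tz**2],
        [0.0,     fy / tz, -fy * ty / tz**2],
    ])
    return J @ cov3d @ J.T
```

The resulting 2x2 covariance defines the elliptical splat that the tile-based rasterizer alpha-blends, front to back, into each 16x16 pixel tile.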
This explicit, point-based representation enables real-time rendering at over 100 FPS at 1080p resolution, a dramatic improvement over NeRF-based methods that typically require seconds per frame. The training process optimizes Gaussian parameters through gradient descent with adaptive densification and pruning, converging in minutes rather than the hours NeRF methods need to reach comparable quality. The approach achieves state-of-the-art visual quality on standard benchmarks while enabling real-time interaction.
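The adaptive density control can be sketched as follows; this is a simplified illustration with made-up array inputs, not the repository's API — the paper's version additionally resets opacities periodically and replaces each split parent with two children:

```python
import numpy as np

def densify_and_prune(means, scales, opacities, grad_norms,
                      grad_thresh=0.0002, min_opacity=0.005, scale_split=0.01):
    """Sketch of 3DGS adaptive density control (thresholds are illustrative)."""
    # 1. Prune: drop Gaussians that have become nearly transparent.
    keep = opacities > min_opacity
    means, scales, opacities, grads = (
        means[keep], scales[keep], opacities[keep], grad_norms[keep])

    # 2. Densify where the accumulated view-space positional gradient is large.
    hot = grads > grad_thresh
    small = hot & (scales.max(axis=1) <= scale_split)  # under-reconstructed: clone
    large = hot & (scales.max(axis=1) > scale_split)   # over-reconstructed: split

    cloned = means[small]
    # Split children are sampled near the parent, with shrunken scale (factor 1.6
    # follows the paper's choice).
    split = means[large] + np.random.randn(int(large.sum()), 3) * scales[large]
    means = np.vstack([means, cloned, split])
    scales = np.vstack([scales, scales[small], scales[large] / 1.6])
    opacities = np.concatenate([opacities, opacities[small], opacities[large]])
    return means, scales, opacities
```

Cloning adds capacity in under-reconstructed regions; splitting breaks up over-large Gaussians that blur fine detail.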
The original implementation includes a CUDA-based differentiable rasterizer and a SIBR-based real-time viewer. While the codebase is research-grade — prioritizing correctness and reproducibility over production robustness — it has become the canonical reference for the rapidly growing Gaussian splatting ecosystem. Hundreds of follow-up papers build upon this codebase, and it remains the standard baseline for benchmarking new Gaussian splatting methods.
Quick Start
git clone https://github.com/graphdeco-inria/gaussian-splatting --recursive && cd gaussian-splatting && pip install -e submodules/diff-gaussian-rasterization submodules/simple-knn
Related Renderers
Community & Resources
Performance Benchmarks
No benchmark data available for 3D Gaussian Splatting yet.
Benchmarks will be added as more renderers are tested across our standard scene suite.
Learn about our methodology