Inverse rendering seeks to recover 3D geometry, surface material, and lighting from captured images, enabling advanced applications such as novel-view synthesis, relighting, and virtual object insertion. However, most existing techniques rely on high dynamic range (HDR) images as input, limiting accessibility for general users. In response, we introduce IRIS, an inverse rendering framework that recovers the physically based material, spatially-varying HDR lighting, and camera response functions from multi-view, low-dynamic-range (LDR) images. By eliminating the dependence on HDR input, we make inverse rendering technology more accessible.
We evaluate our approach on real-world and synthetic scenes and compare it with state-of-the-art methods. Our results show that IRIS effectively recovers HDR lighting, accurate material, and plausible camera response functions, supporting photorealistic relighting and object insertion.
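The core difficulty with LDR input is that pixel values are a clipped, nonlinearly encoded version of scene radiance. As a minimal illustration of this (the gamma-style response and the function below are our own assumptions, not the exact formulation used by IRIS), an LDR pixel can be modeled as exposure-scaled HDR radiance that is clipped and passed through a camera response function:

    import numpy as np

    def ldr_from_hdr(hdr, exposure=1.0, gamma=2.2):
        """Illustrative LDR image formation (an assumption, not IRIS's exact model):
        scale HDR radiance by exposure, clip highlights, and apply a gamma-style
        camera response function (CRF)."""
        linear = np.clip(hdr * exposure, 0.0, 1.0)  # bright emitters saturate here
        return np.power(linear, 1.0 / gamma)        # unknown nonlinear CRF

    # Recovering HDR lighting from LDR therefore requires estimating the CRF and
    # reasoning about clipped (saturated) pixels jointly with the material.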
Applications
Our videos show that the new light sources are reflected in specular surfaces (e.g., the whiteboard and the mirror).

Real Scenes
We compare methods that take LDR images as input with methods that take HDR images as input.
The baseline FIPT* takes LDR images and our estimated emission masks as input.
FIPT takes HDR images as input and serves as a reference; however, HDR images are not available for some of the real scenes.
Synthetic Scenes
To evaluate the quality of inverse rendering, we compare IRIS with multiple baselines on synthetic scenes from FIPT, where ground-truth material, geometry, and lighting are available.
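As a rough sketch of how such ground-truth comparisons can be scored (the PSNR metric and the albedo example below are illustrative assumptions, not necessarily the exact protocol used in the paper):

    import numpy as np

    def psnr(estimate, reference, peak=1.0):
        """Peak signal-to-noise ratio between an estimated map and its ground truth."""
        mse = np.mean((np.asarray(estimate) - np.asarray(reference)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    # Example usage with hypothetical albedo maps stored as float arrays in [0, 1]:
    # print(psnr(pred_albedo, gt_albedo))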
Method
Given multi-view posed LDR images, our inverse rendering pipeline is divided into two main stages. In the initialization stage, we initialize the BRDF, extract a surface light field, and estimate emitter geometry. In the optimization stage, we first recover HDR radiance from the LDR input, then bake shading maps, and jointly optimize BRDF and CRF parameters. These three steps are repeated until convergence.
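A minimal structural sketch of this loop, assuming hypothetical helper functions for each named step (placeholders, not the actual IRIS implementation):

    def run_pipeline(ldr_images, poses, geometry, max_rounds=10, tol=1e-4):
        """Sketch of the two-stage pipeline: initialization, then alternating optimization.
        All helpers below are hypothetical placeholders for the steps named in the text."""
        # Initialization stage.
        brdf = initialize_brdf(geometry)                               # placeholder
        light_field = extract_surface_light_field(ldr_images, poses)  # placeholder
        emitters = estimate_emitter_geometry(light_field, geometry)   # placeholder
        crf = initial_crf_guess()                                      # placeholder

        # Optimization stage: repeat the three steps until convergence.
        prev_loss = float("inf")
        for _ in range(max_rounds):
            hdr_radiance = recover_hdr_radiance(ldr_images, crf)          # step 1
            shading = bake_shading_maps(hdr_radiance, emitters, geometry) # step 2
            brdf, crf, loss = optimize_brdf_and_crf(brdf, crf, shading,   # step 3
                                                    ldr_images)
            if abs(prev_loss - loss) < tol:
                break
            prev_loss = loss
        return brdf, crf, emitters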
References
Wu, Liwen, et al. "Factorized inverse path tracing for efficient and accurate material-lighting estimation." ICCV, 2023.
Yao, Yao, et al. "NeILF: Neural incident light field for physically-based material estimation." ECCV, 2022.
Zhu, Jingsen, et al. "I^2-SDF: Intrinsic indoor scene reconstruction and editing via raytracing in neural SDFs." CVPR, 2023.
Li, Zhengqin, et al. "Physically-based editing of indoor scene lighting from a single image." ECCV, 2022.