👁️ IRIS: Inverse Rendering of Indoor Scenes from Low Dynamic Range Images

Meta · University of Illinois Urbana-Champaign · University of Maryland, College Park

IRIS estimates accurate material, lighting, and camera response function given a set of LDR images and scene geometry, enabling photorealistic and view-consistent relighting and object insertion.

Abstract

While numerous 3D reconstruction and novel-view synthesis methods allow for photorealistic rendering of a scene from multi-view images easily captured with consumer cameras, they bake illumination into their representations and fall short of supporting advanced applications such as material editing, relighting, and virtual object insertion. Reconstructing physically based material properties and lighting via inverse rendering promises to enable such applications. However, most inverse rendering techniques require high dynamic range (HDR) images as input, a setting that is inaccessible to most users. We present a method that recovers the physically based material properties and spatially varying HDR lighting of a scene from multi-view, low dynamic range (LDR) images.

We model the LDR image formation process in our inverse rendering pipeline and propose a novel optimization strategy for material, lighting, and a camera response model. We evaluate our approach on synthetic and real scenes, comparing against state-of-the-art inverse rendering methods that take either LDR or HDR input. Our method outperforms existing methods that take LDR images as input and enables highly realistic relighting and object insertion.
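As a rough illustration of the LDR image formation we model, the sketch below maps HDR radiance to an 8-bit LDR observation through exposure scaling, a camera response function (a plain gamma curve stands in for the learned CRF), clipping, and quantization. The function name and the gamma CRF are illustrative assumptions, not the actual model used in IRIS.

    import numpy as np

    def ldr_from_hdr(radiance_hdr, exposure=1.0, gamma=2.2):
        """Illustrative LDR image formation: scale HDR radiance by exposure,
        clip to [0, 1], apply a camera response function (a simple gamma curve
        as a stand-in for the learned CRF), and quantize to 8 bits."""
        scaled = np.clip(radiance_hdr * exposure, 0.0, 1.0)
        crf_applied = scaled ** (1.0 / gamma)  # placeholder CRF
        return np.round(crf_applied * 255.0).astype(np.uint8)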

Results


Material and Lighting Qualitative Comparisons

We use FIPT(HDR) as the reference, which takes HDR images as input.
The baseline FIPT*(LDR) takes LDR images and estimated emission masks as input.
The roughness σ and metallic m are visualized with the OpenCV MAGMA colormap (from left to right: 0 to 1).
For the HDR emission Le, we mask non-emitter regions in blue and show tonemapped emission elsewhere, so that it is not saturated and differences between methods remain visible (see the sketch below).
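For reference, the snippet below sketches how such visualizations can be produced. The Reinhard-style tonemapping and the function names are assumptions made for illustration, not the exact procedure behind the figures.

    import cv2
    import numpy as np

    def visualize_material(value_map):
        """Map a [0, 1] roughness or metallic map to the OpenCV MAGMA colormap."""
        u8 = np.clip(value_map * 255.0, 0, 255).astype(np.uint8)
        return cv2.applyColorMap(u8, cv2.COLORMAP_MAGMA)

    def visualize_emission(emission_hdr, emitter_mask):
        """Tonemap HDR emission (a simple Reinhard operator as a placeholder)
        and paint non-emitter pixels blue so differences stay visible."""
        tonemapped = emission_hdr / (1.0 + emission_hdr)
        vis = np.clip(tonemapped * 255.0, 0, 255).astype(np.uint8)
        vis[~emitter_mask] = (255, 0, 0)  # blue in OpenCV's BGR convention
        return vis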

[Interactive comparison viewer: material and lighting results on real scenes from FIPT and ScanNet++, shown for methods taking LDR input and methods taking HDR input.]

Material and Lighting Comparisons with Ground Truth

In addition to FIPT(HDR) and FIPT*(LDR), we also compare material and lighting estimation with NeILF, I2-SDF, and Li et al. 2022.
We thank the authors of I2-SDF and Li et al. 2022 for providing their results.
The roughness σ is visualized with the OpenCV MAGMA colormap (from left to right: 0 to 1).
For the HDR emission Le, we mask non-emitter regions in blue and show tonemapped emission elsewhere, so that it is not saturated and differences between methods remain visible.

[Interactive comparison viewer: material and lighting results with ground truth on synthetic scenes from FIPT, shown for methods taking LDR input and methods taking HDR input.]

Relighting and Object Insertion Comparisons

We use FIPT(HDR) as the reference, which takes HDR images as input.
The baseline FIPT*(LDR) takes LDR images and estimated emission masks as input.

[Interactive viewer: relighting and object insertion applications on real scenes from FIPT and ScanNet++, shown for methods taking LDR input and methods taking HDR input.]

Comparisons with Moving Light Source

We use FIPT(HDR) as the reference, which takes HDR images as input.
The baseline FIPT*(LDR) takes LDR images and estimated emission masks as input.

[Interactive comparison viewer: moving-light-source results on synthetic and real scenes from FIPT, shown for methods taking LDR input and methods taking HDR input.]
Method

Given multi-view posed LDR images and a surface mesh, our inverse rendering pipeline is divided into two main stages. In the initialization stage, we initialize the BRDF, extract a surface light field, and estimate emitter geometry. In the optimization stage, we first recover HDR radiance from the LDR input, then bake shading maps, and jointly optimize BRDF and CRF parameters. These three steps are repeated until convergence.
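The structural sketch below mirrors this description. Each sub-step is passed in as a callable so the skeleton stays self-contained; all names and signatures are illustrative placeholders rather than the released implementation.

    def run_pipeline(ldr_images, poses, mesh, steps, num_rounds=3):
        """Run the two-stage pipeline; `steps` is a dict of callables, one per sub-step."""
        # --- Initialization stage ---
        brdf = steps["init_brdf"](mesh)
        light_field = steps["extract_surface_light_field"](ldr_images, poses, mesh)
        emitters = steps["estimate_emitter_geometry"](light_field, mesh)
        crf = steps["init_crf"]()

        # --- Optimization stage: three steps, repeated until convergence ---
        for _ in range(num_rounds):
            hdr = steps["recover_hdr_radiance"](ldr_images, crf)               # 1. LDR -> HDR radiance
            shading = steps["bake_shading_maps"](mesh, emitters, hdr)          # 2. bake shading maps
            brdf, crf = steps["optimize_brdf_and_crf"](brdf, crf, shading, ldr_images)  # 3. joint BRDF + CRF update
        return brdf, emitters, crf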