Neural radiance fields have achieved remarkable performance in modeling the appearance of 3D scenes. However, existing approaches still struggle with the view-dependent appearance of glossy surfaces, especially under the complex lighting of indoor environments. Unlike prior methods, which typically assume distant lighting such as an environment map, we propose a learnable Gaussian directional encoding to better model view-dependent effects under near-field lighting conditions. Importantly, our new directional encoding captures the spatially-varying nature of near-field lighting and emulates the behavior of prefiltered environment maps. As a result, it enables the efficient evaluation of preconvolved specular color at any 3D location with varying roughness coefficients. We further introduce a data-driven geometry prior that helps alleviate the shape-radiance ambiguity in reflection modeling. We show that our Gaussian directional encoding and geometry prior significantly improve the modeling of challenging specular reflections in neural radiance fields, which helps decompose appearance into more physically meaningful components.
Here we present an overview of our model. Two key components enable the model to capture photorealistic specular reflections: 1) a novel 3D Gaussian directional encoding module that transforms reflected rays into spatially-varying embeddings, which are further decoded into specular colors; and 2) monocular normal supervision, which effectively resolves the shape-radiance ambiguity inherent in specular reflection modeling.
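To make the directional encoding idea concrete, the following is a minimal sketch (not the paper's implementation) of how a set of isotropic 3D Gaussians could be integrated along a reflected ray to produce a spatially-varying embedding. The function name, the isotropic parameterization, and the `roughness` knob that widens each Gaussian to emulate prefiltering are all illustrative assumptions; the closed-form ray integral itself follows from standard Gaussian integration via the complementary error function.

```python
import numpy as np
from math import erfc, sqrt, pi

def gaussian_ray_features(x, d, means, sigmas, roughness=0.0):
    """Integrate isotropic 3D Gaussians along a ray (hypothetical sketch).

    x: (3,) ray origin (the reflecting surface point).
    d: (3,) reflected ray direction.
    means: (N, 3) learnable Gaussian centers; sigmas: (N,) standard deviations.
    `roughness` is an assumed knob: widening each Gaussian blurs the
    response, loosely emulating a prefiltered environment map.
    Returns an (N,)-dim embedding that a small MLP could decode to color.
    """
    d = d / np.linalg.norm(d)
    feats = []
    for mu, s in zip(means, sigmas):
        s_eff = s * (1.0 + roughness)   # wider kernel ~ blurrier reflection
        delta = x - mu
        b = float(d @ delta)            # projection of (x - mu) onto the ray
        c = float(delta @ delta)        # squared distance from origin to center
        # Closed form of  ∫_0^∞ exp(-|x + t·d - mu|^2 / (2 s_eff^2)) dt:
        # complete the square in t, then integrate the Gaussian tail via erfc.
        feats.append(
            np.exp(-(c - b * b) / (2.0 * s_eff**2))
            * s_eff * sqrt(pi / 2.0)
            * erfc(b / (s_eff * sqrt(2.0)))
        )
    return np.asarray(feats)
```

Because the integral depends on the ray origin `x` as well as the direction `d`, the resulting embedding varies spatially, unlike a pure directional encoding of `d` alone; this is the property the overview attributes to the encoding under near-field lighting.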
@misc{ma2023specnerf,
  title={SpecNeRF: Gaussian Directional Encoding for Specular Reflections},
  author={Li Ma and Vasu Agrawal and Haithem Turki and Changil Kim and Chen Gao and Pedro Sander and Michael Zollhöfer and Christian Richardt},
  year={2023},
  eprint={2312.13102},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}