Code for paper "NeuRIS: Neural Reconstruction of Indoor Scenes Using Normal Priors"

Abstract: Reconstructing 3D indoor scenes from 2D images is an important task in many computer vision and graphics applications. A main challenge in this task is that large texture-less areas in typical indoor scenes make existing methods struggle to produce satisfactory reconstruction results. We propose a new method, named NeuRIS, for high-quality reconstruction of indoor scenes. The key idea of NeuRIS is to integrate estimated normals of indoor scenes as a prior in a neural rendering framework for reconstructing large texture-less shapes and, importantly, to do this in an adaptive manner so that irregular shapes with fine details can also be reconstructed. Specifically, we evaluate the faithfulness of the normal priors on the fly by checking the multi-view consistency of the reconstruction during the optimization process. Only the normal priors accepted as faithful are utilized for 3D reconstruction, which typically happens in regions of smooth shapes, possibly with weak texture. For regions with small objects or thin structures, where the normal priors are usually unreliable, we rely only on the visual features of the input images, since such regions typically contain relatively rich visual features (e.g., shade changes and boundary contours). Extensive experiments show that NeuRIS significantly outperforms the state-of-the-art methods in terms of reconstruction quality.



Project page | Paper | Data
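
The adaptive use of normal priors described in the abstract can be illustrated with a minimal sketch (hypothetical, not the repository's actual implementation): a prior is used for supervision only where it agrees with the normal currently rendered from the neural SDF within an angular threshold.

```python
import numpy as np

def faithful_prior_mask(rendered_normals, prior_normals, angle_thresh_deg=30.0):
    """Accept a normal prior only where it is consistent with the normal
    currently rendered from the neural SDF (per-pixel check).

    rendered_normals, prior_normals: (N, 3) arrays of unit vectors.
    Returns a boolean mask of shape (N,): True = use the prior for supervision.
    """
    # Cosine similarity between rendered and predicted normals.
    cos = np.sum(rendered_normals * prior_normals, axis=-1)
    cos = np.clip(cos, -1.0, 1.0)
    angle_deg = np.degrees(np.arccos(cos))
    return angle_deg < angle_thresh_deg

# Example: the first prior agrees with the rendering, the second does not.
render = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
prior = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
mask = faithful_prior_mask(render, prior)  # [True, False]
```

In regions where the mask is False (typically small objects or thin structures), the loss would fall back to the photometric/visual-feature terms only.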


Data preparation

Scene data used in NeuRIS can be downloaded from here; extract the scene data into the folder dataset/indoor. The scene data used in ManhattanSDF is also included for convenient comparison. The data is organized as follows:

|-- cameras_sphere.npz   # camera parameters
|-- image
    |-- 0000.png        # target image for each view
    |-- 0001.png
|-- depth
    |-- 0000.png        # target depth for each view
    |-- 0001.png
|-- pose
    |-- 0000.txt        # camera pose for each view
    |-- 0001.txt
|-- pred_normal
    |-- 0000.npz        # predicted normal for each view
    |-- 0001.npz
|-- xxx.ply		# GT mesh or point cloud from MVS
|-- trans_n2w.txt       # transformation matrix from normalized coordinates to world coordinates

Refer to the file for more details about data preparation for ScanNet or private data.
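
A minimal sketch of reading one view from the layout above (the key names inside the .npz archives are assumptions and may differ from the actual format):

```python
import os
import numpy as np

def load_view(scene_dir, view_id):
    """Load the camera pose, image path, and predicted normal for one view,
    following the directory layout shown above. The structure of the
    pred_normal .npz files is an illustrative guess."""
    vid = f"{view_id:04d}"
    # 4x4 camera pose matrix, one plain-text file per view.
    pose = np.loadtxt(os.path.join(scene_dir, "pose", f"{vid}.txt"))
    image_path = os.path.join(scene_dir, "image", f"{vid}.png")
    # Predicted normal map; take the first array stored in the archive.
    normal_npz = np.load(os.path.join(scene_dir, "pred_normal", f"{vid}.npz"))
    normal = normal_npz[normal_npz.files[0]]
    return pose, image_path, normal
```

cameras_sphere.npz (camera parameters) and trans_n2w.txt (normalized-to-world transform) can be loaded the same way with np.load and np.loadtxt.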


Setup

conda create -n neuris python=3.8
conda activate neuris
conda install pytorch=1.9.0 torchvision torchaudio cudatoolkit=10.2 -c pytorch
pip install -r requirements.txt


Training

python ./ --mode train --conf ./confs/neuris.conf --gpu 0 --scene_name scene0625_00

Mesh extraction

python --mode validate_mesh --conf <config_file> --is_continue


Evaluation

python ./ --mode eval_3D_mesh_metrics
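
For reference, 3D mesh metrics of this kind usually include accuracy (mean distance from predicted points to ground truth), completeness (the reverse direction), and their mean, the Chamfer distance. A minimal sketch with SciPy using these standard definitions (not necessarily identical to the repository's evaluation script):

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_metrics(pred_pts, gt_pts):
    """Compute standard point-cloud reconstruction metrics.

    pred_pts, gt_pts: (N, 3) arrays of points sampled from the
    predicted and ground-truth meshes, respectively.
    """
    # Nearest-neighbor distance from each predicted point to the GT cloud.
    d_pred_to_gt, _ = cKDTree(gt_pts).query(pred_pts)
    # Nearest-neighbor distance from each GT point to the predicted cloud.
    d_gt_to_pred, _ = cKDTree(pred_pts).query(gt_pts)
    accuracy = d_pred_to_gt.mean()
    completeness = d_gt_to_pred.mean()
    chamfer = 0.5 * (accuracy + completeness)
    return accuracy, completeness, chamfer
```

Precision/recall at a distance threshold (and their harmonic mean, the F-score) can be derived from the same two distance arrays.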


Citation

Please cite as below if you find this repository helpful to your project:

@misc{neuris,
      title={NeuRIS: Neural Reconstruction of Indoor Scenes Using Normal Priors},
      author={Wang, Jiepeng and Wang, Peng and Long, Xiaoxiao and Theobalt, Christian and Komura, Taku and Liu, Lingjie and Wang, Wenping},
      publisher={arXiv},
      year={2022}
}


Paper preview: Aug 21, 2022