Abstract: Recent developments in neural implicit functions have shown tremendous success in high-quality 3D shape reconstruction. However, most works divide the space into the inside and outside of the shape, which limits their representational power to single-layer, watertight shapes. This limitation leads to tedious data processing (converting non-watertight raw data into watertight form) as well as an inability to represent general object shapes in the real world. In this work, we propose a novel method to represent general shapes, including non-watertight shapes and shapes with multi-layer surfaces. We introduce the General Implicit Function for 3D Shape (GIFS), which models the relationships between every pair of points instead of the relationships between points and surfaces. Instead of dividing 3D space into predefined inside-outside regions, GIFS encodes whether two points are separated by any surface. Experiments on ShapeNet show that GIFS outperforms previous state-of-the-art methods in terms of reconstruction quality, rendering efficiency, and visual fidelity. The project page is available at this https URL.
GIFS
This repository is the official implementation for GIFS introduced in the paper:
GIFS: Neural Implicit Function for General Shape Representation
Jianglong Ye, Yuntao Chen, Naiyan Wang, Xiaolong Wang
CVPR, 2022
Project Page / ArXiv / Video
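To give a flavor of the core idea before diving into the code: a conventional occupancy network classifies a single query point as inside or outside, which presupposes a watertight shape, whereas GIFS classifies a pair of query points by whether any surface separates them. The sketch below is purely illustrative (the class name, layer sizes, and interface are our own invention, not the repository's actual model):

```python
import torch
import torch.nn as nn

# Illustrative sketch only, not the repository's actual API: instead of an
# occupancy query f(x) -> {inside, outside}, GIFS-style models answer a
# pairwise query g(p, q) -> "is there a surface separating p and q?",
# which requires no inside/outside definition and so handles
# non-watertight and multi-layer shapes.
class PairwiseSeparationNet(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU(),
                                     nn.Linear(feat_dim, feat_dim))
        self.classifier = nn.Sequential(nn.Linear(2 * feat_dim, feat_dim),
                                        nn.ReLU(), nn.Linear(feat_dim, 1))

    def forward(self, p, q):
        # p, q: (B, 3) query points; returns the logit that a surface separates them
        return self.classifier(torch.cat([self.encoder(p), self.encoder(q)], dim=-1))

net = PairwiseSeparationNet()
p, q = torch.rand(4, 3), torch.rand(4, 3)
print(torch.sigmoid(net(p, q)).shape)  # (4, 1) separation probabilities
```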
Visualization
| Multi-layer ball | Bus with seats inside |
| --- | --- |
| ![]() | ![]() |
Table of Contents
- Environment Setup
- Demo
- Data Preparation
- Run
- Citation
- Acknowledgement
Environment Setup
PyTorch with CUDA support is required; please follow the official installation guide.
(Our code is tested with Python 3.9, PyTorch 1.8.0, CUDA 11.1, and an RTX 3090.)
In addition, the following libraries are used in this project:
configargparse
trimesh
tqdm
numba
scipy # for data preparation
igl # for data preparation
wandb # (optional, for training)
Installation instructions:
pip install configargparse trimesh tqdm numba scipy wandb
conda install -c conda-forge igl
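After installing, a quick sanity check like the following (our suggestion, not part of the official setup) can confirm that PyTorch sees the GPU and the data-preparation libraries import cleanly:

```python
# Optional environment sanity check; the versions printed should roughly
# match the tested configuration above (yours may differ).
import torch
import trimesh  # noqa: F401
import igl      # noqa: F401  (used in data preparation)

print("torch:", torch.__version__)                   # e.g. 1.8.0
print("CUDA available:", torch.cuda.is_available())  # should be True
```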
Demo
Download our pretrained model from here and put it in the PROJECT_ROOT/experiments/demo/checkpoints
directory.
Run the demo with the following command:
python generate.py --config configs/demo.txt
Some meshes are generated in the PROJECT_ROOT/experiments/demo/evaluation/generation/ directory. You can inspect them in MeshLab. Note that we do not force the normals of neighboring faces to point in the same direction, so you may need to enable the back face -> double option in MeshLab for better visualization.
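If you prefer to inspect the outputs programmatically rather than in MeshLab, a small trimesh snippet like the one below works as well. The glob pattern and .obj extension are assumptions; adjust them to whatever files the generation script actually writes:

```python
import glob
import trimesh

# Hypothetical inspection loop over the demo outputs; file names and
# extensions depend on the generation script.
for path in glob.glob("experiments/demo/evaluation/generation/**/*.obj", recursive=True):
    mesh = trimesh.load(path, force="mesh")
    print(path, mesh.vertices.shape, mesh.faces.shape)
    # mesh.show()  # opens an interactive viewer if a backend (e.g. pyglet) is installed
```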
Data Preparation
To reproduce our experiments, please download the raw ShapeNetCore v1 dataset and unzip it into the PROJECT_ROOT/datasets/shapenet/data
directory.
Compile the Label Generation Code
Our ground-truth (GT) label generation code depends on CGAL, which can be installed on Ubuntu with the following command (or by other means):
apt install libcgal-dev
Compile the label generation code with the following commands:
cd PROJECT_ROOT/data_processing/intersection_detection/
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make
Alternatively, we provide a precompiled, statically linked binary for Ubuntu, which can be downloaded from here. After downloading, put the binary in the PROJECT_ROOT/data_processing/intersection_detection/build/ directory and make it executable:
chmod +x intersection
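For intuition about what this binary computes: the GT label for a pair of query points records whether the segment between them crosses any surface, which is exactly the pairwise relation GIFS learns. The trimesh-based sketch below mimics that test for a single point pair; the mesh path is hypothetical, the "separated if at least one crossing" convention is our simplification, and the real pipeline uses the compiled CGAL code, not trimesh:

```python
import numpy as np
import trimesh

# Illustrative sketch of a segment-vs-surface intersection test for one
# pair of query points (hypothetical path; the actual GT labels come from
# the compiled CGAL binary).
mesh = trimesh.load("datasets/shapenet/data/<synset>/<model>/model.obj", force="mesh")
p, q = np.random.uniform(-0.5, 0.5, (2, 3))
seg = q - p
length = np.linalg.norm(seg)
hits, _, _ = mesh.ray.intersects_location(
    ray_origins=p[None], ray_directions=(seg / length)[None])
# keep only intersections that lie between p and q (the ray extends past q)
n_hits = int((np.linalg.norm(hits - p, axis=1) <= length).sum())
separated = n_hits > 0
print(n_hits, separated)
```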
Data Processing
Run the following commands to process the data:
export PYTHONPATH=.
python dataprocessing/create_split.py --config configs/shapenet_cars.txt
python dataprocessing/preprocess.py --config configs/shapenet_cars.txt
Run
We provide a pretrained model here.
Training
Run the following command to train the model:
python ddp_train.py --config configs/shapenet_cars.txt
Generation
Run the following command to generate meshes:
python generate.py --config configs/shapenet_cars.txt
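To sanity-check the generated meshes against ground truth, one common measure is a symmetric Chamfer distance over sampled surface points. The snippet below (with hypothetical file paths) shows one way to compute it with trimesh and SciPy; it is not necessarily the exact evaluation protocol used in the paper:

```python
import trimesh
from scipy.spatial import cKDTree

# Hypothetical Chamfer-distance check between a ground-truth mesh and a
# generated mesh; paths and sample counts are placeholders.
gt = trimesh.load("path/to/ground_truth.obj", force="mesh")
pred = trimesh.load("path/to/generated.obj", force="mesh")
a = gt.sample(100_000)    # points sampled on the GT surface
b = pred.sample(100_000)  # points sampled on the generated surface
chamfer = cKDTree(b).query(a)[0].mean() + cKDTree(a).query(b)[0].mean()
print("Symmetric Chamfer distance:", chamfer)
```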
Citation
@inproceedings{ye2022gifs,
  title={GIFS: Neural Implicit Function for General Shape Representation},
  author={Ye, Jianglong and Chen, Yuntao and Wang, Naiyan and Wang, Xiaolong},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2022}
}
Acknowledgement
This code is based on NDF. Thanks for the great work!