Code for paper "Learning Deep Implicit Functions for 3D Shapes with Dynamic Code Clouds"

Abstract: Deep Implicit Function (DIF) has gained popularity as an efficient 3D shape representation. To capture geometric details, current methods usually learn DIF with local latent codes, which discretize the space into a regular 3D grid (or octree) and store local codes at grid points (or octree nodes). Given a query point, the local feature is computed by interpolating its neighboring local codes with their positions. However, the local codes are constrained to discrete and regular positions such as grid points, which makes the code positions difficult to optimize and limits their representation ability. To solve this problem, we propose to learn DIF with Dynamic Code Cloud, named DCC-DIF. Our method explicitly associates local codes with learnable position vectors, and the position vectors are continuous and can be dynamically optimized, which improves the representation ability. In addition, we propose a novel code position loss to optimize the code positions, which heuristically guides more local codes to be distributed around complex geometric details. In contrast to previous methods, our DCC-DIF represents 3D shapes more efficiently with a small number of local codes, and improves the reconstruction quality. Experiments demonstrate that DCC-DIF achieves better performance than previous methods. Code and data are available at this https URL.
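To make the core idea concrete, here is a minimal PyTorch-style sketch (not the authors' implementation): local codes are paired with learnable, continuous position vectors, and a query point's feature is interpolated from its nearest codes with distance-based weights before being decoded into a signed distance. All module and variable names are illustrative.

```python
import torch
import torch.nn as nn

class DynamicCodeCloud(nn.Module):
    """Sketch of a dynamic code cloud: codes + learnable positions + decoder."""
    def __init__(self, num_codes=128, code_dim=32, k=8):
        super().__init__()
        self.codes = nn.Parameter(torch.randn(num_codes, code_dim) * 0.01)
        # Code positions are free parameters, not fixed grid points.
        self.positions = nn.Parameter(torch.rand(num_codes, 3) * 2 - 1)
        self.k = k
        self.decoder = nn.Sequential(
            nn.Linear(code_dim + 3, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, query):                      # query: (N, 3)
        # Distance from each query point to every code position.
        dist = torch.cdist(query, self.positions)  # (N, num_codes)
        d, idx = dist.topk(self.k, largest=False)  # k nearest codes
        w = 1.0 / (d + 1e-8)
        w = w / w.sum(dim=1, keepdim=True)         # inverse-distance weights
        neighbor_codes = self.codes[idx]           # (N, k, code_dim)
        feat = (w.unsqueeze(-1) * neighbor_codes).sum(dim=1)
        return self.decoder(torch.cat([feat, query], dim=-1))  # predicted SDF
```

Because both the codes and their positions receive gradients, the code cloud can drift toward regions that are hard to fit; the paper's code position loss encourages exactly that behavior.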

DCC-DIF

This is an implementation of the CVPR 2022 paper Learning Deep Implicit Functions for 3D Shapes with Dynamic Code Clouds.

[Paper] [Supplement] [Data] [Project page] [Jittor Code]

Data

We use the ShapeNet dataset in our experiments. To run our method, sampled points and their signed distances are needed as training data. The code and documentation for data processing are provided in sample_SDF_points.zip. For ease of use, we also provide the processed data for the bench category in 02828884_sdf_samples.zip. A hypothetical loading sketch is shown below.
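The sketch below assumes each processed sample is stored as an .npz file with a `points` array of shape (N, 3) and an `sdf` array of shape (N,); the actual file layout is documented in sample_SDF_points.zip, so adjust the keys and paths to match it.

```python
import numpy as np
import torch

def load_sdf_samples(npz_path, num_samples=16384):
    """Load sampled points and signed distances, then subsample a training batch."""
    data = np.load(npz_path)
    points = torch.from_numpy(data["points"]).float()  # (N, 3) sample positions
    sdf = torch.from_numpy(data["sdf"]).float()        # (N,) signed distances
    # Randomly subsample a fixed-size set of supervision points.
    idx = torch.randperm(points.shape[0])[:num_samples]
    return points[idx], sdf[idx]
```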

Running code

First, install the Python dependencies:

pip install -r requirements.txt

Then, prepare your configuration file based on the example we provide in configs/bench.py. You may need to specify the paths to the data and split files.
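As a rough orientation, a config module in the style of configs/bench.py might look like the sketch below; the field names here are placeholders, and the authoritative list of options is the provided example file.

```python
# Hypothetical config sketch (placeholder field names; see configs/bench.py
# for the actual options). The training scripts import the module passed on
# the command line, e.g. `configs.bench`.
data_dir = "/path/to/02828884_sdf_samples"       # processed SDF samples
train_split = "/path/to/splits/bench_train.txt"  # list of training shapes
test_split = "/path/to/splits/bench_test.txt"    # list of test shapes
experiment_dir = "experiments/bench"             # where checkpoints and logs go
```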

Now you can reproduce the experimental results in our paper by running:

python train.py configs.bench
python reconstruct.py configs.bench
python evaluate.py configs.bench

The pretrained models can be found here. We also provide demo code and documentation for using the pretrained models in usage_demo.zip.

Citing DCC-DIF

If you find this code useful, please consider citing:

@inproceedings{Li2022DCCDIF,
    title={Learning Deep Implicit Functions for 3D Shapes with Dynamic Code Clouds},
    author={Tianyang Li and Xin Wen and Yu-Shen Liu and Hua Su and Zhizhong Han},
    booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    year={2022}
}
