Code for the ECCV 2022 paper "KD-MVS: Knowledge Distillation Based Self-supervised Learning for Multi-view Stereo"

Abstract: Supervised multi-view stereo (MVS) methods have achieved remarkable progress in terms of reconstruction quality, but suffer from the challenge of collecting large-scale ground-truth depth. In this paper, we propose a novel self-supervised training pipeline for MVS based on knowledge distillation, termed KD-MVS, which mainly consists of self-supervised teacher training and distillation-based student training. Specifically, the teacher model is trained in a self-supervised fashion using both photometric and featuremetric consistency. Then we distill the knowledge of the teacher model to the student model through probabilistic knowledge transfer. With the supervision of validated knowledge, the student model is able to outperform its teacher by a large margin. Extensive experiments performed on multiple datasets show that our method can even outperform supervised methods.

KD-MVS: Knowledge Distillation Based Self-supervised Learning for Multi-view Stereo

Paper | Project Page | Data | Checkpoints

Installation

Clone this repo:

git clone https://github.com/megvii-research/KD-MVS.git
cd KD-MVS

We recommend using Anaconda to manage the Python environment:

conda create -n kdmvs python=3.6
conda activate kdmvs
pip install -r requirements.txt

Data preparation

Pseudo label

Download our pseudo labels (DTU dataset) from here. Unzip them into the folder that CHECKED_DEPTH_DIR will point to, as introduced below.
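
For example (the archive name and destination path are placeholders; use the actual filename of the downloaded archive):

unzip kdmvs_pseudo_label.zip -d /path/to/pseudo_label
# later, point CHECKED_DEPTH_DIR in scripts/run_train_kd.sh to /path/to/pseudo_label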

DTU

Download the preprocessed DTU training data (from Original MVSNet) and unzip it to construct a dataset folder like:

dtu_training
 ├── Cameras
 └── Rectified

For the DTU testing set, download the preprocessed DTU testing data (from Original MVSNet) and unzip it as the test data folder, which should contain one cams folder, one images folder, and one pair.txt file.
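
A sketch of the expected layout (the top-level folder name is illustrative):

dtu_testing
 ├── cams
 ├── images
 └── pair.txt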

Training

Unsupervised training

Set the configuration in scripts/run_train_unsup.sh as follows (an example sketch is given after this list):

  • Set DATASET_DIR as the path of DTU training set.
  • Set LOG_DIR as the path to save the checkpoints.
  • Set NGPUS and BATCH_SIZE according to your machine.
  • (Optional) Modify other hyper-parameters according to the argparse options in train_unsup.py, such as summary_freq, save_freq, and so on.
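
For reference, the variables in scripts/run_train_unsup.sh might look like the following; all paths and values are placeholders, and the actual script may define additional variables:

DATASET_DIR="/path/to/dtu_training"  # DTU training set
LOG_DIR="./outputs/unsup"            # where checkpoints are saved
NGPUS=8                              # number of GPUs on your machine
BATCH_SIZE=1                         # adjust to fit GPU memory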

To train your model, just run:

bash scripts/run_train_unsup.sh

KD training

Set the configuration in scripts/run_train_kd.sh as follows (an example sketch is given after this list):

  • Set DATASET_DIR as the path of DTU training set.
  • Set LOG_DIR as the path to save the checkpoints.
  • Set CHECKED_DEPTH_DIR as the path to our pseudo label folder.
  • Set NGPUS and BATCH_SIZE according to your machine.
  • (Optional) Modify other hyper-parameters according to the argparse options, such as summary_freq, save_freq, and so on.
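
For reference, the variables in scripts/run_train_kd.sh might look like the following (placeholder paths; adjust to your setup):

DATASET_DIR="/path/to/dtu_training"        # DTU training set
LOG_DIR="./outputs/kd"                     # where checkpoints are saved
CHECKED_DEPTH_DIR="/path/to/pseudo_label"  # validated pseudo labels from above
NGPUS=8                                    # number of GPUs on your machine
BATCH_SIZE=1                               # adjust to fit GPU memory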

To train the student model, just run:

bash scripts/run_train_kd.sh

Note: when training from scratch, two to three (or more) rounds of KD training with different thresholds are needed to reach good scores (as shown in Tab. 6 of the paper), which is somewhat time-consuming.

Testing

For easy testing, you can download our pretrained models and put them in the ckpt folder, or use your own models and follow the instructions below.

Set the configuration in scripts/run_test_dtu.sh as follows (an example sketch is given after this list):

  • Set TESTPATH as the path of DTU testing set.
  • Set CKPT_FILE as the path of the model weights.
  • Set OUTDIR as the path to save results.
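
For reference (placeholder paths; the checkpoint filename is illustrative):

TESTPATH="/path/to/dtu_testing"      # DTU testing set
CKPT_FILE="./ckpt/kdmvs_model.ckpt"  # model weights
OUTDIR="./outputs/dtu_test"          # where results are saved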

Run:

bash scripts/run_test_dtu.sh

To get quantitative results for the fused point clouds with the official MATLAB evaluation tools, you can refer to TransMVSNet.

Citation

@inproceedings{ding2022kdmvs,
  title={KD-MVS: Knowledge Distillation Based Self-supervised Learning for Multi-view Stereo},
  author={Ding, Yikang and Zhu, Qingtian and Liu, Xiangyue and Yuan, Wentao and Zhang, Haotian and Zhang, Chi},
  booktitle={European Conference on Computer Vision},
  year={2022},
  organization={Springer}
}

Acknowledgments

We borrow some code from TransMVSNet and U-MVS. We thank the authors for releasing the source code.
