Code for paper "Boosting 3D Object Detection via Object-Focused Image Fusion"

Abstract: 3D object detection has achieved remarkable progress by taking point clouds as the only input. However, point clouds often suffer from incomplete geometric structures and the lack of semantic information, which makes it hard for detectors to accurately classify detected objects. In this work, we focus on how to effectively utilize object-level information from images to boost the performance of point-based 3D detectors. We present DeMF, a simple yet effective method to fuse image information into point features. Given a set of point features and image feature maps, DeMF adaptively aggregates image features by taking the projected 2D location of the 3D point as reference. We evaluate our method on the challenging SUN RGB-D dataset, improving state-of-the-art results by a large margin (+2.1 mAP@0.25 and mAP@0.5). Code is available in this repository.

Boosting 3D Object Detection via Object-Focused Image Fusion

This is an MMDetection3D implementation of the paper Yang et al., "Boosting 3D Object Detection via Object-Focused Image Fusion".

Boosting 3D Object Detection via Object-Focused Image Fusion
Hao Yang*, Chen Shi*, Yihong Chen, Liwei Wang

Paper: https://arxiv.org/abs/2207.10589

Pipeline
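
To make the fusion step concrete, here is a minimal PyTorch sketch of the underlying idea: project each 3D point into the image plane and gather image features at the projected location. This is illustrative only, not the authors' implementation; DeMF adaptively aggregates features around each reference point rather than sampling a single pixel, and the function names below (project_points, sample_image_features) are hypothetical.

import torch
import torch.nn.functional as F

def project_points(points, intrinsics):
    # Pinhole projection of (N, 3) camera-frame points to (N, 2) pixel coords.
    uvw = points @ intrinsics.T
    return uvw[:, :2] / uvw[:, 2:3].clamp(min=1e-6)

def sample_image_features(points, intrinsics, feat_map, image_hw):
    # points: (N, 3); intrinsics: (3, 3); feat_map: (C, H, W); image_hw: (h, w)
    uv = project_points(points, intrinsics)
    img_h, img_w = image_hw
    # Normalize pixel coordinates to [-1, 1] as required by grid_sample.
    grid = torch.stack([uv[:, 0] / (img_w - 1) * 2 - 1,
                        uv[:, 1] / (img_h - 1) * 2 - 1], dim=-1)
    grid = grid.view(1, -1, 1, 2)
    # Bilinearly sample one image feature vector per projected point.
    sampled = F.grid_sample(feat_map[None], grid, align_corners=True)
    return sampled.view(feat_map.shape[0], -1).T  # (N, C) image features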

Prerequisites

The code is tested with Python 3.7, PyTorch 1.8, CUDA 11.1, mmdet3d 0.18.1, mmcv-full 1.3.18, and mmdet 2.14. We recommend using Anaconda to make sure all dependencies are in place. Note that different versions of these libraries may lead to slightly different results.

Step 1. Create a conda environment and activate it.

conda create --name demf python=3.7
conda activate demf

Step 2. Install MMDetection3D following the instruction here.
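
For reference, an install sequence matching the versions above could look like the following. This is a sketch, not verified against this repo; the official MMDetection3D instructions linked above take precedence, and the exact wheels depend on your CUDA setup.

pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
pip install mmcv-full==1.3.18 -f https://download.openmmlab.com/mmcv/dist/cu111/torch1.8.0/index.html
pip install mmdet==2.14.0
pip install mmdet3d==0.18.1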

Step 3. Prepare SUN RGB-D Data following the procedure here.
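
For convenience, the standard MMDetection3D command for generating the SUN RGB-D info files is reproduced below; it assumes the raw data has been downloaded and extracted to ./data/sunrgbd as described in the linked procedure.

python tools/create_data.py sunrgbd --root-path ./data/sunrgbd --out-dir ./data/sunrgbd --extra-tag sunrgbd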

Getting Started

Step 1. First, train a Deformable DETR on the SUN RGB-D image data to obtain the checkpoint for the image branch.

python -m torch.distributed.launch --nproc_per_node=$GPUS --master_port=$PORT train.py configs/deformdetr/imvotenet_deform.py --launcher pytorch ${@:3}
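
For example, with 8 GPUs and a free port of your choice (29500 here is arbitrary):

python -m torch.distributed.launch --nproc_per_node=8 --master_port=29500 train.py configs/deformdetr/imvotenet_deform.py --launcher pytorch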

Or you can download the pre-trained image branch here.

Step 2. Specify the path to the pre-trained image branch in the config; a sketch of what this might look like is shown below.
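
A hypothetical sketch using the standard OpenMMLab init_cfg mechanism follows. The field name img_branch and the checkpoint path are assumptions for illustration; check configs/demf/demf_votenet.py for the actual keys.

model = dict(
    img_branch=dict(  # hypothetical field name; check the actual config
        init_cfg=dict(
            type='Pretrained',
            checkpoint='work_dirs/imvotenet_deform/latest.pth'),  # checkpoint from Step 1
    ),
)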

Step 3. Train our DeMF using the following command.

python -m torch.distributed.launch --nproc_per_node=8 --master_port=$PORT train.py configs/demf/demf_votenet.py --launcher pytorch ${@:3}

We also provide a pre-trained DeMF model here. Evaluate the pretrained model with the following command to reproduce the mAP@0.25 and mAP@0.5 results.

python -m torch.distributed.launch --nproc_per_node=8 --master_port=$PORT test.py --config configs/demf/demf_votenet.py --checkpoint $CHECKPOINT --eval mAP --launcher pytorch ${@:4}

Main Results

We re-implemented VoteNet and ImVoteNet, obtaining some improvements over the originally reported results.

Method                 Point Backbone   Input    mAP@0.25      mAP@0.5
VoteNet                PointNet++       PC       60.0          41.3
ImVoteNet              PointNet++       PC+RGB   64.4          43.3
DeMF (VoteNet based)   PointNet++       PC+RGB   65.6 (65.3)   46.1 (45.4)
DeMF (FCAF3D based)    HDResNet34       PC+RGB   67.4 (67.1)   51.2 (50.5)

Todo

We will release the code of DeMF (FCAF3D based) soon.

Citation

If you find this work useful for your research, please cite our paper:

@misc{yang2022boosting,
  author    = {Yang, Hao and Shi, Chen and Chen, Yihong and Wang, Liwei},
  title     = {Boosting 3D Object Detection via Object-Focused Image Fusion},
  publisher = {arXiv},
  year      = {2022},
  doi       = {10.48550/arXiv.2207.10589},
}
