This repository contains a reference implementation of the algorithms described in the ECCV 2020 paper "Deep View Synthesis from Colored 3D Point Clouds".
Given a colored 3D point cloud, we aim to synthesize a realistic image of the scene from a specified viewpoint. An example is illustrated below:
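To illustrate the input/output relationship (this is not the paper's network, just the geometry of the task), a colored point cloud can be projected into a target viewpoint with a simple pinhole camera model; the resulting sparse, hole-ridden projection is what a synthesis network must turn into a realistic image. The function name, intrinsics, and image size below are illustrative assumptions:

```python
import numpy as np

def project_points(points, colors, K, R, t, hw=(64, 64)):
    """Project a colored point cloud into a target view (illustrative sketch).

    points: (N, 3) world coordinates, colors: (N, 3) RGB values,
    K: (3, 3) camera intrinsics, R/t: world-to-camera rotation/translation.
    Returns a sparse RGB image filled via a simple z-buffer.
    """
    h, w = hw
    cam = points @ R.T + t                      # world -> camera coordinates
    z = cam[:, 2]
    valid = z > 1e-6                            # keep points in front of the camera
    cam, z, colors = cam[valid], z[valid], colors[valid]
    pix = cam @ K.T                             # perspective projection
    uv = np.round(pix[:, :2] / pix[:, 2:3]).astype(int)
    image = np.zeros((h, w, 3))
    depth = np.full((h, w), np.inf)
    for (u, v), d, c in zip(uv, z, colors):
        if 0 <= u < w and 0 <= v < h and d < depth[v, u]:
            depth[v, u] = d                     # z-buffer: nearest point wins
            image[v, u] = c
    return image
```

Most pixels of the returned image stay black wherever no point projects, which is exactly the gap the learned synthesis model has to fill.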
- generate a point cloud from keyframes of a video sequence using DSO.
- synthesize images from different viewpoints.
- SUN3D dataset re-organized by DeMoN.
- ICL-NUIM synthetic dataset; the example data (700 MB) is available here.
- invsfm dataset generated from NYU-V2; the demo data (11 GB) can be downloaded here.
- PyTorch=1.2
- opencv-python
- numpy
cd models/pointnet2
python setup.py install
python inv_icl_test.py --mode test --dataset dataset --test_data_dir test_data_files --root_dir dataset_root_dir
Check 'config.py' for more configuration details.
python demon_train.py --mode train --train_data_dir train_data_files --val_data_dir val_data_files --root_dir dataset_root_dir
Check 'config.py' for more configuration details.
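The flags passed above (`--mode`, `--train_data_dir`, `--val_data_dir`, `--test_data_dir`, `--root_dir`) suggest a standard argparse-style configuration. As a hedged sketch only — the actual 'config.py' in this repository may define different names and defaults — such a parser could look like:

```python
import argparse

def build_parser():
    """Sketch of the command-line options used in the train/test commands;
    defaults and help strings here are assumptions, not the repo's actual config."""
    p = argparse.ArgumentParser(description="Deep view synthesis from colored point clouds")
    p.add_argument("--mode", choices=["train", "test"], default="test",
                   help="run training or evaluation")
    p.add_argument("--train_data_dir", type=str, help="file listing training samples")
    p.add_argument("--val_data_dir", type=str, help="file listing validation samples")
    p.add_argument("--test_data_dir", type=str, help="file listing test samples")
    p.add_argument("--root_dir", type=str, help="dataset root directory")
    return p
```

For example, `build_parser().parse_args(["--mode", "train", "--root_dir", "/data/sun3d"])` yields a namespace whose attributes mirror the flags above.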
First, download the pre-trained models here, then unzip them into the folder 'results/demon_4096'.
Note: the model was trained on the DeMoN SUN3D indoor dataset, and the training split is listed in 'data/filenames/demon_train.txt'.
If you use this code/model for your research, please cite the following paper:
@inproceedings{song2020,
  title={Deep View Synthesis from Colored 3D Point Clouds},
  author={Zhenbo Song and Wayne Chen and Dylan Campbell and Hongdong Li},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2020},
}
The code is based on 'EdgeConnect: Generative Image Inpainting with Adversarial Edge Learning'.
The PointNet++ implementation is from Pointnet2.PyTorch.