
Official PyTorch code of DeepPanoContext: Panoramic 3D Scene Understanding with Holistic Scene Context Graph and Relation-based Optimization (ICCV 2021 Oral).

Home Page: https://chengzhag.github.io/publication/dpc/

License: MIT License


deeppanocontext's Introduction

DeepPanoContext: Panoramic 3D Scene Understanding with Holistic Scene Context Graph and Relation-based Optimization

Cheng Zhang, Zhaopeng Cui, Cai Chen, Shuaicheng Liu, Bing Zeng, Hujun Bao, Yinda Zhang

[Teaser and pipeline figures]

Introduction

This repo contains the data generation, data preprocessing, training, testing, evaluation, and visualization code of our ICCV 2021 paper.

Install

Install the necessary tools and create the conda environment (install Anaconda first if not available):

sudo apt install xvfb ninja-build freeglut3-dev libglew-dev meshlab
conda env create -f environment.yaml
conda activate Pano3D
pip install wandb
python -m pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.7/index.html
python project.py build
  • When running python project.py build, the script runs external/build_gaps.sh, which requires sudo privileges for apt-get install and will prompt for your password. Please make sure you are running as a user with sudo privileges. If not, please ask your administrator to install these libraries, comment out the corresponding lines, and then run python project.py build.
  • If you encounter a /usr/bin/ld: cannot find -lGL error when building GAPS, please follow this issue.

Since the dataloader loads a large number of variables, please follow this to raise the open file descriptor limit of your system before training. For example, to change the setting permanently, edit /etc/security/limits.conf with a text editor and add the following lines:

*         hard    nofile      500000
*         soft    nofile      500000
root      hard    nofile      500000
root      soft    nofile      500000
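
Alternatively, if you cannot edit system files, the soft limit can be raised for the current process from Python (e.g. at the top of a training script). This is a minimal sketch, not part of this codebase; without root, the soft limit can only be raised up to the existing hard limit:

import resource

# Query the current limits: returns (soft, hard).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f'open file descriptor limits: soft={soft}, hard={hard}')

# Raise the soft limit to the hard limit for this process only.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))

A related workaround for "too many open files" errors from PyTorch dataloaders is torch.multiprocessing.set_sharing_strategy('file_system').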

Demo

Download the pretrained checkpoints of the detector, layout estimation network, and other modules. Then unzip the out folder into the root directory of the current project. Since the provided checkpoints are trained with the current, refactored version of our code, the results are slightly better than those reported in our paper.

Please run the following command to predict on the given example in demo/input with our full model:

CUDA_VISIBLE_DEVICES=0 WANDB_MODE=dryrun python main.py configs/pano3d_igibson.yaml --model.scene_gcn.relation_adjust True --mode test

Or run without relation optimization:

CUDA_VISIBLE_DEVICES=0 WANDB_MODE=dryrun python main.py configs/pano3d_igibson.yaml --mode test

The results will be saved to out/pano3d/<demo_id>. If nothing goes wrong, you should get the following results:

[Example outputs: rgb.png, visual.png, det3d.jpg, render.png]

Data preparation

Our data is rendered with iGibson.

Deprecated - Here, we follow their installation guide to download the iGibson dataset, then render and preprocess the data with our code.

Update - Since iGibson has gone through a major update, their dataset download link has changed. Please download the dataset here and follow the README to put the dataset in the right places.

  1. Download the iGibson dataset with:

    python -m gibson2.utils.assets_utils --download_ig_dataset
  2. Render panorama with:

    python -m utils.render_igibson_scenes --renders 10 --random_yaw --random_obj --horizon_lo --world_lo

    The rendered dataset should be in data/igibson/.

  3. Make models watertight and render/crop single-object images:

    python -m utils.preprocess_igibson_obj --skip_mgn

    The processed results should be in data/igibson_obj/.

  4. (Optional) Before proceeding to the training steps, you can visualize the dataset ground truth of data/igibson/ with:

    python -m utils.visualize_igibson

    Results ('visual.png' and 'render.png') should be saved to the folder of each camera, e.g. data/igibson/Pomaria_0_int/00007.

Training and Testing

Preparation

  1. We use the pretrained weights of Implicit3DUnderstanding for fine-tuning the Bdb3d Estimation Network (BEN) and LIEN+LDIF. Please download the pretrained checkpoint and unzip it into out/total3d/20110611514267/.

  2. We use wandb for logging and visualizing experiments. You can follow their quickstart guide to sign up for a free account and log in on your machine with wandb login. The training and testing results will be uploaded to your project "deeppanocontext" (a minimal logging sketch is shown after this list).

  3. Hint: The <XXX_id> in the commands below needs to be replaced with the XXX_id produced in the previous steps.

  4. Hint: In the steps below, when training or testing with main.py, you can override yaml configurations with command-line parameters:

    CUDA_VISIBLE_DEVICES=0 python main.py configs/layout_estimation_igibson.yaml --train.epochs 100

    This might be helpful when debugging or tuning hyper-parameters (a sketch of such an override merge is shown after this list).
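
For item 2 above, a minimal sketch of the wandb logging pattern; main.py configures wandb internally from the yaml config, so this is illustrative only:

import wandb

# mode='offline' keeps logs local, like WANDB_MODE=dryrun in the demo commands;
# drop it to upload results to your 'deeppanocontext' project.
run = wandb.init(project='deeppanocontext', mode='offline')
wandb.log({'train/loss': 0.5, 'epoch': 1})
run.finish()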
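
For item 4 above, a minimal sketch of how such dotted-key overrides can be merged into a nested yaml config. The actual parsing in main.py may differ, and set_by_path is an illustrative helper, not a function from this codebase:

import yaml

def set_by_path(cfg, dotted_key, value):
    # Walk the nested dict along the dotted key and set the leaf value,
    # e.g. set_by_path(cfg, 'train.epochs', 100) mirrors --train.epochs 100.
    keys = dotted_key.split('.')
    node = cfg
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    node[keys[-1]] = value

with open('configs/layout_estimation_igibson.yaml') as f:
    cfg = yaml.safe_load(f)
set_by_path(cfg, 'train.epochs', 100)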

First Stage

2D Detector

Please follow the Demo section to download the pretrained detector weights; the full fine-tuning code for the detector has not been released yet.

Layout Estimation

Train layout estimation network (HorizonNet) with:

CUDA_VISIBLE_DEVICES=0 python main.py configs/layout_estimation_igibson.yaml

The checkpoint will be saved to out/layout_estimation/<layout_estimation_id>/model_best.pth, along with visualization results in the same folder.

Save First Stage Outputs

  1. Save predictions of the 2D detector and LEN as a dataset for stage 2 training:

    CUDA_VISIBLE_DEVICES=0 WANDB_MODE=dryrun python main.py configs/first_stage_igibson.yaml --mode qtest --weight out/layout_estimation/<layout_estimation_id>/model_best.pth

    The first stage outputs should be saved to data/igibson_stage1.

  2. (Optional) Visualize stage 1 dataset with:

    python -m utils.visualize_igibson --dataset data/igibson_stage1 --skip_render

Second Stage

Object Reconstruction

Train object reconstruction network (LIEN+LDIF) with:

CUDA_VISIBLE_DEVICES=0 python main.py configs/ldif_igibson.yaml

The checkpoint and visualization results will be saved to out/ldif/<ldif_id>.

Bdb3D Estimation

Train bdb3d estimation network (BEN) with:

CUDA_VISIBLE_DEVICES=0 python main.py configs/bdb3d_estimation_igibson.yaml

The checkpoint and visualization results will be saved to out/bdb3d_estimation/<bdb3d_estimation_id>.

Relation SGCN

  1. Train Relation SGCN without relation branch:

    CUDA_VISIBLE_DEVICES=0 python main.py configs/relation_scene_gcn_igibson.yaml --model.scene_gcn.output_relation False --model.scene_gcn.loss BaseLoss --weight out/bdb3d_estimation/<bdb3d_estimation_id>/model_best.pth out/ldif/<ldif_id>/model_best.pth

    The checkpoint and visualization results will be saved to out/relation_scene_gcn/<relation_sgcn_wo_rel_id>.

  2. Train Relation SGCN with relation branch:

    CUDA_VISIBLE_DEVICES=0 python main.py configs/relation_scene_gcn_igibson.yaml --weight out/relation_scene_gcn/<relation_sgcn_wo_rel_id>/model_best.pth --train.epochs 20 

    The checkpoint and visualization results will be saved to out/relation_scene_gcn/<relation_sgcn_id>.

  3. Fine-tune Relation SGCN end-to-end with relation optimization:

    CUDA_VISIBLE_DEVICES=0 python main.py configs/relation_scene_gcn_igibson.yaml --weight out/relation_scene_gcn/<relation_sgcn_id>/model_best.pth --model.scene_gcn.relation_adjust True --train.batch_size 1 --val.batch_size 1 --device.num_workers 2 --train.freeze shape_encoder shape_decoder --model.scene_gcn.loss_weights.bdb3d_proj 1.0 --model.scene_gcn.optimize_steps 20 --train.epochs 10

    The checkpoint and visualization results will be saved to out/relation_scene_gcn/<relation_sgcn_ro_id>.

Test Full Model

Run:

CUDA_VISIBLE_DEVICES=0 python main.py configs/relation_scene_gcn_igibson.yaml --weight out/relation_scene_gcn/<relation_sgcn_ro_id>/model_best.pth --log.path out/relation_scene_gcn --resume False --finetune True --model.scene_gcn.relation_adjust True --mode qtest --model.scene_gcn.optimize_steps 100

The visualization results will be saved to out/relation_scene_gcn/<relation_sgcn_ro_test_id>.

Citation

If you find our work and code helpful, please consider citing:

@inproceedings{zhang2021deeppanocontext,
  title={DeepPanoContext: Panoramic 3D Scene Understanding With Holistic Scene Context Graph and Relation-Based Optimization},
  author={Zhang, Cheng and Cui, Zhaopeng and Chen, Cai and Liu, Shuaicheng and Zeng, Bing and Bao, Hujun and Zhang, Yinda},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={12632--12641},
  year={2021}
}

@inproceedings{Zhang_2021_CVPR,
  title={Holistic 3D Scene Understanding From a Single Image With Implicit Representation},
  author={Zhang, Cheng and Cui, Zhaopeng and Zhang, Yinda and Zeng, Bing and Pollefeys, Marc and Liu, Shuaicheng},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month={June},
  year={2021},
  pages={8833--8842}
}

We thank the following great works:

  • Total3DUnderstanding for their well-structured code. We constructed our network based on their code.
  • Coop for their dataset. We used their processed dataset with 2D detector predictions.
  • LDIF for their novel representation method. We ported their LDIF decoder from TensorFlow to PyTorch.
  • Graph R-CNN for their scene graph design. We adopted their GCN implementation to construct our SGCN.
  • Occupancy Networks for their modified version of the mesh-fusion pipeline.

If you find them helpful, please cite:

@inproceedings{Nie_2020_CVPR,
  title={Total3DUnderstanding: Joint Layout, Object Pose and Mesh Reconstruction for Indoor Scenes From a Single Image},
  author={Nie, Yinyu and Han, Xiaoguang and Guo, Shihui and Zheng, Yujian and Chang, Jian and Zhang, Jian Jun},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month={June},
  year={2020}
}

@inproceedings{huang2018cooperative,
  title={Cooperative Holistic Scene Understanding: Unifying 3D Object, Layout, and Camera Pose Estimation},
  author={Huang, Siyuan and Qi, Siyuan and Xiao, Yinxue and Zhu, Yixin and Wu, Ying Nian and Zhu, Song-Chun},
  booktitle={Advances in Neural Information Processing Systems},
  pages={206--217},
  year={2018}
}

@inproceedings{genova2020local,
  title={Local Deep Implicit Functions for 3D Shape},
  author={Genova, Kyle and Cole, Forrester and Sud, Avneesh and Sarna, Aaron and Funkhouser, Thomas},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={4857--4866},
  year={2020}
}

@inproceedings{yang2018graph,
  title={Graph R-CNN for Scene Graph Generation},
  author={Yang, Jianwei and Lu, Jiasen and Lee, Stefan and Batra, Dhruv and Parikh, Devi},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  pages={670--685},
  year={2018}
}

@inproceedings{mescheder2019occupancy,
  title={Occupancy Networks: Learning 3D Reconstruction in Function Space},
  author={Mescheder, Lars and Oechsle, Michael and Niemeyer, Michael and Nowozin, Sebastian and Geiger, Andreas},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={4460--4470},
  year={2019}
}


deeppanocontext's Issues

DEMO error

I got an error running the demo (CUDA_VISIBLE_DEVICES=0 WANDB_MODE=dryrun python main.py configs/pano3d_igibson.yaml --model.scene_gcn.relation_adjust True --mode test) after finishing the Install section of the README.

RuntimeError: Error building extension 'build': [1/2] c++ -MMD -MF chamfer_distance.o.d -DTORCH_EXTENSION_NAME=build -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -isystem /home/baotong/miniconda3/envs/Pano3D/lib/python3.7/site-packages/torch/include -isystem /home/baotong/miniconda3/envs/Pano3D/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -isystem /home/baotong/miniconda3/envs/Pano3D/lib/python3.7/site-packages/torch/include/TH -isystem /home/baotong/miniconda3/envs/Pano3D/lib/python3.7/site-packages/torch/include/THC -isystem /usr/local/cuda-11.8/include -isystem /home/baotong/miniconda3/envs/Pano3D/include/python3.7m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -c /home/baotong/DeepPanoContext/external/pyTorchChamferDistance/chamfer_distance/chamfer_distance.cpp -o chamfer_distance.o
FAILED: chamfer_distance.o
c++ -MMD -MF chamfer_distance.o.d -DTORCH_EXTENSION_NAME=build -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -isystem /home/baotong/miniconda3/envs/Pano3D/lib/python3.7/site-packages/torch/include -isystem /home/baotong/miniconda3/envs/Pano3D/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -isystem /home/baotong/miniconda3/envs/Pano3D/lib/python3.7/site-packages/torch/include/TH -isystem /home/baotong/miniconda3/envs/Pano3D/lib/python3.7/site-packages/torch/include/THC -isystem /usr/local/cuda-11.8/include -isystem /home/baotong/miniconda3/envs/Pano3D/include/python3.7m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -c /home/baotong/DeepPanoContext/external/pyTorchChamferDistance/chamfer_distance/chamfer_distance.cpp -o chamfer_distance.o
In file included from /usr/local/include/c++/13.1.0/climits:42,
from /home/baotong/miniconda3/envs/Pano3D/lib/python3.7/site-packages/torch/include/c10/util/llvmMathExtras.h:19,
from /home/baotong/miniconda3/envs/Pano3D/lib/python3.7/site-packages/torch/include/c10/core/DispatchKeySet.h:4,
from /home/baotong/miniconda3/envs/Pano3D/lib/python3.7/site-packages/torch/include/c10/core/Backend.h:5,
from /home/baotong/miniconda3/envs/Pano3D/lib/python3.7/site-packages/torch/include/c10/core/Layout.h:3,
from /home/baotong/miniconda3/envs/Pano3D/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:4,
from /home/baotong/miniconda3/envs/Pano3D/lib/python3.7/site-packages/torch/include/ATen/Tensor.h:3,
from /home/baotong/miniconda3/envs/Pano3D/lib/python3.7/site-packages/torch/include/ATen/Context.h:4,
from /home/baotong/miniconda3/envs/Pano3D/lib/python3.7/site-packages/torch/include/ATen/ATen.h:9,
from /home/baotong/miniconda3/envs/Pano3D/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
from /home/baotong/miniconda3/envs/Pano3D/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
from /home/baotong/miniconda3/envs/Pano3D/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
from /home/baotong/miniconda3/envs/Pano3D/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
from /home/baotong/miniconda3/envs/Pano3D/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
from /home/baotong/miniconda3/envs/Pano3D/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
from /home/baotong/miniconda3/envs/Pano3D/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
from /home/baotong/miniconda3/envs/Pano3D/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/torch.h:3,
from /home/baotong/DeepPanoContext/external/pyTorchChamferDistance/chamfer_distance/chamfer_distance.cpp:1:
/usr/include/limits.h:124:26: error: no include path in which to search for limits.h
124 | # include_next <limits.h>
| ^
In file included from /usr/local/include/c++/13.1.0/climits:42,
from /home/baotong/miniconda3/envs/Pano3D/lib/python3.7/site-packages/torch/include/c10/util/Logging.h:4,
from /home/baotong/miniconda3/envs/Pano3D/lib/python3.7/site-packages/torch/include/c10/core/TensorImpl.h:18,
from /home/baotong/miniconda3/envs/Pano3D/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:11:
/usr/include/limits.h:124:26: error: no include path in which to search for limits.h
124 | # include_next <limits.h>
| ^
In file included from /usr/local/include/c++/13.1.0/climits:42,
from /home/baotong/miniconda3/envs/Pano3D/lib/python3.7/site-packages/torch/include/c10/util/logging_is_not_google_glog.h:5,
from /home/baotong/miniconda3/envs/Pano3D/lib/python3.7/site-packages/torch/include/c10/util/Logging.h:28:
/usr/include/limits.h:124:26: error: no include path in which to search for limits.h
...

How to obtain the results of Total3D and Im3D?

This work is excellent! I'm curious how to obtain the results of Total3D and Im3D, as the layout they generate is a cuboid. Also, can I train Total3D and Im3D with the current code, or could you share their results? I want to learn how to combine several single-view results into a panoramic one.

Save First Stage Outputs, ValueError: all input arrays must have the same shape

ENV: conda env create -f environment.yaml

DATA: used the updated data. Since iGibson has gone through a major update, their dataset download link has changed; I downloaded the dataset from the new link and followed the README to put it in the right places.

PROBLEM: Following the README, I ran Training and Testing - First Stage - Save First Stage Outputs - CUDA_VISIBLE_DEVICES=0 WANDB_MODE=dryrun python main.py configs/first_stage_igibson.yaml --mode qtest --weight out/layout_estimation/<layout_estimation_id>/model_best.pth.
An error occurred:
raise ValueError('all input arrays must have the same shape')
ValueError: all input arrays must have the same shape

How can I solve this problem?
Thank you.

libGL.so.1.2.0

[Screenshot of the error, 2022-04-12]

When I run python project.py build, I get the above message.
I reinstalled libgl1-mesa-glx and checked with apt-get list --installed that it is installed, but there is still no libGL.so.1.2.0 in /usr/lib/x86_64-linux-gnu.

How can I solve it?

Model weights for LIEN, LDIF and BEN modules

The provided links in the README contain model weights for the detector, layout estimation, and relation_scene_gcn modules, as well as the Total3D weights.

Would it be possible to also provide the weights for the LIEN, LDIF and BEN modules?

Many thanks!

GLSL 4.5 is not supported

Hello, I got a problem when running the rendering.

The message is like this:
[Screenshot of the error, 2022-05-08]

I tried to change versions with export MESA_GL_VERSION_OVERRIDE, export MESA_GLSL_VERSION_OVERRIDE, and export MESA_GLES_VERSION_OVERRIDE, but it didn't work.

Also, I don't have a 'shader' folder in /root/anaconda3/envs/Pano3D/lib/python3.7/site-packages/gibson2/render/mesh_renderer.

My environment is a Docker container on a remote server.

Could you help me?

Code run stuck at project build

It doesn't show any error, but the python project.py build command gets stuck for a long time after I enter the admin password.

Plans to release 2D detector code with fine-tuning?

I looked into the code base and found that the 2D detector training pipeline is unavailable.
I can see a comment saying "Please follow Demo section to download weights for detector before we release full fine-tuning code for detector."

Is there any timeline for this?
If not, could you explain the steps needed to swap the detector from Detectron2 / Mask R-CNN to something else?
@chengzhag

Run on Colab

Hi, I am trying to run DPC on Google Colab, but it is just not working. Could you please provide a notebook that runs DPC on Colab?
