License: Apache License 2.0

Hybrid Neural Rendering for Large-Scale Scenes with Motion Blur

Hybrid Neural Rendering for Large-Scale Scenes with Motion Blur (CVPR 2023)
Peng Dai*, Yinda Zhang*, Xin Yu, Xiaoyang Lyu, Xiaojuan Qi.
Paper, Project_page

Introduction


Our method takes advantage of both neural 3D representations and image-based rendering to produce high-fidelity, temporally consistent results. Specifically, the image-based features compensate for defective neural 3D features, while the neural 3D features improve the temporal consistency of the image-based features. Moreover, we propose efficient designs to handle motion blur that occurs during capture.
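The feature fusion described above can be illustrated with a minimal sketch. Note this is not the paper's actual architecture: the blending weight `alpha`, the sigmoid confidence, and all shapes are our assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-ray features: 8 samples along a ray, 16 channels each.
neural_feat = rng.standard_normal((8, 16))  # from the neural 3D representation
image_feat = rng.standard_normal((8, 16))   # warped from nearby source views

# A learned confidence head would normally predict alpha; here a random
# sigmoid-squashed score stands in, purely for illustration.
score = rng.standard_normal((8, 1))
alpha = 1.0 / (1.0 + np.exp(-score))

# Image-based features compensate where the neural features are defective;
# the neural features anchor temporal consistency across frames.
fused = alpha * neural_feat + (1.0 - alpha) * image_feat

assert fused.shape == (8, 16)
```

The key point is that the two feature sources are combined per sample with a soft weight rather than one replacing the other outright.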

Environment

  • We use the same environment as PointNeRF; please follow their installation instructions step by step. (A conda virtual environment is recommended.)

  • Install the additional Python dependencies:

pip install opencv_python imutils

The code has been tested on a single NVIDIA 3090 GPU.

Preparation

  • Please download the datasets used in this paper. The layout looks like this:

HybridNeuralRendering
├── data_src
│   ├── scannet
│   │   ├── frame_weights_step5
│   │   ├── scans
│   │   │   ├── scene0101_04
│   │   │   ├── scene0241_01
│   │   │   ├── livingroom
│   │   │   ├── vangoroom
│   ├── nerf
│   │   ├── nerf_synthetic
│   │   │   ├── chair
│   │   │   ├── lego
  • Download pre-trained models. Since we currently focus on per-scene optimization, make sure that "checkpoints" folder contains "init" and "MVSNet" folders with pre-trained models.
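The required layout can be sanity-checked with a short script. The helper itself is ours, not part of the repo; the paths to verify are taken from the tree and checkpoint notes above.

```python
import os

# Expected sub-paths relative to the repository root (from the layout above).
EXPECTED = [
    "data_src/scannet/frame_weights_step5",
    "data_src/scannet/scans/scene0241_01",
    "data_src/nerf/nerf_synthetic/lego",
    "checkpoints/init",
    "checkpoints/MVSNet",
]

def missing_paths(root):
    """Return the expected directories that do not exist under `root`."""
    return [p for p in EXPECTED if not os.path.isdir(os.path.join(root, p))]

if __name__ == "__main__":
    gone = missing_paths(".")
    if gone:
        print("Missing:", *gone, sep="\n  ")
    else:
        print("Dataset layout looks complete.")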

Quality-aware weights

The weights are already included in the "frame_weights_step5" folder. Alternatively, you can follow RAFT to set up its running environment and download their pre-trained models, then compute the quality-aware weights by running:

cd raft
python demo_content_aware_weights.py --model=models/raft-things.pth --path=<path to RGB images> --ref_path=<path to RGB images> --scene_name=<scene name>
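As a rough intuition for per-frame quality scoring, the sketch below ranks frames by a simple sharpness proxy (variance of a 3x3 Laplacian response). This is NOT the RAFT-based quality-aware weighting the paper uses; it only illustrates that blurrier frames should receive lower weight.

```python
import numpy as np

def sharpness_score(gray):
    """Variance of a 3x3 Laplacian response; blurrier frames score lower."""
    lap = (-4.0 * gray
           + np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
           + np.roll(gray, 1, 1) + np.roll(gray, -1, 1))
    return float(lap.var())

# A sharp random-texture frame vs. a box-blurred copy of it.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, -1, 0)
           + np.roll(sharp, 1, 1) + np.roll(sharp, -1, 1)) / 5.0

assert sharpness_score(sharp) > sharpness_score(blurred)
```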

Train

We take training on ScanNet 'scene0241_01' as an example. The training scripts resume training if "xxx.pth" files are present in the pre-trained scene folder, e.g., "checkpoints/scannet/xxx/xxx.pth"; otherwise, training starts from scratch.
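The resume-or-scratch behaviour can be mimicked in a few lines. This is a sketch of the general pattern, not the repo's actual option parsing; the folder name is hypothetical.

```python
import glob
import os

def find_resume_checkpoint(ckpt_dir):
    """Return the most recent .pth file in `ckpt_dir`, or None to train from scratch."""
    paths = glob.glob(os.path.join(ckpt_dir, "*.pth"))
    return max(paths, key=os.path.getmtime) if paths else None

# Example: resumes if "checkpoints/scannet/scene0241_01" holds a .pth file.
ckpt = find_resume_checkpoint("checkpoints/scannet/scene0241_01")
print("Resuming from:" if ckpt else "Training from scratch.", ckpt or "")
```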

Hybrid rendering

To use hybrid rendering only, run:

bash ./dev_scripts/w_scannet_etf/scene241_hybrid.sh

Hybrid rendering + blur-handling module (pre-defined degradation kernels)

To run the full version of our method:

bash ./dev_scripts/w_scannet_etf/scene241_full.sh

Hybrid rendering + blur-handling module (learned degradation kernels)

Instead of using pre-defined kernels, we also provide an efficient way to estimate degradation kernels from rendered and ground-truth (GT) patches. Specifically, the flattened rendered and GT patches are concatenated and fed into an MLP that predicts the degradation kernel.
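The kernel-prediction step can be sketched as follows. The patch size, kernel size, hidden width, and the softmax normalisation are all our assumptions for illustration, not the repo's exact implementation, and the MLP weights here are random rather than learned.

```python
import numpy as np

rng = np.random.default_rng(0)

P, K = 8, 5                       # patch size and kernel size (assumed)
render_patch = rng.random((P, P))  # rendered patch
gt_patch = rng.random((P, P))      # ground-truth patch

# Flatten and concatenate the rendered and GT patches.
x = np.concatenate([render_patch.ravel(), gt_patch.ravel()])  # (2*P*P,)

# A tiny 2-layer MLP with random weights stands in for the learned predictor.
W1 = rng.standard_normal((64, x.size)) * 0.1
W2 = rng.standard_normal((K * K, 64)) * 0.1
h = np.maximum(W1 @ x, 0.0)        # ReLU hidden layer
logits = W2 @ h

# Softmax so the kernel is non-negative and sums to 1 (our assumption).
kernel = np.exp(logits - logits.max())
kernel = (kernel / kernel.sum()).reshape(K, K)

assert kernel.shape == (K, K)
```

The predicted kernel can then be convolved with the rendered patch so the training loss compares like with like against the blurry ground truth.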

bash ./dev_scripts/w_scannet_etf/scene241_learnable.sh

Evaluation

We take the evaluation on ScanNet 'scene0241_01' as an example. Specify "name" in "scene241_test.sh" to evaluate different experiments, then run:

bash ./dev_scripts/w_scannet_etf/scene241_test.sh

You can directly evaluate using our pre-trained models.

Results

Our method generates high-fidelity results compared with PointNeRF's results and the reference images. Please visit our project_page for more comparisons.

Contact

If you have questions, you can email me ([email protected]).

Citation

If you find this repo useful for your research, please consider citing our paper:

@inproceedings{dai2023hybrid,
  title={Hybrid Neural Rendering for Large-Scale Scenes with Motion Blur},
  author={Dai, Peng and Zhang, Yinda and Yu, Xin and Lyu, Xiaoyang and Qi, Xiaojuan},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2023}
}

Acknowledgement

This repo is heavily based on PointNeRF and RAFT; we thank the authors for their brilliant work.
