ku-cvlab / rain-gs

Code for "Relaxing Accurate Initialization Constraint for 3D Gaussian Splatting" by Jaewoo Jung, Jisang Han, Honggyu An, Jiwon Kang, Seonghoon Park, and Seungryong Kim

Home Page: https://ku-cvlab.github.io/RAIN-GS

License: MIT License

Python 61.28% CMake 0.44% C++ 7.57% Cuda 30.57% C 0.14%

rain-gs's Introduction

RAIN-GS: Relaxing Accurate Initialization Constraint for 3D Gaussian Splatting


This is our official implementation of the paper "Relaxing Accurate Initialization Constraint for 3D Gaussian Splatting"!

by Jaewoo Jung, Jisang Han, Honggyu An, Jiwon Kang, Seonghoon Park, Seungryong Kim

☔: Equal Contribution
†: Corresponding Author

Introduction


We introduce a novel optimization strategy (RAIN-GS) for 3D Gaussian Splatting!

We show that our simple yet effective strategy, consisting of sparse-large-variance (SLV) random initialization, progressive Gaussian low-pass filter control, and the Adaptive Bound-Expanding Split (ABE-Split) algorithm, robustly guides 3D Gaussians to model the scene even when starting from a random point cloud.
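As a toy illustration of the difference between the two random initialization schemes (this is a sketch, not the repository's actual code — `random_init` and its parameters are hypothetical), DSV fills a tight bound with many points while SLV scatters very few points over an enlarged bound:

```python
import random

def random_init(num_points, scene_extent, scale):
    """Sample a point cloud uniformly inside a cube whose half-size is
    scale * scene_extent (hypothetical helper, not the repo's API)."""
    bound = scale * scene_extent
    return [tuple(random.uniform(-bound, bound) for _ in range(3))
            for _ in range(num_points)]

# Dense-small-variance (DSV): many points in a tight bound (original 3DGS).
dsv_cloud = random_init(num_points=100_000, scene_extent=10.0, scale=1.0)

# Sparse-large-variance (SLV): few points spread over a larger bound (RAIN-GS).
slv_cloud = random_init(num_points=10, scene_extent=10.0, scale=3.0)
```

The exact point counts and the bound multiplier are placeholders; the paper should be consulted for the values actually used.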

❗️Update (2024/05/29): We have updated our paper and codes which significantly improve our previous results!
😴 TL;DR for our update is as follows:

  • We added a modification to the original split algorithm of 3DGS which enables the Gaussians to model scenes further from the viewpoints! This new splitting algorithm is named Adaptive Bound-Expanding Split algorithm (ABE-Split algorithm).

  • Now, with our three key components (SLV initialization, progressive Gaussian low-pass filtering, ABE-Split), we perform on par with or even better than 3DGS trained with an SfM-initialized point cloud.

  • As RAIN-GS only requires the initial point cloud to be sparse (SLV initialization), we now additionally apply our strategy to SfM/Noisy SfM point cloud by choosing a sparse set of points from the point cloud.

For further details and visualization results, please check out our updated paper and our new project page.
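To make the split modification concrete, here is a toy sketch of the standard 3DGS split heuristic that ABE-Split builds on: two children are sampled from the parent Gaussian and their scales are shrunk by a factor. Per the update note above, ABE-Split additionally lets children land farther from the current viewpoints; that bound expansion is omitted here, and all names and the factor `phi` are illustrative, not the repository's actual code.

```python
import random

def split_gaussian(mean, scale, phi=1.6):
    """Toy 3DGS-style split: draw two child means from the parent
    Gaussian and divide the scales by phi. ABE-Split (per the paper's
    description) relaxes the placement bound so children can model
    regions farther from the viewpoints; that part is not shown."""
    children = []
    for _ in range(2):
        child_mean = tuple(m + random.gauss(0.0, s) for m, s in zip(mean, scale))
        child_scale = tuple(s / phi for s in scale)
        children.append((child_mean, child_scale))
    return children

children = split_gaussian(mean=(0.0, 0.0, 0.0), scale=(1.0, 1.0, 1.0))
```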

Installation

We implement RAIN-GS on top of the official implementation of 3D Gaussian Splatting.
For environment setup, please follow the original requirements of 3DGS.

Training

To train 3D Gaussian Splatting with our updated RAIN-GS strategy, all you need to do is:

python train.py -s {dataset_path} --exp_name {exp_name} --eval --ours_new 

You can train from various initializations by adding --train_from ['random', 'reprojection', 'cluster', 'noisy_sfm'] ('random' is the default).

To train with the Mip-NeRF360 dataset, add the argument --images images_4 for outdoor scenes or --images images_2 for indoor scenes to modify the resolution of the input images.

More details for training from various initializations:
  • Random Initialization (Default)
python train.py -s {dataset_path} --exp_name {exp_name} --eval --ours_new --train_from 'random'
  • SfM (Structure-from-Motion) Initialization
    In order to apply RAIN-GS to SfM Initialization, we need to start with a sparse set of points (SLV Initialization).
    To choose the sparse set of points, you can choose several options:

    • Clustering : Apply clustering to the initial point cloud using the HDBSCAN algorithm.
    python train.py -s {dataset_path} --exp_name {exp_name} --eval --ours_new --train_from 'cluster'
    • Top 10% : Each of the points from SfM comes with a confidence value, which is the reprojection error. Select the top 10% most confident points from the point cloud.
    python train.py -s {dataset_path} --exp_name {exp_name} --eval --ours_new --train_from 'reprojection'
  • Noisy SfM Initialization
    In real-world scenarios, the point cloud from SfM can contain noise. To simulate this scenario, we add a random noise sampled from a normal distribution to the SfM point cloud. If you run with this option, we apply the clustering algorithm to the Noisy SfM point cloud.

python train.py -s {dataset_path} --exp_name {exp_name} --eval --ours_new --train_from 'noisy_sfm'
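As a rough illustration of the two point-selection routes described above (the top-10% reprojection-error choice and the noisy-SfM simulation), here is a stdlib-only sketch; the function names, the 10% ratio, and the noise scale are hypothetical stand-ins, not the repository's actual code:

```python
import random

def top_confident(points, errors, keep_ratio=0.1):
    """Keep the keep_ratio fraction of points with the lowest
    reprojection error (a stand-in for --train_from 'reprojection')."""
    ranked = sorted(zip(errors, points))
    k = max(1, int(len(points) * keep_ratio))
    return [p for _, p in ranked[:k]]

def add_noise(points, sigma=0.1):
    """Perturb every coordinate with zero-mean Gaussian noise to
    simulate a noisy SfM point cloud."""
    return [tuple(c + random.gauss(0.0, sigma) for c in p) for p in points]

pts = [(float(i), 0.0, 0.0) for i in range(100)]
errs = [float(i) for i in range(100)]   # lower error = more confident here
sparse_pts = top_confident(pts, errs)   # keeps the 10 most confident points
noisy_pts = add_noise(pts, sigma=0.05)
```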

To train 3D Gaussian Splatting with our original RAIN-GS, all you need to do is:

python train.py -s {dataset_path} --exp_name {exp_name} --eval --ours

For dense-small-variance (DSV) random initialization (used in the original 3D Gaussian Splatting), you can simply run with the following command:

python train.py -s {dataset_path} --exp_name {exp_name} --eval --paper_random

For SfM (Structure-from-Motion) initialization (used in the original 3D Gaussian Splatting), you can simply run with the following command:

python train.py -s {dataset_path} --exp_name {exp_name} --eval

For Noisy SfM initialization (used in the original 3D Gaussian Splatting), you can simply run with the following command:

python train.py -s {dataset_path} --exp_name {exp_name} --eval --train_from 'noisy_sfm'

Acknowledgement

We would like to thank the authors of 3D Gaussian Splatting for open-sourcing the official 3DGS code!

Citation

If you find our work helpful, please cite our work as:

@article{jung2024relaxing,
  title={Relaxing Accurate Initialization Constraint for 3D Gaussian Splatting},
  author={Jung, Jaewoo and Han, Jisang and An, Honggyu and Kang, Jiwon and Park, Seonghoon and Kim, Seungryong},
  journal={arXiv preprint arXiv:2403.09413},
  year={2024}
}

rain-gs's People

Contributors

crepejung00, hg010303, onground-korea, ku-cvlab, loggerjk


rain-gs's Issues

Toy Experiment

Can you provide code for the toy experiment (1D Gaussian)? I want to understand the workings of the Gaussian splatting model at a deeper level.

Thanks,
Aditya.

Some questions about the Progressive Gaussian low-pass filter control and ABE-Split

Wonderful work!

I have some questions about the effectiveness of the Progressive Gaussian low-pass filter control and the ABE-Split strategy:

  1. Since you adopt SLV random initialization, each Gaussian already has a large covariance and covers a larger area. Why is it still necessary to further enlarge their coverage area using the low-pass filter?
  2. I'm a bit confused about the ABE-Split strategy. It seems that you clone a new Gaussian to a distant area, but how do you ensure that this cloned Gaussian is positioned appropriately?

Thanks.

Initial findings

Hello,

We swapped rain-gs in for the original 3DGS on our generation backend and found that, for the same scene:

  • training is more than 2x slower (75 mins vs 33 mins)
  • fewer splats are generated (2M vs 3M)
  • there appear to be more 'floaters' in the scene compared to original
  • missing detail on focus areas compared to original

Could you give any advice? Presumably we are misconfiguring rain-gs.

Thanks!

torch cuda version

Hey, have you tried CUDA 11.7? I'm unable to install the submodules.
Thanks.

--ours has a RuntimeError

I encountered the following RuntimeError when training with --ours, but there is no problem with --DSV. How can I solve it? (screenshot attached)

Slv and low pass code location

Hi
In the paper you say your method is a one-line modification.
Can you point out the location of these modifications?

The experimental results are inconsistent with the results in the paper.

I have run the command as described in your project without modifying any code, but the results obtained are different from those in your paper.

python train.py -s {dataset_path} --exp_name {exp_name} --eval --ours

scene      PSNR   SSIM  LPIPS
train      20.96  0.76  0.27
truck      22.92  0.82  0.21
drjohnson  28.94  0.89  0.27
playroom   30.09  0.90  0.26

python train.py -s {dataset_path} --exp_name {exp_name} --eval --DSV

scene      PSNR   SSIM  LPIPS
train      20.68  0.77  0.26
truck      20.08  0.75  0.25
drjohnson  28.74  0.89  0.26
playroom   28.53  0.89  0.27

These differ from the results reported in your paper (screenshot of the paper's table attached).

error: option -s not recognized

Thanks for the great work, but I do have a question:
(3dg) alex@alex-System-Product-Name:~/RAIN-GS-main$ python train.py -s data/truck --eval --ours
usage: train.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: train.py --help [cmd1 cmd2 ...]
or: train.py --help-commands
or: train.py cmd --help
Thanks a lot!

One line of code implementation change

Hello, thanks for your work!

The paper mentions both changes can be implemented each with one line of code.
I am not familiar with the entire code of 3DGS, could I please know which line(s) I should look at to understand your changes?

Question about camera poses

Congrats on your paper! I have two quick questions related to camera poses, since your method does not rely on an SfM point cloud for initialization.

  1. How can we obtain the camera poses? I suppose you also run COLMAP on the training data to obtain the poses and then use your proposed SLV random point cloud initialization to train the 3DGS model. I wonder if that is the case.

  2. Can we optimize the camera poses with your method if they are noisy? Or do you assume that they are fixed during training?

Thanks, Phong!

training with sfm and init improvements

Hi, thanks for your great work. Have you tested training with SfM points combined with your initialization improvements? Since SfM points are very sparse, many parts of the scene have no SfM points at all. Would the results be better when using both SfM points and your initialization improvements?
@crepejung00 Looking forward to your reply, thanks.
