
autonomousvision / sdfstudio


A Unified Framework for Surface Reconstruction

License: Apache License 2.0

JavaScript 9.29% Dockerfile 0.22% Python 89.08% HTML 0.19% SCSS 0.44% TypeScript 0.24% Shell 0.54%
3d-reconstruction implicit-neural-representation multi-view-reconstruction nerf pytorch sdf surface-reconstruction

sdfstudio's Introduction


A Unified Framework for Surface Reconstruction

About

SDFStudio is a unified and modular framework for neural implicit surface reconstruction, built on top of the awesome nerfstudio project. We provide a unified implementation of three major implicit surface reconstruction methods: UniSurf, VolSDF, and NeuS. SDFStudio also supports various scene representations, such as MLPs, tri-planes, and multi-resolution feature grids, and multiple point sampling strategies, such as surface-guided sampling as in UniSurf and voxel-surface-guided sampling from NeuralReconW. It further integrates recent advances in the area, such as the utilization of monocular cues (MonoSDF), geometry regularization (UniSurf), and multi-view consistency (Geo-NeuS). Thanks to the unified and modular implementation, SDFStudio makes it easy to transfer ideas from one method to another. For example, Mono-NeuS applies the idea from MonoSDF to NeuS, and Geo-VolSDF applies the idea from Geo-NeuS to VolSDF.

Updates

2023.06.16: Added bakedangelo, which combines BakedSDF with the numerical gradients and progressive training of Neuralangelo.

2023.06.16: Added neus-facto-angelo, which combines neus-facto with the numerical gradients and progressive training of Neuralangelo.

2023.06.16: Support Neuralangelo.

2023.03.12: Support BakedSDF.

2022.12.28: Support Neural RGB-D Surface Reconstruction.

Quickstart

1. Installation: Setup the environment

Prerequisites

CUDA must be installed on the system. This library has been tested with version 11.3. You can find more information about installing CUDA here.
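You can check what your system provides before proceeding, for example:

nvcc --version   # CUDA toolkit version
nvidia-smi       # driver version and visible GPUs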

Create environment

SDFStudio requires python >= 3.7. We recommend using conda to manage dependencies. Make sure to install Conda before proceeding.

conda create --name sdfstudio -y python=3.8
conda activate sdfstudio
python -m pip install --upgrade pip

Dependencies

Install PyTorch with CUDA (this repo has been tested with CUDA 11.3) and tiny-cuda-nn:

pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 -f https://download.pytorch.org/whl/torch_stable.html
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
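As a quick sanity check that PyTorch sees the GPU and that the tiny-cuda-nn bindings import cleanly, you can run:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import tinycudann"

Both commands should succeed (and the first should print True) before you proceed.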

Installing SDFStudio

git clone https://github.com/autonomousvision/sdfstudio.git
cd sdfstudio
pip install --upgrade pip setuptools
pip install -e .
# install tab completion
ns-install-cli

2. Train your first model

The following will train a NeuS-facto model:

# Download some test data: you may need to install curl if your system doesn't have it
ns-download-data sdfstudio

# Train model on the dtu dataset scan65
ns-train neus-facto --pipeline.model.sdf-field.inside-outside False --vis viewer --experiment-name neus-facto-dtu65 sdfstudio-data --data data/sdfstudio-demo-data/dtu-scan65

# Or train a model on the Replica dataset room0 with monocular priors
ns-train neus-facto --pipeline.model.sdf-field.inside-outside True --pipeline.model.mono-depth-loss-mult 0.1 --pipeline.model.mono-normal-loss-mult 0.05 --vis viewer --experiment-name neus-facto-replica1 sdfstudio-data --data data/sdfstudio-demo-data/replica-room0 --include_mono_prior True

If everything works, you should see the following training progress:

[screenshot]

Navigating to the link at the end of the terminal output will load the webviewer (developed by nerfstudio). If you are running on a remote machine, you will need to port forward the websocket port (defaults to 7007). With an RTX 3090 GPU, it takes ~15 mins for 20K iterations, but you can already see reasonable reconstruction results in the webviewer after 2K iterations.

[screenshot]

Resume from checkpoint / visualize existing run

It is also possible to load a pretrained model by running

ns-train neus-facto --trainer.load-dir {outputs/neus-facto-dtu65/neus-facto/XXX/sdfstudio_models} sdfstudio-data --data data/sdfstudio-demo-data/dtu-scan65 

This will automatically resume training. If you do not want to resume training, add --viewer.start-train False to your training command. Note that the order of arguments matters: the dataparser subcommand needs to come after the model arguments.
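For example, a visualization-only run that loads the checkpoint above without resuming optimization would be (same flags as above, with the dataparser subcommand still last):

ns-train neus-facto --trainer.load-dir {outputs/neus-facto-dtu65/neus-facto/XXX/sdfstudio_models} --viewer.start-train False sdfstudio-data --data data/sdfstudio-demo-data/dtu-scan65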

3. Exporting Results

Once you have a trained model, you can extract a mesh and render it.

Extract Mesh

ns-extract-mesh --load-config outputs/neus-facto-dtu65/neus-facto/XXX/config.yml --output-path meshes/neus-facto-dtu65.ply

Render Mesh

ns-render-mesh --meshfile meshes/neus-facto-dtu65.ply --traj interpolate  --output-path renders/neus-facto-dtu65.mp4 sdfstudio-data --data data/sdfstudio-demo-data/dtu-scan65

You will get the following video if everything works properly.

neus-facto-dtu65.mp4

Render Video

First we must create a path for the camera to follow. This can be done in the viewer under the "RENDER" tab. Orient your 3D view to the location where you wish the video to start, then press "ADD CAMERA". This will set the first camera keyframe. Continue to new viewpoints, adding additional cameras to create the camera path. We provide other parameters to further refine your camera path. Once satisfied, press "RENDER", which will display a modal containing the command needed to render the video. Kill the training job (or create a new terminal if you have lots of compute) and run the command to generate the video.

To view all video export options run:

ns-render --help

4. Advanced Options

Training models other than NeuS-facto

We provide many models besides NeuS-facto; see the documentation. For example, if you want to train the original NeuS model, use the following command:

ns-train neus --pipeline.model.sdf-field.inside-outside False sdfstudio-data --data data/sdfstudio-demo-data/dtu-scan65

For a full list of included models, run ns-train --help. Please refer to the documentation for a more detailed explanation of each method.

Modify Configuration

Each model contains many parameters that can be changed, too many to list here. Use the --help command to see the full list of configuration options.

Note that the order of parameters matters! For example, you cannot set --machine.num-gpus after the --data parameter.
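For example (placement shown with flags that appear elsewhere in this document):

# works: machine-level flags before the dataparser subcommand
ns-train neus-facto --machine.num-gpus 1 sdfstudio-data --data data/sdfstudio-demo-data/dtu-scan65

# fails: machine-level flags after --data
ns-train neus-facto sdfstudio-data --data data/sdfstudio-demo-data/dtu-scan65 --machine.num-gpus 1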

ns-train neus-facto --help

Tensorboard / WandB

Nerfstudio supports three different methods to track training progress: the viewer, tensorboard, and Weights & Biases. These visualization tools can also be used in SDFStudio. You can specify which visualizer to use by appending --vis {viewer, tensorboard, wandb} to the training command. Note that only one may be used at a time. Additionally, the viewer only works for methods that are fast (i.e. NeuS-facto and NeuS-acc); for slower methods like NeuS-facto-bigmlp, use the other loggers.
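For example, to train a slower model with tensorboard logging and inspect the run afterwards (assuming the default outputs/ directory layout):

ns-train neus-facto-bigmlp --vis tensorboard --experiment-name neus-facto-bigmlp-dtu65 sdfstudio-data --data data/sdfstudio-demo-data/dtu-scan65
tensorboard --logdir outputs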

5. Using Custom Data

Please refer to the datasets and data format documentation if you would like to use custom datasets.

Built On

tyro
  • Easy-to-use config system
  • Developed by Brent Yi

nerfacc
  • Library for accelerating NeRF renders
  • Developed by Ruilong Li

Citation

If you use this library or find the documentation useful for your research, please consider citing:

@misc{Yu2022SDFStudio,
    author    = {Yu, Zehao and Chen, Anpei and Antic, Bozidar and Peng, Songyou and Bhattacharyya, Apratim 
                 and Niemeyer, Michael and Tang, Siyu and Sattler, Torsten and Geiger, Andreas},
    title     = {SDFStudio: A Unified Framework for Surface Reconstruction},
    year      = {2022},
    url       = {https://github.com/autonomousvision/sdfstudio},
}

sdfstudio's People

Contributors

akanazawa, akristoffersen, andreas-geiger, brentyi, decrispell, ethanweber, evonneng, jake-austin, julianknodt, kerrj, kevinddchen, krahets, ksnzh, liruilong940607, machenmusik, mcallisterdavid, mirmix, mvwouden, nikmo33, niujinshuchong, origamiman72, pablovela5620, pengsongyou, salykova, tancik, terrancewang, thomasw21, trancelestial, zmurez, zunhammer


sdfstudio's Issues

Can't generate 3D using neus, unisurf, volsdf, monosdf, mono-unisurf

Hi, thanks for such a wonderful framework.
I was able to export results from the neus-facto method for my own dataset (given in the instant-ngp-data format) using the following command

ns-train neus-facto --pipeline.model.sdf-field.inside-outside False --vis viewer --experiment-name ignatius instant-ngp-data --data data/ignatius/ignatius_all_images

and after a while (20 minutes) I can export the mesh using the following command

ns-extract-mesh --load-config outputs/ignatius/neus-facto/2023-01-11_112928/config.yml --output-path outputs/ignatius/neus-facto/neus-facto-ignatius.ply

but it doesn't work properly when I run the other methods (neus, unisurf, volsdf, monosdf, mono-unisurf, mono-neus, geo-neus, geo-unisurf, geo-volsdf, neus-acc). Training starts, as you can see from the following pic, but it requires a very long time to finish (around 10 hours). I don't care about the time as long as I get a result.
Command to run unisurf:
ns-train unisurf --pipeline.model.sdf-field.use-grid-feature False --pipeline.model.sdf-field.use-grid-feature True instant-ngp-data --data data/ignatius/ignatius_all_images/

After training:
[screenshot]

but at the end, when I want to export the mesh, it gives me the following error (No checkpoint), which makes sense because there is no checkpoint in the --load-config path.

command for exporting mesh

ns-extract-mesh --load-config outputs/ignatius/unisurf/2023-01-13_133321/config.yml --output-path outputs/ignatius/unisurf/unisurf-ignatius.ply

Error:
[screenshot]

Note: I am using Ignatius from the Tanks and Temples dataset
https://www.tanksandtemples.org/download/

For the orientation part, I have a transforms.json obtained from instant-ngp.

It should be noted that this procedure works properly for neus-facto but not for the others.

So, would you please let me know what I am missing?

I also have another question: is it possible to generate 3D from scratch given only images or a video as input?
If so, would you please let me know how?

Best
Ali

Training with custom data

Describe the bug
I tried to train with the nerfacto model, but it couldn't generate correct results.
I found that our own data is quite different from yours: all your images are square, and I suspect that rectangular images might cause the issue.

Screenshots
[screenshot]

Additional context
I also trained the same data with nerfstudio, and it works without any problem.
But the output of nerfstudio cannot be used to extract mesh data with sdfstudio.
[screenshot]

Would you please help me figure out the problem? Thanks.

Converter?

Multiple issues, but the most problematic is that there seems to be no way to import the COLMAP or nerfstudio data format into the sdfstudio format. I've tried python scripts/datasets/process_nerfstudio_to_sdfstudio.py --data D:\datasets\TanksAndTemples\data\image_sets\Caterpillar\nerf --output-dir D:\datasets\TanksAndTemples\data\image_sets\Caterpillar\sdf --data-type colmap --scene-type object
and not only did I have to solve a bunch of importing issues (the file namings do not correspond to the ones in the nerfstudio output, and the depth maps are not imported for the COLMAP format), but the output is garbage (nothing can be recognized, just some yellow and blue blobs), even though the nerfstudio output looks amazing.

Ideally there should be a script to directly import from COLMAP as there is no way to get the image pairs data out of the nerfstudio format.

Some other issues:

  • the rendering and results are upside-down
  • it is impressive how many geometry-focused methods were implemented in sdfstudio, but I find it strange that the method with arguably the best results, NeuS2, is not even mentioned, at least as a comparison
  • why create yet another repo, since nerfstudio is so modular and easy to upgrade with new features?

Thank you for this work; unfortunately, it is very hard to test/use if you do not have a few days to dedicate to overcoming these kinds of issues.

Understanding noise in the depth map in NeusFacto vs. MonoSDF

So I've been trying to understand how NeusFacto compares to other methods, specifically for indoor scenes. I've noticed that NeusFacto does a much better job in image quality, but there's a strange amount of noise, specifically in the depth reconstruction. I've shared examples from a room I captured using Polycam.

https://api.wandb.ai/report/pablovela5620/psjincx1

Would love to know if y'all have any insight on this. Is this mostly a problem with the rendering method of NeuS vs. VolSDF?
I also noticed this issue when I experimented with high-res Tanks and Temples on Neus-facto vs. monosdf.

Here's NeusFacto:
[screenshot]

and here's MonoSDF

[screenshot]

tinycuda error

I installed the tinycuda with the command

pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch

When I try to import tinycudann, I get this error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/leo/anaconda3/envs/sdfstudio/lib/python3.10/site-packages/tinycudann/__init__.py", line 9, in <module>
    from tinycudann.modules import free_temporary_memory, NetworkWithInputEncoding, Network, Encoding
  File "/home/leo/anaconda3/envs/sdfstudio/lib/python3.10/site-packages/tinycudann/modules.py", line 50, in <module>
    C = importlib.import_module(f"tinycudann_bindings.{cc}_C")
  File "/home/leo/anaconda3/envs/sdfstudio/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
ImportError: /home/leo/anaconda3/envs/sdfstudio/lib/python3.10/site-packages/tinycudann_bindings/_61_C.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN3c104impl8GPUTrace13gpuTraceStateE

I tried this in the following environment:
Ubuntu 22.04
Python 3.10
PyTorch 1.12.1+cu113
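A likely cause, though not confirmed in this thread: the missing c10::impl::GPUTrace symbol was introduced in newer PyTorch releases, so the tinycudann bindings were probably compiled against a different PyTorch than the 1.12.1 in this environment. Rebuilding the bindings against the installed torch usually resolves such mismatches:

pip uninstall tinycudann
pip install --no-cache-dir git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch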

Import error when using instant-ngp

Every time I use the instant-ngp method, I run into a problem like this:

ImportError: cannot import name 'csrc' from 'nerfacc'
[screenshot]

How can I solve it?

my command is: ns-train instant-ngp --pipeline.model.background-color white --vis viewer --viewer.skip-openrelay True --viewer.websocket-port 7007 --experiment-name with_white_background_50 instant-ngp-data --data /home/zyan/Downloads/data/with_white_background/images_50/

BakedSDF-MLP PyTorch Memory Allocation

OS : Ubuntu-18.04 LTS (WSL2)
CUDA : 11.3
Pytorch : 1.12.1
GPU : 3090ti

I've had no issue training other models like neus-facto, but when I try bakedsdf or bakedsdf-mlp, PyTorch reserves almost every single byte of my GPU memory. Is this an actual bug, or does this model require more than 24 GB?

[screenshot]

Missing `triplane` implementation

Hi,

First, thanks for open sourcing this code. It's super useful!

Here I report a bug I found. I'm not sure if I'm mistaken about something, so I would like to confirm it here. Thanks!

I can't use the tri-plane as the SDF field, whereas the documentation says the tri-plane feature is supported.

Specifically, it reports an error when I run: ns-train neus --pipeline.model.sdf-field.use-grid-feature True --pipeline.model.sdf-field.encoding-type tri-plane sdfstudio-data --data data/sdfstudio-demo-data/dtu-scan65/

[screenshot]

I checked the code. It seems the code doesn't implement the tri-plane encoding, if I understand correctly.

class TensorVMEncoding(Encoding):

Did I miss something in the documentation?

Thanks,
Weiwei.

Instant NGP takes 6 GB during training, but GPU OOM on 16 GB with eval

Hi,

I was trying to train instant-ngp with standard params, but it suddenly crashes with CUDA OOM when trying to evaluate.
What's surprising is that the GPU memory consumption is only 6 GB during training. Any advice on what I should decrease?
The machine is a GCP V100, 16 GB.

  File "/home/dmytromishkin/miniconda3/envs/sdfstudio/lib/python3.8/site-packages/tinycudann/modules.py", line 179, in forward
    x_padded.to(torch.float).contiguous(),
RuntimeError: CUDA out of memory. Tried to allocate 1.81 GiB (GPU 0; 15.78 GiB total capacity;
 5.98 GiB already allocated; 460.25 MiB free; 9.38 GiB reserved in total by PyTorch) 
If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. 
 See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
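Evaluation renders full images in chunks, which needs far more peak memory than the small ray batches used during training; that likely explains the gap. A hedged suggestion, assuming this fork exposes the same option as nerfstudio, is to shrink the evaluation chunk size, e.g.:

ns-train instant-ngp --pipeline.model.eval-num-rays-per-chunk 4096 sdfstudio-data --data data/sdfstudio-demo-data/replica-room0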

Support for different intrinsics for every camera

We have a dataset from a multi-camera capture system with different intrinsics for every camera. SDFStudio currently supports only a single set of intrinsics. Is there a way to enable multiple intrinsics?

When I try to update the meta_data.json with an array of intrinsics, it gives an error.

Providing your own data.

I don't see in your documentation how I can provide my own dataset.

Is it the same as nerfstudio's ns-process-data?
I see you have mask data in your dataset. How do I apply that mask for training?
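One hedged pointer, since the thread has no confirmed answer here: the demo data does ship with masks, and the surface models appear to expose a foreground-mask loss weight, so enabling mask supervision might look like the following (flag name assumed, not verified):

ns-train neus-facto --pipeline.model.fg-mask-loss-mult 1.0 sdfstudio-data --data data/your-dataset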

ns-render-mesh crashing, `load_from_json`

Describe the bug
Trying to render a mesh following what was described on the README

To Reproduce
Steps to reproduce the behavior:
After generating mesh, used the command
ns-render-mesh --meshfile meshes/neus-facto-dtu65.ply --traj interpolate --data.data data/sdfstudio-demo-data/dtu-scan65 --output-path renders/neus-facto-dtu65.mp4

get the following error

Creating trajectory video
[Open3D WARNING] GLFW Error: X11: The DISPLAY environment variable is missing
[Open3D WARNING] Failed to initialize GLFW
Traceback (most recent call last):
  File "/home/pablo/miniconda3/envs/sdfstudio/bin/ns-render-mesh", line 8, in <module>
    sys.exit(entrypoint())
  File "/home/pablo/0Dev/repos/sdfstudio/scripts/render_mesh.py", line 246, in entrypoint
    tyro.cli(RenderTrajectory).main()
  File "/home/pablo/0Dev/repos/sdfstudio/scripts/render_mesh.py", line 231, in main
    _render_trajectory_video(
  File "/home/pablo/0Dev/repos/sdfstudio/scripts/render_mesh.py", line 103, in _render_trajectory_video
    vis.get_render_option().load_from_json("scripts/render.json")
AttributeError: 'NoneType' object has no attribute 'load_from_json'

Expected behavior
Should be outputting an mp4.


Additional context
This is on a remote machine using vscode with ssh. Not sure if that's relevant or not

Also, congrats on the release! I'm excited to use this library; it's exactly what I was looking for.

monosdf vs mono-neus

I found that mono-neus produces much worse results than monosdf; is that reasonable? Here are the results:

MonoSDF:
[screenshot]

Mono-Neus:
[screenshot]

I kept the hyper-params consistent for both methods, and here are the commands:
MonoSDF: OMP_NUM_THREADS=4 CUDA_VISIBLE_DEVICES=4,5 ns-train monosdf --pipeline.model.sdf-field.use-grid-feature True --pipeline.model.sdf-field.hidden-dim 128 --pipeline.model.sdf-field.num-layers 2 --pipeline.model.sdf-field.num-layers-color 2 --pipeline.model.sdf-field.use-appearance-embedding True --pipeline.model.sdf-field.geometric-init True --pipeline.model.sdf-field.inside-outside True --pipeline.model.sdf-field.bias 0.6 --pipeline.model.sdf-field.beta-init 0.1 --pipeline.datamanager.train-num-images-to-sample-from 1 --pipeline.datamanager.train-num-times-to-repeat-images 0 --trainer.steps-per-eval-image 5000 --trainer.steps-per-save 2000 --trainer.save-only-latest-checkpoint False --pipeline.model.background-model none --vis tensorboard --experiment-name monosdf-yungu-normal0.05 --pipeline.model.mono-depth-loss-mult 0.001 --pipeline.model.mono-normal-loss-mult 0.05 --pipeline.datamanager.train-num-rays-per-batch 2048 --machine.num-gpus 2 --trainer.max-num-iterations 25000 sdfstudio-data --data /mnt/cap/zjh/data/sdfstudio_data/yungu --include_mono_prior True --skip_every_for_val_split 30

Mono-Neus: OMP_NUM_THREADS=4 CUDA_VISIBLE_DEVICES=0,1 ns-train mono-neus --pipeline.model.sdf-field.use-grid-feature True --pipeline.model.sdf-field.hidden-dim 128 --pipeline.model.sdf-field.num-layers 2 --pipeline.model.sdf-field.num-layers-color 2 --pipeline.model.sdf-field.use-appearance-embedding True --pipeline.model.sdf-field.geometric-init True --pipeline.model.sdf-field.inside-outside True --pipeline.model.sdf-field.bias 0.6 --pipeline.model.sdf-field.beta-init 0.1 --pipeline.datamanager.train-num-images-to-sample-from 1 --pipeline.datamanager.train-num-times-to-repeat-images 0 --trainer.steps-per-eval-image 5000 --trainer.steps-per-save 2000 --trainer.save-only-latest-checkpoint False --pipeline.model.background-model none --vis tensorboard --experiment-name mono-neus-yungu --pipeline.model.mono-depth-loss-mult 0.001 --pipeline.model.mono-normal-loss-mult 0.05 --pipeline.datamanager.train-num-rays-per-batch 2048 --machine.num-gpus 2 --trainer.max-num-iterations 25000 sdfstudio-data --data /mnt/cap/zjh/data/sdfstudio_data/yungu --include_mono_prior True --skip_every_for_val_split 30

For mono-neus, I tried using the same LR scheduler as monosdf and adjusting the anneal_end, but neither worked.

Can't export pointcloud from a geo-neus model

Describe the bug
When I finished training a geo-neus model, I couldn't export a pointcloud with "ns-export pointcloud".

Screenshots
The command I entered is as follows:
ns-export pointcloud --load-config outputs/geo-volsdf-dtu24/geo-neus/2023-03-16_114255/config.yml --output-dir pointcloud/scan24 --num-rays-per-batch 4096
The console error is as follows:
[screenshot]

Additional context
I found the function and tried to print pipeline.datamanager.next_train(0). It has 3 items: "RayBundle", "images" and "uv".

Only showing cameras but not the render

I am only seeing the cameras but not the render itself when following the intro tutorial.

Steps to reproduce the behavior:
I am running the intro example of the skull on a Google remote compute engine.
After cloning the repo, installing dependencies, and port forwarding to port 7007, I expected to get this result:

[screenshot]

However I find this:

[screenshot]

A similar issue occurred in the nerfstudio project itself and was fixed by removing webrtc (see: nerfstudio-project/nerfstudio#1579). I tried the nerfstudio tutorial itself and it indeed worked as expected.

Any suggestions?


Got empty scene using monosdf on customized data

Describe the bug
Thanks so much for the amazing work! I am new to this project and have run into some trouble that I haven't been able to figure out for a few days: I get an empty scene using monosdf on customized RGB-D data with known camera poses. Thanks for your help!

To Reproduce
ns-train monosdf --data data/[my_crop_data(attached link below)]
original: https://drive.google.com/drive/folders/1Qk23lXA0E-00HGI9QsLCXiHNnzlKyFYM?usp=share_link
preprocessed (run my script down below): https://drive.google.com/drive/folders/1CuPpnpquuxoPQUzc22nbD-C6_u72VzTH?usp=share_link
Expected behavior
There should be a plant reconstructed in the middle, as shown in the screenshot.

Screenshots
It seems the camera poses are registered correctly:
Screenshot from 2023-03-16 09-34-25
Screenshot from 2023-03-16 09-34-37
But there is nothing in the extracted mesh after training for 120000 steps:
Screenshot from 2023-03-16 09-37-13

Additional context
The preprocessed data can be downloaded from the links above.

To preprocess my data, I modified the example script process_neuralrgbd_to_sdfstudio.py into the following script:
https://drive.google.com/file/d/18-0-oePotDbRdJapxfiTAfgbq0qGHKBt/view?usp=share_link

Remove webrtc from nerfstudio viewer

Problem

The viewer often suffers from showing Server Connected but Render Disconnected, and will not render any images (also see #31).

Proposed Solution

The nerfstudio folks recently fixed this issue by removing webrtc entirely from the viewer and sending everything over TCP. I'd like to propose that this project follow suit for improved viewer stability. I'm happy to do it myself and submit a PR if that's OK! This is the PR that fixed the issue.

Camera Poses not matching the viewer

Describe the bug
I am trying to estimate the surface from a dataset created in Blender. The set of cameras surrounds the object completely; screenshot attached below. However, the nerfstudio viewer is almost superimposing them together, due to which it's not learning anything.
To Reproduce
Attaching a drive link with images, masks and meta_data.json
The link

Expected behavior
Screenshot_2023-03-08_23-07-57

Above is a picture of the cameras and object in Blender; below is the viewer output. I have used the same camera poses in other code bases and it worked there.
Screenshot_2023-03-08_23-07-28
Please help!

Implementation of BakedSDF

https://bakedsdf.github.io/: most of the components are already there (VolSDF | MipNeRF 360 | marching cubes). The largest missing part is the appearance model that uses spherical Gaussians. Would love to see this brought into SDFStudio.

Extracting MonoCues

Having trouble extracting monocular cues.

It just doesn't extract.

Do I need to get camera poses beforehand?

[screenshot]

DTU demo scene is flipped upside down

Hi,
I tried to reproduce the "Train your first model" steps in the README, i.e. I executed the following command:

ns-train neus-facto --pipeline.model.sdf-field.inside-outside False --vis viewer --experiment-name neus-facto-dtu65 sdfstudio-data --data data/sdfstudio-demo-data/dtu-scan65

However, the input images are loaded upside down. Hence I could not reproduce the viewer state that is shown in the README image https://github.com/autonomousvision/sdfstudio/blob/master/media/viewer_screenshot.png. As the viewer does not allow a full 180 degree flip, it was not possible to view the scene in the correct orientation.

I could fix the problem by setting auto_orient in the file nerfstudio/data/dataparsers/sdfstudio_dataparser.py to True. However, I'm not sure if this is a bug or intended, hence I opened this issue.

Mesh sharpness decreased for the latest code

Hi @niujinshuchong! I found that the latest code produces meshes with less sharpness compared to the code before 6f51ffd.

  • Only tested neus-facto.
  • All the args related to RefNeRF were turned off for consistency.

I've read the code changes and it's still not clear to me what the problem is. Did you get the same result?

Even with the same parameters and dataset, the mesh from the sdfstudio monosdf pipeline is different from the original monosdf mesh

Hi, dear author.
I trained the sdfstudio monosdf pipeline using the same dataset and the same parameters as the original monosdf, but the meshes are different. The floor of the mesh extracted from the sdfstudio monosdf pipeline is raised, whereas the floor of the mesh extracted from the original monosdf is smooth and flat. Have you had any similar problems when comparing to the original monosdf results?
My parameters are:
ns-train monosdf --pipeline.model.sdf-field.inside-outside True --pipeline.model.eikonal-loss-mult 0.05 --pipeline.model.mono-depth-loss-mult 0.1 --pipeline.model.mono-normal-loss-mult 0.05 --pipeline.model.sdf-field.use-grid-feature True --pipeline.model.sdf-field.bias 0.9 --pipeline.model.sdf-field.divide-factor 1.5 --trainer.steps-per-save 50000 --trainer.max-num-iterations 200000 --vis viewer --experiment-name monosdf_cyberlab_train3 sdfstudio-data --data xx/scan1 --include_mono_prior True

Thanks!

About eikonal loss in tri-plane

Thanks for your work! I'm now using a tri-plane to represent the 3D scene with sphere initialization, but my eikonal loss is quite high, around 40. I wonder why.
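For reference, the eikonal term penalizes SDF gradient norms that deviate from 1, so a value around 40 means the predicted gradients are far from unit length almost everywhere. A minimal sketch of the standard loss (illustrative, not this repo's exact code):

import torch

def eikonal_loss(gradients: torch.Tensor) -> torch.Tensor:
    # gradients: (N, 3) spatial gradients of the SDF at sampled points
    return ((gradients.norm(2, dim=-1) - 1.0) ** 2).mean()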

Visualize trained model with viewer

Hi, thank you for great work!

I'm working with your code and I wonder how to visualize trained model with viewer.

I read that I should just resume training with the option --viewer.start-train False.

However, I didn't find where the code loads the configuration file of the checkpoint when loading the checkpoint.

  1. So I wonder: does not loading the configuration of the checkpoint affect the visualization result?
  2. And when I resume training from a checkpoint, does it affect further training?

Thanks :)

Can it support RGB images?

I use process_nerfstudio_to_sdfstudio.py to process nerfstudio data, and then use extract_monocular_cues.py, but the image size is fixed. Can it adapt to my data size? When I skip that step and run training, I get a bad result. Can it support RGB images?

Can't use the viewer

Everything seems to be going right until I click on the link to the viewer.

[screenshot]

I get this stack trace:

Viewer at: https://viewer.nerf.studio/versions/22-12-02-0/?websocket_url=ws://localhost:7007
Exception in thread Thread-16 (loop_in_thread):
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/home/tommysugg/sdfstudio/nerfstudio/viewer/server/viewer_utils.py", line 365, in loop_in_thread
    loop.run_until_complete(self.send_webrtc_answer(data))
  File "/usr/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
    return future.result()
  File "/home/tommysugg/sdfstudio/nerfstudio/viewer/server/viewer_utils.py", line 546, in send_webrtc_answer

And the log says something different:

ZMQWebSocketBridge using zmq_port="58147" and websocket_port="7007"
opened: <__main__.WebSocketHandler object at 0x7fa11e984ac0>
Sending entire scene state due to websocket connection established.
closed: <__main__.WebSocketHandler object at 0x7fa11e984ac0>

I've tried this with many of the models and it happens every time.

Need advice on custom data with poor result

Hi,
Thank you for the amazing work.
I have used instant-ngp and nerfstudio to generate meshes; however, sdfstudio impressed me with its mesh generation.
When I use the sample data, I always get a good render and mesh.
However, when I use custom data, the results are always poor, although I have experimented and followed all the discussions on this page.

Could you give me advice on processing this data: https://drive.google.com/file/d/1hLnwXon29bc4SiyuxiV32VAfQnZY2gTd/view?usp=share_link
A screenshot of the above shared data looks like this:
Screenshotelephant

The steps I have done are as follows:

  • Used colmap to generate a transforms.json file, as usual for instant-ngp
  • Tested with instant-ngp, which gives a very good result (video 1 below)
  • Tested with sdfstudio by:
    ns-train neus-facto --pipeline.model.sdf-field.inside-outside False --vis viewer instant-ngp-data --data D:\Hung\sdfstudio\data_cu\elephantNERFSDF\PNG_colmap
    This gives a poor render
  • I also used the ns-process-data script, then rendered with nerfstudio, which gives a good result
  • Used the script process_nerfstudio_to_sdfstudio.py to convert the nerfstudio json file to a meta_data.json file, and ran sdfstudio (neus-facto); got a very poor result (video 2)

I have checked the discussions in #2 and #49 very carefully, but the rendering result is still very poor, like in #49.

Below videos are rendering with instant-ngp
Video 1: https://user-images.githubusercontent.com/110378166/222356933-3112bdb8-3c96-4c30-a516-fc3af3e88706.mp4

And rendering with sdfstudio
video 2: https://user-images.githubusercontent.com/110378166/222357045-c8175d1c-f59c-49db-82ab-1f9c5b81f6fa.mp4

The mesh I got from sdfstudio like this:
ScreenshotElephantMesh

Any help is highly appreciated.

Can't run docker image due to missing Qt 5.15 required by pymeshlab 2022.2.post2

Describe the bug
Can't run "ns-install-cli" on docker images.

  1. nerfstudio depends on "pymeshlab==2022.2.post2" (inside pyproject.toml)
  2. pymeshlab 2022.2.post2 requires Qt 5.15
  3. Qt 5.15 can't simply be installed using apt-get and needs to be built
  4. It takes hours, and I'm getting errors trying to build it on the specified docker image

To Reproduce
Steps to reproduce the behavior:
Simply build the Dockerfile provided and try to run the image


Screenshots
[screenshot]

Additional context
running "qmake -v" shows I have Qt 5.12:
QMake version 3.1 Using Qt version 5.12.8 in /usr/lib/x86_64-linux-gnu

Progressive Training in NeuS2

Is your feature request related to a problem? Please describe.
Training NeuS usually takes a lot of time. Although using multi-resolution hash encoding can be faster, it leads to under-fitting at low resolutions or over-fitting at high resolutions, as analyzed in Sec. 4.3 of NeuS2.

Describe the solution you'd like
A training strategy that gradually increases the bandwidth of the spatial grid encoding may be helpful (a sketch follows below).

Describe alternatives you've considered
None

Additional context
Please refer to the NeuS2 paper.
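As a sketch of that idea (illustrative only; the function and schedule are assumptions, not code from this repo), one can weight the grid-encoding features so that finer levels are unlocked gradually:

import numpy as np

def progressive_level_mask(step, num_levels=16, features_per_level=2,
                           start_levels=4, steps_per_level=500):
    # Per-feature weights: 1.0 for active (coarse) levels, 0.0 for still-locked
    # (fine) levels; one extra level is unlocked every `steps_per_level` steps.
    active = min(num_levels, start_levels + step // steps_per_level)
    mask = np.zeros(num_levels * features_per_level, dtype=np.float32)
    mask[: active * features_per_level] = 1.0
    return mask

# usage: multiply the hash/grid encoding output by the mask each iteration,
# e.g. feats = encoding(x) * mask  (broadcast over the batch dimension)

Note that bakedangelo and neus-facto-angelo in the Updates section above already adopt Neuralangelo-style progressive training.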

Neus2

Currently NeuS is implemented, but its successor, NeuS2, promises improved accuracy and faster processing. I am quite well equipped on the photogrammetry side to attempt extending nerfstudio to support this method myself (I am the developer of OpenMVS and have watched the progress of NeRFs closely), but I have little practical experience with ML and nerfstudio. Is there someone who could guide me in implementing this?

How to overcome ripples

Hi!

We are using SDFStudio to train our datasets, and we often get "ripple" (i.e., wavy) patterns in our reconstructions. I wonder if you have also stumbled upon these issues, and if you have any strategies to overcome them.
This is a sample of a depth map calculated via SDF root-finding with the neus-facto method.

[screenshot]

Server connected, Render disconnected

Hi
I want to visualize my model using the viewer tool, accessing it through the link provided below the training output. But as you can see in the following pic, the server is connected while the render is disconnected.
What could be the issue?
I can only see the image orientation and not the 3D model.

[screenshot]

Instant NGP is divergent from NeRFStudio

Describe the bug
The NGPModel is divergent from the one found in nerfstudio. This raises an issue due to incompatibility with newer nerfacc versions (i.e., nerfacc.unpack_info()).

To Reproduce
Steps to reproduce the behavior:

  1. Install SDFStudio.
  2. Run ns-train instant-ngp sdfstudio-data --data data/sdfstudio-demo-data/replica-room0.

Expected behavior
Regular training.

Screenshots

Traceback (most recent call last):
  File "/home/joey/venvs/sdfstudio/bin/ns-train", line 8, in <module>
    sys.exit(entrypoint())
  File "/home/joey/code/sdfstudio/scripts/train.py", line 248, in entrypoint
    main(
  File "/home/joey/code/sdfstudio/scripts/train.py", line 234, in main
    launch(
  File "/home/joey/code/sdfstudio/scripts/train.py", line 173, in launch
    main_func(local_rank=0, world_size=world_size, config=config)
  File "/home/joey/code/sdfstudio/scripts/train.py", line 88, in train_loop
    trainer.train()
  File "/home/joey/code/sdfstudio/nerfstudio/engine/trainer.py", line 151, in train
    loss, loss_dict, metrics_dict = self.train_iteration(step)
  File "/home/joey/code/sdfstudio/nerfstudio/utils/profiler.py", line 43, in wrapper
    ret = func(*args, **kwargs)
  File "/home/joey/code/sdfstudio/nerfstudio/engine/trainer.py", line 318, in train_iteration
    _, loss_dict, metrics_dict = self.pipeline.get_train_loss_dict(step=step)
  File "/home/joey/code/sdfstudio/nerfstudio/pipelines/dynamic_batch.py", line 78, in get_train_loss_dict
    model_outputs, loss_dict, metrics_dict = super().get_train_loss_dict(step)
  File "/home/joey/code/sdfstudio/nerfstudio/utils/profiler.py", line 43, in wrapper
    ret = func(*args, **kwargs)
  File "/home/joey/code/sdfstudio/nerfstudio/pipelines/base_pipeline.py", line 260, in get_train_loss_dict
    model_outputs = self._model(ray_bundle)
  File "/home/joey/venvs/sdfstudio/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/joey/code/sdfstudio/nerfstudio/models/base_model.py", line 142, in forward
    return self.get_outputs(ray_bundle)
  File "/home/joey/code/sdfstudio/nerfstudio/models/instant_ngp.py", line 171, in get_outputs
    ray_samples, packed_info, ray_indices = self.sampler(
  File "/home/joey/venvs/sdfstudio/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/joey/code/sdfstudio/nerfstudio/model_components/ray_samplers.py", line 481, in forward
    ray_indices = nerfacc.unpack_info(packed_info)
  File "/home/joey/venvs/sdfstudio/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
TypeError: unpack_info() missing 1 required positional argument: 'n_samples'

Additional context
I tried hotfixing the error by updating the problematic methods in models/instant_ngp.py and model_components/ray_samplers.py. While it runs, the results are not great, which makes me doubt that this was done correctly. Instant-NGP is not an SDF-based technique, but it would be nice not to also need nerfstudio installed to run it, e.g. for comparison baselines.

Comparison on public benchmarks against SOTA

Is your feature request related to a problem? Please describe.

I've been playing with SDFStudio (truly great work), but I found that the performance (say, chamfer distance) of each individual method lags behind what is reported in the corresponding paper. For instance, on DTU scan 65, the original VolSDF reports a chamfer distance of 0.7, but the vanilla VolSDF model in SDFStudio produced a chamfer distance of about 1.0 on the same data. Other methods are similar, e.g., the NeuS-facto method has a chamfer distance of about 1.0, etc.

Describe the solution you'd like

Could you provide configurations/instructions to reproduce the SOTA?
Could you also provide any benchmark you have done of SDFStudio on public benchmarks, say, DTU?

Describe alternatives you've considered

A systematic document on how to tune the hyperparameters and which components to use to achieve the performance reported in the literature would be very useful.


Get mesh with small aabb

Hi, thanks for sharing the great work!
When I use neus-facto, I can get a mesh with the default parameters. There is a small object in the middle of the mesh, so I set the aabb to [[-0.2,-0.2,-0.2], [0.2,0.2,0.2]], but then I can't get any mesh output. Am I setting the parameter up incorrectly?

Overriding variables in config file

Hi,

I have a question regarding the config priorities. In my use case, I'd like to use a .yml file with most parameters, and only override the data and output folders. I'm using the following command:

    ns-train neus-facto \
        --data $PROJECT_FOLDER \
        --output-dir $PROJECT_FOLDER/logs \
        --trainer.load-config $CONFIG

It seems that the $CONFIG is loaded, but the --data option is not overridden and falls back to the default data folder (data/DTU/scan65). Is overriding from the command line supported in SDFStudio?

How to improve training efficiency

Hi @niujinshuchong! I noticed that the training speed of neus-facto is ~3 times slower than nerfacto.

I have tested the time per iteration:

# nerfstudio
loss_time = 15.6 ms
backward_time = 6.8 ms
total = 22.5 ms
# sdfstudio
loss_time = 30.2 ms
backward_time = 44.0 ms
total = 74.3 ms

I'm wondering about your insights or suggestions to improve the training efficiency. Thanks!

How to reproduce the results in monosdf?

I've tried with the default setting of monosdf, but the result was not so good as in the monosdf paper. What should I do to reproduce the result in monosdf paper?

Mask on DTU dataset

Hi, congratulations for this great work. I have tried to train NeUS on a sample of DTU dataset. However, I get the following results:

image

Is there any way to train using background masks as well ?
Bests,

Riccardo

slower than the original monosdf

I found that this implementation is slower than the original monosdf. The CPU usage is quite high and the GPU usage is about 30%.

So I:

  1. cache all images here
  2. implement a naive collate_fn without allocating memory here since batch size is 1
  3. also transfer images to the GPU to avoid sampling on the CPU
  4. remove the evaluation code.

These approaches speed up each iteration from ~800 ms to ~350 ms, but it is still slower than the original monosdf. The CPU usage is still high (most of my 96 cores are used), and the GPU usage is about 60%.
Do you have suggestions about this problem?
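For reference, a minimal version of the pass-through collate described in step 2 might look like this (illustrative, not the repo's exact code):

def passthrough_collate(batch):
    # batch size is 1, so skip the default stacking/copying entirely
    assert len(batch) == 1
    return batch[0]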

Here is my test command:
CUDA_VISIBLE_DEVICES=0,1,2,3 ns-train monosdf --pipeline.model.sdf-field.use-grid-feature True --pipeline.model.sdf-field.hidden-dim 128 --pipeline.model.sdf-field.num-layers 2 --pipeline.model.sdf-field.num-layers-color 2 --pipeline.model.sdf-field.use-appearance-embedding True --pipeline.model.sdf-field.geometric-init True --pipeline.model.sdf-field.inside-outside True --pipeline.model.sdf-field.bias 0.6 --pipeline.model.sdf-field.beta-init 0.1 --pipeline.datamanager.train-num-images-to-sample-from 1 --pipeline.datamanager.train-num-times-to-repeat-images 0 --trainer.steps-per-eval-image 5000 --pipeline.model.background-model none --vis tensorboard --experiment-name monosdf-yungu-normal0.05-smooth --pipeline.model.mono-depth-loss-mult 0.001 --pipeline.model.mono-normal-loss-mult 0.05 --pipeline.model.smooth-loss-mult 0.005 --pipeline.datamanager.train-num-rays-per-batch 2048 --machine.num-gpus 4 --trainer.max-num-iterations 500 sdfstudio-data --data $MY_DATA --include_mono_prior True --skip_every_for_val_split 30

Texturing mesh

Hi,

Thanks for sharing the great work!

I spotted that we have scripts/texture.py from nerfstudio.

Is mesh texturing ready to use here? Could you give some directions on how to use it in this repo?

Thanks again.

Sparse points loss?

In the Geo-NeuS paper, one of the losses is what the authors call the "view-aware SDF loss" (eq. 15 in their paper).
Is this implemented somewhere in this repository? I can only find references to the patchmatch loss, but no code related to the SDF loss. Am I missing something? Is there any implementation plan for this?
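For context, a minimal sketch of what eq. 15 amounts to, based on the paper's description rather than any code in this repository: the SDF is driven to zero at the COLMAP sparse points visible in the current view.

import torch

def sparse_points_sdf_loss(sdf_values: torch.Tensor) -> torch.Tensor:
    # sdf_values: (N,) SDF predicted at the visible sparse SfM points;
    # the reconstructed surface should pass through these points
    return sdf_values.abs().mean()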

Best,

Sergio
