
Neural Scene Flow Fields using pytorch-lightning, with potential improvements

License: MIT License

Topics: nerf, neural-radiance-fields, pytorch, view-synthesis, pytorch-lightning, nsff, neural-scene-flow-fields

nsff_pl's Introduction

nsff_pl

Neural Scene Flow Fields using pytorch-lightning. This repo reimplements the NSFF idea, but modifies several operations based on observation of NSFF results and discussions with the authors. For discussion details, please see the issues of the original repo. The code is based on my previous implementation.

The main modifications are the following:

  1. Remove the blending weight in the static NeRF. I adopt the addition strategy of NeRF-W (a sketch is given below).
  2. Remove the disocclusion head. I use warped dynamic weights as an indicator of whether occlusion occurs. At the beginning of training, this indicator acts reliably, as shown below:


Top: Reference image. Center: Warped images; artifacts appear at boundaries. Bottom: Estimated disocclusion.

As training progresses, the disocclusion indicator tends toward 1 almost everywhere, i.e. occlusion no longer occurs even under warping. In my opinion, this means the empty space learns to "move a little" to avoid the space occupied by dynamic objects (although the network has never been trained to do so).

  3. Compose the static and dynamic components in image warping as well.

Implementation details are in models/rendering.py.
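
For intuition, here is a minimal sketch of the NeRF-W-style additive composition, assuming per-sample static and transient densities and colors (names and shapes are illustrative, not the actual code in models/rendering.py):

import torch

def composite_additive(static_sigmas, static_rgbs,
                       transient_sigmas, transient_rgbs, deltas):
    # Illustrative shapes: sigmas/deltas (N_rays, N_samples), rgbs (N_rays, N_samples, 3).
    # The two densities simply add up; there is no learned blending weight.
    alphas = 1 - torch.exp(-deltas * (static_sigmas + transient_sigmas))
    # Transmittance is computed from the combined density.
    T = torch.cumprod(torch.cat([torch.ones_like(alphas[:, :1]),
                                 1 - alphas + 1e-10], -1), -1)[:, :-1]
    static_alphas = 1 - torch.exp(-deltas * static_sigmas)
    transient_alphas = 1 - torch.exp(-deltas * transient_sigmas)
    # Each field contributes color in proportion to its own alpha.
    rgb = (T[..., None] * (static_alphas[..., None] * static_rgbs +
                           transient_alphas[..., None] * transient_rgbs)).sum(1)
    return rgb, alphas * T  # composed color and per-sample weights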

The implementation is verified on several sequences and produces visually plausible results. Qualitatively, these modifications produce better results on the kid-running scene compared to the original repo.

Full reconstruction


Left: GT. Center: this repo (PSNR=35.02). Right: pretrained model of the original repo (PSNR=30.45).

Background reconstruction


Left: this repo. Right: pretrained model of the original repo (by setting raw_blend_w to 0).

Fix-view-change-time (view 8, times from 0 to 16)


Left: this repo. Right: pretrained model of the original repo.

Fix-time-change-view (time 8, views from 0 to 16)


Left: this repo. Right: pretrained model of the original repo.

Novel view synthesis (view 8, spiral)

Time interpolation (view 8, add 10 frames between each integer time from time 0 to 29)




Left: this repo. Right: pretrained model of the original repo. The 2nd and 3rd rows are the 0th and 29th frames, to show the difference in the background.

The color of our method is more vivid and closer to the GT images, both qualitatively and quantitatively (this is not due to gif compression). Also, even without any kind of supervision (either direct or self-supervision), the network learns to separate the foreground and the background more cleanly than the original implementation, which is unexpected! Bad fg/bg separation not only means that the background actually changes every frame, but also that color information is not leveraged across time, so the reconstruction quality degrades, as can be seen in the original NSFF result towards the end.

Bonus - Depth

Our method also produces smoother depths, although this might not have a direct impact on image quality.



Top left: static depth from this repo. Top right: full depth from this repo.
Bottom left: static depth from the original repo. Bottom right: full depth from the original repo.

More results








💻 Installation

Hardware

  • OS: Ubuntu 18.04
  • NVIDIA GPU with CUDA>=10.2 (tested with 1 RTX2080Ti)

Software

  • Clone this repo by git clone --recursive https://github.com/kwea123/nsff_pl
  • Python>=3.7 (installation via anaconda is recommended, use conda create -n nsff_pl python=3.7 to create a conda environment and activate it by conda activate nsff_pl)
  • Install core requirements by pip install -r requirements.txt
  • Install cupy via pip install cupy-cudaxxx, replacing xxx with your CUDA version (e.g. cupy-cuda102 for CUDA 10.2).

🔑 Training

Steps

Data preparation

Create a root directory (e.g. foobar), create a folder named frames, and put your images (at least 30 are recommended) under it, so the structure looks like:

└── foobar
    └── frames
        ├── 00000.png
        ...
        └── 00029.png

The image names can be arbitrary, but their lexical order must match the time order! E.g. you can name the images a.png, c.png, dd.png, but the time order must be a -> c -> dd.
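
For reference, the frames are expected to be read in lexically sorted order, roughly like the sketch below (illustrative, not the actual dataset code):

import glob, os

root_dir = 'foobar'  # your root directory
# the lexical sort here is what defines the time order of the frames
image_paths = sorted(glob.glob(os.path.join(root_dir, 'frames', '*')))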

Motion Mask

In order to correctly reconstruct the camera poses, we must first filter out the dynamic areas so that feature points in these areas are not matched during estimation.

I use Mask R-CNN from detectron2. Only semantic masks are used, as I find flow-based masks too noisy.

Install detectron2 by python -m pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu102/torch1.8/index.html.

Modify the DYNAMIC_CATEGORIES variable in third_party/predict_mask.py to the dynamic classes in your data (only COCO classes are supported).
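
For example, for a scene containing a moving person and a dog, the variable might look like the following (the class names are illustrative COCO categories; check predict_mask.py for the exact expected format):

# in third_party/predict_mask.py -- illustrative values, adapt to your scene
DYNAMIC_CATEGORIES = ['person', 'dog']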

Next, NSFF requires depth and optical flows. We'll use some SOTA methods to perform the prediction.

Depth

The instructions and code are borrowed from DPT.

Download the model weights from here and put them in third_party/depth/weights/.

Optical Flow

The instructions and code are borrowed from RAFT.

Download raft-things.pth from google drive and put it in third_party/flow/models/.

Prediction

Thanks to owang, after preparing the images and the model weights, the whole process can be automated with a single command: python preprocess.py --root_dir <path/to/foobar>.

Finally, your root directory will have all of this:

└── foobar
    ├── frames (original images, not used, you can delete)
    │   ├── 00000.png
    │   ...
    │   └── 00029.png
    ├── images_resized (resized images, not used, you can delete)
    │   ├── 00000.png
    │   ...
    │   └── 00029.png
    ├── images (the images to use in training)
    │   ├── 00000.png
    │   ...
    │   └── 00029.png
    ├── masks (not used but do not delete)
    │   ├── 00000.png.png
    │   ...
    │   └── 00029.png.png
    ├── database.db
    ├── sparse
    │   └── 0
    │       ├── cameras.bin
    │       ├── images.bin
    │       ├── points3D.bin
    │       └── project.ini
    ├── disps
    │   ├── 00000.png
    │   ...
    │   └── 00029.png
    ├── flow_fw
    │   ├── 00000.flo
    │   ...
    │   └── 00028.flo
    └── flow_bw
        ├── 00001.flo
        ...
        └── 00029.flo
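
The flow files use the Middlebury .flo format. If you want to inspect them, a minimal reader might look like the sketch below (this is not part of the repo):

import numpy as np

def read_flo(path):
    # Middlebury .flo layout: float32 magic, int32 width/height, then 2*H*W float32
    with open(path, 'rb') as f:
        magic = np.fromfile(f, np.float32, count=1)[0]
        assert magic == 202021.25, 'invalid .flo file'
        w = int(np.fromfile(f, np.int32, count=1)[0])
        h = int(np.fromfile(f, np.int32, count=1)[0])
        data = np.fromfile(f, np.float32, count=2 * h * w)
    return data.reshape(h, w, 2)  # (H, W, 2) per-pixel (dx, dy)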

Now you can start training!

Train!

Run the following command (modify the parameters according to opt.py):

python train.py \
  --dataset_name monocular --root_dir $ROOT_DIR \
  --img_wh 512 288 --start_end 0 30 \
  --N_samples 128 --N_importance 0 --encode_t --use_viewdir \
  --num_epochs 50 --batch_size 512 \
  --optimizer adam --lr 5e-4 --lr_scheduler cosine \
  --exp_name exp

I also implemented a hard sampling strategy to improve the quality of the hard regions. Add --hard_sampling to enable it.

Specifically, I compute the SSIM between the prediction and the GT at the end of each epoch, and use 1-SSIM as the sampling probability for the next epoch. This lets rays with larger errors be sampled more frequently, which improves the result. I chose SSIM because it better reflects visual quality and is less sensitive to noise or small pixel displacements than PSNR.
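
A minimal sketch of this idea (the function name is illustrative; see the training code for the actual implementation):

import torch

def update_sampling_probs(ssim_per_ray):
    # 1-SSIM as unnormalized sampling weight: high-error rays are drawn more often
    probs = (1 - ssim_per_ray).clamp(min=1e-8)  # keep every ray reachable
    return probs / probs.sum()

# e.g. when building the next epoch's batches:
# ray_ids = torch.multinomial(probs, num_samples=batch_size, replacement=True)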

🔎 Testing

See test.ipynb for scene reconstruction, scene decomposition, fix-time-change-view, etc. You can get almost everything out of this notebook. I will add more instructions inside in the future.

Use eval.py to create the whole sequence of moving views. E.g.

python eval.py \
  --dataset_name monocular --root_dir $ROOT_DIR \
  --N_samples 128 --N_importance 0 --img_wh 512 288 --start_end 0 30 \
  --encode_t --output_transient \
  --split test --video_format gif --fps 5 \
  --ckpt_path kid.ckpt --scene_name kid_reconstruction

More specifically, the split argument specifies which novel view to generate:

  • test: test on training pose and times
  • test_spiral: spiral path over the whole sequence, with time gradually advancing (integer times for now)
  • test_spiralX: fix the time to X and generate spiral path around training view X.
  • test_fixviewX_interpY: fix the view to training pose X and interpolate the time from start to end, adding Y frames between each pair of integer timestamps.

โš ๏ธ Other differences with the original paper

  1. I add an entropy loss as suggested here. This allows the person to be "thin" and produces fewer artifacts when the camera is far from the original pose.
  2. I explicitly zero the flow at far regions (where z>0.95); a sketch of this is given after this list.
  3. I add a cross entropy loss with thickness to encourage static and dynamic weights to peak at different locations (i.e. one sample point is either static or dynamic). The thickness specifies how many intervals the peaks should be separated by; empirically I found 15 to be a good value. The motivation for this loss: in the "kid playing bubble" scene, for example, the kid rotates around himself with the rotation axis almost fixed, so without any prior the network learns the central part of the body as static (imagine a flag rotating around its pole: only the flag moves, the pole doesn't). This causes no problem for reconstruction, but for novel views the wrongly estimated static part produces artifacts. To make the network learn that the whole body is moving, I add this cross entropy loss to force the static peak to be at least thickness//2 away from the dynamic peak.
  4. In pose reconstruction, the original authors use the entire image to reconstruct the poses, without masking out the dynamic region. In my opinion this strategy might lead to totally wrong pose estimates in some cases, so I opt to reconstruct the poses with the dynamic region masked out. To set the near plane correctly, I use COLMAP combined with monodepth to get the minimum depth.
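
As an illustration of point 2, zeroing the flow at far regions can be as simple as the sketch below (tensor names and shapes are illustrative):

import torch

def zero_far_flow(transient_flow, z_vals):
    # z_vals: (N_rays, N_samples) NDC depths; transient_flow: (N_rays, N_samples, 3)
    # samples near the far plane (z > 0.95) are assumed static, so their flow is zeroed
    return transient_flow * (z_vals < 0.95).unsqueeze(-1).float()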

TODO

  • Add COLMAP reconstruction tutorial (mask out dynamic region).
  • Remove the NSFF dependency for data preparation. More precisely, the original code needs quite a lot of modifications to work on one's own data, and the depth/flow are computed on resized images, which might reduce their accuracy.
  • Add spiral path for testing.
  • Add mask hard mining at the beginning of training. Or prioritized experience replay.

Acknowledgment

Thanks to the authors of the NSFF paper, owang, zhengqili, and sniklaus, for fruitful discussions and support!


nsff_pl's Issues

CUDA kernel error

Hello, I have been trying to train a model on the Jumping sequence using your code, but I get this CUDA kernel error:

python train.py   --dataset_name monocular --root_dir $ROOT_DIR   --img_wh 512 288 --start_end 0 30   --N_samples 128 --N_importance 0 --encode_t --use_viewdir   --num_epochs 50 --batch_size 512   --optimizer adam --lr 5e-4 --lr_scheduler cosine   --exp_name exp
Global seed set to 42
/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/ddp.py:20: LightningDeprecationWarning: The `pl.plugins.training_type.ddp.DDPPlugin` is deprecated in v1.6 and will be removed in v1.8. Use `pl.strategies.ddp.DDPStrategy` instead.
  rank_zero_deprecation(
/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py:312: LightningDeprecationWarning: Passing <pytorch_lightning.plugins.training_type.ddp.DDPPlugin object at 0x7f74534b47c0> `strategy` to the `plugins` flag in Trainer has been deprecated in v1.5 and will be removed in v1.7. Use `Trainer(strategy=<pytorch_lightning.plugins.training_type.ddp.DDPPlugin object at 0x7f74534b47c0>)` instead.
  rank_zero_deprecation(
/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/callback_connector.py:96: LightningDeprecationWarning: Setting `Trainer(progress_bar_refresh_rate=1)` is deprecated in v1.5 and will be removed in v1.7. Please pass `pytorch_lightning.callbacks.progress.TQDMProgressBar` with `refresh_rate` directly to the Trainer's `callbacks` argument instead. Or, to disable the progress bar pass `enable_progress_bar = False` to the Trainer.
  rank_zero_deprecation(
/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/callback_connector.py:171: LightningDeprecationWarning: Setting `Trainer(weights_summary=None)` is deprecated in v1.5 and will be removed in v1.7. Please set `Trainer(enable_model_summary=False)` instead.
  rank_zero_deprecation(
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/trainer/configuration_validator.py:154: LightningDeprecationWarning: The `LightningModule.get_progress_bar_dict` method was deprecated in v1.5 and will be removed in v1.7. Please use the `ProgressBarBase.get_metrics` instead.
  rank_zero_deprecation(
Global seed set to 42
Initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/1
----------------------------------------------------------------------------------------------------
distributed_backend=nccl
All distributed processes registered. Starting with 1 processes
----------------------------------------------------------------------------------------------------

/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/torch/functional.py:568: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  /opt/conda/conda-bld/pytorch_1646755903507/work/aten/src/ATen/native/TensorShape.cpp:2228.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 0:   1%| | 24/3539 [00:03<09:04,  6.46it/s, loss=0.208, train/col_l=0.0248, train/disp_l=0.0415, train/entropy_l=0.00239, train/cross_entropy_l=0.000, train/flow_fw_l=0.0434, train/flow_bw_l=0.0416, train/pho_l=0.0495, train/

/opt/conda/conda-bld/pytorch_1646755903507/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [11,0,0], thread: [0,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1646755903507/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [11,0,0], thread: [1,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1646755903507/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [11,0,0], thread: [2,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1646755903507/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [11,0,0], thread: [3,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1646755903507/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [11,0,0], thread: [4,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1646755903507/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [11,0,0], thread: [5,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1646755903507/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [11,0,0], thread: [6,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1646755903507/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [11,0,0], thread: [7,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1646755903507/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [11,0,0], thread: [8,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
/opt/conda/conda-bld/pytorch_1646755903507/work/aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [11,0,0], thread: [9,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.

Traceback (most recent call last):
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 719, in _call_and_handle_interrupt
    return self.strategy.launcher.launch(trainer_fn, *args, trainer=self, **kwargs)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/strategies/launchers/subprocess_script.py", line 93, in launch
    return function(*args, **kwargs)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 809, in _fit_impl
    results = self._run(model, ckpt_path=self.ckpt_path)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1234, in _run
    results = self._run_stage()
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1321, in _run_stage
    return self._run_train()
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1351, in _run_train
    self.fit_loop.run()
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 268, in advance
    self._outputs = self.epoch_loop.run(self._data_fetcher)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 208, in advance
    batch_output = self.batch_loop.run(batch, batch_idx)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 88, in advance
    outputs = self.optimizer_loop.run(split_batch, optimizers, batch_idx)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 203, in advance
    result = self._run_optimization(
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 256, in _run_optimization
    self._optimizer_step(optimizer, opt_idx, batch_idx, closure)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 369, in _optimizer_step
    self.trainer._call_lightning_module_hook(
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1593, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 1644, in optimizer_step
    optimizer.step(closure=optimizer_closure)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/core/optimizer.py", line 168, in step
    step_output = self._strategy.optimizer_step(self._optimizer, self._optimizer_idx, closure, **kwargs)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/strategies/ddp.py", line 278, in optimizer_step
    optimizer_output = super().optimizer_step(optimizer, opt_idx, closure, model, **kwargs)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/strategies/strategy.py", line 193, in optimizer_step
    return self.precision_plugin.optimizer_step(model, optimizer, opt_idx, closure, **kwargs)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 155, in optimizer_step
    return optimizer.step(closure=closure, **kwargs)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/torch/optim/lr_scheduler.py", line 65, in wrapper
    return wrapped(*args, **kwargs)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/torch/optim/optimizer.py", line 88, in wrapper
    return func(*args, **kwargs)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/torch/optim/adam.py", line 100, in step
    loss = closure()
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 140, in _wrap_closure
    closure_result = closure()
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 148, in __call__
    self._result = self.closure(*args, **kwargs)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 134, in closure
    step_output = self._step_fn()
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 427, in _training_step
    training_step_output = self.trainer._call_strategy_hook("training_step", *step_kwargs.values())
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1763, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/strategies/ddp.py", line 341, in training_step
    return self.model(*args, **kwargs)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 963, in forward
    output = self.module(*inputs[0], **kwargs[0])
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/overrides/base.py", line 82, in forward
    output = self.module.training_step(*inputs, **kwargs)
  File "train.py", line 187, in training_step
    loss_d = self.loss(results, batch, **kwargs)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/phong/data/Nvidia inter/Code/nsff_pl/losses.py", line 117, in forward
    if valid_geo_fw.any():
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train.py", line 316, in <module>
    main(hparams)
  File "train.py", line 300, in main
    trainer.fit(system)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 768, in fit
    self._call_and_handle_interrupt(
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 736, in _call_and_handle_interrupt
    self._teardown()
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1298, in _teardown
    self.strategy.teardown()
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/strategies/ddp.py", line 447, in teardown
    super().teardown()
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/strategies/parallel.py", line 134, in teardown
    super().teardown()
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/strategies/strategy.py", line 444, in teardown
    optimizers_to_device(self.optimizers, torch.device("cpu"))
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/utilities/optimizer.py", line 27, in optimizers_to_device
    optimizer_to_device(opt, device)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/utilities/optimizer.py", line 33, in optimizer_to_device
    optimizer.state[p] = apply_to_collection(v, torch.Tensor, move_data_to_device, device)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/utilities/apply_func.py", line 107, in apply_to_collection
    v = apply_to_collection(
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/utilities/apply_func.py", line 99, in apply_to_collection
    return function(data, *args, **kwargs)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/utilities/apply_func.py", line 354, in move_data_to_device
    return apply_to_collection(batch, dtype=dtype, function=batch_to)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/utilities/apply_func.py", line 99, in apply_to_collection
    return function(data, *args, **kwargs)
  File "/home/phong/miniconda3/envs/deep/lib/python3.8/site-packages/pytorch_lightning/utilities/apply_func.py", line 347, in batch_to
    data_output = data.to(device, **kwargs)
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.


Would this work with 360° Captures?

If I'd like to make a 360° capture of my dog, who wouldn't stay still during the capture, could this help me get a radiance field that I can later export to a mesh?

Thank you!

Feature Request: Temporal interpolation

I find that being able to view the (spatio-)temporal interpolation (e.g., with the spiral path) is a good way to visualize whether the scene flow has been correctly reconstructed. The original paper repo supports the forward (splatting) approach to rendering temporal interpolation, for reference.

The reproduction of NSFF in the hypernerf_vrig dataset

Hello author, thank you very much for this great work. I have a request. One of my papers is under revision, and a reviewer asked me to add results on hypernerf_vrig, but this is a bit difficult for me. So I was wondering if you could send me your reproduction code for NSFF. Thank you very much.

Question about the 'static_transmittance'

I am not very clear about one point, and I really need your help. What is the difference between the meaning of '_static_weights' and 'static_weights'? And going a step further, why is 'static_transmittance' used to calculate '_static_weights' even when 'output_transient' is True?

potential tiny mistake in rendering.py

In rendering.py, lines 231 and 232:

results['xyzs_fw_bw'] = xyz_fw + transient_flows_fw_bw
results['xyzs_bw_fw'] = xyz_bw + transient_flows_fw_bw

The second line should probably be xyz_bw + transient_flows_bw_fw. Could this be a tiny but crucial mistake?

Strange results

hands_reconstruction
The PSNR reaches 30 during training, which in principle should mean success. But the evaluation output looks strange; this gif shows the training view.

A bug in eval script

Thank you for your impressive reimplementation. The eval script should include --use_viewdir so as to output normal images.

Code about the flow-based motion mask generation method

Nice work! In the motion mask part of the readme, you mention that flow-based masks are too noisy. I had the same feeling when using the code that generates the motion mask in Dynamic NeRF, but I have some problems with that code. In utils/generate_motion_mask.py, lines 128-131, why do h and w have to be divided by 2? This confuses me very much.

Pretrained models

Probably an obvious question, but where can I download the pretrained models? Thank you!

Download videos 'kid-bubble', 'kid-jumping', etc.

Hi @kwea123,

Thanks for this great implementation of NSFF. May I ask whether you have links to download the videos (apart from the NVIDIA Dynamic Scenes dataset) presented in the NSFF paper? E.g. these ones (fig. 7 in the original paper):

[screenshot of fig. 7 from the original paper]

I see you have tried the kid-with-bubble video in your implementation, but I cannot find it in the original paper or the official repo.

Best,
Zirui

Multi-GPU training issue

I changed the default parameters num_gpus -> 8 and num_nodes -> 8, kept the other default parameters unchanged, and ran train.py on 8 V100s, but it always gets stuck at device initialization and never proceeds to data loading. Are there any additional settings required? 🤔
