
wbhu / tri-miprf


Tri-MipRF: Tri-Mip Representation for Efficient Anti-Aliasing Neural Radiance Fields, ICCV'23 (Oral, Best Paper Finalist)

Home Page: https://wbhu.github.io/projects/Tri-MipRF

Python 98.27% Dockerfile 1.73%
mip-nerf-360 nerf tri-planes tri-mip tri-miprf iccv2023 i-c-c-v

tri-miprf's Introduction

Hi there 👋

Wenbo's GitHub stats

tri-miprf's People

Contributors

joshpwrk, wbhu


tri-miprf's Issues

CUDA error!

During handling of the above exception, another exception occurred:

  File "/home/zhoufan/Tri-MipRF/neural_field/field/trimipRF.py", line 97, in query_rgb
    self.mlp_head(h)
  File "/home/zhoufan/Tri-MipRF/neural_field/model/trimipRF.py", line 115, in rgb_sigma_fn
    rgb = self.field.query_rgb(dir=t_dirs, embedding=feature)['rgb']
  File "/home/zhoufan/Tri-MipRF/neural_field/model/trimipRF.py", line 140, in rendering
    rgbs, sigmas = rgb_sigma_fn(t_starts, t_ends, ray_indices.long())
  File "/home/zhoufan/Tri-MipRF/neural_field/model/trimipRF.py", line 118, in forward
    return self.rendering(
  File "/home/zhoufan/Tri-MipRF/trainer/trainer.py", line 169, in eval_img
    rb = self.model(
  File "/home/zhoufan/Tri-MipRF/trainer/trainer.py", line 141, in fit
    metrics, final_rb, target = self.eval_img(
  File "/home/zhoufan/Tri-MipRF/main.py", line 68, in main
    trainer.fit()
  File "/home/zhoufan/Tri-MipRF/main.py", line 108, in <module>
    main()

During eval, the nerfacc.ray_marching() call inside trimipRF.py returns empty tensors, while during training it works normally.
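
For reference, the kind of guard I am experimenting with (my own sketch, not code from this repository; the function name is made up):

def rgb_sigma_fn_guarded(t_starts, t_ends, ray_indices, rgb_sigma_fn):
    # nerfacc.ray_marching() can legitimately return zero samples for an eval
    # view; the fully fused tiny-cuda-nn MLPs then fail on the empty batch, so
    # short-circuit with correctly shaped empty tensors instead of calling them.
    if t_starts.shape[0] == 0:
        return t_starts.new_zeros(0, 3), t_starts.new_zeros(0, 1)
    return rgb_sigma_fn(t_starts, t_ends, ray_indices.long())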

Negative in level?

Hi, I tried printing the level after log2(plane_size) is added, and I found negative levels among them, which should not happen, though the model still works.

[screenshots attached]

CUDA error

RuntimeError: Cuda error: 9[cudaLaunchKernel(func_tbl[func_idx], gridSize, blockSize, args, 0, stream);]

How to understand "To sample the cone, we cannot follow MipNeRF [3] to use the multivariate Gaussian, since the multivariate Gaussian is anisotropic but the pre-filtering in our Tri-Mip encoding is isotropic."

Thanks for your impressive work.
I am confused by this sentence in your paper: "To sample the cone, we cannot follow MipNeRF [3] to use the multivariate Gaussian, since the multivariate Gaussian is anisotropic but the pre-filtering in our Tri-Mip encoding is isotropic." Why does the Tri-Mip encoding need an isotropic characteristic? What does isotropic mean here? Does it mean the same color is observed from different views?

Could you please give more explanation?

Radius computation when normalizing ray directions

Hello, I got a question regarding the ball radius.

In camera.py the ray directions are normalized, hence the t values (and distances) are in Euclidean space. But the radii are computed on the image plane w.r.t. the unnormalized distances. Shouldn't the radii also be scaled by 1 / ||directions|| before the directions are normalized, so you get the radii on the unit sphere?
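
A minimal sketch of what I mean (my own code; directions and radii here are assumed to come straight from the pixel grid, before normalization):

import torch

def normalize_rays(directions, radii):
    # If t is later measured along the normalized direction, I think the per-ray
    # radius growth rate should be divided by ||direction|| as well, so that
    # radius * t still matches the footprint derived on the image plane.
    norms = torch.linalg.norm(directions, dim=-1, keepdim=True)
    return directions / norms, radii / norms.squeeze(-1)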

Greets!

export_mesh

Could you give sample code to extract a mesh?

I want to sample points in a mesh grid and use these points to generate a mesh with the pretrained model.
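
Something along these lines is what I have in mind (a rough sketch; field.query_density, its signature, the [-0.5, 0.5]^3 bounding box, and the density threshold are my assumptions about the pretrained model):

import torch
from skimage import measure

@torch.no_grad()
def export_mesh(field, resolution=256, threshold=10.0, device="cuda"):
    xs = torch.linspace(-0.5, 0.5, resolution, device=device)
    grid = torch.stack(torch.meshgrid(xs, xs, xs, indexing="ij"), dim=-1)  # (R, R, R, 3)
    densities = []
    for chunk in grid.reshape(-1, 3).split(2**18):
        d = field.query_density(chunk)["density"]  # exact signature may differ
        densities.append(d.reshape(-1))
    density = torch.cat(densities).reshape(resolution, resolution, resolution).cpu().numpy()
    # Vertices come back in voxel coordinates and still need rescaling to the AABB.
    verts, faces, normals, _ = measure.marching_cubes(density, level=threshold)
    return verts, faces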

unbounded scenes?

Hi, will this work well on unbounded scenes, such as the MipNerf 360 dataset?

Do you also provide rendering code?

Thank you for making this available!

AssertionError: can only test a child process

When I run the script for chair, everything is OK, but with drums I get this error, and the PSNR cannot reach the normal score. Is this error causing the problem?
Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1466, in __del__
    self._shutdown_workers()
  File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1449, in _shutdown_workers
    if w.is_alive():
  File "/root/miniconda3/lib/python3.8/multiprocessing/process.py", line 160, in is_alive
    assert self._parent_pid == os.getpid(), 'can only test a child process'
AssertionError: can only test a child process
Exception ignored in: <function _MultiProcessingDataLoaderIter.__del__ at 0x7f7e6ba823a0>
Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1466, in __del__
    self._shutdown_workers()
  File "/root/miniconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1449, in _shutdown_workers
    if w.is_alive():
  File "/root/miniconda3/lib/python3.8/multiprocessing/process.py", line 160, in is_alive
    assert self._parent_pid == os.getpid(), 'can only test a child process'
AssertionError: can only test a child process
100%|██████████████████████████████| 800/800 [01:47<00:00, 7.47it/s]
2023-10-12 09:13:52.710 | INFO | trainer.trainer:eval:186 - ==> Evaluation done
2023-10-12 09:13:52.711 | INFO | utils.writer:write_scalar_dicts:79 - num_alive_ray:64939.67125 rendering_samples_actual:24635.15625 num_rays:375.0 PSNR:28.454062707424164

Looking forward to your reply.
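
A possible workaround on my side (an assumption, not an official fix): the assertion is raised during DataLoader worker shutdown, so running without worker processes seems to avoid it. With the gin config used by this project, that would be:

# in the .gin config (main.num_workers also appears in the training logs)
main.num_workers = 0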

ImportError: cannot import name 'csrc' from 'nerfacc'

When using nerfacc 0.3.5 and 0.4.0, it raises "ImportError: cannot import name 'csrc' from 'nerfacc'";
when using nerfacc >= 0.5.0, it raises "module 'nerfacc' has no attribute 'OccupancyGrid'".

How can I solve this? Thank you!

about the geo_feat_dim

Hi, what an excellent work!
I simply increased geo_feat_dim and feature_dim in TriMipRF to 31 and 32. Training ran without problems, but when eval_step was reached and eval_img executed, it raised an error:
[error screenshot]
Resetting geo_feat_dim to 15 avoids the problem. Do you have any idea why? Many thanks.

Why is the level negative?

[screenshots attached]
What does a negative level mean for the nvdiffrast.torch.texture function?

CUDA Error

Hi, great work!
When I run the nerf_synthetic data I get a CUDA error. Is there some configuration that I overlooked that is causing it?

2023-11-02 09:44:24.958 | INFO     | utils.writer:write_scalar_dicts:79 - lr:0.002592 step:23000 iter_time:0.01472163200378418 ETA:0:00:29 num_alive_ray:13716 rendering_samples_actual:269133 num_rays:39829 PSNR:37.34233474731445 total_loss:0.0007085531251505017 
2023-11-02 09:44:42.329 | INFO     | utils.writer:write_scalar_dicts:79 - lr:0.002592 step:24000 iter_time:0.012811899185180664 ETA:0:00:12 num_alive_ray:13679 rendering_samples_actual:261635 num_rays:40487 PSNR:37.36977005004883 total_loss:0.0007570512825623155 
2023-11-02 09:44:59.344 | INFO     | utils.writer:write_scalar_dicts:79 - lr:0.002592 step:25000 iter_time:0.01546168327331543 ETA:0:00:00 num_alive_ray:13266 rendering_samples_actual:263475 num_rays:38886 PSNR:37.54179382324219 total_loss:0.0006864252500236034 
Traceback (most recent call last):
  File "main.py", line 96, in <module>
    main()
  File "/home/wll/miniconda3/envs/nerf/lib/python3.8/site-packages/gin/config.py", line 1605, in gin_wrapper
    utils.augment_exception_message_and_reraise(e, err_str)
  File "/home/wll/miniconda3/envs/nerf/lib/python3.8/site-packages/gin/utils.py", line 41, in augment_exception_message_and_reraise
    raise proxy.with_traceback(exception.__traceback__) from None
  File "/home/wll/miniconda3/envs/nerf/lib/python3.8/site-packages/gin/config.py", line 1582, in gin_wrapper
    return fn(*new_args, **new_kwargs)
  File "main.py", line 56, in main
    trainer.fit()
  File "/home/wll/workspace/nerf/Tri-MipRF/trainer/trainer.py", line 140, in fit
    metrics, final_rb, target = self.eval_img(
  File "/home/wll/miniconda3/envs/nerf/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/wll/workspace/nerf/Tri-MipRF/trainer/trainer.py", line 168, in eval_img
    rb = self.model(
  File "/home/wll/miniconda3/envs/nerf/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/wll/workspace/nerf/Tri-MipRF/neural_field/model/trimipRF.py", line 118, in forward
    return self.rendering(
  File "/home/wll/workspace/nerf/Tri-MipRF/neural_field/model/trimipRF.py", line 140, in rendering
    rgbs, sigmas = rgb_sigma_fn(t_starts, t_ends, ray_indices.long())
  File "/home/wll/workspace/nerf/Tri-MipRF/neural_field/model/trimipRF.py", line 115, in rgb_sigma_fn
    rgb = self.field.query_rgb(dir=t_dirs, embedding=feature)['rgb']
  File "/home/wll/workspace/nerf/Tri-MipRF/neural_field/field/trimipRF.py", line 97, in query_rgb
    self.mlp_head(h)
  File "/home/wll/miniconda3/envs/nerf/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/wll/miniconda3/envs/nerf/lib/python3.8/site-packages/tinycudann-1.7-py3.8-linux-x86_64.egg/tinycudann/modules.py", line 189, in forward
    self.params.to(_torch_precision(self.native_tcnn_module.param_precision())).contiguous(),
RuntimeError: CUDA error: invalid configuration argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
  In call to configurable 'main' (<function main at 0x7fa137334700>)

About the training resource

Hi~ Thanks for your great work!
I wonder what kind of GPU is required to train with the provided config config_files/ms_blender/TriMipRF.gin?
I failed on a 1080 due to memory issues.

Question for the nvdiffrast query

Hello,
I noticed that you use nvdiffrast for the mipmap query. Can you explain how this is done?

I found a function called nvdiffrast.torch.texture, but I don't know how to use it. Thanks.
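
For reference, my current understanding as a minimal sketch (the plane size, feature dimension, and level range are placeholders; shapes follow nvdiffrast's documented [N, H, W, C] texture and [N, H, W, 2] UV convention):

import torch
import nvdiffrast.torch as dr

plane = torch.randn(1, 512, 512, 16, device="cuda", requires_grad=True)  # learnable feature plane
uv = torch.rand(1, 1, 8192, 2, device="cuda")        # 8192 query points packed as a 1 x 8192 "image"
level = torch.rand(1, 1, 8192, device="cuda") * 8.0  # per-sample (fractional) mip level

# 'linear-mipmap-linear' interpolates within and across mip levels; the internal
# mip stack is built automatically when `mip` is not given.
feat = dr.texture(
    plane, uv,
    mip_level_bias=level,
    filter_mode="linear-mipmap-linear",
    boundary_mode="clamp",
    max_mip_level=8,
)  # -> [1, 1, 8192, 16]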

About Custom Datasets

How can I run custom datasets or LLFF datasets with your project?
Looking forward to your reply!

CUDA out of memory

Hi, Wenbo, thanks for your great work.
When I tested your code, I got something very weird: CUDA tried to allocate 19330 GB. How can I fix it?
[screenshot]

About model size

Hi~ I find the reported model size of Tri-MipRF in Table 2 is about 48.2M.
However, I retrained Tri-MipRF as suggested and got checkpoints of 59M.
Can you help me figure out what causes this difference and how I can get the same model size as reported?
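
For reference, this is how I measured the raw parameter size (my own sketch; the checkpoint layout, e.g. a state_dict stored under 'model', is an assumption and may differ):

import torch

ckpt = torch.load("model.ckpt", map_location="cpu")
state = ckpt.get("model", ckpt)  # fall back to the top-level dict
n_bytes = sum(t.numel() * t.element_size() for t in state.values() if torch.is_tensor(t))
print(f"parameters only: {n_bytes / 1e6:.1f} MB")  # any optimizer/grid state in the file adds on top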

Difference between Zip-NeRF and Tri-MipRF?

Great Work!!!
Could you explain more about "Zip-NeRF introduces a multi-sampling-based method to address the same problem, efficient anti-aliasing, while our method belongs to the pre-filtering-based method."?
Tri-MipRF seems to be much faster. Is its PSNR also higher than Zip-NeRF's?

About Dataset Split

When I check your configuration file, I see that dataset.split is "trainval". So, are the metric numbers reported in your paper based on this config file?

Question for the radiuses r

@staticmethod
def compute_ball_radii(distance, radiis, cos):
    inverse_cos = 1.0 / cos
    tmp = (inverse_cos * inverse_cos - 1).sqrt() - radiis
    sample_ball_radii = distance * radiis * cos / (tmp * tmp + 1.0).sqrt()
    return sample_ball_radii

What is the meaning of cos? The calculation seems different from the paper. Thanks.

How to visualize the result?

I have a model.ckpt now, trained on the NeRF synthetic multiscale data, and I want to visualize it like the video you provided.

Code Release Time

Hi, thanks for this fantastic work. I'm very interested in the source code and its release date. I wonder when it will be available? Thanks!

eval_img error

Hi, I ran the lego dataset. The training step is OK, but I got this error during eval_img. enc.shape is torch.Size([0, 48]); what caused this?

File "/nerfs/Tri-MipRF/trainer/trainer.py", line 184, in eval
metric, rb, target = self.eval_img(data)
File "/root/miniconda3/envs/neurbf/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/nerfs/Tri-MipRF/trainer/trainer.py", line 167, in eval_img
rb = self.model(
File "/root/miniconda3/envs/neurbf/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/nerfs/Tri-MipRF/neural_field/model/trimipRF.py", line 86, in forward
ray_indices, t_starts, t_ends = nerfacc.ray_marching(
File "/root/miniconda3/envs/neurbf/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/root/miniconda3/envs/neurbf/lib/python3.9/site-packages/nerfacc/ray_marching.py", line 196, in ray_marching
sigmas = sigma_fn(t_starts, t_ends, ray_indices)
File "/nerfs/Tri-MipRF/neural_field/model/trimipRF.py", line 84, in sigma_fn
return self.field.query_density(positions, level_vol)['density']
File "/nerfs/Tri-MipRF/neural_field/field/trimipRF.py", line 77, in query_density
self.mlp_base(enc)
File "/root/miniconda3/envs/neurbf/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/root/miniconda3/envs/neurbf/lib/python3.9/site-packages/tinycudann/modules.py", line 186, in forward
output = _module_function.apply(
File "/root/miniconda3/envs/neurbf/lib/python3.9/site-packages/tinycudann/modules.py", line 98, in forward
native_ctx, output = native_tcnn_module.fwd(input, params)
RuntimeError: /nerfs/tiny-cuda-nn/include/tiny-cuda-nn/cutlass_matmul.h:330 status failed with error Error Internal

About the use of custom data resulting in insufficient CPU memory and poor performance

Data resolution: 1924*1080
Quantity: 650 images
Type: jpg (three channels)

Problem 1: When using the data conversion script convert_blender_data.py, CPU memory usage reaches 42 GB.
Temporary workaround: reduce the number of images and the resolution.
Q: How can I reduce CPU memory usage?

Problem 2: Poor results (PSNR < 16, loss > 0.004)
Detailed description:
With 163 images at 1920*1072:
[screenshot]

With 325 images at 800*800:
[screenshot]
Q: How can I improve the model and the rendering quality?
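
For reference, a simple way I pre-downscale the images before running convert_blender_data.py (my own helper; the paths are placeholders):

from pathlib import Path
from PIL import Image

src, dst = Path("images_full"), Path("images_half")
dst.mkdir(exist_ok=True)
for p in sorted(src.glob("*.jpg")):
    img = Image.open(p)
    img = img.resize((img.width // 2, img.height // 2), Image.LANCZOS)
    img.save(dst / p.name, quality=95)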

NerfAcc: Setting up CUDA (This may take a few minutes the first time)Killed

2023-10-24 02:00:54.767 | INFO | main:main:25 - ==> Init dataloader ...
100%|██████████████████████████████| 800/800 [00:00<00:00, 204837.51it/s]
2023-10-24 02:00:54.944 | INFO | dataset.ray_dataset:__init__:42 - ==> Find 4 cameras
100%|██████████████████████████████| 200/200 [00:01<00:00, 111.36it/s]
100%|██████████████████████████████| 200/200 [00:00<00:00, 550.07it/s]
100%|██████████████████████████████| 200/200 [00:00<00:00, 2726.73it/s]
100%|██████████████████████████████| 200/200 [00:00<00:00, 8469.83it/s]
100%|██████████████████████████████| 800/800 [00:00<00:00, 222524.25it/s]
2023-10-24 02:00:58.635 | INFO | dataset.ray_dataset:__init__:42 - ==> Find 4 cameras
100%|██████████████████████████████| 200/200 [00:01<00:00, 111.41it/s]
100%|██████████████████████████████| 200/200 [00:00<00:00, 486.54it/s]
100%|██████████████████████████████| 200/200 [00:00<00:00, 5384.91it/s]
100%|██████████████████████████████| 200/200 [00:00<00:00, 6841.64it/s]
2023-10-24 02:01:02.330 | INFO | main:main:49 - ==> Init model ...
2023-10-24 02:01:04.057 | INFO | main:main:51 - TriMipRFModel(
(field): TriMipRF(
(encoding): TriMipEncoding()
(direction_encoding): Encoding(n_input_dims=3, n_output_dims=16, seed=1337, dtype=torch.float16, hyperparams={'degree': 4, 'otype': 'SphericalHarmonics'})
(mlp_base): Network(n_input_dims=48, n_output_dims=16, seed=1337, dtype=torch.float16, hyperparams={'encoding': {'offset': 0.0, 'otype': 'Identity', 'scale': 1.0}, 'network': {'activation': 'ReLU', 'n_hidden_layers': 2, 'n_neurons': 128, 'otype': 'FullyFusedMLP', 'output_activation': 'None'}, 'otype': 'NetworkWithInputEncoding'})
(mlp_head): Network(n_input_dims=31, n_output_dims=3, seed=1337, dtype=torch.float16, hyperparams={'encoding': {'offset': 0.0, 'otype': 'Identity', 'scale': 1.0}, 'network': {'activation': 'ReLU', 'n_hidden_layers': 4, 'n_neurons': 128, 'otype': 'FullyFusedMLP', 'output_activation': 'Sigmoid'}, 'otype': 'NetworkWithInputEncoding'})
)
(ray_sampler): OccupancyGrid()
)
2023-10-24 02:01:04.058 | INFO | main:main:53 - ==> Init trainer ...
2023-10-24 02:01:04.098 | INFO | trainer.trainer:__init__:60 - # Parameters for trimipRF.get_optimizer:

==============================================================================

trimipRF.get_optimizer.feature_lr_scale = 10.0
trimipRF.get_optimizer.lr = 0.002
trimipRF.get_optimizer.weight_decay = 1e-05

Parameters for get_scheduler:

==============================================================================

get_scheduler.gamma = 0.6

Parameters for main:

==============================================================================

main.batch_size = 24
main.model_name = 'Tri-MipRF'
main.num_workers = 4
main.seed = 42
main.stages = 'train_eval'
main.train_split = 'trainval'

Parameters for RayDataset:

==============================================================================

RayDataset.base_path =
'/home/jovyan/vol-1/Tri-MipRF/data/nerf_synthetic_multiscale'
RayDataset.num_rays = 8192
RayDataset.render_bkgd = 'white'
RayDataset.scene = 'chair'
RayDataset.scene_type = 'nerf_synthetic_multiscale'
RayDataset.to_world = True

Parameters for Trainer:

==============================================================================

Trainer.base_exp_dir = '/home/jovyan/vol-1/Tri-MipRF/output'
Trainer.dynamic_batch_size = True
Trainer.eval_step = 25000
Trainer.exp_name = 'nerf_synthetic_multiscale/chair/Tri-MipRF/2023-10-24_02-00-54'
Trainer.log_step = 1000
Trainer.max_steps = 25001
Trainer.num_rays = 8192
Trainer.target_sample_batch_size = 65536
Trainer.test_chunk_size = 8192
Trainer.varied_eval_img = True

Parameters for TriMipRF:

==============================================================================

TriMipRF.feature_dim = 16
TriMipRF.geo_feat_dim = 15
TriMipRF.n_levels = 8
TriMipRF.net_depth_base = 2
TriMipRF.net_depth_color = 4
TriMipRF.net_width = 128
TriMipRF.plane_size = 512

Parameters for TriMipRFModel:

==============================================================================

TriMipRFModel.occ_grid_resolution = 128
TriMipRFModel.samples_per_ray = 1024

2023-10-24 02:01:04.101 | INFO | trainer.trainer:fit:106 - ==> Start training ...
(โ— ) NerfAcc: Setting up CUDA (This may take a few minutes the first time)Killed

NoneType

Why does this bug always appear?

return getattr(_C, name)(*args, **kwargs)
AttributeError: 'NoneType' object has no attribute 'ContractionType'
