
dekuliutesla / citygaussian


[ECCV2024] CityGaussian: Real-time High-quality Large-Scale Scene Rendering with Gaussians

Home Page: https://dekuliutesla.github.io/citygs/

License: Other

Python 0.72% Shell 0.02% Jupyter Notebook 99.26%
3d computer-vision eccv2024 gaussian-splatting graphics neural-network neural-rendering novel-view-synthesis radiance-field level-of-details

citygaussian's People

Contributors

dekuliutesla, eltociear, emepetres, gdrett, graphdeco, hrspythonix, jakubcerveny, jonathonluiten, khoa-nt, snosixtyboo, szymanowiczs, yzslab

citygaussian's Issues

How to determine aabb for a custom dataset

Hi, I am using a custom dataset; my area of interest is about 550 x 500 m in extent and 100 m in height. What would be a good block_dim and aabb for this dataset? I set block_dim to (3, 3, 1), but I did not quite understand the logic behind setting aabb. What should my aabb be?
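
A minimal sketch of one way to derive a starting aabb, assuming it is given as [x_min, y_min, z_min, x_max, y_max, z_max] in the same world frame as the COLMAP reconstruction (the xyz array below is placeholder data standing in for the sparse points of the region of interest):

    import numpy as np

    # Placeholder for the sparse COLMAP points of the region of interest.
    xyz = np.random.rand(10000, 3) * np.array([550.0, 500.0, 100.0])

    lo, hi = xyz.min(axis=0), xyz.max(axis=0)
    pad = 0.05 * (hi - lo)  # small margin so border Gaussians stay inside
    aabb = np.concatenate([lo - pad, hi + pad]).round(2).tolist()
    print(aabb)  # [x_min, y_min, z_min, x_max, y_max, z_max]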

LoD rendering error

Hello, I trained the model with my custom data. Normal rendering works, but LoD rendering fails with the following error:

Loading trained model at iteration 30000 [04/09 17:23:10]
Reading camera 432/432 [04/09 17:23:13]
Train cameras: 432, Test cameras: 0 [04/09 17:23:13]
Init LoD 3 with 20983781 points from output/test_light_40_vq [04/09 17:23:27]
Traceback (most recent call last):
File "/code/CityGaussian/render_large_lod.py", line 150, in <module>
render_sets(lp, args.iteration, pp, args.load_vq, args.skip_train, args.skip_test, args.custom_test)
File "/code/CityGaussian/render_large_lod.py", line 102, in render_sets
lod_gs = BlockedGaussian(lod_gs, lp, compute_cov3D_python=pp.compute_cov3D_python)
File "/code/CityGaussian/scene/gaussian_model.py", line 443, in __init__
self.cell_divider(gaussians)
File "/code/CityGaussian/scene/gaussian_model.py", line 466, in cell_divider
xyz_median = torch.median(xyz[cell_mask], dim=0)[0]
IndexError: median(): Expected reduction dim 0 to have non-zero size.
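
The traceback shows torch.median reducing over an empty selection, i.e. a cell that contains no Gaussians. A minimal sketch of a guard, assuming cell_mask is the boolean mask cell_divider builds for one cell (an empty cell reproduces the error):

    import torch

    xyz = torch.rand(1000, 3)
    cell_mask = torch.zeros(1000, dtype=torch.bool)  # an empty cell

    if cell_mask.any():
        xyz_median = torch.median(xyz[cell_mask], dim=0)[0]
    else:
        # Placeholder fallback; the empty cell could also be skipped entirely.
        xyz_median = torch.zeros(3)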

No GPUs Available

I am trying to train on the mill19/building-pixsfm dataset. When I run bash scripts/run_citygs_lod.sh, I get the error:
Traceback (most recent call last):
File "/home/pc_5053/CityGaussian/render_large_lod.py", line 143, in <module>
safe_state(args.quiet)
File "/home/pc_5053/CityGaussian/utils/general_utils.py", line 149, in safe_state
torch.cuda.set_device(torch.device("cuda:0"))
File "/home/pc_5053/anaconda3/envs/citygs/lib/python3.9/site-packages/torch/cuda/__init__.py", line 350, in set_device
torch._C._cuda_setDevice(device)
File "/home/pc_5053/anaconda3/envs/citygs/lib/python3.9/site-packages/torch/cuda/__init__.py", line 247, in _lazy_init
torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available
Traceback (most recent call last):
File "/home/pc_5053/CityGaussian/metrics_large.py", line 110, in <module>
torch.cuda.set_device(device)
File "/home/pc_5053/anaconda3/envs/citygs/lib/python3.9/site-packages/torch/cuda/__init__.py", line 350, in set_device
torch._C._cuda_setDevice(device)
File "/home/pc_5053/anaconda3/envs/citygs/lib/python3.9/site-packages/torch/cuda/__init__.py", line 247, in _lazy_init
torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available

This happens even though I have an available GPU (RTX 4090). What is the issue? Here is some extra information:

Python 3.9.19 | packaged by conda-forge | (main, Mar 20 2024, 12:50:21)
[GCC 12.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.

>>> import torch
>>> print("CUDA Available:", torch.cuda.is_available())
CUDA Available: True
>>> print("Number of GPUs:", torch.cuda.device_count())
Number of GPUs: 1
>>> print("Current Device:", torch.cuda.current_device())
Current Device: 0
nvcc --version output:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
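
For context, the launch scripts prefix each command with CUDA_VISIBLE_DEVICES=$(get_available_gpu), as visible in the script invocation quoted later on this page. If that helper returns an empty string, for example because the GPU briefly looks busy, PyTorch sees no devices at all even though the GPU exists. A minimal reproduction of that failure mode:

    import os

    # An empty CUDA_VISIBLE_DEVICES hides every GPU from the process.
    os.environ["CUDA_VISIBLE_DEVICES"] = ""

    import torch

    print(torch.cuda.is_available())  # False
    # torch.cuda.set_device("cuda:0") would now raise
    # RuntimeError: No CUDA GPUs are available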

Using a custom dataset

Your work is truly impressive. It would be great if you could share instructions on how to use a custom dataset. For example, I recorded a video using a drone and want to use your technique for large-scale 3D reconstruction of a scene.

Preparation of LoD part

Hi there, I have some questions about the LoD part. What is the next step after merging the parallel blocks? I followed the scripts, but calling render_large_lod.py fails because it cannot find "output/datasetname_40_vq". I noticed a "compress" operation in the LoD illustration figure, but I couldn't find the corresponding function. In addition, the config file config/datasetname_lod.yaml contains a 'lod_configs' param in model_params, which I assume is needed to run render_large_lod.py. So how should 'lod_configs' be prepared, perhaps with a 'compress' operation? I am using a custom COLMAP dataset structured like:

xxx_COLMAP

  • data_partitions (generated by the scripts)
  • images
    • jpg1
    • jpg2
  • sparse
  • transform.json

Originally posted by @ArthurXu0101 in #3 (comment)

Did you use raw-resolution photos?

I have read your work and it is great, but I have some points of confusion. Could you please clarify them? I am also using data from MatrixCity (small city).
1. Did you use COLMAP to match all 7600 images at once?
2. Did you train the coarse Gaussians using raw-resolution photos (1920 x 1080) as input?

For question 1, I reviewed the COLMAP results you provided and found that the file is quite large, so I guess COLMAP matched all 7600 original-resolution images at once.
For question 2, I found "resolution: -1" in mc_aerial_coarse.yaml, so I guess you trained the coarse Gaussians on the original-resolution photos.

If I guessed wrong, please correct me. Based on my guesses, this seems like a difficult task:
matching all 7600 images at once in COLMAP takes a long time and may crash the computer, and 7600 original-resolution photos (1920 x 1080) should run out of memory.
Looking forward to your answers. Thank you very much.

ERROR in run_vectree_quantize_mc.sh

When I run bash scripts/run_vectree_quantize_mc.sh, I get the error:
FileNotFoundError: [Errno 2] No such file or directory: './output/mc_aerial_c36_light_50_distill/point_cloud/iteration_40000/point_cloud.ply'
It seems no point_cloud.ply was generated. When should it be generated? The previous two scripts ran, but in their logs I see:
PIL.UnidentifiedImageError: cannot identify image file '/home/pc_5053/CityGaussian/LargeLightGaussian/data/matrix_city/aerial/train/block_all/images/1584.png'
Could this be the problem? Or why was no point_cloud folder created? Should I have generated it myself? Thanks in advance.
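
One hedged way to check whether corrupt images are the blocker is to scan the folder with PIL before training; the path below is the one from the error message, so adjust it as needed:

    from pathlib import Path

    from PIL import Image

    def find_corrupt_images(folder):
        # Collect files PIL cannot decode; any of these would abort training
        # before point_cloud.ply is ever written.
        bad = []
        for p in sorted(Path(folder).glob("*.png")):
            try:
                with Image.open(p) as img:
                    img.verify()  # cheap integrity check, no full decode
            except Exception as err:
                bad.append((p, err))
        return bad

    folder = "LargeLightGaussian/data/matrix_city/aerial/train/block_all/images"
    for path, err in find_corrupt_images(folder):
        print(path, err)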

Can you provide your GPU VRAM?

Could you please tell me the configuration of your experimental computer, or at least its VRAM capacity? I often encounter out-of-memory issues with my 4090D 24G. Thank you!

I previously used the LoD method to prune some merged cells (just 9 cells) of SmallCity, and it ran normally. However, today I trained all 36 cells and it couldn't run properly; the log shows it exceeded the VRAM.
I have a question: why don't you prune the Gaussians of each individual cell instead of the merged result?

I finished the block calculation, but a new problem appeared.

I encountered a new problem. This time I completed the block calculations, and the results look very good.
(screenshots of block results, 2024-08-22, omitted)

But when I calculated and merged all the blocks, new errors appeared.

1. AssertionError: Could not recognize scene type!

Saving merged 23765873 point cloud to output/mc_aerial_c36/point_cloud/iteration_30000/point_cloud.ply
Done
GPU 0 is available.
Loading trained model at iteration 30000 [22/08 13:42:33]
Traceback (most recent call last):
  File "/home/server9/CityGaussian/render_large.py", line 139, in <module>
    render_sets(lp, args.iteration, pp, args.load_vq, args.skip_train, args.skip_test, args.custom_test)
  File "/home/server9/CityGaussian/render_large.py", line 93, in render_sets
    scene = LargeScene(dataset, gaussians, load_iteration=iteration, load_vq=load_vq, shuffle=False)
  File "/home/server9/CityGaussian/scene/__init__.py", line 136, in __init__
    assert False, "Could not recognize scene type!"
AssertionError: Could not recognize scene type!

I checked scene/__init__.py and found source_path: "data/matrix_city/aerial/train/block_all". There is only one transforms.json file in that directory. The relevant check is:

    if os.path.exists(os.path.join(args.source_path, "sparse")):
        scene_info = sceneLoadTypeCallbacks["Colmap"](args.source_path, args.images, args.eval, partition=partition)
    elif os.path.exists(os.path.join(args.source_path, "transforms_train.json")):
        print("Found transforms_train.json file, assuming Blender data set!")
        scene_info = sceneLoadTypeCallbacks["Blender"](args.source_path, args.white_background, args.eval)
    else:
        assert False, "Could not recognize scene type!"

I am not sure about the file named transforms_train.json; even if I rename my file to that, it still reports an error.

2. FileNotFoundError: [Errno 2] No such file or directory: 'output/mc_aerial_c36/test'

Scene: output/mc_aerial_c36
Traceback (most recent call last):
  File "/home/server9/CityGaussian/metrics_large.py", line 118, in <module>
    evaluate(args.model_paths, args.test_sets, args.correct_color)
  File "/home/server9/CityGaussian/metrics_large.py", line 55, in evaluate
    for method in os.listdir(test_dir):
FileNotFoundError: [Errno 2] No such file or directory: 'output/mc_aerial_c36/test'

I set out_name="test" in run_citygs.sh, but it still reports an error. I'm not sure whether the previous step failed or something else is wrong. There is indeed no test folder in the output/mc_aerial_c36 directory.

How to determine aabb in the LoD yaml file

Hi, when I execute render_large_lod.py on my custom scene, I get an error saying I didn't set aabb. But how do I determine the xyz values of the aabb for my custom scene? I followed the configuration file config/my_scene_lod.yaml, which says that if I want the default setting (1/3 foreground) I can comment out the aabb line, but then the error below occurs:
(error screenshot omitted)
It seems that leaving aabb unset causes an error.
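
For what it's worth, here is a rough sketch of one plausible reading of the "1/3 foreground" default, assuming the foreground box is the central third of the camera-position extent; cam_centers is hypothetical placeholder data, and the actual default in the code may differ:

    import numpy as np

    # Hypothetical camera positions; in practice these come from the COLMAP poses.
    cam_centers = np.random.rand(100, 3) * np.array([500.0, 500.0, 100.0])

    lo, hi = cam_centers.min(axis=0), cam_centers.max(axis=0)
    center, extent = (lo + hi) / 2.0, hi - lo
    # Central third of the camera extent as the foreground box.
    aabb = np.concatenate([center - extent / 6.0, center + extent / 6.0]).tolist()
    print(aabb)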

Do I need to use 3DGS to calculate before using this command?

I ran this command but got an error: bash scripts/run_citygs.sh

(citygs) vizzio@vizzio-B660I-AORUS-PRO-DDR4:~/CityGaussian$ bash scripts/run_citygs.sh
Optimizing 
Output folder: ./output/rubble_coarse [12/08 13:08:32]
Reading camera 1657/1657 [12/08 13:08:35]
Train cameras: 1657, Test cameras: 0 [12/08 13:08:35]
Number of points at initialisation :  1694315 [12/08 13:08:35]
#16667 dataloader seed to 42 [12/08 13:08:36]
Training progress:   0%| | 0/30000 [00:00<?, ?it/s]
scripts/run_citygs.sh: line 18: 16667 Killed                  CUDA_VISIBLE_DEVICES=$(get_available_gpu) python train_large.py --config config/$COARSE_CONFIG.yaml
Output folder: ./output/rubble_c9_r4 [12/08 13:09:14]
Reading camera 1657/1657 [12/08 13:09:17]
Train cameras: 1657, Test cameras: 0 [12/08 13:09:17]
Traceback (most recent call last):
  File "/home/vizzio/CityGaussian/data_partition.py", line 151, in <module>
    scene = LargeScene(lp, gaussians, shuffle=False)
  File "/home/vizzio/CityGaussian/scene/__init__.py", line 168, in __init__
    self.gaussians.load_ply(os.path.join(self.pretrain_path, "point_cloud.ply"))
  File "/home/vizzio/CityGaussian/scene/gaussian_model.py", line 229, in load_ply
    plydata = PlyData.read(path)
  File "/home/vizzio/miniconda3/envs/citygs/lib/python3.9/site-packages/plyfile.py", line 401, in read
    (must_close, stream) = _open_stream(stream, 'read')
  File "/home/vizzio/miniconda3/envs/citygs/lib/python3.9/site-packages/plyfile.py", line 481, in _open_stream
    return (True, open(stream, read_or_write[0] + 'b'))
FileNotFoundError: [Errno 2] No such file or directory: 'output/rubble_coarse/point_cloud/iteration_30000/point_cloud.ply'
GPU 0 is available. Starting training block '0'
Optimizing 
Output folder: ./output/rubble_c9_r4/cells/cell0 [12/08 13:09:20]
Traceback (most recent call last):
  File "/home/vizzio/CityGaussian/train_large.py", line 309, in <module>
    training(lp, op, pp, args.test_iterations, args.save_iterations, args.refilter_iterations, args.checkpoint_iterations, args.start_checkpoint, args.debug_from)
  File "/home/vizzio/CityGaussian/train_large.py", line 43, in training
    scene = LargeScene(dataset, gaussians)
  File "/home/vizzio/CityGaussian/scene/__init__.py", line 123, in __init__
    partition = np.load(os.path.join(args.source_path, "data_partitions", f"{args.partition_name}.npy"))[:, args.block_id]
  File "/home/vizzio/miniconda3/envs/citygs/lib/python3.9/site-packages/numpy/lib/npyio.py", line 427, in load
    fid = stack.enter_context(open(os_fspath(file), "rb"))
FileNotFoundError: [Errno 2] No such file or directory: 'data/mill19/rubble-pixsfm/train/data_partitions/rubble_c9_r4.npy'
GPU 0 is available. Starting training block '1'
Optimizing 
Output folder: ./output/rubble_c9_r4/cells/cell1 [12/08 13:11:20]
Traceback (most recent call last):
  File "/home/vizzio/CityGaussian/train_large.py", line 309, in <module>
    training(lp, op, pp, args.test_iterations, args.save_iterations, args.refilter_iterations, args.checkpoint_iterations, args.start_checkpoint, args.debug_from)
  File "/home/vizzio/CityGaussian/train_large.py", line 43, in training
    scene = LargeScene(dataset, gaussians)
  File "/home/vizzio/CityGaussian/scene/__init__.py", line 123, in __init__
    partition = np.load(os.path.join(args.source_path, "data_partitions", f"{args.partition_name}.npy"))[:, args.block_id]
  File "/home/vizzio/miniconda3/envs/citygs/lib/python3.9/site-packages/numpy/lib/npyio.py", line 427, in load
    fid = stack.enter_context(open(os_fspath(file), "rb"))
FileNotFoundError: [Errno 2] No such file or directory: 'data/mill19/rubble-pixsfm/train/data_partitions/rubble_c9_r4.npy'

I checked the output folder and there is no such file. I don't know if I am missing any steps, since I have neither the .ply nor the .npy file.

Point_cloud.ply file empty

Hi, thank you for your work. I had a problem training on my custom dataset. I adjusted block_dim, aabb, and max_block_id for my dataset, then ran the coarse training and data partition; those seem fine. But when I trained all 16 cells, the point_cloud.ply files saved under point_cloud_blocks were empty, containing only the header, except for cell number 10. I checked whether any Gaussian points were detected, and they were: points exist for all cells, but they do not get saved to the point_cloud.ply files except for cell 10. I don't know why. Could there be a problem with the save function? Thank you in advance.
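
A quick sanity check (a sketch using the plyfile package the project already depends on): count the vertices each saved cell actually contains. The output path here is illustrative; adjust it to your run:

    from plyfile import PlyData

    for i in range(16):
        path = f"output/my_scene/cells/cell{i}/point_cloud/iteration_30000/point_cloud.ply"
        try:
            n = PlyData.read(path)["vertex"].count
        except FileNotFoundError:
            n = "missing"
        print(f"cell{i}: {n}")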

Resolution decreases during rendering

Thank you very much. I encountered a bug while using the CityGaussian viewer, which is based on Viser. I set the resolution to 1920 and the photo quality to 100, for both static and moving views.
1. During movement there is a noticeable decrease in resolution; when I finish moving and stop pressing the keyboard, the rendering suddenly becomes clear (for example, from 0:28 to 0:31 in the video). This is abnormal: in theory the resolution should remain unchanged while moving.
2. Sometimes after I finish moving and release the keyboard, the rendering stays unclear (for example, from 1:38 in the video onward). This is abnormal: in theory the resolution should remain unchanged after the movement ends. At other times the resolution does remain unchanged after movement ends.
I'm not sure whether this is a bug in Viser or gsplat; can you help me? Thank you.

The video: https://www.youtube.com/watch?v=KvWrNXg3BeA

Errors in UrbanScene3D examples, Need help.

I have prepared the data as illustrated and ran:
bash scripts/run_citygs.sh
It outputs:
GPU 0 is available.
Optimizing
Output folder: ./output/residence_coarse [29/08 18:36:12]
Reading camera 1007/2561
Traceback (most recent call last):
File "CityGaussian/train_large.py", line 309, in <module>
training(lp, op, pp, args.test_iterations, args.save_iterations, args.refilter_iterations, args.checkpoint_iterations, args.start_checkpoint, args.debug_from)
File "CityGaussian/train_large.py", line 43, in training
scene = LargeScene(dataset, gaussians)
File "CityGaussian/scene/__init__.py", line 131, in __init__
File "CityGaussian/scene/dataset_readers.py", line 145, in readColmapSceneInfo
File "CityGaussian/scene/dataset_readers.py", line 99, in readColmapCameras
File "anaconda3/envs/citygs/lib/python3.9/site-packages/PIL/Image.py", line 3431, in open
OSError: [Errno 24] Too many open files: 'CityGaussian/data/urban_scene_3d/residence-pixsfm/train/images/000148.JPG'
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "anaconda3/envs/citygs/lib/python3.9/shutil.py", line 727, in rmtree
OSError: [Errno 24] Too many open files: '/tmp/tmpy4j1ld6pwandb-media'
Error in atexit._run_exitfuncs:

Traceback (most recent call last):
File "anaconda3/envs/citygs/lib/python3.9/shutil.py", line 727, in rmtree
OSError: [Errno 24] Too many open files: '/tmp/tmp6j9c48awwandb-artifacts'
Traceback (most recent call last):
File "anaconda3/envs/citygs/lib/python3.9/weakref.py", line 667, in _exitfunc
File "anaconda3/envs/citygs/lib/python3.9/weakref.py", line 591, in call
File "anaconda3/envs/citygs/lib/python3.9/tempfile.py", line 829, in _cleanup
File "anaconda3/envs/citygs/lib/python3.9/tempfile.py", line 825, in _rmtree
File "anaconda3/envs/citygs/lib/python3.9/shutil.py", line 730, in rmtree
File "anaconda3/envs/citygs/lib/python3.9/shutil.py", line 727, in rmtree
OSError: [Errno 24] Too many open files: '/tmp/tmpuixxgyq1'
GPU 0 is available.
Output folder: ./output/residence_c20_r4 [29/08 18:36:20]
Reading camera 1008/2561
Traceback (most recent call last):
File "/CityGaussian/data_partition.py", line 151, in <module>
File "/CityGaussian/scene/__init__.py", line 131, in __init__
File "/CityGaussian/scene/dataset_readers.py", line 145, in readColmapSceneInfo
File "/CityGaussian/scene/dataset_readers.py", line 99, in readColmapCameras
File "anaconda3/envs/citygs/lib/python3.9/site-packages/PIL/Image.py", line 3431, in open
OSError: [Errno 24] Too many open files: '/CityGaussian/data/urban_scene_3d/residence-pixsfm/train/images/000484.JPG'
GPU 0 is available. Starting training block '0'
Optimizing
Output folder: ./output/residence_c20_r4/cells/cell0 [29/08 18:36:30]
Traceback (most recent call last):
File "/CityGaussian/train_large.py", line 309, in <module>
training(lp, op, pp, args.test_iterations, args.save_iterations, args.refilter_iterations, args.checkpoint_iterations, args.start_checkpoint, args.debug_from)
File "/CityGaussian/train_large.py", line 43, in training
scene = LargeScene(dataset, gaussians)
File "/CityGaussian/scene/__init__.py", line 123, in __init__
partition = np.load(os.path.join(args.source_path, "data_partitions", f"{args.partition_name}.npy"))[:, args.block_id]
File "anaconda3/envs/citygs/lib/python3.9/site-packages/numpy/lib/npyio.py", line 427, in load
fid = stack.enter_context(open(os_fspath(file), "rb"))
FileNotFoundError: [Errno 2] No such file or directory: 'data/urban_scene_3d/residence-pixsfm/train/data_partitions/residence_c20_r4.npy'

Could you please give me some advice?
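
OSError: [Errno 24] usually means the per-process open-file limit (ulimit -n) is too low for a scene with 2561 images. A hedged per-process workaround on Linux, using only the standard library, is to raise the soft limit toward the hard limit before the cameras are read:

    import resource

    # Python-level equivalent of `ulimit -n`: raise the soft open-file limit.
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    resource.setrlimit(resource.RLIMIT_NOFILE, (min(65535, hard), hard))
    print(resource.getrlimit(resource.RLIMIT_NOFILE))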

The evaluation results do not match those in the paper

I trained models on the MatrixCity aerial dataset with an RTX 3090 24G, but the evaluation results are not good.

{
 "ours_30000": {
  "SSIM": 0.3807186186313629,
  "PSNR": 14.57436752319336,
  "LPIPS": 0.6211459040641785
 },
 "cells": {},
 "cfg_args": {},
 "input.ply": {},
 "point_cloud": {},
 "cameras.json": {},
 "costs.json": {}
}
{
 "Average FPS": 36.160666823899135,
 "Min FPS": 21.782481797314,
 "Average Memory(M)": 8202.54135103007,
 "Max Memory(M)": 9335.87646484375,
 "Number of Gaussians": 14247061
}

The results are bad enough that I suspect I did something wrong. What I did is as follows:

  1. download the colmap data via Baidu Netdisk (https://pan.baidu.com/s/1zX34zftxj07dCM1x5bzmbA?pwd=1t6r) and the MatrixCity/aerial image data via Baidu Netdisk (https://pan.baidu.com/s/187P0e5p1hz9t5mgdJXjL1g?pwd=hqnn#list/path=%2F)
  2. my ./data folder looks like
β”œβ”€β”€ data
β”‚   β”œβ”€β”€ images
β”‚   β”‚   β”œβ”€β”€ aerial
β”‚   β”‚   β”‚   β”œβ”€β”€ train
β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ block_all
β”‚   β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ sparse
β”‚   β”‚   β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ 0
β”‚   β”‚   β”‚   β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ cameras.bin
β”‚   β”‚   β”‚   β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ points3D.bin
β”‚   β”‚   β”‚   β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ images.bin
β”‚   β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ input
β”‚   β”‚   β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ 0000.png
β”‚   β”‚   β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ 0001.png
β”‚   β”‚   β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ ......
β”‚   β”‚   β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ 5620.png
β”‚   β”‚   β”‚   β”œβ”€β”€ test
β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ block_all_test
β”‚   β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ sparse
β”‚   β”‚   β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ 0
β”‚   β”‚   β”‚   β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ cameras.bin
β”‚   β”‚   β”‚   β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ points3D.bin
β”‚   β”‚   β”‚   β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ images.bin
β”‚   β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ input
β”‚   β”‚   β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ 0000.png
β”‚   β”‚   β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ 0001.png
β”‚   β”‚   β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ ......
β”‚   β”‚   β”‚   β”‚   β”‚   β”‚   β”œβ”€β”€ 0740.png
  3. edit run_citygs.sh; the edited part is as follows:
TEST_PATH="data/images/aerial/test/block_all_test/"

COARSE_CONFIG="mc_aerial_coarse"
CONFIG="mc_aerial_c36"

out_name="val"
max_block_id=35 # 35 is 6 * 6 - 1
port=4041
  4. run bash scripts/run_citygs.sh
  5. some evaluation results are below (results/00000.png-00004.png)
    (result images omitted)
    What is wrong with these steps? What should I do?

No points merged

I trained on my own dataset. When it came to merge.py, no points could be found to merge except for block number 10. I checked the point_cloud.ply files of every cell, and it looks like no points were saved in the .ply files except for cell 10. I don't know why. How can I check whether points are being saved to the .ply files? The training took almost 2 days for 315 images. Why could this be happening?
GPU 0 is available.
36
Merged 0 points from block 0 from iteration 30000.
Merged 0 points from block 1 from iteration 30000.
Merged 0 points from block 2 from iteration 30000.
Merged 0 points from block 3 from iteration 30000.
Merged 0 points from block 4 from iteration 30000.
Merged 0 points from block 5 from iteration 30000.
Merged 0 points from block 6 from iteration 30000.
Merged 0 points from block 7 from iteration 30000.
Merged 0 points from block 8 from iteration 30000.
Merged 0 points from block 9 from iteration 30000.
Merged 3929483 points from block 10 from iteration 30000.
Merged 0 points from block 11 from iteration 30000.
Merged 0 points from block 12 from iteration 30000.
Merged 0 points from block 13 from iteration 30000.
Merged 0 points from block 14 from iteration 30000.
Merged 0 points from block 15 from iteration 30000.
Merged 0 points from block 16 from iteration 30000.
Merged 0 points from block 17 from iteration 30000.
Merged 0 points from block 18 from iteration 30000.
Merged 0 points from block 19 from iteration 30000.
Merged 0 points from block 20 from iteration 30000.
Merged 0 points from block 21 from iteration 30000.
Merged 0 points from block 22 from iteration 30000.
Merged 0 points from block 23 from iteration 30000.
Merged 0 points from block 24 from iteration 30000.
Merged 0 points from block 25 from iteration 30000.
Merged 0 points from block 26 from iteration 30000.
Merged 0 points from block 27 from iteration 30000.
Merged 0 points from block 28 from iteration 30000.
Merged 0 points from block 29 from iteration 30000.
Merged 0 points from block 30 from iteration 30000.
Merged 0 points from block 31 from iteration 30000.
Merged 0 points from block 32 from iteration 30000.
Merged 0 points from block 33 from iteration 30000.
Merged 0 points from block 34 from iteration 30000.
Merged 0 points from block 35 from iteration 30000.
Saving merged 3929483 point cloud to output/stadium/point_cloud/iteration_30000/point_cloud.ply
Done
GPU 0 is available.
Loading trained model at iteration 30000 [23/08 11:16:55]
Reading camera 1/2 [23/08 11:16:55]
Reading camera 2/2 [23/08 11:16:55]
[23/08 11:16:55]
Train cameras: 2, Test cameras: 0 [23/08 11:16:55]
Converting point3d.bin to .ply, will happen only the first time you open the scene. [23/08 11:16:55]
Rendering progress:   0%| | 0/2 [00:00<?, ?it/s]
[ INFO ] Encountered quite large input images (>1.6K pixels width), rescaling to 1.6K.
If this is not desired, please explicitly specify '--resolution/-r' as 1 [23/08 11:16:58]
Rendering progress: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:01<00:00, 1.35it/s]
Average FPS: 40.3448 [23/08 11:17:00]
Min FPS: 26.7174 [23/08 11:17:00]
Average Memory: 2471.2698 M [23/08 11:17:00]
Max Memory: 2552.1191 M [23/08 11:17:00]
Number of Gaussians: 3929483 [23/08 11:17:00]
Skip both train and test, render all views [23/08 11:17:00]
GPU 0 is available.

Scene: output/stadium
Method: ours_30000
Metric evaluation progress:   0%| | 0/2 [00:00<?, ?it/s]
Downloading: "https://download.pytorch.org/models/vgg16-397923af.pth" to /home/pc_5053/.cache/torch/hub/checkpoints/vgg16-397923af.pth
100%| | 528M/528M [02:06<00:00, 4.36MB/s]
Downloading: "https://raw.githubusercontent.com/richzhang/PerceptualSimilarity/master/lpips/weights/v0.1/vgg.pth" to /home/pc_5053/.cache/torch/hub/checkpoints/vgg.pth
100%| | 7.12k/7.12k [00:00<00:00, 7.54MB/s]
Metric evaluation progress: 100%| | 2/2 [02:09<00:00, 64.73s/it]
SSIM : 0.1954435
PSNR : 10.1279850
LPIPS: 0.6798524

And the PSNR is pretty bad too, even though during training I saw PSNR values around 25-26. What is wrong? Thank you in advance.

Setting max_block_id

First of all, thank you for replying to every question. My question is: how exactly does max_block_id affect the training process? What happens if I set it too high or too low? I set it to 35, and training seems to be taking forever with 315 images.
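
For what it's worth, the example script settings quoted elsewhere on this page (max_block_id=35 # 35 is 6 * 6 - 1) suggest max_block_id is simply the number of partition cells minus one, with one training run per cell. A sketch of that relationship, with a hypothetical partition:

    import numpy as np

    block_dim = (3, 3, 1)                       # hypothetical custom partition
    max_block_id = int(np.prod(block_dim)) - 1  # one cell per block id
    print(max_block_id)                         # 8, i.e. cells 0..8

If that reading is right, setting max_block_id to 35 for a smaller partition would launch training runs for cells that don't exist in the partition file.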

What should the output folder look like?

When I run bash scripts/run_citygs.sh, I get the error:

Output folder: ./output/building_c20_r4 [12/08 20:37:08]
Reading camera 1920/1920 [12/08 20:37:11]
Train cameras: 1920, Test cameras: 0 [12/08 20:37:11]
Traceback (most recent call last):
  File "/home/pc_5053/CityGaussian/data_partition.py", line 151, in <module>
    scene = LargeScene(lp, gaussians, shuffle=False)
  File "/home/pc_5053/CityGaussian/scene/__init__.py", line 168, in __init__
    self.gaussians.load_ply(os.path.join(self.pretrain_path, "point_cloud.ply"))
  File "/home/pc_5053/CityGaussian/scene/gaussian_model.py", line 229, in load_ply
    plydata = PlyData.read(path)
  File "/home/pc_5053/anaconda3/envs/citygs/lib/python3.9/site-packages/plyfile.py", line 401, in read
    (must_close, stream) = _open_stream(stream, 'read')
  File "/home/pc_5053/anaconda3/envs/citygs/lib/python3.9/site-packages/plyfile.py", line 481, in _open_stream
    return (True, open(stream, read_or_write[0] + 'b'))
FileNotFoundError: [Errno 2] No such file or directory: 'output/building_coarse/point_cloud/iteration_30000/point_cloud.ply'
GPU 0 is available. Starting training block '0'
Optimizing
Output folder: ./output/building_c20_r4/cells/cell0 [12/08 20:37:14]
Traceback (most recent call last):
  File "/home/pc_5053/CityGaussian/train_large.py", line 309, in <module>
    training(lp, op, pp, args.test_iterations, args.save_iterations, args.refilter_iterations, args.checkpoint_iterations, args.start_checkpoint, args.debug_from)
  File "/home/pc_5053/CityGaussian/train_large.py", line 43, in training
    scene = LargeScene(dataset, gaussians)
  File "/home/pc_5053/CityGaussian/scene/__init__.py", line 123, in __init__
    partition = np.load(os.path.join(args.source_path, "data_partitions", f"{args.partition_name}.npy"))[:, args.block_id]
  File "/home/pc_5053/anaconda3/envs/citygs/lib/python3.9/site-packages/numpy/lib/npyio.py", line 427, in load
    fid = stack.enter_context(open(os_fspath(file), "rb"))
FileNotFoundError: [Errno 2] No such file or directory: 'data/building-pixsfm/train/data_partitions/building_c20_r4.npy'

So what should my output folder look like? There seems to be a problem, but I could not figure it out from the instructions. Thank you in advance.

Error when using SIBR_remoteGaussian_app to see the running training process

I ran python train_large.py --config config/mc_aerial_coarse.yaml, then ran ./SIBR_remoteGaussian_app in another terminal and encountered the error below, even though the /data/matrix_city/aerial/train/block_all dir has sparse/0/images.bin, sparse/0/cameras.bin, and sparse/0/points3D.bin.
Also, with the original Gaussian splatting code (train.py), SIBR_remoteGaussian_app works well. I'm confused.

			LINE 560, FUNC getParsedData
			Cannot determine type of dataset at /data/matrix_city/aerial/train/block_all
[SIBR] ##  ERROR  ##:	FILE /home/farsee2/ALL_CODE/3DGS/CityGaussian/SIBR_viewers/src/projects/remote/apps/remoteGaussianUI/main.cpp
			LINE 54, FUNC resetScene
			Problem loading model info from input path data/matrix_city/aerial/train/block_all. Consider overriding path to model directory using --path.
terminate called after throwing an instance of 'std::runtime_error'

Missing colmap when running the Mill 19 dataset

Awesome work and code!
I want to run CityGaussian on the Mill 19 dataset, following the suggestions. I downloaded the raw images of rubble-pixsfm and building-pixsfm from Mega-NeRF (https://storage.cmusatyalab.org/mega-nerf-data/building-pixsfm.tgz) together with the COLMAP results from your Google Drive (https://drive.google.com/file/d/1Uz1pSTIpkagTml2jzkkzJ_rglS_z34p7/view?usp=sharing).
When I run data_proc_mill19.sh, an error happens:
sh: 1: colmap: not found. (It seems colmap is needed to create the new database at distorted/database.db.)
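
A quick check (a sketch assuming the prep script shells out to the colmap command-line tool): verify the binary is actually on PATH before running the script.

    import shutil

    # "sh: 1: colmap: not found" means the shell cannot find the binary.
    print(shutil.which("colmap") or "colmap not on PATH - install COLMAP first")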

How to use a custom dataset

There are no instructions on how to use a custom dataset; I would appreciate one. For example, I have a folder named "stadium":
β”œβ”€β”€ stadium/
β”‚ β”œβ”€β”€ train/
β”‚ β”‚ β”œβ”€β”€ sparse/
β”‚ β”‚ β”‚ β”œβ”€β”€ cameras.bin
β”‚ β”‚ β”‚ β”œβ”€β”€ images.bin
β”‚ β”‚ β”‚ └── points3D.bin
β”‚ β”‚ └── images/
β”‚ └── val/
β”‚ β”‚ β”œβ”€β”€ sparse/
β”‚ β”‚ β”‚ β”œβ”€β”€ cameras.bin
β”‚ β”‚ β”‚ β”œβ”€β”€ images.bin
β”‚ β”‚ β”‚ └── points3D.bin
β”‚ β”‚ └── images/
How do I edit COARSE_CONFIG, CONFIG, max_block_id, out_name, and TEST_PATH in run_citygs.sh for my data? Thanks for your work.
