
mix3d's Introduction

Hi there 👋

My name is Alexey, and I study robotics and computer vision at RWTH Aachen.

  • 🔭 I’m currently working in the field of 3D scene understanding
  • 💬 Ask me about anything! Just schedule a meeting or connect with me via Telegram @kumuji

mix3d's People

Contributors

francisengelmann, jonasschult, kumuji


mix3d's Issues

Question about the visualization

Hi @kumuji,

Thanks for sharing your excellent work! May I ask how you drew the point cloud shown in the README? I think it is beautiful, but I don't know how to create such a figure. I'd be very grateful for any hints. Thank you!

Performance improvement gap between val and test set on ScanNet

Hi, thanks for your work. I have a question about the performance on the ScanNet dataset.

I find that Mix3D augmentation applied to MinkowskiNet improves mIoU by 1.2% (73.6 vs. 72.4) on the ScanNet validation set, as reported in Table 1. However, on the test set, Mix3D is 4.5% higher (78.1 vs. 73.6). Do you know why Mix3D's gain over MinkowskiNet is so much larger on the test set?

Another question is about the test-time augmentation mentioned in Section 4.1 of the paper. What is test-time augmentation? Do you mean augmenting one scene multiple times and averaging the semantic predictions to get a better result at test time?

Looking forward to your reply! Thanks!
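For reference, test-time augmentation in the sense asked about above usually means exactly that: run the network on several augmented copies of the scene and average the per-point class scores. A minimal NumPy sketch, with a hypothetical `tta_predict` helper and assuming the model maps (N, 3) coordinates to (N, num_classes) softmax scores (the paper's actual augmentation set may differ):

```python
import numpy as np

def tta_predict(model, coords, num_augs=4, rng=None):
    # Hypothetical sketch: augment the scene several times (here, random
    # rotations about the up-axis), run the model on each augmented copy,
    # and average the per-point class probabilities.
    rng = np.random.default_rng(0) if rng is None else rng
    probs = []
    for _ in range(num_augs):
        theta = rng.uniform(0.0, 2.0 * np.pi)
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        probs.append(model(coords @ rot.T))  # (N, num_classes) scores
    return np.mean(probs, axis=0)
```

Averaging softmax scores (rather than hard labels) keeps the per-class confidence information from each augmented pass.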

About reproducing the reported 78.1 mIoU

Hello, thank you for the nice work!
If I want to reproduce the result from your paper, do I just need to set `voxel_size=0.02`? Should anything else be changed?

poetry install fails to install open3d

Has anyone come across this issue: "Because mix3d depends on open3d (0.9.0) which doesn't match any versions, version solving failed."? I can't get the installation to work because of this.

pre-trained models

Could you release the pre-trained models that achieve the test numbers reported in the paper?

TypeError: can't pickle MinkowskiConvolutionFunction objects

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/site-packages/hydra/main.py", line 37, in decorated_main
    strict=strict,
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/site-packages/hydra/_internal/utils.py", line 347, in _run_hydra
    lambda: hydra.run(
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/site-packages/hydra/_internal/utils.py", line 201, in run_and_report
    raise ex
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/site-packages/hydra/_internal/utils.py", line 198, in run_and_report
    return func()
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/site-packages/hydra/_internal/utils.py", line 350, in <lambda>
    overrides=args.overrides,
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/site-packages/hydra/_internal/hydra.py", line 112, in run
    configure_logging=with_log_configuration,
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/site-packages/hydra/core/utils.py", line 128, in run_job
    ret.return_value = task_function(task_cfg)
  File "/ssd/djc/PointSegmentation/Semantic/LuckSeg/mix3d/__main__.py", line 95, in train
    runner.fit(model)
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 697, in fit
    self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 648, in _call_and_handle_interrupt
    return self.strategy.launcher.launch(trainer_fn, *args, trainer=self, **kwargs)
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/site-packages/pytorch_lightning/strategies/launchers/multiprocessing.py", line 107, in launch
    start_method=self._start_method,
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 189, in start_processes
    process.start()
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/multiprocessing/process.py", line 112, in start
    self._popen = self._Popen(self)
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
    self._launch(process_obj)
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 47, in _launch
    reduction.dump(process_obj, fp)
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle MinkowskiConvolutionFunction objects

I encountered the above error when trying dual-GPU parallel training. Has anyone run into the same problem and solved it?

Applying data augmentation to another model

Thank you very much for your work.
When I apply the data augmentation to another model using SemanticKITTI, which module (file) should I use?
I'm sorry; the paper says it is easy to incorporate into existing code bases, but I did not understand how.
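For what it's worth, the core Mix3D operation is small enough to sketch in a few lines of NumPy (hypothetical function name; the real implementation lives in the repo's dataset/collation code): center each of two scenes at the origin, then concatenate their points and labels into one mixed training sample.

```python
import numpy as np

def mix3d(coords_a, labels_a, coords_b, labels_b):
    # Hypothetical sketch of the Mix3D idea: move each scene so its
    # centroid sits at the origin, then concatenate points and labels
    # into a single mixed sample fed to the network.
    a = coords_a - coords_a.mean(axis=0)
    b = coords_b - coords_b.mean(axis=0)
    coords = np.concatenate([a, b], axis=0)
    labels = np.concatenate([labels_a, labels_b], axis=0)
    return coords, labels
```

In practice this would be dropped into the data-loading pipeline (e.g. a collate function), which is why the paper describes it as easy to add to existing code bases.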

ScanNet preprocessing error

Hi,

Thank you for the excellent work. I ran the preprocessing code for ScanNet according to the instructions, but it raised an error, and I couldn't find the bug (T-T). I would very much appreciate any hints. Thanks.
Here is the terminal output:
Running stage 'scannet':

poetry run python mix3d/datasets/preprocessing/scannet_preprocessing.py preprocess --git_repo=./data/raw/scannet/ScanNet --data_dir=/media/zebai/T7/Datasets/ScannetV2/ScanNet --save_dir=./data/processed/scannet
2021-10-22 11:15:24.771 | INFO | mix3d.datasets.preprocessing.base_preprocessing:preprocess:45 - Tasks for train: 1201
[Parallel(n_jobs=12)]: Using backend LokyBackend with 12 concurrent workers.
[Parallel(n_jobs=12)]: Done 1 tasks | elapsed: 4.7s
[Parallel(n_jobs=12)]: Done 8 tasks | elapsed: 6.5s
[Parallel(n_jobs=12)]: Done 17 tasks | elapsed: 14.1s
[Parallel(n_jobs=12)]: Done 26 tasks | elapsed: 18.1s
[Parallel(n_jobs=12)]: Done 37 tasks | elapsed: 24.1s
[Parallel(n_jobs=12)]: Done 48 tasks | elapsed: 30.2s
[Parallel(n_jobs=12)]: Done 61 tasks | elapsed: 37.2s
[Parallel(n_jobs=12)]: Done 74 tasks | elapsed: 44.2s
[Parallel(n_jobs=12)]: Done 89 tasks | elapsed: 51.4s
[Parallel(n_jobs=12)]: Done 104 tasks | elapsed: 1.1min
[Parallel(n_jobs=12)]: Done 121 tasks | elapsed: 1.2min
[Parallel(n_jobs=12)]: Done 138 tasks | elapsed: 1.3min
[Parallel(n_jobs=12)]: Done 157 tasks | elapsed: 1.4min
[Parallel(n_jobs=12)]: Done 176 tasks | elapsed: 1.6min
[Parallel(n_jobs=12)]: Done 197 tasks | elapsed: 1.8min
[Parallel(n_jobs=12)]: Done 218 tasks | elapsed: 2.0min
...
[Parallel(n_jobs=12)]: Done 241 tasks | elapsed: 2.0min
2021-10-22 11:17:25.204 | ERROR | fire.core:_CallAndUpdateTrace:681 - An error has been caught in function '_CallAndUpdateTrace', process 'MainProcess' (136997), thread 'MainThread' (139858255038272):
Traceback (most recent call last):

File "mix3d/datasets/preprocessing/scannet_preprocessing.py", line 212, in
Fire(ScannetPreprocessing)
β”‚ β”” <class 'main.ScannetPreprocessing'>
β”” <function Fire at 0x7f3325018d30>

File "/home/zebai/.conda/envs/predator/lib/python3.8/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
β”‚ β”‚ β”‚ β”‚ β”‚ β”” 'scannet_preprocessing.py'
β”‚ β”‚ β”‚ β”‚ β”” {}
β”‚ β”‚ β”‚ β”” Namespace(completion=None, help=False, interactive=False, separator='-', trace=False, verbose=False)
β”‚ β”‚ β”” ['preprocess', '--git_repo=./data/raw/scannet/ScanNet', '--data_dir=/media/zebai/T7/Datasets/ScannetV2/ScanNet', '--save_dir=...
β”‚ β”” <class 'main.ScannetPreprocessing'>
β”” <function _Fire at 0x7f3324b12c10>
File "/home/zebai/.conda/envs/predator/lib/python3.8/site-packages/fire/core.py", line 466, in _Fire
component, remaining_args = _CallAndUpdateTrace(
β”‚ β”” <function _CallAndUpdateTrace at 0x7f3324b12d30>
β”” <bound method BasePreprocessing.preprocess of <main.ScannetPreprocessing object at 0x7f3323679fa0>>

File "/home/zebai/.conda/envs/predator/lib/python3.8/site-packages/fire/core.py", line 681, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
β”‚ β”‚ β”” {}
β”‚ β”” []
β”” <bound method BasePreprocessing.preprocess of <main.ScannetPreprocessing object at 0x7f3323679fa0>>

File "/home/zebai/mix3d/mix3d/datasets/preprocessing/base_preprocessing.py", line 46, in preprocess
parallel_results = Parallel(n_jobs=self.n_jobs, verbose=10)(
β”‚ β”‚ β”” 12
β”‚ β”” <main.ScannetPreprocessing object at 0x7f3323679fa0>
β”” <class 'joblib.parallel.Parallel'>

File "/home/zebai/.conda/envs/predator/lib/python3.8/site-packages/joblib/parallel.py", line 1054, in call
self.retrieve()
β”‚ β”” <function Parallel.retrieve at 0x7f332493fa60>
β”” Parallel(n_jobs=12)
File "/home/zebai/.conda/envs/predator/lib/python3.8/site-packages/joblib/parallel.py", line 933, in retrieve
self._output.extend(job.get(timeout=self.timeout))
β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”” None
β”‚ β”‚ β”‚ β”‚ β”‚ β”” Parallel(n_jobs=12)
β”‚ β”‚ β”‚ β”‚ β”” functools.partial(<function LokyBackend.wrap_future_result at 0x7f332493c670>, <Future at 0x7f33234898b0 state=finished raise...
β”‚ β”‚ β”‚ β”” <Future at 0x7f33234898b0 state=finished raised TerminatedWorkerError>
β”‚ β”‚ β”” <method 'extend' of 'list' objects>
β”‚ β”” [{'filepath': 'data/processed/scannet/train/0000_00.npy', 'scene': 0, 'sub_scene': 0, 'raw_filepath': '/media/zebai/T7/Datase...
β”” Parallel(n_jobs=12)
File "/home/zebai/.conda/envs/predator/lib/python3.8/site-packages/joblib/_parallel_backends.py", line 542, in wrap_future_result
return future.result(timeout=timeout)
β”‚ β”‚ β”” None
β”‚ β”” <function Future.result at 0x7f3324d9d1f0>
β”” <Future at 0x7f33234898b0 state=finished raised TerminatedWorkerError>
File "/home/zebai/.conda/envs/predator/lib/python3.8/concurrent/futures/_base.py", line 439, in result
return self.__get_result()
β”” <Future at 0x7f33234898b0 state=finished raised TerminatedWorkerError>
File "/home/zebai/.conda/envs/predator/lib/python3.8/concurrent/futures/_base.py", line 388, in __get_result
raise self._exception
β”‚ β”” TerminatedWorkerError('A worker process managed by the executor was unexpectedly terminated. This could be caused by a segmen...
β”” <Future at 0x7f33234898b0 state=finished raised TerminatedWorkerError>

joblib.externals.loky.process_executor.TerminatedWorkerError: A worker process managed by the executor was unexpectedly terminated. This could be caused by a segmentation fault while calling the function or by an excessive memory usage causing the Operating System to kill the worker.

The exit codes of the workers are {SIGSEGV(-11)}
ERROR: failed to reproduce 'dvc.yaml': output 'data/processed/scannet/color_mean_std.yaml' does not exist

Why downsample with kernel size 2?

Hi,
I'm new to 3D; can anyone tell me why kernel size 2 is used instead of 3 for the downsampling operations in backbones, e.g., ResUNet? Thanks in advance.
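One common intuition (an illustrative NumPy sketch, not the repo's code): a sparse convolution with kernel size 2 and stride 2 pools non-overlapping 2×2×2 blocks of voxels, so every input voxel contributes to exactly one output voxel; kernel size 3 with stride 2 would make neighboring output voxels overlap on the sparse grid.

```python
import numpy as np

# Integer voxel coordinates of four occupied voxels.
coords = np.array([[0, 0, 0], [1, 0, 1], [2, 3, 1], [3, 3, 3]])

# Kernel size 2, stride 2: each voxel maps to the 2x2x2 block it lies in,
# i.e. integer division of its coordinates by 2; duplicates merge.
down = np.unique(coords // 2, axis=0)  # downsampled voxel coordinates
```

Here the first two voxels fall into the same 2×2×2 block and merge into one output voxel, which is the non-overlapping partition a kernel-size-3 window would not give.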

Does Mix3D also work for cubical (or spherical) chunks?

Hi,

Thanks for sharing the code. In your paper, you apply Mix3D to MinkowskiNets, which take the whole scene directly as input. I am wondering whether Mix3D also works for chunked scene input, as used by PointNet?
(I guess it does, since it essentially mixes two complete (or incomplete) scenes.)

How to find the centroid of the point cloud?

Hi,

Thanks for releasing the code! I have a question regarding how to find the centroid of the point cloud. In your file for SemanticKITTI, why do you treat the mean of the coordinates as the centroid?

Additionally, I want to apply Mix3D to my own dataset. In this case, how should I find the centroid of a point cloud?
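As a tiny worked example of the convention being asked about (plain NumPy, not the repo's code): the centroid is simply the per-axis mean of the coordinates, and subtracting it moves the cloud's centroid to the origin, which is all the mixing step needs.

```python
import numpy as np

# A toy point cloud of four points.
cloud = np.array([[0.0, 0.0, 0.0],
                  [2.0, 0.0, 0.0],
                  [0.0, 2.0, 0.0],
                  [2.0, 2.0, 4.0]])

centroid = cloud.mean(axis=0)   # per-axis mean of the coordinates
centered = cloud - centroid     # cloud shifted so its centroid is the origin
```

For a custom dataset, the same per-axis mean works regardless of sensor or scale; it just aligns the two scenes' "mass centers" before concatenation.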

mixing with unlabeled datasets

Hi, @kumuji
Thank you for your excellent work! How should labeled and unlabeled data be mixed? I want to check whether my procedure is right: I first moved both point clouds to the origin and concatenated their coordinates directly; then I set the labels of the unlabeled data to -1 and concatenated the labels of the two samples. I mainly want to know whether this handling of the labels is correct and consistent with what you describe in the paper.
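The procedure described in the question can be sketched as follows (hypothetical function name; using -1 as an ignore index, which e.g. PyTorch's cross-entropy supports via `ignore_index`, so unlabeled points contribute no supervision signal):

```python
import numpy as np

def mix_with_unlabeled(coords_l, labels_l, coords_u, ignore_index=-1):
    # Hypothetical sketch: center both clouds at the origin, concatenate
    # coordinates, and mark every unlabeled point with the ignore index
    # so the loss function skips it.
    a = coords_l - coords_l.mean(axis=0)
    b = coords_u - coords_u.mean(axis=0)
    coords = np.concatenate([a, b], axis=0)
    labels = np.concatenate(
        [labels_l, np.full(len(coords_u), ignore_index)], axis=0
    )
    return coords, labels
```

Whether this matches the paper's exact semi-supervised setup is for the authors to confirm; the sketch only mirrors the steps the questioner describes.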

Augmentation policies forming mix3d

Hi,

Thanks for your work. I am having a hard time matching the data augmentation pipeline described in the paper with the augmentation policies in volumentations_aug.yaml. The two do not seem to be the same; for example, I can't find flip, elastic distortion, etc. described in the paper.
Am I missing something?

With Regards,
