My name is Alexey, and I study robotics and computer vision at RWTH Aachen.
- I'm currently working in the field of 3D scene understanding
- Ask me about anything! Just schedule a meeting or connect with me on Telegram: @kumuji
Mix3D: Out-of-Context Data Augmentation for 3D Scenes (3DV 2021 Oral)
Hello, Mix3D is a great project!
Could you upload the patch file "kpconv_pytorch_mix3d.patch" for the S3DIS dataset soon?
Looking forward to your prompt reply!
Hi @kumuji ,
Thanks for sharing your excellent work! May I ask how you rendered the point clouds shown in the figures in the README? I think they are beautiful, but I don't know how to draw them like that. I'll be very grateful if you can give me some hints. Thank you!
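One common way to get renderings like this is to export the colored point cloud to a PLY file and open it in a viewer such as MeshLab or Blender with point or sphere shading; which tool the authors actually used is not stated in this thread. A minimal stdlib-only ASCII PLY writer, as a sketch:

```python
def write_ply(path, points, colors):
    """Write an ASCII PLY point cloud viewable in MeshLab/CloudCompare.

    points: iterable of (x, y, z) floats
    colors: iterable of (r, g, b) ints in 0-255
    (Hypothetical helper for illustration, not code from the Mix3D repo.)
    """
    points = list(points)
    colors = list(colors)
    with open(path, "w") as f:
        # Standard ASCII PLY header: vertex count, then per-vertex properties.
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write("end_header\n")
        for (x, y, z), (r, g, b) in zip(points, colors):
            f.write(f"{x} {y} {z} {r} {g} {b}\n")
```

From there, a viewer's point-size or sphere-impostor setting produces the "small balls" look.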
Hi, thanks for your work. I have a question about the performance on the ScanNet dataset.
I find that Mix3D augmentation applied to MinkowskiNet improves mIoU by 1.2 points (73.6 vs. 72.4) on the ScanNet validation set, as reported in Table 1. However, on the test set, Mix3D is 4.5 points higher (78.1 vs. 73.6). Do you know why Mix3D's improvement over MinkowskiNet is so much larger on the test set?
Another question is about the test-time augmentation mentioned in Section 4.1 of the paper. What is test-time augmentation? Do you mean augmenting one scene multiple times and averaging the semantic predictions to get a better result at test time?
Looking forward to your reply! Thanks!
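Test-time augmentation in the sense described above (several augmented forward passes, averaged per-point predictions) can be sketched as follows. `dummy_model` and the rotation-only augmentation policy are placeholders for illustration, not the authors' actual setup:

```python
import numpy as np


def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def dummy_model(coords):
    # Stand-in for a trained segmentation network:
    # returns per-point logits over 3 classes.
    return np.stack([coords[:, 0], -coords[:, 1], 0.5 * coords[:, 2]], axis=1)


def rot_z(theta):
    # Rotation about the z (gravity) axis.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])


def predict_tta(coords, n_votes=8, seed=0):
    """Average class probabilities over n_votes randomly rotated passes.

    Per-point predictions keep their point order, so averaging is direct.
    """
    rng = np.random.default_rng(seed)
    probs = np.zeros((coords.shape[0], 3))
    for _ in range(n_votes):
        R = rot_z(rng.uniform(0.0, 2.0 * np.pi))
        probs += softmax(dummy_model(coords @ R.T))
    return probs / n_votes
```

The final label per point is then `probs.argmax(axis=1)`.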
Thank you for your great work. Could you please share your trained model with me?
Hello, thank you for the nice work!
If I want to reproduce the results in your paper, do I just need to set voxel_size=0.02? Should anything else be changed?
Has anyone come across this issue: "Because mix3d depends on open3d (0.9.0) which doesn't match any versions, version solving failed."? I can't make the installation work because of this.
Can you release the pre-trained models that produce the test numbers reported in the paper?
Hi @kumuji @JonasSchult @francisengelmann
Which specific parts of the Mix3D code should I apply to another dataset or model? Could you please help me out? Thanks a lot.
Best regards,
Xiaobing
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/site-packages/hydra/main.py", line 37, in decorated_main
    strict=strict,
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/site-packages/hydra/_internal/utils.py", line 347, in _run_hydra
    lambda: hydra.run(
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/site-packages/hydra/_internal/utils.py", line 201, in run_and_report
    raise ex
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/site-packages/hydra/_internal/utils.py", line 198, in run_and_report
    return func()
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/site-packages/hydra/_internal/utils.py", line 350, in <lambda>
    overrides=args.overrides,
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/site-packages/hydra/_internal/hydra.py", line 112, in run
    configure_logging=with_log_configuration,
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/site-packages/hydra/core/utils.py", line 128, in run_job
    ret.return_value = task_function(task_cfg)
  File "/ssd/djc/PointSegmentation/Semantic/LuckSeg/mix3d/__main__.py", line 95, in train
    runner.fit(model)
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 697, in fit
    self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 648, in _call_and_handle_interrupt
    return self.strategy.launcher.launch(trainer_fn, *args, trainer=self, **kwargs)
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/site-packages/pytorch_lightning/strategies/launchers/multiprocessing.py", line 107, in launch
    start_method=self._start_method,
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 189, in start_processes
    process.start()
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/multiprocessing/process.py", line 112, in start
    self._popen = self._Popen(self)
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
    self._launch(process_obj)
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/multiprocessing/popen_spawn_posix.py", line 47, in _launch
    reduction.dump(process_obj, fp)
  File "/data/djc/anaconda3/envs/semseg/lib/python3.7/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle MinkowskiConvolutionFunction objects
I encountered the above error when trying dual-GPU parallel training. Has anyone run into the same problem and solved it?
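Spawn-based multiprocessing launchers pickle the model to send it to worker processes, so any object holding unpicklable state (such as a custom autograd function) triggers exactly this TypeError. One way to narrow such errors down is to check picklability up front; a minimal stdlib-only sketch (`assert_picklable` is a hypothetical helper, not part of the repo):

```python
import pickle


def assert_picklable(obj, name="object"):
    """Raise a descriptive TypeError if obj cannot be pickled.

    Useful before handing a model to a spawn-based multi-GPU launcher,
    which must serialize the model into each worker process.
    """
    try:
        pickle.dumps(obj)
    except (TypeError, AttributeError, pickle.PicklingError) as exc:
        raise TypeError(f"{name} is not picklable: {exc}") from exc
    return True
```

A commonly reported workaround is to use a fork/exec-based distributed launcher instead of a spawn-based one, if your PyTorch Lightning version offers it, so the model never needs to be pickled.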
Thank you very much for your work.
When I apply the data augmentation to another model on SemanticKITTI, which module (file) should I use?
I am sorry if this is basic. The paper says it is easy to incorporate into existing codebases, but I did not understand how.
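The core idea described in the paper can be sketched in a few lines of NumPy: center both scenes at the origin, then concatenate their points, features, and labels. This is a stand-alone illustration of the technique, not the repo's actual implementation (function and argument names are made up here):

```python
import numpy as np


def mix3d(coords_a, feats_a, labels_a, coords_b, feats_b, labels_b):
    """Mix two point-cloud scenes by centering each at the origin
    and concatenating them into one out-of-context training sample."""
    # Center each scene at its coordinate mean so the two scenes overlap.
    a = coords_a - coords_a.mean(axis=0)
    b = coords_b - coords_b.mean(axis=0)
    coords = np.concatenate([a, b], axis=0)
    feats = np.concatenate([feats_a, feats_b], axis=0)
    labels = np.concatenate([labels_a, labels_b], axis=0)
    return coords, feats, labels
```

In a training pipeline this would sit in the collate/batching step, mixing pairs of samples before voxelization.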
Hi,
Thank you for the excellent work. I ran the preprocessing code for ScanNet according to the instructions, but it raised an error, and I couldn't find the bug. I would be very grateful if you could give me some hints. Thanks.
Here is the terminal output:
Running stage 'scannet':
poetry run python mix3d/datasets/preprocessing/scannet_preprocessing.py preprocess --git_repo=./data/raw/scannet/ScanNet --data_dir=/media/zebai/T7/Datasets/ScannetV2/ScanNet --save_dir=./data/processed/scannet
2021-10-22 11:15:24.771 | INFO | mix3d.datasets.preprocessing.base_preprocessing:preprocess:45 - Tasks for train: 1201
[Parallel(n_jobs=12)]: Using backend LokyBackend with 12 concurrent workers.
[Parallel(n_jobs=12)]: Done 1 tasks | elapsed: 4.7s
[Parallel(n_jobs=12)]: Done 8 tasks | elapsed: 6.5s
[Parallel(n_jobs=12)]: Done 17 tasks | elapsed: 14.1s
[Parallel(n_jobs=12)]: Done 26 tasks | elapsed: 18.1s
[Parallel(n_jobs=12)]: Done 37 tasks | elapsed: 24.1s
[Parallel(n_jobs=12)]: Done 48 tasks | elapsed: 30.2s
[Parallel(n_jobs=12)]: Done 61 tasks | elapsed: 37.2s
[Parallel(n_jobs=12)]: Done 74 tasks | elapsed: 44.2s
[Parallel(n_jobs=12)]: Done 89 tasks | elapsed: 51.4s
[Parallel(n_jobs=12)]: Done 104 tasks | elapsed: 1.1min
[Parallel(n_jobs=12)]: Done 121 tasks | elapsed: 1.2min
[Parallel(n_jobs=12)]: Done 138 tasks | elapsed: 1.3min
[Parallel(n_jobs=12)]: Done 157 tasks | elapsed: 1.4min
[Parallel(n_jobs=12)]: Done 176 tasks | elapsed: 1.6min
[Parallel(n_jobs=12)]: Done 197 tasks | elapsed: 1.8min
[Parallel(n_jobs=12)]: Done 218 tasks | elapsed: 2.0min
[Parallel(n_jobs=12)]: Done 219 tasks | elapsed: 2.0min
[Parallel(n_jobs=12)]: Done 220 tasks | elapsed: 2.0min
[Parallel(n_jobs=12)]: Done 221 tasks | elapsed: 2.0min
[Parallel(n_jobs=12)]: Done 222 tasks | elapsed: 2.0min
[Parallel(n_jobs=12)]: Done 223 tasks | elapsed: 2.0min
[Parallel(n_jobs=12)]: Done 224 tasks | elapsed: 2.0min
[Parallel(n_jobs=12)]: Done 225 tasks | elapsed: 2.0min
[Parallel(n_jobs=12)]: Done 226 tasks | elapsed: 2.0min
[Parallel(n_jobs=12)]: Done 227 tasks | elapsed: 2.0min
[Parallel(n_jobs=12)]: Done 228 tasks | elapsed: 2.0min
[Parallel(n_jobs=12)]: Done 229 tasks | elapsed: 2.0min
[Parallel(n_jobs=12)]: Done 230 tasks | elapsed: 2.0min
[Parallel(n_jobs=12)]: Done 231 tasks | elapsed: 2.0min
[Parallel(n_jobs=12)]: Done 232 tasks | elapsed: 2.0min
[Parallel(n_jobs=12)]: Done 233 tasks | elapsed: 2.0min
[Parallel(n_jobs=12)]: Done 234 tasks | elapsed: 2.0min
[Parallel(n_jobs=12)]: Done 235 tasks | elapsed: 2.0min
[Parallel(n_jobs=12)]: Done 236 tasks | elapsed: 2.0min
[Parallel(n_jobs=12)]: Done 237 tasks | elapsed: 2.0min
[Parallel(n_jobs=12)]: Done 238 tasks | elapsed: 2.0min
[Parallel(n_jobs=12)]: Done 239 tasks | elapsed: 2.0min
[Parallel(n_jobs=12)]: Done 240 tasks | elapsed: 2.0min
[Parallel(n_jobs=12)]: Done 241 tasks | elapsed: 2.0min
2021-10-22 11:17:25.204 | ERROR | fire.core:_CallAndUpdateTrace:681 - An error has been caught in function '_CallAndUpdateTrace', process 'MainProcess' (136997), thread 'MainThread' (139858255038272):
Traceback (most recent call last):
  File "mix3d/datasets/preprocessing/scannet_preprocessing.py", line 212, in <module>
    Fire(ScannetPreprocessing)
    │    └ <class '__main__.ScannetPreprocessing'>
    └ <function Fire at 0x7f3325018d30>
  File "/home/zebai/.conda/envs/predator/lib/python3.8/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
    │ │ │ │ │ └ 'scannet_preprocessing.py'
    │ │ │ │ └ {}
    │ │ │ └ Namespace(completion=None, help=False, interactive=False, separator='-', trace=False, verbose=False)
    │ │ └ ['preprocess', '--git_repo=./data/raw/scannet/ScanNet', '--data_dir=/media/zebai/T7/Datasets/ScannetV2/ScanNet', '--save_dir=...
    │ └ <class '__main__.ScannetPreprocessing'>
    └ <function _Fire at 0x7f3324b12c10>
  File "/home/zebai/.conda/envs/predator/lib/python3.8/site-packages/fire/core.py", line 466, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
    │ └ <function _CallAndUpdateTrace at 0x7f3324b12d30>
    └ <bound method BasePreprocessing.preprocess of <__main__.ScannetPreprocessing object at 0x7f3323679fa0>>
  File "/home/zebai/.conda/envs/predator/lib/python3.8/site-packages/fire/core.py", line 681, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
    │ │ └ {}
    │ └ []
    └ <bound method BasePreprocessing.preprocess of <__main__.ScannetPreprocessing object at 0x7f3323679fa0>>
  File "/home/zebai/mix3d/mix3d/datasets/preprocessing/base_preprocessing.py", line 46, in preprocess
    parallel_results = Parallel(n_jobs=self.n_jobs, verbose=10)(
    │ │ └ 12
    │ └ <__main__.ScannetPreprocessing object at 0x7f3323679fa0>
    └ <class 'joblib.parallel.Parallel'>
  File "/home/zebai/.conda/envs/predator/lib/python3.8/site-packages/joblib/parallel.py", line 1054, in __call__
    self.retrieve()
    │ └ <function Parallel.retrieve at 0x7f332493fa60>
    └ Parallel(n_jobs=12)
  File "/home/zebai/.conda/envs/predator/lib/python3.8/site-packages/joblib/parallel.py", line 933, in retrieve
    self._output.extend(job.get(timeout=self.timeout))
    │ │ │ │ │ │ └ None
    │ │ │ │ │ └ Parallel(n_jobs=12)
    │ │ │ │ └ functools.partial(<function LokyBackend.wrap_future_result at 0x7f332493c670>, <Future at 0x7f33234898b0 state=finished raise...
    │ │ │ └ <Future at 0x7f33234898b0 state=finished raised TerminatedWorkerError>
    │ │ └ <method 'extend' of 'list' objects>
    │ └ [{'filepath': 'data/processed/scannet/train/0000_00.npy', 'scene': 0, 'sub_scene': 0, 'raw_filepath': '/media/zebai/T7/Datase...
    └ Parallel(n_jobs=12)
  File "/home/zebai/.conda/envs/predator/lib/python3.8/site-packages/joblib/_parallel_backends.py", line 542, in wrap_future_result
    return future.result(timeout=timeout)
    │ │ └ None
    │ └ <function Future.result at 0x7f3324d9d1f0>
    └ <Future at 0x7f33234898b0 state=finished raised TerminatedWorkerError>
  File "/home/zebai/.conda/envs/predator/lib/python3.8/concurrent/futures/_base.py", line 439, in result
    return self.__get_result()
    └ <Future at 0x7f33234898b0 state=finished raised TerminatedWorkerError>
  File "/home/zebai/.conda/envs/predator/lib/python3.8/concurrent/futures/_base.py", line 388, in __get_result
    raise self._exception
    │ └ TerminatedWorkerError('A worker process managed by the executor was unexpectedly terminated. This could be caused by a segmen...
    └ <Future at 0x7f33234898b0 state=finished raised TerminatedWorkerError>
joblib.externals.loky.process_executor.TerminatedWorkerError: A worker process managed by the executor was unexpectedly terminated. This could be caused by a segmentation fault while calling the function or by an excessive memory usage causing the Operating System to kill the worker.
The exit codes of the workers are {SIGSEGV(-11)}
ERROR: failed to reproduce 'dvc.yaml': output 'data/processed/scannet/color_mean_std.yaml' does not exist
Hi,
I'm new to 3D. Can anyone tell me why kernel_size=2 is used instead of 3 for the downsampling operations in the backbones, e.g., ResUNet? Thanks in advance.
I get the following error at int_to_region_type = {m.value: m for m in ME.RegionType}: TypeError: 'pybind11_type' object is not iterable
In the paper, the point clouds are visualized with small balls. I want to know which library was used.
Hi,
Thanks for sharing the code. In your paper, you apply Mix3D to MinkowskiNets, which take the whole scene directly as input. I am wondering whether Mix3D also works for chunked scene input, as PointNet uses?
(I guess it works just as well, since it essentially mixes two complete (or incomplete) scenes.)
Hi,
Thanks for releasing the code! I have a question regarding how to find the centroid of the point cloud. In your patch file for SemanticKITTI, why do you treat the mean of the coordinates as the centroid?
Additionally, I want to apply Mix3D to my own dataset. In this case, how should I find the centroid of a point cloud?
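For illustration, here is a small comparison (my own sketch, not code from the repo) of two common notions of "centroid" for a point cloud: the mean of the coordinates, which is weighted toward dense regions, versus the bounding-box center, which ignores point density:

```python
import numpy as np

# Three points of a toy "scene"; density pulls the mean toward (2, 0, 0)-ish.
coords = np.array([[0.0, 0.0, 0.0],
                   [2.0, 0.0, 0.0],
                   [2.0, 4.0, 0.0]])

# Mean of coordinates: each point contributes equally.
mean_centroid = coords.mean(axis=0)            # (4/3, 4/3, 0)

# Bounding-box center: depends only on the extent, not the density.
bbox_center = (coords.min(axis=0) + coords.max(axis=0)) / 2.0  # (1, 2, 0)
```

Either can serve as the reference point for moving a scene to the origin before mixing; the mean is the simpler and more common choice.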
Hi, @kumuji
Thank you for your excellent work! How do you mix labeled and unlabeled data? I want to know whether my approach is correct: I first moved both point clouds to the origin and concatenated their coordinates directly; then I set the labels of one cloud to -1 to mark it as unlabeled and concatenated the labels of the two. I mainly want to know whether this handling of the labels is correct and consistent with what you describe in your paper.
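The procedure described above can be sketched as follows. This is a hypothetical illustration, not the repo's API; in particular, IGNORE_LABEL = -1 assumes the training loss is configured to skip that index:

```python
import numpy as np

IGNORE_LABEL = -1  # assumption: the loss ignores this index


def mix_labeled_unlabeled(coords_l, labels_l, coords_u):
    """Recenter a labeled and an unlabeled cloud, mark the unlabeled
    points with IGNORE_LABEL, and concatenate both into one sample."""
    a = coords_l - coords_l.mean(axis=0)
    b = coords_u - coords_u.mean(axis=0)
    coords = np.concatenate([a, b], axis=0)
    # Unlabeled points get the ignore index so they contribute no loss.
    fill = np.full(len(coords_u), IGNORE_LABEL, dtype=labels_l.dtype)
    labels = np.concatenate([labels_l, fill], axis=0)
    return coords, labels
```

Whether this matches the paper's semi-supervised setup exactly is for the authors to confirm; mechanically it implements the concatenation you describe.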
After preprocessing the data, I ran the command "poetry run train" and hit the error "AssertionError: repo is dirty, commit first", even though I have connected the project to my own git repository.
Thanks for sharing. Do you have plans to support CUDA 11.1?
Hi,
Thanks for your work. I am having a hard time matching the data augmentation pipeline described in the paper with the augmentation policies in volumentations_aug.yaml. The two do not seem to be the same; for example, I can't see the flip, elastic distortion, etc. described in the paper.
Am I missing something?
With regards,
How do you keep the points of the two scenes from overlapping during mixing? If two points of a random mix fall in the same location (xyz), is that position randomly assigned one of the two categories, or does the mix operation have special handling?
Which visualization tools are used in your papers?