
linhaojia13 / pointmetabase


This is a PyTorch implementation of PointMetaBase, proposed in our paper "Meta Architecture for Point Cloud Analysis".

License: MIT License

Python 71.17% Shell 3.24% Cuda 4.28% C++ 20.74% C 0.23% Cython 0.34%

pointmetabase's People

Contributors

linhaojia13


pointmetabase's Issues

Ablation study on S3DIS

Thank you for releasing the source code of your excellent work :)
I found that there is a center-point padding issue in ball query, which can make some of the points fed into max-pooling redundant. Have you run an ablation study of the nsample and radius hyperparameters of ball query on the S3DIS dataset? If so, could you share the results?
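
For illustration, a minimal sketch of the padding behavior in question (a toy re-implementation, not the repository's CUDA kernel): when fewer than nsample neighbors fall inside the radius, the remaining slots are filled by repeating the first index found, so the subsequent max-pooling sees duplicated points.

import torch

def ball_query_sketch(xyz, centers, radius, nsample):
    # xyz: (N, 3) points; centers: (M, 3) query centers.
    # Returns (M, nsample) neighbor indices, padded with duplicates.
    dist = torch.cdist(centers, xyz)              # (M, N) pairwise distances
    idx = torch.full((centers.shape[0], nsample), -1, dtype=torch.long)
    for i in range(centers.shape[0]):
        inside = torch.nonzero(dist[i] < radius).flatten()
        if inside.numel() == 0:
            inside = dist[i].argmin().reshape(1)  # fall back to nearest point
        take = inside[:nsample]
        idx[i, :take.numel()] = take
        idx[i, take.numel():] = take[0]           # pad by repeating the first index
    return idx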

CUDA_VISIBLE_DEVICES=0 bash script/main_segmentation.sh cfgs/s3dis/pointmetabase-l.yaml

Traceback (most recent call last):
  File "/root/PointMetaBase/examples/segmentation/../../openpoints/utils/registry.py", line 291, in build_from_cfg
    return obj_cls(**obj_cfg)
  File "/root/PointMetaBase/examples/segmentation/../../openpoints/dataset/s3dis/s3dis.py", line 80, in __init__
    data_list = sorted(os.listdir(raw_root))
FileNotFoundError: [Errno 2] No such file or directory: '/dev/shm/MEMORY_DATA/s3disfull/raw'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "examples/segmentation/main.py", line 737, in <module>
    main(0, cfg)
  File "examples/segmentation/main.py", line 159, in main
    distributed=cfg.distributed
  File "/root/PointMetaBase/examples/segmentation/../../openpoints/dataset/build.py", line 71, in build_dataloader_from_cfg
    dataset = build_dataset_from_cfg(dataset_cfg.common, split_cfg)
  File "/root/PointMetaBase/examples/segmentation/../../openpoints/dataset/build.py", line 38, in build_dataset_from_cfg
    return DATASETS.build(cfg, default_args=default_args)
  File "/root/PointMetaBase/examples/segmentation/../../openpoints/utils/registry.py", line 149, in build
    return self.build_func(*args, **kwargs, registry=self)
  File "/root/PointMetaBase/examples/segmentation/../../openpoints/utils/registry.py", line 294, in build_from_cfg
    raise type(e)(f'{obj_cls.__name__}: {e}')
FileNotFoundError: S3DIS: [Errno 2] No such file or directory: '/dev/shm/MEMORY_DATA/s3disfull/raw'
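
The error means the preprocessed S3DIS data is not at the path the config expects. A minimal pre-flight check, a sketch using the path from the log above (the source directory is an assumption to adapt):

import os

expected = "/dev/shm/MEMORY_DATA/s3disfull/raw"  # path from the traceback above
source = os.path.expanduser("~/data/s3disfull")  # assumption: where your prepared data lives

if not os.path.isdir(expected):
    os.makedirs(os.path.dirname(expected), exist_ok=True)
    # Link the prepared dataset into the location the dataloader reads from.
    os.symlink(os.path.join(source, "raw"), expected)
print(sorted(os.listdir(expected))[:3])          # should list the prepared room files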

How to preprocess and test my own data

Hello, thank you for the excellent work!
May I ask how to preprocess and test my own collected data using the S3DIS pretrained model?
Thank you so much.

val on S3DIS but CUDA out of memory

I tried to use KNN instead of ball query to find nearest neighbors (group_args['NAME'] == 'knn'). On a single NVIDIA RTX 3090 (24 GB), training occupied about 45% of GPU memory, but I ran into CUDA out of memory during validation. This does not happen with ball query.
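
A plausible cause (an assumption, not confirmed by the authors): validation runs on far larger point sets than a training crop, and a kNN built on a dense pairwise-distance matrix needs memory proportional to M x N, which explodes at scene scale, while ball query's fixed-radius kernel does not. A minimal sketch of querying in chunks to bound memory:

import torch

@torch.no_grad()  # neighbor indices need no gradients
def knn_chunked(support_xyz, query_xyz, k, chunk=4096):
    # Compute kNN indices chunk by chunk so the full (M, N) distance
    # matrix is never materialized at once.
    out = []
    for start in range(0, query_xyz.shape[0], chunk):
        q = query_xyz[start:start + chunk]
        dist = torch.cdist(q, support_xyz)        # (chunk, N) only
        out.append(dist.topk(k, largest=False).indices)
    return torch.cat(out, dim=0)                  # (M, k)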

How to visualize

Hello, I looked through your code and only found that vis_3d.py is related to visualization. I want to visualize the ground truth and the predictions; what should I do (if possible, please be as detailed as possible, thank you very much)? I obtained the txt files of gt and pred during testing.
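
A minimal visualization sketch with Open3D, assuming each row of the txt files is "x y z label" (the actual dump format of the test script may differ):

import numpy as np
import open3d as o3d

def show(txt_path, palette=None):
    # Load "x y z label" rows and render the points colored by label.
    data = np.loadtxt(txt_path)
    xyz, labels = data[:, :3], data[:, 3].astype(int)
    if palette is None:  # one random color per class, seeded for repeatability
        palette = np.random.RandomState(0).rand(labels.max() + 1, 3)
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(xyz)
    pcd.colors = o3d.utility.Vector3dVector(palette[labels])
    o3d.visualization.draw_geometries([pcd])

show("pred.txt")  # hypothetical filename; call show("gt.txt") likewise to compare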

Result of PointNeXt on ScanNet

Congratulations on your amazing work!
I notice that your paper lists a result of PointNeXt on ScanNet. However, I can't find that result in the PointNeXt paper. Could you tell me where it came from?

About the torch seed configuration in the code

Hi Haojia,

Thank you for your excellent work and the released code. I have a question regarding the torch seed configuration. In your code, you use different manual seeds for different models (e.g., pointmetabase-l, pointmetabase-xl, pointmetabase-xxl). I wonder about the motivation behind this design, and how the seeds would influence network training? Thanks a lot in advance. :)

Best
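
For context, seeds enter training through weight initialization, dropout masks, data shuffling, and point sub-sampling. A generic seeding helper (a sketch, not the repository's exact set_random_seed) looks like:

import random
import numpy as np
import torch

def set_seed(seed):
    # Fix every RNG that affects init, dropout, shuffling and sampling.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)           # also seeds CUDA generators in recent torch
    torch.cuda.manual_seed_all(seed)  # explicit, for older versions

Different seeds thus give different initializations and sampling orders, which can shift final accuracy slightly between runs.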

About Throughput

How do I measure throughput? Following your script, I tested throughput on a 3090. In my tests, PointMetaBase-XL is not 2x faster than PointNeXt-XL, only ~1.5x. Any suggestions?
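
For reference, a minimal throughput-measurement sketch (warm-up plus synchronized timing; model and the input batch are placeholders). Measured speed-ups also depend on GPU, batch size, and whether the custom CUDA ops were compiled:

import time
import torch

@torch.no_grad()
def throughput(model, batch, n_warmup=10, n_iter=50):
    # Instances processed per second, with proper CUDA synchronization.
    model.eval().cuda()
    for _ in range(n_warmup):         # warm up kernels / cudnn autotuning
        model(batch)
    torch.cuda.synchronize()
    t0 = time.time()
    for _ in range(n_iter):
        model(batch)
    torch.cuda.synchronize()          # wait for queued kernels to finish
    return n_iter * batch.shape[0] / (time.time() - t0)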

Pointnet++ implementation for ShapeNet Part Segmentation

Thank you for your great work!
I cannot find any config that runs PointNet++ for ShapeNet part segmentation.
Could you please provide the config file?
I tried to write a config myself, but it is not working: the loss becomes NaN after a few steps (a debugging sketch follows the config below).

model:
  NAME: BasePartSeg
  encoder_args:
    NAME: PointNet2Encoder
    in_channels: 7
    width: null
    strides: [4, 4, 1]
    layers: 3
    use_res: False
    mlps: [[[64, 64, 128]],
        [[128, 128, 256]],
        [[256, 512, 1024]]]
    radius: [0.2, 0.4, null]
    num_samples: [32, 64, null]
    sampler: fps
    aggr_args:
      NAME: 'convpool'
      feature_type: 'dp_fj'
      anisotropic: False
      reduction: 'max'
    group_args:
      NAME: 'ballquery'
      use_xyz: True
    conv_args:
      order: conv-norm-act
    act_args:
      act: 'relu'
    norm_args:
      norm: 'bn'
  decoder_args:
    NAME: PointNet2PartDecoder
    fp_mlps: [[128, 128, 128], [256, 128], [256, 256]]
    norm_args:
      norm: 'bn'
  cls_args:
    NAME: SegHead
    globals: max,avg  # append global features to each point feature
    num_classes: 50
    in_channels: null
    norm_args:
      norm: 'bn'
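
A hedged debugging sketch for the NaN issue above (generic PyTorch tooling, not a fix specific to this config); common culprits are a too-high learning rate, log(0) or division by zero in the loss, and empty neighbor groups:

import torch

torch.autograd.set_detect_anomaly(True)  # reports the op that produced the NaN

def guarded_step(model, loss, optimizer, step):
    # One optimizer step with a finiteness check and gradient clipping.
    if not torch.isfinite(loss):
        raise RuntimeError(f"non-finite loss at step {step}: {loss.item()}")
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)
    optimizer.step()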

Installation errors

Details:

- Ubuntu 22.04.2 LTS
- Conda 23.1.0
- CUDA 12.0
- gcc 11.3.0

I have followed the installation instructions, changing this line in installation.sh:

module load cuda/11.3.1

to:

module load cuda/12.0 #module load cuda/11.3.1

It works, but with the following errors:

ERROR: Unable to locate a modulefile for 'cuda/12.0'
ERROR: Unable to locate a modulefile for 'gcc/7.5.0'
OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.

I think the module errors are not a problem; from what I've read, environment modules are no longer used in this version of Ubuntu.

About the problem with cuda:

torch.cuda.is_available()
True
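
Note that torch.cuda.is_available() only checks the driver and runtime; compiling the C++/CUDA extensions additionally needs the build toolchain that the errors above complain about. A quick sketch to verify it from Python:

import shutil
from torch.utils.cpp_extension import CUDA_HOME

print("CUDA_HOME:", CUDA_HOME)                # None -> extensions cannot build
print("nvcc on PATH:", shutil.which("nvcc"))  # None -> 'nvcc: command not found'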

However, when running:

CUDA_VISIBLE_DEVICES=0 bash script/main_segmentation.sh cfgs/s3dis/pointmetabase-l.yaml wandb.use_wandb=False                                                    

I get:

script/main_segmentation.sh: line 31: nvcc: command not found
lupus-fon.mines-paristech.local
1
Traceback (most recent call last):
  File "examples/segmentation/main.py", line 15, in <module>
    from openpoints.utils import set_random_seed, save_checkpoint, load_checkpoint, resume_checkpoint, setup_logger_dist, \
  File "/home/dlamasnovoa/Documents/repositories/PointMetaBase/examples/segmentation/../../openpoints/__init__.py", line 1, in <module>
    from .transforms import *
  File "/home/dlamasnovoa/Documents/repositories/PointMetaBase/examples/segmentation/../../openpoints/transforms/__init__.py", line 5, in <module>
    from .transforms_factory import * 
  File "/home/dlamasnovoa/Documents/repositories/PointMetaBase/examples/segmentation/../../openpoints/transforms/transforms_factory.py", line 2, in <module>
    from ..utils.registry import Registry
  File "/home/dlamasnovoa/Documents/repositories/PointMetaBase/examples/segmentation/../../openpoints/utils/__init__.py", line 3, in <module>
    from .logger import setup_logger_dist, generate_exp_directory, resume_exp_directory
  File "/home/dlamasnovoa/Documents/repositories/PointMetaBase/examples/segmentation/../../openpoints/utils/logger.py", line 9, in <module>
    import shortuuid
ModuleNotFoundError: No module named 'shortuuid'

First modification

So I modified this in installation.sh:

conda install -y pytorch=1.10.1 torchvision cudatoolkit=11.3 -c pytorch -c nvidia

to:

conda install -c "nvidia/label/cuda-11.3.1" cuda-toolkit
conda install -y pytorch=1.10.1 torchvision cudatoolkit=11.3 -c pytorch

And during the installation I get:

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "setup.py", line 26, in <module>
    'build_ext': BuildExtension
  File "/home/dlamasnovoa/miniconda3/envs/openpoints/lib/python3.7/site-packages/setuptools/__init__.py", line 153, in setup
    return distutils.core.setup(**attrs)
  File "/home/dlamasnovoa/miniconda3/envs/openpoints/lib/python3.7/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/home/dlamasnovoa/miniconda3/envs/openpoints/lib/python3.7/distutils/dist.py", line 966, in run_commands
    self.run_command(cmd)
  File "/home/dlamasnovoa/miniconda3/envs/openpoints/lib/python3.7/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/home/dlamasnovoa/miniconda3/envs/openpoints/lib/python3.7/site-packages/setuptools/command/install.py", line 74, in run
    self.do_egg_install()
  File "/home/dlamasnovoa/miniconda3/envs/openpoints/lib/python3.7/site-packages/setuptools/command/install.py", line 116, in do_egg_install
    self.run_command('bdist_egg')
  File "/home/dlamasnovoa/miniconda3/envs/openpoints/lib/python3.7/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/home/dlamasnovoa/miniconda3/envs/openpoints/lib/python3.7/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/home/dlamasnovoa/miniconda3/envs/openpoints/lib/python3.7/site-packages/setuptools/command/bdist_egg.py", line 164, in run
    cmd = self.call_command('install_lib', warn_dir=0)
  File "/home/dlamasnovoa/miniconda3/envs/openpoints/lib/python3.7/site-packages/setuptools/command/bdist_egg.py", line 150, in call_command
    self.run_command(cmdname)
  File "/home/dlamasnovoa/miniconda3/envs/openpoints/lib/python3.7/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/home/dlamasnovoa/miniconda3/envs/openpoints/lib/python3.7/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/home/dlamasnovoa/miniconda3/envs/openpoints/lib/python3.7/site-packages/setuptools/command/install_lib.py", line 11, in run
    self.build()
  File "/home/dlamasnovoa/miniconda3/envs/openpoints/lib/python3.7/distutils/command/install_lib.py", line 107, in build
    self.run_command('build_ext')
  File "/home/dlamasnovoa/miniconda3/envs/openpoints/lib/python3.7/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/home/dlamasnovoa/miniconda3/envs/openpoints/lib/python3.7/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/home/dlamasnovoa/miniconda3/envs/openpoints/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 79, in run
    _build_ext.run(self)
  File "/home/dlamasnovoa/miniconda3/envs/openpoints/lib/python3.7/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run
    _build_ext.build_ext.run(self)
  File "/home/dlamasnovoa/miniconda3/envs/openpoints/lib/python3.7/distutils/command/build_ext.py", line 340, in run
    self.build_extensions()
  File "/home/dlamasnovoa/miniconda3/envs/openpoints/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 735, in build_extensions
    build_ext.build_extensions(self)
  File "/home/dlamasnovoa/miniconda3/envs/openpoints/lib/python3.7/site-packages/Cython/Distutils/old_build_ext.py", line 195, in build_extensions
    _build_ext.build_ext.build_extensions(self)
  File "/home/dlamasnovoa/miniconda3/envs/openpoints/lib/python3.7/distutils/command/build_ext.py", line 449, in build_extensions
    self._build_extensions_serial()
  File "/home/dlamasnovoa/miniconda3/envs/openpoints/lib/python3.7/distutils/command/build_ext.py", line 474, in _build_extensions_serial
    self.build_extension(ext)
  File "/home/dlamasnovoa/miniconda3/envs/openpoints/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 202, in build_extension
    _build_ext.build_extension(self, ext)
  File "/home/dlamasnovoa/miniconda3/envs/openpoints/lib/python3.7/distutils/command/build_ext.py", line 534, in build_extension
    depends=ext.depends)
  File "/home/dlamasnovoa/miniconda3/envs/openpoints/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 565, in unix_wrap_ninja_compile
    with_cuda=with_cuda)
  File "/home/dlamasnovoa/miniconda3/envs/openpoints/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1404, in _write_ninja_file_and_compile_objects
    error_prefix='Error compiling objects for extension')
  File "/home/dlamasnovoa/miniconda3/envs/openpoints/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1733, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension

Second modification

So I modified this in installation.sh:

conda install -y pytorch=1.10.1 torchvision cudatoolkit=11.3 -c pytorch -c nvidia

to:

conda install -c nvidia cuda-toolkit
conda install -y pytorch=1.10.1 torchvision cudatoolkit=11.3 -c pytorch

And I get:

RuntimeError: 
The detected CUDA version (12.1) mismatches the version that was used to compile
PyTorch (11.3). Please make sure to use the same CUDA version

However:

torch.cuda.is_available()
True
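
Again, torch.cuda.is_available() passing does not guarantee the extension build will work: the build step compares the toolkit that compiled PyTorch against the nvcc found on the system. A quick sketch to confirm the mismatch:

import subprocess
import torch

print("CUDA used to build torch:", torch.version.cuda)       # expected: 11.3
print(subprocess.run(["nvcc", "--version"],
                     capture_output=True, text=True).stdout)  # reports 12.1 here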

But when running:

CUDA_VISIBLE_DEVICES=0 bash script/main_segmentation.sh cfgs/s3dis/pointmetabase-l.yaml wandb.use_wandb=False                                                    

I get:

Traceback (most recent call last):
  File "examples/segmentation/main.py", line 15, in <module>
    from openpoints.utils import set_random_seed, save_checkpoint, load_checkpoint, resume_checkpoint, setup_logger_dist, \
  File "/home/dlamasnovoa/Documents/repositories/PointMetaBase/examples/segmentation/../../openpoints/__init__.py", line 1, in <module>
    from .transforms import *
  File "/home/dlamasnovoa/Documents/repositories/PointMetaBase/examples/segmentation/../../openpoints/transforms/__init__.py", line 5, in <module>
    from .transforms_factory import * 
  File "/home/dlamasnovoa/Documents/repositories/PointMetaBase/examples/segmentation/../../openpoints/transforms/transforms_factory.py", line 2, in <module>
    from ..utils.registry import Registry
  File "/home/dlamasnovoa/Documents/repositories/PointMetaBase/examples/segmentation/../../openpoints/utils/__init__.py", line 3, in <module>
    from .logger import setup_logger_dist, generate_exp_directory, resume_exp_directory
  File "/home/dlamasnovoa/Documents/repositories/PointMetaBase/examples/segmentation/../../openpoints/utils/logger.py", line 9, in <module>
    import shortuuid
ModuleNotFoundError: No module named 'shortuuid'

Doing:

pip install shortuuid

When running:

CUDA_VISIBLE_DEVICES=0 bash script/main_segmentation.sh cfgs/s3dis/pointmetabase-l.yaml wandb.use_wandb=False                                                    

I get:

Traceback (most recent call last):
  File "examples/segmentation/main.py", line 18, in <module>
    from openpoints.dataset import build_dataloader_from_cfg, get_features_by_keys, get_class_weights
  File "/home/dlamasnovoa/Documents/repositories/PointMetaBase/examples/segmentation/../../openpoints/dataset/__init__.py", line 8, in <module>
    from .scanobjectnn import * # comment for chamfer error
  File "/home/dlamasnovoa/Documents/repositories/PointMetaBase/examples/segmentation/../../openpoints/dataset/scanobjectnn/__init__.py", line 1, in <module>
    from .scanobjectnn import ScanObjectNNHardest
  File "/home/dlamasnovoa/Documents/repositories/PointMetaBase/examples/segmentation/../../openpoints/dataset/scanobjectnn/scanobjectnn.py", line 5, in <module>
    from openpoints.models.layers import fps
  File "/home/dlamasnovoa/Documents/repositories/PointMetaBase/examples/segmentation/../../openpoints/models/__init__.py", line 6, in <module>
    from .backbone import *
  File "/home/dlamasnovoa/Documents/repositories/PointMetaBase/examples/segmentation/../../openpoints/models/backbone/__init__.py", line 2, in <module>
    from .pointnetv2 import PointNet2Encoder, PointNet2Decoder, PointNetFPModule
  File "/home/dlamasnovoa/Documents/repositories/PointMetaBase/examples/segmentation/../../openpoints/models/backbone/pointnetv2.py", line 14, in <module>
    from ..layers import furthest_point_sample, random_sample,  LocalAggregation, three_interpolation, create_convblock1d # grid_subsampling,
  File "/home/dlamasnovoa/Documents/repositories/PointMetaBase/examples/segmentation/../../openpoints/models/layers/__init__.py", line 9, in <module>
    from .group_embed import SubsampleGroup, PointPatchEmbed
  File "/home/dlamasnovoa/Documents/repositories/PointMetaBase/examples/segmentation/../../openpoints/models/layers/group_embed.py", line 6, in <module>
    from .subsample import furthest_point_sample, random_sample
  File "/home/dlamasnovoa/Documents/repositories/PointMetaBase/examples/segmentation/../../openpoints/models/layers/subsample.py", line 8, in <module>
    from openpoints.cpp.pointnet2_batch import pointnet2_cuda
  File "/home/dlamasnovoa/Documents/repositories/PointMetaBase/examples/segmentation/../../openpoints/cpp/__init__.py", line 6, in <module>
    from .pointnet2_batch import pointnet2_cuda
  File "/home/dlamasnovoa/Documents/repositories/PointMetaBase/examples/segmentation/../../openpoints/cpp/pointnet2_batch/__init__.py", line 2, in <module>
    import pointnet2_batch_cuda as pointnet2_cuda
ModuleNotFoundError: No module named 'pointnet2_batch_cuda'
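
pointnet2_batch_cuda is not a pip package; it is a CUDA extension compiled from openpoints/cpp/pointnet2_batch by the install script, so this import error usually means that build step failed earlier (e.g. the nvcc/PyTorch CUDA mismatch above). A hedged check:

# Assumes only the module name taken from the traceback above.
try:
    import pointnet2_batch_cuda
    print("compiled extension found")
except ImportError:
    print("extension missing: re-run the cpp build from the install script "
          "after aligning nvcc with torch.version.cuda")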

FileNotFoundError: S3DIS: [Errno 2] No such file or directory

When I follow your README and train on S3DIS, this is the error I get. I'm pretty sure the data is already prepared as in PointNeXt. Do you have any idea how to resolve this?

Traceback (most recent call last):
  File "/mnt/d/PointMetaBase/examples/segmentation/../../openpoints/utils/registry.py", line 291, in build_from_cfg
    return obj_cls(**obj_cfg)
  File "/mnt/d/PointMetaBase/examples/segmentation/../../openpoints/dataset/s3dis/s3dis.py", line 80, in __init__
    data_list = sorted(os.listdir(raw_root))
FileNotFoundError: [Errno 2] No such file or directory: '/dev/shm/MEMORY_DATA/s3disfull/raw'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "examples/segmentation/main.py", line 737, in <module>
    main(0, cfg)
  File "examples/segmentation/main.py", line 154, in main
    val_loader = build_dataloader_from_cfg(cfg.get('val_batch_size', cfg.batch_size),
  File "/mnt/d/PointMetaBase/examples/segmentation/../../openpoints/dataset/build.py", line 71, in build_dataloader_from_cfg
    dataset = build_dataset_from_cfg(dataset_cfg.common, split_cfg)
  File "/mnt/d/PointMetaBase/examples/segmentation/../../openpoints/dataset/build.py", line 38, in build_dataset_from_cfg
    return DATASETS.build(cfg, default_args=default_args)
  File "/mnt/d/PointMetaBase/examples/segmentation/../../openpoints/utils/registry.py", line 149, in build
    return self.build_func(*args, **kwargs, registry=self)
  File "/mnt/d/PointMetaBase/examples/segmentation/../../openpoints/utils/registry.py", line 294, in build_from_cfg
    raise type(e)(f'{obj_cls.__name__}: {e}')
FileNotFoundError: S3DIS: [Errno 2] No such file or directory: '/dev/shm/MEMORY_DATA/s3disfull/raw'

val miou high but test miou low

Hello, I tried to modify your model, but I ran into a situation where the validation mIoU is high while the test mIoU is low. Do you know why?

code version

Could you release a version of the code that uses only Python?

Downloading and evaluating the dataset

I have two questions:
1. How do I download the S3DIS dataset, and how do I perform the preprocessing step?
2. If I want to test my own collected data, how do I apply the preprocessing and then evaluate it?

Thank you for your help.
