torch-points3d / torch-points-kernels

PyTorch kernels for spatial operations on point clouds

License: MIT License


torch-points-kernels's Introduction


This is a framework for running common deep learning models for point cloud analysis tasks against classic benchmarks. It heavily relies on PyTorch Geometric and Facebook Hydra.

The framework allows lean yet complex models to be built with minimum effort and great reproducibility. It also provides a high-level API to democratize deep learning on point clouds. See our paper at 3DV for an overview of the framework's capabilities and benchmarks of state-of-the-art networks.

Overview

Requirements

  • CUDA 10 or higher (if you want GPU version)
  • Python 3.7 or higher + headers (python-dev)
  • PyTorch 1.8.1 or higher (PyTorch >= 1.9 is recommended)
  • A sparse convolution backend (optional); see here for installation instructions (a quick check of your CUDA/PyTorch setup is sketched below)
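
As a quick sanity check of these requirements, you can query the installed PyTorch build directly from Python (a minimal sketch, not part of the framework):

import torch

print("PyTorch:", torch.__version__)                 # should be >= 1.8.1
print("CUDA (build):", torch.version.cuda)           # CUDA toolkit PyTorch was compiled against
print("CUDA available:", torch.cuda.is_available())  # False means CPU-only
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))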

For a more seamless setup, it is recommended to use Docker. This approach ensures compatibility and eases the installation process, particularly when working with specific versions of CUDA and PyTorch. You can pull the appropriate Docker image as follows:

docker pull pytorch/pytorch:1.10.0-cuda11.3-cudnn8-devel

After setting up the environment (either natively or through Docker), install the required Python package using pip:

pip install torch-points3d

Project structure

├─ benchmark               # Output from various benchmark runs
├─ conf                    # All configurations for training and evaluation live there
├─ notebooks               # A collection of notebooks that allow result exploration and network debugging
├─ docker                  # Docker image that can be used for inference or training
├─ docs                    # All the doc
├─ eval.py                 # Eval script
├─ find_neighbour_dist.py  # Script to find optimal #neighbours within neighbour search operations
├─ forward_scripts         # Script that runs a forward pass on possibly non annotated data
├─ outputs                 # All outputs from your runs sorted by date
├─ scripts                 # Some scripts to help manage the project
├─ torch_points3d
    ├─ core                # Core components
    ├─ datasets            # All code related to datasets
    ├─ metrics             # All metrics and trackers
    ├─ models              # All models
    ├─ modules             # Basic modules that can be used in a modular way
    ├─ utils               # Various utils
    └─ visualization       # Visualization
├─ test
└─ train.py                # Main script to launch a training

As a general philosophy, we have split datasets and models by task. For example, the datasets folder has five subfolders:

  • segmentation
  • classification
  • registration
  • object_detection
  • panoptic

where each folder contains the dataset related to each task.

Methods currently implemented

Please refer to our documentation for accessing some of those models directly from the API and see our example notebooks for KPconv and RSConv for more details.

Available Tasks

  • Classification / Part Segmentation
  • Segmentation
  • Object Detection
  • Panoptic Segmentation
  • Registration

Available datasets

Segmentation

* S3DIS 1x1
* S3DIS Room
* S3DIS Fused - Sphere | Cylinder

Object detection and panoptic

* S3DIS Fused - Sphere | Cylinder

Registration

Classification

3D Sparse convolution support

We currently support Minkowski Engine > v0.5 and torchsparse >= v1.4.0 as backends for sparse convolutions. Those packages need to be installed independently from Torch Points3d; please follow the installation instructions and troubleshooting notes on the respective repositories. At the moment, MinkowskiEngine (see here, thank you Chris Choy) demonstrates faster training. Please be aware that torchsparse is still in beta and does not support CPU-only training.

Once you have set up one of those two sparse convolution frameworks, you can start using our high-level API to define a unet backbone or simply an encoder:

from torch_points3d.applications.sparseconv3d import SparseConv3d

model = SparseConv3d("unet", input_nc=3, output_nc=5, num_layers=4, backend="torchsparse") # minkowski by default

You can also assemble your own networks by using the modules provided in torch_points3d/modules/SparseConv3d/nn. For example, if you wish to use the torchsparse backend you can do the following:

import torch_points3d.modules.SparseConv3d as sp3d

sp3d.nn.set_backend("torchsparse")
conv = sp3d.nn.Conv3d(10, 10)
bn = sp3d.nn.BatchNorm(10)
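
These modules behave like regular torch modules and can be composed freely. Below is a minimal sketch of a conv + batch-norm block built only from the calls shown above; the input x is assumed to be a sparse tensor produced by the selected backend:

import torch
import torch_points3d.modules.SparseConv3d as sp3d

sp3d.nn.set_backend("torchsparse")

class SimpleBlock(torch.nn.Module):
    """Illustrative block: one sparse convolution followed by batch norm."""

    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.conv = sp3d.nn.Conv3d(in_channels, out_channels)
        self.bn = sp3d.nn.BatchNorm(out_channels)

    def forward(self, x):
        # x: sparse tensor from the chosen backend (torchsparse here)
        return self.bn(self.conv(x))

block = SimpleBlock(10, 10)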

Mixed Precision Training

Mixed precision allows for lower memory usage on the GPU and slightly faster training times by performing the sparse convolution, pooling, and gradient ops in float16. Mixed precision training is currently supported for CUDA training on SparseConv3d networks with the torchsparse backend. To enable mixed precision, ensure you have the latest version of torchsparse with pip install --upgrade git+https://github.com/mit-han-lab/torchsparse.git. Then, set training.enable_mixed=True in your training configuration files. If all the conditions are met, when you start training you will see a log entry stating:

[torch_points3d.models.base_model][INFO] - Model will use mixed precision

If, however, you try to use mixed precision training with an unsupported backend, you will see:

[torch_points3d.models.base_model][WARNING] - Mixed precision is not supported on this model, using default precision...
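
For context, the option above relies on PyTorch's automatic mixed precision. The snippet below is a generic, self-contained illustration of that mechanism; it is not Torch Points3D's actual training loop, which you drive through the configuration flag instead:

import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(16, 4).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

x = torch.randn(8, 16, device=device)
y = torch.randint(0, 4, (8,), device=device)

optimizer.zero_grad()
with torch.cuda.amp.autocast(enabled=(device == "cuda")):  # ops run in float16 where safe
    loss = criterion(model(x), y)
scaler.scale(loss).backward()  # loss scaling avoids float16 gradient underflow
scaler.step(optimizer)
scaler.update()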

Adding your model to the PretrainedRegistry.

The PretrainedRegistry enables anyone to add their own pre-trained models and re-create them with only 2 lines of code for fine-tuning or production purposes.

  • [You] Launch your model training with Wandb activated (wandb.log=True)
  • [TorchPoints3d] Once the training is finished, TorchPoints3d will upload your trained model, wrapped in our custom checkpoint, to your wandb account.
  • [You] Within the PretainedRegistry class, add a key-value pair to its MODELS attribute. The key should describe your model, dataset and training hyper-parameters (possibly the best model); the value should be the URL referencing the .pt file on your wandb.

Example: Key: pointnet2_largemsg-s3dis-1 and URL value: https://api.wandb.ai/files/loicland/benchmark-torch-points-3d-s3dis/1e1p0csk/pointnet2_largemsg.pt for the pointnet2_largemsg.pt file. The key describes a pointnet2 largemsg trained on s3dis fold 1.

  • [Anyone] By using the PretainedRegistry class and by providing the key, the associated model weights will be downloaded and the pre-trained model will be ready to use with its transforms.
[In]:
from torch_points3d.applications.pretrained_api import PretainedRegistry

model = PretainedRegistry.from_pretrained("pointnet2_largemsg-s3dis-1")

print(model.wandb)
print(model.print_transforms())

[Out]:
=================================================== WANDB URLS ======================================================
WEIGHT_URL: https://api.wandb.ai/files/loicland/benchmark-torch-points-3d-s3dis/1e1p0csk/pointnet2_largemsg.pt
LOG_URL: https://app.wandb.ai/loicland/benchmark-torch-points-3d-s3dis/runs/1e1p0csk/logs
CHART_URL: https://app.wandb.ai/loicland/benchmark-torch-points-3d-s3dis/runs/1e1p0csk
OVERVIEW_URL: https://app.wandb.ai/loicland/benchmark-torch-points-3d-s3dis/runs/1e1p0csk/overview
HYDRA_CONFIG_URL: https://app.wandb.ai/loicland/benchmark-torch-points-3d-s3dis/runs/1e1p0csk/files/hydra-config.yaml
OVERRIDES_URL: https://app.wandb.ai/loicland/benchmark-torch-points-3d-s3dis/runs/1e1p0csk/files/overrides.yaml
======================================================================================================================

pre_transform = None
test_transform = Compose([
    FixedPoints(20000, replace=True),
    XYZFeature(axis=['z']),
    AddFeatsByKeys(rgb=True, pos_z=True),
    Center(),
    ScalePos(scale=0.5),
])
train_transform = Compose([
    FixedPoints(20000, replace=True),
    RandomNoise(sigma=0.001, clip=0.05),
    RandomRotate((-180, 180), axis=2),
    RandomScaleAnisotropic([0.8, 1.2]),
    RandomAxesSymmetry(x=True, y=False, z=False),
    DropFeature(proba=0.2, feature='rgb'),
    XYZFeature(axis=['z']),
    AddFeatsByKeys(rgb=True, pos_z=True),
    Center(),
    ScalePos(scale=0.5),
])
val_transform = Compose([
    FixedPoints(20000, replace=True),
    XYZFeature(axis=['z']),
    AddFeatsByKeys(rgb=True, pos_z=True),
    Center(),
    ScalePos(scale=0.5),
])
inference_transform = Compose([
    FixedPoints(20000, replace=True),
    XYZFeature(axis=['z']),
    AddFeatsByKeys(rgb=True, pos_z=True),
    Center(),
    ScalePos(scale=0.5),
])
pre_collate_transform = Compose([
    PointCloudFusion(),
    SaveOriginalPosId,
    GridSampling3D(grid_size=0.04, quantize_coords=False, mode=mean),
])

Developer guidelines

Setting repo

We use Poetry for managing our packages. In order to get started, clone this repository and run the following command from the root of the repo:

poetry install --no-root

This will install all required dependencies in a new virtual environment.

Activate the environment

poetry shell

You can check that the install has been successful by running

python -m unittest -v

For pycuda support (only needed for the registration tasks):

pip install pycuda

Getting started: Train pointnet++ on the part segmentation task with the ShapeNet dataset

poetry run python train.py task=segmentation models=segmentation/pointnet2 model_name=pointnet2_charlesssg data=segmentation/shapenet-fixed

And you should see something like this:

(screenshot: training log output)

The config for pointnet++ is a good example of how to define a model and is as follows:

# PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space (https://arxiv.org/abs/1706.02413)
# Credit Charles R. Qi: https://github.com/charlesq34/pointnet2/blob/master/models/pointnet2_part_seg_msg_one_hot.py

pointnet2_onehot:
  architecture: pointnet2.PointNet2_D
  conv_type: "DENSE"
  use_category: True
  down_conv:
    module_name: PointNetMSGDown
    npoint: [1024, 256, 64, 16]
    radii: [[0.05, 0.1], [0.1, 0.2], [0.2, 0.4], [0.4, 0.8]]
    nsamples: [[16, 32], [16, 32], [16, 32], [16, 32]]
    down_conv_nn:
      [
        [[FEAT, 16, 16, 32], [FEAT, 32, 32, 64]],
        [[32 + 64, 64, 64, 128], [32 + 64, 64, 96, 128]],
        [[128 + 128, 128, 196, 256], [128 + 128, 128, 196, 256]],
        [[256 + 256, 256, 256, 512], [256 + 256, 256, 384, 512]],
      ]
  up_conv:
    module_name: DenseFPModule
    up_conv_nn:
      [
        [512 + 512 + 256 + 256, 512, 512],
        [512 + 128 + 128, 512, 512],
        [512 + 64 + 32, 256, 256],
        [256 + FEAT, 128, 128],
      ]
    skip: True
  mlp_cls:
    nn: [128, 128]
    dropout: 0.5
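
Because model definitions are plain YAML, they can also be inspected programmatically. A small sketch using OmegaConf, assuming the file lives at conf/models/segmentation/pointnet2.yaml in your checkout (the path and key names are taken from the snippet above, not verified against every release):

from omegaconf import OmegaConf

# Hypothetical path inside a torch-points3d checkout
cfg = OmegaConf.load("conf/models/segmentation/pointnet2.yaml")
print(cfg.pointnet2_onehot.down_conv.module_name)  # PointNetMSGDown
print(cfg.pointnet2_onehot.down_conv.npoint)       # [1024, 256, 64, 16]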

Inference

Inference script

We provide a script for running a given pre-trained model on custom data that may not be annotated. You will find an example of this for the part segmentation task on ShapeNet. Just like for the rest of the codebase, most of the customization happens through config files, and the provided example can be extended to other datasets. You can also easily create your own from there. Going back to the part segmentation task, say you have a folder full of point clouds that you know are Airplanes, and you have the checkpoint of a model trained on Airplanes and potentially other classes; simply edit the config.yaml and shapenet.yaml and run the following command:

python forward_scripts/forward.py

The result of the forward run will be placed in the specified output_folder, and you can use the provided notebook to explore the results. Below is an example of the outcome of using a model trained on caps only to find the parts of airplanes and caps.

(screenshot: result exploration notebook)

Containerizing your model with Docker

Finally, for people interested in deploying their models to production environments, we provide a Dockerfile as well as a build script. Say you have trained a network for semantic segmentation that produced the weights <outputfolder/weights.pt>; the following command will build a docker image for you:

cd docker
./build.sh outputfolder/weights.pt

You can then use it to run a forward pass on all the point clouds in input_path and generate the results in output_path:

docker run -v /test_data:/in -v /test_data/out:/out pointnet2_charlesssg:latest python3 forward_scripts/forward.py dataset=shapenet data.forward_category=Cap input_path="/in" output_path="/out"

The -v option mounts a local directory to the container's file system. For example, in the command line above, /test_data/out will be mounted at the location /out. As a consequence, all files written in /out will be available in the folder /test_data/out on your machine.

Profiling

We advise using snakeviz and cProfile.

Use cProfile to profile your code

poetry run python -m cProfile -o {your_name}.prof train.py ... debugging.profiling=True

And visualize results using snakeviz.

snakeviz {your_name}.prof
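
If you prefer staying in the terminal, the same .prof file can be inspected with the standard library's pstats module (a small sketch; the file name matches the cProfile command above):

import pstats

stats = pstats.Stats("your_name.prof")            # output of the cProfile run above
stats.sort_stats("cumulative").print_stats(10)    # 10 most expensive call paths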

It is also possible to use torch.utils.bottleneck

python -m torch.utils.bottleneck /path/to/source/script.py [args]

Troubleshooting

Cannot compile certain CUDA Kernels or seg faults while running the tests

Ensure that at least PyTorch 1.8.0 is installed and verify that cuda/bin and cuda/include are in your $PATH and $CPATH respectively, e.g.:

$ python -c "import torch; print(torch.__version__)"
>>> 1.8.0

$ echo $PATH
>>> /usr/local/cuda/bin:...

$ echo $CPATH
>>> /usr/local/cuda/include:...
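
You can also check which CUDA toolkit PyTorch itself picks up (a quick sketch; CUDA_HOME being None means the toolkit headers were not found):

from torch.utils.cpp_extension import CUDA_HOME

print(CUDA_HOME)  # e.g. /usr/local/cuda; None means no CUDA toolkit was detected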

Undefined symbol / Updating PyTorch

When we update the version of PyTorch that is used, the compiled packages need to be reinstalled, otherwise you will run into an error that looks like this:

... scatter_cpu.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN3c1012CUDATensorIdEv

This can happen for the following libraries:

  • torch-points-kernels
  • torch-scatter
  • torch-cluster
  • torch-sparse

An easy way to fix this is to run the following command with the virtual env activated:

pip uninstall torch-scatter torch-sparse torch-cluster torch-points-kernels -y
rm -rf ~/.cache/pip
poetry install
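
To verify that the rebuild worked, the imports below should now succeed (illustrative check, assuming all four packages are installed in the active environment):

# If any of these imports raises an "undefined symbol" error, that package is still
# compiled against a different PyTorch version and needs to be rebuilt.
import torch
import torch_scatter
import torch_sparse
import torch_cluster
import torch_points_kernels

print("Compiled extensions load fine against PyTorch", torch.__version__)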

CUDA kernel failed : no kernel image is available for execution on the device

This can happen when trying to run the code on a different GPU than the one used to compile the torch-points-kernels library. Uninstall torch-points-kernels, clear the pip cache, and reinstall after setting the TORCH_CUDA_ARCH_LIST environment variable. For example, to compile with a Tesla T4 (Turing 7.5) and run the code on a Tesla V100 (Volta 7.0), use:

export TORCH_CUDA_ARCH_LIST="7.0;7.5"

See this useful chart for more architecture compatibility.
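
To find out which architectures are involved on your side, you can query both the visible GPU and the installed PyTorch build (a minimal sketch; it requires a CUDA-enabled PyTorch and at least one visible GPU):

import torch

print(torch.cuda.get_device_capability(0))  # e.g. (7, 0) for a Tesla V100
print(torch.cuda.get_arch_list())           # architectures the current PyTorch build supports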

Cannot use wandb on Windows

Raises OSError: [WinError 6] The handle is invalid / wandb: ERROR W&B process failed to launch. Wandb is currently broken on Windows (see this issue); a workaround is to use the command line argument wandb.log=false

Exploring your experiments

We provide a notebook based on pyvista and panel that allows you to explore your past experiments visually. When using Jupyter Lab you will have to install an extension:

jupyter labextension install @pyviz/jupyterlab_pyviz

Run through the notebook and you should see a dashboard starting that looks like the following:

(screenshot: experiment exploration dashboard)

Contributing

Contributions are welcome! The only asks are that you stick to the styling and that you add tests as you add more features!

For styling you can use pre-commit hooks to help you:

pre-commit install

A sequence of checks will be run for you and you may have to add the fixed files again to the staged files.

When it comes to docstrings we use numpy style docstrings; for those who use Visual Studio Code, there is a great extension that can help with that. Install it and set the format to numpy and you should be good to go!

Finally, if you want to have a direct chat with us feel free to join our slack, just shoot us an email and we'll add you.

Citing

If you find our work useful, do not hesitate to cite it:

@inproceedings{
  tp3d,
  title={Torch-Points3D: A Modular Multi-Task Framework for Reproducible Deep Learning on 3D Point Clouds},
  author={Chaton, Thomas and Chaulet, Nicolas and Horache, Sofiane and Landrieu, Loic},
  booktitle={2020 International Conference on 3D Vision (3DV)},
  year={2020},
  organization={IEEE},
  url = {\url{https://github.com/nicolas-chaulet/torch-points3d}}
}

and please also include a citation to the models or the datasets you have used in your experiments!

torch-points-kernels's People

Contributors

ccinc, duducheng, humanpose1, hzxie, milesial, nicolas-chaulet, pre-commit-ci[bot], tchaton, tristanheywood, uakh


torch-points-kernels's Issues

I can't install this module successfully, what is the process?

ERROR: Command errored out with exit status 1: /opt/anaconda3/envs/randla/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/7c/d0wzvfk159bb8lrgpr_52jl80000gn/T/pip-install-4j8cxrgv/torch-points-kernels_c8978416228f42a7a560a4c21dea52eb/setup.py'"'"'; __file__='"'"'/private/var/folders/7c/d0wzvfk159bb8lrgpr_52jl80000gn/T/pip-install-4j8cxrgv/torch-points-kernels_c8978416228f42a7a560a4c21dea52eb/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /private/var/folders/7c/d0wzvfk159bb8lrgpr_52jl80000gn/T/pip-record-pjr563e1/install-record.txt --single-version-externally-managed --compile --install-headers /opt/anaconda3/envs/randla/include/python3.7m/torch-points-kernels Check the logs for full command output.

ModuleNotFoundError: No module named 'torch_points_kernels.points_cpu'

Hi, I installed torch_points_kernels via pip but I got this error while executing from torch_points_kernels import knn

In the torch_points_kernels directory in my venv I can't find the setup.py file to compile the project.
Is there something I'm missing?

Btw I'm using python 3.9 and PyCharm as IDE.

poetry doesn't work

The readme of https://github.com/torch-points3d/torch-points-kernels says

poetry add torch-points-kernels

However, on my CentOS machine it fails as follows:


(RandLA) [kimd999@tlogin2 python]$ ~/.poetry/bin/poetry add torch-points-kernels

  RuntimeError

  Poetry could not find a pyproject.toml file in /home/kimd999/script/python or its parents

  at ~/.poetry/lib/poetry/_vendor/py3.9/poetry/core/factory.py:369 in locate
      365│             if poetry_file.exists():
      366│                 return poetry_file
      367│
      368│         else:
    → 369│             raise RuntimeError(
      370│                 "Poetry could not find a pyproject.toml file in {} or its parents".format(
      371│                     cwd
      372│                 )
      373│             )

Cannot install the package even after installing PyTorch

running build_ext
building 'torch_points_kernels.points_cuda' extension
creating /home/laiming/PointNet/torch-points-kernels/build
creating /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6
creating /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda
creating /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src
Emitting ninja build file /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/7] c++ -MMD -MF /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src/ball_query.o.d -pthread -B /home/laiming/anaconda3/envs/RandLA_torch/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Icuda/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/TH -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.1/include -I/home/laiming/anaconda3/envs/RandLA_torch/include/python3.6m -c -c /home/laiming/PointNet/torch-points-kernels/cuda/src/ball_query.cpp -o /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src/ball_query.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
FAILED: /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src/ball_query.o
c++ -MMD -MF /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src/ball_query.o.d -pthread -B /home/laiming/anaconda3/envs/RandLA_torch/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Icuda/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/TH -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.1/include -I/home/laiming/anaconda3/envs/RandLA_torch/include/python3.6m -c -c /home/laiming/PointNet/torch-points-kernels/cuda/src/ball_query.cpp -o /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src/ball_query.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
/home/laiming/PointNet/torch-points-kernels/cuda/src/ball_query.cpp:1:10: fatal error: ball_query.h: No such file or directory
#include "ball_query.h"
^~~~~~~~~~~~~~
compilation terminated.
[2/7] c++ -MMD -MF /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src/bindings.o.d -pthread -B /home/laiming/anaconda3/envs/RandLA_torch/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Icuda/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/TH -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.1/include -I/home/laiming/anaconda3/envs/RandLA_torch/include/python3.6m -c -c /home/laiming/PointNet/torch-points-kernels/cuda/src/bindings.cpp -o /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src/bindings.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
FAILED: /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src/bindings.o
c++ -MMD -MF /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src/bindings.o.d -pthread -B /home/laiming/anaconda3/envs/RandLA_torch/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Icuda/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/TH -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.1/include -I/home/laiming/anaconda3/envs/RandLA_torch/include/python3.6m -c -c /home/laiming/PointNet/torch-points-kernels/cuda/src/bindings.cpp -o /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src/bindings.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
/home/laiming/PointNet/torch-points-kernels/cuda/src/bindings.cpp:1:10: fatal error: ball_query.h: No such file or directory
#include "ball_query.h"
^~~~~~~~~~~~~~
compilation terminated.
[3/7] c++ -MMD -MF /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src/interpolate.o.d -pthread -B /home/laiming/anaconda3/envs/RandLA_torch/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Icuda/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/TH -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.1/include -I/home/laiming/anaconda3/envs/RandLA_torch/include/python3.6m -c -c /home/laiming/PointNet/torch-points-kernels/cuda/src/interpolate.cpp -o /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src/interpolate.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
FAILED: /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src/interpolate.o
c++ -MMD -MF /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src/interpolate.o.d -pthread -B /home/laiming/anaconda3/envs/RandLA_torch/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Icuda/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/TH -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.1/include -I/home/laiming/anaconda3/envs/RandLA_torch/include/python3.6m -c -c /home/laiming/PointNet/torch-points-kernels/cuda/src/interpolate.cpp -o /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src/interpolate.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
/home/laiming/PointNet/torch-points-kernels/cuda/src/interpolate.cpp:1:10: fatal error: interpolate.h: No such file or directory
#include "interpolate.h"
^~~~~~~~~~~~~~~
compilation terminated.
[4/7] c++ -MMD -MF /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src/sampling.o.d -pthread -B /home/laiming/anaconda3/envs/RandLA_torch/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Icuda/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/TH -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.1/include -I/home/laiming/anaconda3/envs/RandLA_torch/include/python3.6m -c -c /home/laiming/PointNet/torch-points-kernels/cuda/src/sampling.cpp -o /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src/sampling.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
FAILED: /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src/sampling.o
c++ -MMD -MF /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src/sampling.o.d -pthread -B /home/laiming/anaconda3/envs/RandLA_torch/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Icuda/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/TH -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.1/include -I/home/laiming/anaconda3/envs/RandLA_torch/include/python3.6m -c -c /home/laiming/PointNet/torch-points-kernels/cuda/src/sampling.cpp -o /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src/sampling.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
/home/laiming/PointNet/torch-points-kernels/cuda/src/sampling.cpp:1:10: fatal error: sampling.h: No such file or directory
#include "sampling.h"
^~~~~~~~~~~~
compilation terminated.
[5/7] /usr/local/cuda-10.1/bin/nvcc -Icuda/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/TH -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.1/include -I/home/laiming/anaconda3/envs/RandLA_torch/include/python3.6m -c -c /home/laiming/PointNet/torch-points-kernels/cuda/src/interpolate_gpu.cu -o /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src/interpolate_gpu.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 -gencode=arch=compute_61,code=sm_61
FAILED: /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src/interpolate_gpu.o
/usr/local/cuda-10.1/bin/nvcc -Icuda/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/TH -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.1/include -I/home/laiming/anaconda3/envs/RandLA_torch/include/python3.6m -c -c /home/laiming/PointNet/torch-points-kernels/cuda/src/interpolate_gpu.cu -o /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src/interpolate_gpu.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 -gencode=arch=compute_61,code=sm_61
/home/laiming/PointNet/torch-points-kernels/cuda/src/interpolate_gpu.cu:5:10: fatal error: cuda_utils.h: No such file or directory
#include "cuda_utils.h"
^~~~~~~~~~~~~~
compilation terminated.
[6/7] /usr/local/cuda-10.1/bin/nvcc -Icuda/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/TH -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.1/include -I/home/laiming/anaconda3/envs/RandLA_torch/include/python3.6m -c -c /home/laiming/PointNet/torch-points-kernels/cuda/src/sampling_gpu.cu -o /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src/sampling_gpu.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 -gencode=arch=compute_61,code=sm_61
FAILED: /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src/sampling_gpu.o
/usr/local/cuda-10.1/bin/nvcc -Icuda/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/TH -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.1/include -I/home/laiming/anaconda3/envs/RandLA_torch/include/python3.6m -c -c /home/laiming/PointNet/torch-points-kernels/cuda/src/sampling_gpu.cu -o /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src/sampling_gpu.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 -gencode=arch=compute_61,code=sm_61
/home/laiming/PointNet/torch-points-kernels/cuda/src/sampling_gpu.cu:4:10: fatal error: cuda_utils.h: No such file or directory
#include "cuda_utils.h"
^~~~~~~~~~~~~~
compilation terminated.
[7/7] /usr/local/cuda-10.1/bin/nvcc -Icuda/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/TH -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.1/include -I/home/laiming/anaconda3/envs/RandLA_torch/include/python3.6m -c -c /home/laiming/PointNet/torch-points-kernels/cuda/src/ball_query_gpu.cu -o /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src/ball_query_gpu.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 -gencode=arch=compute_61,code=sm_61
FAILED: /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src/ball_query_gpu.o
/usr/local/cuda-10.1/bin/nvcc -Icuda/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/TH -I/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-10.1/include -I/home/laiming/anaconda3/envs/RandLA_torch/include/python3.6m -c -c /home/laiming/PointNet/torch-points-kernels/cuda/src/ball_query_gpu.cu -o /home/laiming/PointNet/torch-points-kernels/build/temp.linux-x86_64-3.6/cuda/src/ball_query_gpu.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 -gencode=arch=compute_61,code=sm_61
/home/laiming/PointNet/torch-points-kernels/cuda/src/ball_query_gpu.cu:5:10: fatal error: cuda_utils.h: No such file or directory
#include "cuda_utils.h"
^~~~~~~~~~~~~~
compilation terminated.
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1400, in _run_ninja_build
check=True)
File "/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "setup.py", line 86, in <module>
"License :: OSI Approved :: MIT License",
File "/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/setuptools/__init__.py", line 144, in setup
return distutils.core.setup(**attrs)
File "/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/distutils/core.py", line 148, in setup
dist.run_commands()
File "/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/distutils/dist.py", line 955, in run_commands
self.run_command(cmd)
File "/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/setuptools/command/build_ext.py", line 87, in run
_build_ext.run(self)
File "/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/distutils/command/build_ext.py", line 339, in run
self.build_extensions()
File "/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 580, in build_extensions
build_ext.build_extensions(self)
File "/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/distutils/command/build_ext.py", line 448, in build_extensions
self._build_extensions_serial()
File "/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/distutils/command/build_ext.py", line 473, in _build_extensions_serial
self.build_extension(ext)
File "/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/setuptools/command/build_ext.py", line 208, in build_extension
_build_ext.build_extension(self, ext)
File "/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/distutils/command/build_ext.py", line 533, in build_extension
depends=ext.depends)
File "/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 423, in unix_wrap_ninja_compile
with_cuda=with_cuda)
File "/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1140, in _write_ninja_file_and_compile_objects
error_prefix='Error compiling objects for extension')
File "/home/laiming/anaconda3/envs/RandLA_torch/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1413, in _run_ninja_build
raise RuntimeError(message)
RuntimeError: Error compiling objects for extension

cublas_v2 not found

Hello,
I get the following error when installing with pip
"""
/usr/local/lib/python3.8/dist-packages/torch/include/ATen/cuda/CUDAContext.h:7:10: fatal error: cublas_v2.h: No such file or directory
7 | #include <cublas_v2.h>
| ^~~~~~~~~~~~~
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
"""

torch: 1.7.0
CUDA: 10.2

"""
locate cublas_v2.h
/usr/local/cuda-10.0/targets/x86_64-linux/include/cublas_v2.h
/usr/local/cuda-10.2/targets/x86_64-linux/include/cublas_v2.h
/usr/local/cuda-11.0/targets/x86_64-linux/include/cublas_v2.h
"""
thanks,

Contributing to versions to pypi

Hi,
I am running in to issues when trying to install torch-points-kernels using pip, with the issue being that no compatible build can be found there.

I have since compiled my own build from sources but was wondering if there is a way to contribute that to pypi so that future installs might be simpler.

Getting CUDA arch error while building from source.

ValueError: Unknown CUDA arch (8.6) or GPU not supported

Below is the output of nvidia-smi

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.47.03    Driver Version: 510.47.03    CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA RTX A5000    Off  | 00000000:81:00.0 Off |                  Off |
| 30%   19C    P8     1W / 230W |  23715MiB / 24564MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA RTX A5000    Off  | 00000000:A1:00.0 Off |                  Off |
| 30%   19C    P8     7W / 230W |    367MiB / 24564MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   2  NVIDIA RTX A5000    Off  | 00000000:C1:00.0 Off |                  Off |
| 30%   19C    P8     5W / 230W |    367MiB / 24564MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   3  NVIDIA RTX A5000    Off  | 00000000:E1:00.0 Off |                  Off |
| 30%   21C    P8    16W / 230W |    367MiB / 24564MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

Below is the output of conda list:

$ conda list | grep 'python\|torch\|cuda'
cudatoolkit               10.1.243             h6bb024c_0
python                    3.7.0                h6e4f718_3
pytorch                   1.4.0           py3.7_cuda10.1.243_cudnn7.6.3_0    pytorch
torchvision               0.5.0                py37_cu101    pytorch

Failed building wheel and failed running setup.py

ERROR: Failed building wheel for torch-points-kernels
Running setup.py clean for torch-points-kernels
Failed to build torch-points-kernels
Installing collected packages: torch-points-kernels
Running setup.py install for torch-points-kernels ... error
ERROR: Command errored out with exit status 1:
command: /home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-tzq0x9va/torch-points-kernels_3213b20f557c4c839378d77048b071fe/setup.py'"'"'; __file__='"'"'/tmp/pip-install-tzq0x9va/torch-points-kernels_3213b20f557c4c839378d77048b071fe/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-3ql3jcs_/install-record.txt --single-version-externally-managed --compile --install-headers /home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/include/python3.8/torch-points-kernels
cwd: /tmp/pip-install-tzq0x9va/torch-points-kernels_3213b20f557c4c839378d77048b071fe/
Complete output (197 lines):
running install
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.8
creating build/lib.linux-x86_64-3.8/torch_points_kernels
copying torch_points_kernels/__init__.py -> build/lib.linux-x86_64-3.8/torch_points_kernels
copying torch_points_kernels/torchpoints.py -> build/lib.linux-x86_64-3.8/torch_points_kernels
copying torch_points_kernels/cubic_feature_sampling.py -> build/lib.linux-x86_64-3.8/torch_points_kernels
copying torch_points_kernels/cluster.py -> build/lib.linux-x86_64-3.8/torch_points_kernels
copying torch_points_kernels/knn.py -> build/lib.linux-x86_64-3.8/torch_points_kernels
copying torch_points_kernels/chamfer_dist.py -> build/lib.linux-x86_64-3.8/torch_points_kernels
copying torch_points_kernels/gridding.py -> build/lib.linux-x86_64-3.8/torch_points_kernels
copying torch_points_kernels/metrics.py -> build/lib.linux-x86_64-3.8/torch_points_kernels
running build_ext
building 'torch_points_kernels.points_cuda' extension
creating build/temp.linux-x86_64-3.8
creating build/temp.linux-x86_64-3.8/cuda
creating build/temp.linux-x86_64-3.8/cuda/src
gcc -pthread -B /home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Icuda/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/TH -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.3/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/include/python3.8 -c cuda/src/interpolate.cpp -o build/temp.linux-x86_64-3.8/cuda/src/interpolate.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
gcc -pthread -B /home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Icuda/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/TH -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.3/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/include/python3.8 -c cuda/src/ball_query.cpp -o build/temp.linux-x86_64-3.8/cuda/src/ball_query.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
gcc -pthread -B /home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Icuda/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/TH -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.3/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/include/python3.8 -c cuda/src/sampling.cpp -o build/temp.linux-x86_64-3.8/cuda/src/sampling.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
gcc -pthread -B /home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Icuda/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/TH -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.3/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/include/python3.8 -c cuda/src/cubic_feature_sampling.cpp -o build/temp.linux-x86_64-3.8/cuda/src/cubic_feature_sampling.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
gcc -pthread -B /home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Icuda/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/TH -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.3/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/include/python3.8 -c cuda/src/gridding.cpp -o build/temp.linux-x86_64-3.8/cuda/src/gridding.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
gcc -pthread -B /home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Icuda/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/TH -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.3/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/include/python3.8 -c cuda/src/chamfer_dist.cpp -o build/temp.linux-x86_64-3.8/cuda/src/chamfer_dist.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
gcc -pthread -B /home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Icuda/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/TH -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.3/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/include/python3.8 -c cuda/src/metrics.cpp -o build/temp.linux-x86_64-3.8/cuda/src/metrics.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="cxxabi1011" -DPYBIND11_BUILD_ABI="cxxabi1011" -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
gcc -pthread -B /home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Icuda/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/TH -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.3/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/include/python3.8 -c cuda/src/bindings.cpp -o build/temp.linux-x86_64-3.8/cuda/src/bindings.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="gcc" -DPYBIND11_COMPILER_TYPE="gcc" -DPYBIND11_STDLIB="libstdcpp" -DPYBIND11_STDLIB="libstdcpp" -DPYBIND11_BUILD_ABI="cxxabi1011" -DPYBIND11_BUILD_ABI="cxxabi1011" -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
/usr/local/cuda-11.3/bin/nvcc -Icuda/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/TH -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda-11.3/include -I/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/include/python3.8 -c cuda/src/chamfer_dist_gpu.cu -o build/temp.linux-x86_64-3.8/cuda/src/chamfer_dist_gpu.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 -std=c++14
/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h: In instantiation of ‘std::shared_ptr<torch::nn::Module> torch::nn::Cloneable<Derived>::clone(const c10::optional<c10::Device>&) const [with Derived = torch::nn::CrossMapLRN2dImpl]’:
/tmp/tmpxft_0000d06c_00000000-6_chamfer_dist_gpu.cudafe1.stub.c:43:27:   required from here
/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:57:59: error: invalid static_cast from type ‘const torch::OrderedDict<std::basic_string<char>, at::Tensor>’ to type ‘torch::OrderedDict<std::basic_string<char>, at::Tensor>&’
/home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:69:61: error: invalid static_cast from type ‘const torch::OrderedDict<std::basic_string<char>, std::shared_ptr<torch::nn::Module> >’ to type ‘torch::OrderedDict<std::basic_string<char>, std::shared_ptr<torch::nn::Module> >&’
[The same pair of cloneable.h:57:59 / cloneable.h:69:61 errors is then repeated verbatim for every other torch::nn module the headers instantiate (required either from the generated chamfer_dist_gpu.cudafe1 stub or from torch/csrc/api/include/torch/optim/sgd.h:49:48): EmbeddingBagImpl, EmbeddingImpl, ParameterDictImpl, SequentialImpl, ModuleListImpl, ModuleDictImpl, TransformerDecoderImpl, TransformerEncoderImpl, TransformerDecoderLayerImpl, TransformerEncoderLayerImpl, GroupNormImpl, LocalResponseNormImpl, LayerNormImpl, MultiheadAttentionImpl, ThresholdImpl, LogSoftmaxImpl, SoftminImpl, SoftmaxImpl, GRUCellImpl, LSTMCellImpl, RNNCellImpl, GRUImpl, LSTMImpl, RNNImpl, FractionalMaxPool3dImpl, FractionalMaxPool2dImpl, ZeroPad2dImpl, UnfoldImpl, FoldImpl, ConvTranspose3dImpl, ConvTranspose2dImpl, ConvTranspose1dImpl, Conv3dImpl, Conv2dImpl, Conv1dImpl, AdaptiveLogSoftmaxWithLossImpl, BilinearImpl, UnflattenImpl, LinearImpl.]
error: command '/usr/local/cuda-11.3/bin/nvcc' failed with exit status 1
----------------------------------------
ERROR: Command errored out with exit status 1: /home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-tzq0x9va/torch-points-kernels_3213b20f557c4c839378d77048b071fe/setup.py'"'"'; __file__='"'"'/tmp/pip-install-tzq0x9va/torch-points-kernels_3213b20f557c4c839378d77048b071fe/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-3ql3jcs/install-record.txt --single-version-externally-managed --compile --install-headers /home/autowise/data_ssd/zhangrun/torch-points3d/conda_envs/include/python3.8/torch-points-kernels Check the logs for full command output.

ERROR: Failed building wheel for torch-points-kernels
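
For anyone landing here with the same trace on Linux: these cloneable.h "invalid static_cast" failures are most often reported when the CUDA toolkit driving nvcc (11.3 in the log above) does not match the CUDA build of the installed torch wheel. That diagnosis is an assumption, not something confirmed by this log; the short Python check below only prints the versions so they can be compared against nvcc --version before rebuilding.

# Minimal sanity check, assuming the usual toolkit/wheel mismatch is the culprit.
# Compare the printed CUDA version with the toolkit used in the build log
# (/usr/local/cuda-11.3); if they differ, install a torch wheel built for the
# local toolkit (e.g. a +cu113 build) or point the build at a matching toolkit.
import torch

print("torch version        :", torch.__version__)    # e.g. 1.10.0+cu113
print("torch built for CUDA :", torch.version.cuda)    # e.g. 11.3
print("GPU visible to torch :", torch.cuda.is_available())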

Hi, I have a problem installing torch-points-kernels

os: windows 10
python: 3.8
torch: 1.9.1+cu102
cuda: 10.2


Collecting torch-points-kernels
  Using cached torch-points-kernels-0.7.0.tar.gz (44 kB)
Requirement already satisfied: torch>=1.1.0 in c:\users\mac\appdata\roaming\python\python38\site-packages (from torch-points-kernels) (1.9.1+cu102)
Requirement already satisfied: numba in c:\programdata\anaconda3\lib\site-packages (from torch-points-kernels) (0.53.1)
Requirement already satisfied: numpy<1.20 in c:\programdata\anaconda3\lib\site-packages (from torch-points-kernels) (1.19.5)
Requirement already satisfied: scikit-learn in c:\programdata\anaconda3\lib\site-packages (from torch-points-kernels) (0.24.1)
Requirement already satisfied: typing-extensions in c:\programdata\anaconda3\lib\site-packages (from torch>=1.1.0->torch-points-kernels) (3.7.4.3)
Requirement already satisfied: setuptools in c:\users\mac\appdata\roaming\python\python38\site-packages (from numba->torch-points-kernels) (58.2.0)
Requirement already satisfied: llvmlite<0.37,>=0.36.0rc1 in c:\programdata\anaconda3\lib\site-packages (from numba->torch-points-kernels) (0.36.0)
Requirement already satisfied: scipy>=0.19.1 in c:\programdata\anaconda3\lib\site-packages (from scikit-learn->torch-points-kernels) (1.6.2)
Requirement already satisfied: threadpoolctl>=2.0.0 in c:\programdata\anaconda3\lib\site-packages (from scikit-learn->torch-points-kernels) (2.1.0)
Requirement already satisfied: joblib>=0.11 in c:\programdata\anaconda3\lib\site-packages (from scikit-learn->torch-points-kernels) (1.0.1)
Building wheels for collected packages: torch-points-kernels
  Building wheel for torch-points-kernels (setup.py) ... error
  ERROR: Command errored out with exit status 1:
   command: 'C:\ProgramData\Anaconda3\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Mac\\AppData\\Local\\Temp\\pip-install-daj3vjag\\torch-points-kernels_13394f8421bb46d1a01a43372a63b313\\setup.py'"'"'; __file__='"'"'C:\\Users\\Mac\\AppData\\Local\\Temp\\pip-install-daj3vjag\\torch-points-kernels_13394f8421bb46d1a01a43372a63b313\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\Mac\AppData\Local\Temp\pip-wheel-wa5vj41f'
       cwd: C:\Users\Mac\AppData\Local\Temp\pip-install-daj3vjag\torch-points-kernels_13394f8421bb46d1a01a43372a63b313\
  Complete output (22 lines):
  running bdist_wheel
  running build
  running build_py
  creating build
  creating build\lib.win-amd64-3.8
  creating build\lib.win-amd64-3.8\torch_points_kernels
  copying torch_points_kernels\chamfer_dist.py -> build\lib.win-amd64-3.8\torch_points_kernels
  copying torch_points_kernels\cluster.py -> build\lib.win-amd64-3.8\torch_points_kernels
  copying torch_points_kernels\cubic_feature_sampling.py -> build\lib.win-amd64-3.8\torch_points_kernels
  copying torch_points_kernels\gridding.py -> build\lib.win-amd64-3.8\torch_points_kernels
  copying torch_points_kernels\knn.py -> build\lib.win-amd64-3.8\torch_points_kernels
  copying torch_points_kernels\metrics.py -> build\lib.win-amd64-3.8\torch_points_kernels
  copying torch_points_kernels\torchpoints.py -> build\lib.win-amd64-3.8\torch_points_kernels
  copying torch_points_kernels\__init__.py -> build\lib.win-amd64-3.8\torch_points_kernels
  running build_ext
  building 'torch_points_kernels.points_cuda' extension
  error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
  Error in atexit._run_exitfuncs:
  Traceback (most recent call last):
    File "C:\ProgramData\Anaconda3\lib\site-packages\colorama\ansitowin32.py", line 59, in closed
      return stream.closed
  ValueError: underlying buffer has been detached
  ----------------------------------------
  ERROR: Failed building wheel for torch-points-kernels
  Running setup.py clean for torch-points-kernels
Failed to build torch-points-kernels
Installing collected packages: torch-points-kernels
    Running setup.py install for torch-points-kernels ... error
    ERROR: Command errored out with exit status 1:
     command: 'C:\ProgramData\Anaconda3\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Mac\\AppData\\Local\\Temp\\pip-install-daj3vjag\\torch-points-kernels_13394f8421bb46d1a01a43372a63b313\\setup.py'"'"'; __file__='"'"'C:\\Users\\Mac\\AppData\\Local\\Temp\\pip-install-daj3vjag\\torch-points-kernels_13394f8421bb46d1a01a43372a63b313\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Mac\AppData\Local\Temp\pip-record-d5yt1w01\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\ProgramData\Anaconda3\Include\torch-points-kernels'
         cwd: C:\Users\Mac\AppData\Local\Temp\pip-install-daj3vjag\torch-points-kernels_13394f8421bb46d1a01a43372a63b313\
    Complete output (22 lines):
    running install
    running build
    running build_py
    creating build
    creating build\lib.win-amd64-3.8
    creating build\lib.win-amd64-3.8\torch_points_kernels
    copying torch_points_kernels\chamfer_dist.py -> build\lib.win-amd64-3.8\torch_points_kernels
    copying torch_points_kernels\cluster.py -> build\lib.win-amd64-3.8\torch_points_kernels
    copying torch_points_kernels\cubic_feature_sampling.py -> build\lib.win-amd64-3.8\torch_points_kernels
    copying torch_points_kernels\gridding.py -> build\lib.win-amd64-3.8\torch_points_kernels
    copying torch_points_kernels\knn.py -> build\lib.win-amd64-3.8\torch_points_kernels
    copying torch_points_kernels\metrics.py -> build\lib.win-amd64-3.8\torch_points_kernels
    copying torch_points_kernels\torchpoints.py -> build\lib.win-amd64-3.8\torch_points_kernels
    copying torch_points_kernels\__init__.py -> build\lib.win-amd64-3.8\torch_points_kernels
    running build_ext
    building 'torch_points_kernels.points_cuda' extension
    error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
    Error in atexit._run_exitfuncs:
    Traceback (most recent call last):
      File "C:\ProgramData\Anaconda3\lib\site-packages\colorama\ansitowin32.py", line 59, in closed
        return stream.closed
    ValueError: underlying buffer has been detached
    ----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\ProgramData\Anaconda3\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Mac\\AppData\\Local\\Temp\\pip-install-daj3vjag\\torch-points-kernels_13394f8421bb46d1a01a43372a63b313\\setup.py'"'"'; __file__='"'"'C:\\Users\\Mac\\AppData\\Local\\Temp\\pip-install-daj3vjag\\torch-points-kernels_13394f8421bb46d1a01a43372a63b313\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Mac\AppData\Local\Temp\pip-record-d5yt1w01\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\ProgramData\Anaconda3\Include\torch-points-kernels' Check the logs for full command output.

The build fails at 'Failed building wheel for torch-points-kernels' with the message 'Microsoft Visual C++ 14.0 or greater is required'. However, I have already installed the latest Microsoft C++ Build Tools, yet the error still appears. Have you come across this problem before, and how did you solve it?

Moreover, if this cannot be resolved, could you provide prebuilt .whl files so that the 'building wheels' step can be skipped?
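
A possible check before falling back to prebuilt wheels (a suggestion, not a maintainer-confirmed fix): the 'Microsoft Visual C++ 14.0 or greater is required' message can also appear when the Build Tools are installed but cl.exe is not visible to the Python process doing the build. The sketch below only verifies compiler visibility.

# Hedged check: run this inside the same environment used for pip install.
import shutil

print("cl.exe on PATH:", shutil.which("cl"))
# If this prints None, open an "x64 Native Tools Command Prompt" for your
# Visual Studio Build Tools (or run vcvars64.bat first) and retry
# pip install torch-points-kernels from that shell, so cl.exe and the
# Windows SDK headers are on PATH when setup.py builds the extension.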

Failed to install torch-points-kernels

When installing torch-points-kernels with pip install torch-points-kernels==0.7.0, I get the following error while building the wheel.

Error:
Building wheel for torch-points-kernels (setup.py) ... error
error: subprocess-exited-with-error

× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [738 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-38
creating build\lib.win-amd64-cpython-38\torch_points_kernels
copying torch_points_kernels\chamfer_dist.py -> build\lib.win-amd64-cpython-38\torch_points_kernels
copying torch_points_kernels\cluster.py -> build\lib.win-amd64-cpython-38\torch_points_kernels
copying torch_points_kernels\cubic_feature_sampling.py -> build\lib.win-amd64-cpython-38\torch_points_kernels
copying torch_points_kernels\gridding.py -> build\lib.win-amd64-cpython-38\torch_points_kernels
copying torch_points_kernels\knn.py -> build\lib.win-amd64-cpython-38\torch_points_kernels
copying torch_points_kernels\metrics.py -> build\lib.win-amd64-cpython-38\torch_points_kernels
copying torch_points_kernels\torchpoints.py -> build\lib.win-amd64-cpython-38\torch_points_kernels
copying torch_points_kernels\__init__.py -> build\lib.win-amd64-cpython-38\torch_points_kernels
running build_ext
C:\ProgramData\miniconda3\envs\WIN_PCAIST\lib\site-packages\torch\utils\cpp_extension.py:304: UserWarning: Error checking compiler version for cl: [WinError 2] The system cannot find the file specified
warnings.warn(f'Error checking compiler version for {compiler}: {error}')
building 'torch_points_kernels.points_cuda' extension
creating build\temp.win-amd64-cpython-38
creating build\temp.win-amd64-cpython-38\Release
creating build\temp.win-amd64-cpython-38\Release\cuda
creating build\temp.win-amd64-cpython-38\Release\cuda\src
"C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -Icuda/include -IC:\ProgramData\miniconda3\envs\WIN_PCAIST\lib\site-packages\torch\include -IC:\ProgramData\miniconda3\envs\WIN_PCAIST\lib\site-packages\torch\include\torch\csrc\api\include -IC:\ProgramData\miniconda3\envs\WIN_PCAIST\lib\site-packages\torch\include\TH -IC:\ProgramData\miniconda3\envs\WIN_PCAIST\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\include" "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\include;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\lib\x64;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\bin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0;\include" -IC:\ProgramData\miniconda3\envs\WIN_PCAIST\include -IC:\ProgramData\miniconda3\envs\WIN_PCAIST\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" /EHsc /Tpcuda/src\ball_query.cpp /Fobuild\temp.win-amd64-cpython-38\Release\cuda/src\ball_query.obj /MD /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /EHsc -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0
cl : Command line warning D9002 : ignoring unknown option '-O3'
ball_query.cpp

  C:\ProgramData\miniconda3\envs\WIN_PCAIST\lib\site-packages\torch\include\pybind11\detail/common.h(108): warning C4005: 'HAVE_SNPRINTF': macro redefinition
  C:\ProgramData\miniconda3\envs\WIN_PCAIST\include\pyerrors.h(315): note: see previous definition of 'HAVE_SNPRINTF'
  "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\bin\nvcc" -c cuda/src\ball_query_gpu.cu -o build\temp.win-amd64-cpython-38\Release\cuda/src\ball_query_gpu.obj -Icuda/include -IC:\ProgramData\miniconda3\envs\WIN_PCAIST\lib\site-packages\torch\include -IC:\ProgramData\miniconda3\envs\WIN_PCAIST\lib\site-packages\torch\include\torch\csrc\api\include -IC:\ProgramData\miniconda3\envs\WIN_PCAIST\lib\site-packages\torch\include\TH -IC:\ProgramData\miniconda3\envs\WIN_PCAIST\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\include" "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\include;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\lib\x64;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\bin;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0;\include" -IC:\ProgramData\miniconda3\envs\WIN_PCAIST\include -IC:\ProgramData\miniconda3\envs\WIN_PCAIST\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4190 -Xcompiler /wd4018 -Xcompiler /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xcompiler /MD -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --use-local-env
  nvcc fatal   : Unsupported gpu architecture 'compute_86'
  error: command 'C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.0\\bin\\nvcc.exe' failed with exit code 1
  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for torch-points-kernels
Running setup.py clean for torch-points-kernels
Failed to build torch-points-kernels
ERROR: Could not build wheels for torch-points-kernels, which is required to install pyproject.toml-based projects


This is the output shown on the console during installation. Please guide me on how to solve this type of error.
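
For reference, the failing line above is nvcc fatal : Unsupported gpu architecture 'compute_86': the CUDA 11.0 nvcc predates compute capability 8.6, which was introduced with CUDA 11.1. If the GPU really is an 8.6 card, upgrading the CUDA toolkit is the proper fix; otherwise, a workaround sketch is to restrict the architectures the extension is built for via the standard TORCH_CUDA_ARCH_LIST environment variable (the value 7.5 below is only an example and should match your GPU):

import os
import subprocess
import sys

# Build only for an architecture that CUDA 11.0's nvcc understands.
os.environ["TORCH_CUDA_ARCH_LIST"] = "7.5"
subprocess.check_call(
    [sys.executable, "-m", "pip", "install", "--no-cache-dir", "torch-points-kernels==0.7.0"]
)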

ModuleNotFoundError: No module named 'torch_points_kernels.points_cpu'

Hi Nicolas,
Thank you so much for all of the great work you are doing on this repository, it is greatly appreciated.
Unfortunately, I am experiencing an installation issue, and I was hoping you could please provide some insight. I am working with Docker.

With the latest PyTorch 1.8.0 update, my working version of everything broke.
Here is an excerpt of the previously functioning environment:

ARG VERSION=1.7.0
ARG CUDA="102"
pip3 install --no-cache-dir \
        torch-scatter torch-sparse torch-cluster torch-spline-conv \
        -f https://pytorch-geometric.com/whl/torch-${VERSION}+${CUDA}.html &&\
        pip3 install --no-cache-dir \
        torch-geometric torch-points-kernels==0.6.1

I have been able to successfully build torch-scatter, torch-sparse, torch-cluster, torch-spline-conv, and torch-geometric within the container by upgrading to PyTorch 1.8.0. The current issue comes when installing torch-points-kernels.

I am using an official pytorch image, which can be obtained via: docker pull pytorch/pytorch:1.8.0-cuda11.1-cudnn8-devel

Here is a minimally reproducible example:

FROM pytorch/pytorch:1.8.0-cuda11.1-cudnn8-devel
RUN pip3 install --no-cache-dir \
        torch-points-kernels==0.6.1
ENTRYPOINT ["python3"]

Here is the Traceback:

import torch_points_kernels
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/conda/lib/python3.8/site-packages/torch_points_kernels/__init__.py", line 1, in <module>
    from .torchpoints import *
  File "/opt/conda/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py", line 7, in <module>
    import torch_points_kernels.points_cpu as tpcpu
ModuleNotFoundError: No module named 'torch_points_kernels.points_cpu'

The environment has:

  • Python 3.8.8
  • GCC 7.3.0

Following your troubleshooting guide:

$ python -c "import torch; print(torch.__version__)"
>>> 1.8.0
$ echo $PATH
>>> /opt/conda/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
$ echo $CPATH
>>> /usr/local/cuda/include

Here is the output of the unittest, building inside the container via:

$ python setup.py build_ext --inplace
$ python -m unittest
Ran 32 tests in 32.106s
OK

I can then successfully import torch-points-kernels, so it appears installing from source works. Unfortunately, this is extremely time-consuming (~300 seconds); I notice that this is the case for any version > 1.6.1. Hope this information is useful.

Best,
Mackenzie

build torch-points-kernels

trouble installing / building torch-points-kernels:

RuntimeError:
The detected CUDA version (11.3) mismatches the version that was used to compile
PyTorch (10.2). Please make sure to use the same CUDA versions.

My conda env has cudatoolkit=10.2.89.
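
A minimal diagnostic sketch for this kind of mismatch (assumption: the extension build uses whichever nvcc is found on PATH or via CUDA_HOME, while the conda cudatoolkit package ships only the runtime libraries and no nvcc, so the two versions can diverge):

import shutil

import torch

print("PyTorch built with CUDA:", torch.version.cuda)          # e.g. 10.2
print("nvcc used to build extensions:", shutil.which("nvcc"))  # e.g. /usr/local/cuda-11.3/bin/nvcc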

Prebuilt Wheels

Torch Geometric provides pre-built wheels for torch-scatter, torch-sparse, and the other packages that need to be compiled. Would it be possible to get a wheel for this library as well?

Useful for interpolating unstructured data onto a grid?

Hiya,

I'm searching for a GPU implementation of something like scipy's griddata in order to interpolate values from an unstructured point cloud. My data is technically 2D, but I'm imagining just appending a z=0 coordinate to each point.

griddata does the following (see the snippet below). Can this library do something similar?

import numpy as np
from scipy.interpolate import LinearNDInterpolator
from scipy.spatial import Delaunay

N = 10000 # number of points
known_points = np.random.random((N,2))
known_intensities = np.random.random((N,))
tesselated_grid = Delaunay(known_points)  # create a triangulated grid out of the known points
func = LinearNDInterpolator(tesselated_grid, known_intensities, fill_value=np.nan)  # create an interpolation function
func((0.5, 0.3))  # What is the interpolated value at (0.5, 0.3)?
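
If a triangulation-free approximation is acceptable, inverse-distance-weighted kNN interpolation is a common GPU-friendly substitute for griddata. A minimal pure-PyTorch sketch (not this library's API; knn_interpolate is a hypothetical helper written for illustration):

import torch

def knn_interpolate(known_xyz, known_vals, query_xyz, k=3):
    """Inverse-distance-weighted kNN interpolation (brute force, for illustration)."""
    d = torch.cdist(query_xyz, known_xyz)        # (Q, N) pairwise distances
    dist, idx = d.topk(k, dim=1, largest=False)  # k nearest known points per query
    w = 1.0 / (dist + 1e-8)
    w = w / w.sum(dim=1, keepdim=True)           # normalised weights
    return (known_vals[idx] * w).sum(dim=1)      # (Q,) interpolated values

known_points = torch.rand(10000, 2)
known_intensities = torch.rand(10000)
print(knn_interpolate(known_points, known_intensities, torch.tensor([[0.5, 0.3]])))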

Problem about the chamfer distance

I have found that the return value of ChamferFunction is not consistent with the source code of the CUDA version. I have tested the code and show the result below.

Python 3.7.4 (default, Aug 13 2019, 20:35:49)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.

>>> import torch
>>> a = torch.randn((100,3)).cuda()
>>> b = torch.randn((10,3)).cuda()
>>> from torch_points_kernels.chamfer_dist import ChamferFunction
>>> d1, d2 = ChamferFunction.apply(a,b)
>>> d1.shape
torch.Size([100, 3])
>>> d2.shape
torch.Size([100, 3])

Python version: 3.7
torch-points-kernels version: 3.6.6

I want to know why this happens. I hope to get a reply, thank you.
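
For comparison, a minimal pure-PyTorch cross-check of what the two per-point distance terms of a Chamfer distance are usually expected to look like for unbatched inputs (my assumption about the intended semantics, not the library's documented behaviour):

import torch

a = torch.randn(100, 3)
b = torch.randn(10, 3)

d = torch.cdist(a, b) ** 2    # (100, 10) squared pairwise distances
d1_ref = d.min(dim=1).values  # (100,) each point of a to its nearest point in b
d2_ref = d.min(dim=0).values  # (10,)  each point of b to its nearest point in a
print(d1_ref.shape, d2_ref.shape)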

Trouble compiling

Hi there,
Thank you for the great work here !

I encountered the following error when compiling (either from the pip wheel or from the GitHub repository):

D:\EAL\envs\tp3d\lib\site-packages\torch\include\pybind11\cast.h(1503): error: too few arguments for template template parameter "Tuple"
          detected during instantiation of class "pybind11::detail::tuple_caster<Tuple, Ts...> [with Tuple=std::pair, Ts=<T1, T2>]"
(1507): here

2 errors detected in the compilation of "cuda/src/chamfer_dist_gpu.cu".
error: command 'C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.4\\bin\\nvcc.exe' failed with exit status 1

Here are more details about the config:

  • Windows 10 version 1909
  • CUDA 11.4
  • Python 3.7
  • pytorch 1.8.1

Any clue where it could come from?

Thanks for your feedback.

undefined symbol: _ZN5torch3jit6tracer9addInputsEPNS0_4NodeEPKcRKN3c1013TensorOptionsE

Hi,
I just installed torch-points-kernels using pip and when I try to import the package, I get the following error:

>>> import torch_points_kernels
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/alain/.local/lib/python3.8/site-packages/torch_points_kernels/__init__.py", line 1, in <module>
    from .torchpoints import *
  File "/home/alain/.local/lib/python3.8/site-packages/torch_points_kernels/torchpoints.py", line 7, in <module>
    import torch_points_kernels.points_cpu as tpcpu
ImportError: /home/alain/.local/lib/python3.8/site-packages/torch_points_kernels/points_cpu.so: undefined symbol: _ZN5torch3jit6tracer9addInputsEPNS0_4NodeEPKcRKN3c1013TensorOptionsE

I use:

  • Ubuntu 20.04
  • Python 3.8.5
  • PyTorch 1.7.0
  • CPU only (my laptop does not have a GPU)

The error occurred in a file called points_cpu.so, so I suppose it is not related to CUDA. Have you ever encountered such an error, or do you have any idea what it could be?

Thanks,
Alain
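
Undefined-symbol errors like this usually mean the extension was compiled against a different PyTorch version than the one currently installed (here PyTorch 1.7.0). A sketch of the usual workaround, which simply forces a fresh source build against the current environment (standard pip options, nothing specific to this library):

import subprocess
import sys

# Rebuild torch-points-kernels against the installed PyTorch, bypassing any
# cached build from a previous environment.
subprocess.check_call(
    [sys.executable, "-m", "pip", "install", "--no-cache-dir", "--force-reinstall", "torch-points-kernels"]
)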

error: could not create 'torch_points_kernels/points_cpu.so': No such file or directory.

Hi,

I am using your torch-points-kernels repo for a project I am working on.

When I run the compile command for the repo, python setup.py build_ext --inplace, from the root directory, I get the following error:

error: could not create 'torch_points_kernels/points_cpu.so': No such file or directory.

The full trace looks like this:

running build_ext
copying build/lib.linux-x86_64-3.8/torch_points_kernels/points_cpu.so -> torch_points_kernels
error: could not create 'torch_points_kernels/points_cpu.so': No such file or directory

Do you know how I can resolve this?

Thanks.
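
A minimal diagnostic sketch (assumption: build_ext --inplace copies the compiled .so into a local torch_points_kernels/ package directory, so the command must be run from the repository root where that directory exists):

import os

print("current directory:", os.getcwd())
print("torch_points_kernels/ present:", os.path.isdir("torch_points_kernels"))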

Pip installation error on windows

I was trying to install this library with pip on my Windows 10 laptop, and I could not seem to get around the following errors.

D:\SOFTWARE\Anaconda\envs\torch3d\include\pyerrors.h(490): note: see previous definition of 'HAVE_SNPRINTF'
cuda/src/cubic_feature_sampling_gpu.cu(46): error: calling a host function("__floorf") from a global function("cubic_feature_sampling_kernel ") is not allowed

cuda/src/cubic_feature_sampling_gpu.cu(46): error: identifier "__floorf" is undefined in device code

cuda/src/cubic_feature_sampling_gpu.cu(47): error: calling a host function("__ceilf") from a global function("cubic_feature_sampling_kernel ") is not allowed

cuda/src/cubic_feature_sampling_gpu.cu(47): error: identifier "__ceilf" is undefined in device code

cuda/src/cubic_feature_sampling_gpu.cu(52): error: calling a host function("__floorf") from a global function("cubic_feature_sampling_kernel ") is not allowed

cuda/src/cubic_feature_sampling_gpu.cu(52): error: identifier "__floorf" is undefined in device code

cuda/src/cubic_feature_sampling_gpu.cu(53): error: calling a host function("__ceilf") from a global function("cubic_feature_sampling_kernel ") is not allowed

cuda/src/cubic_feature_sampling_gpu.cu(53): error: identifier "__ceilf" is undefined in device code

cuda/src/cubic_feature_sampling_gpu.cu(58): error: calling a host function("__floorf") from a global function("cubic_feature_sampling_kernel ") is not allowed

cuda/src/cubic_feature_sampling_gpu.cu(58): error: identifier "__floorf" is undefined in device code

cuda/src/cubic_feature_sampling_gpu.cu(59): error: calling a host function("__ceilf") from a global function("cubic_feature_sampling_kernel ") is not allowed

cuda/src/cubic_feature_sampling_gpu.cu(59): error: identifier "__ceilf" is undefined in device code

12 errors detected in the compilation of "cuda/src/cubic_feature_sampling_gpu.cu".

Help needed. Thanks in advance!!!

"CUDA error: No kernel image" still exists after reinstalling torch-points-kernels

Hi,

I have to compile the "torch-points-kernels" library on my workstation and then run the code on a remote server using the same conda environment.

The "CUDA error" happened after I submitted the job to the remote server although I could run the code well in my workstation.

Following your solution, I uninstalled the library, cleared the cache, and reinstalled it on my workstation after setting the TORCH_CUDA_ARCH_LIST.

But the same error still happened.

I checked the two GPU cards, which were a Quadro RTX 6000 (Turing, SM 75) and a Tesla V100 (Volta, SM 70), respectively, and I set 'export TORCH_CUDA_ARCH_LIST="7.0;7.5"' before I reinstalled the library.

The error details are as follows,

Traceback (most recent call last):
  File "train_s_stransformer.py", line 613, in <module>
    main()
  File "train_s_stransformer.py", line 92, in main
    main_worker(args.train_gpu, args.ngpus_per_node, args)
  File "train_s_stransformer.py", line 327, in main_worker
    loss_train, mIoU_train, mAcc_train, allAcc_train= train(train_loader, model, criterion, optimizer, epoch, scaler, scheduler)
  File "train_s_stransformer.py", line 426, in train
    output = model(feat, coord, offset, batch, neighbor_idx)
  File "/home/xxx/.conda/envs/s_transformer10/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/xxx/.conda/envs/s_transformer10/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 166, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/xxx/.conda/envs/s_transformer10/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/xxx/3dSegmentation/stratified_transformer/Stratified-Transformer-main/model/stratified_transformer.py", line 453, in forward
    feats, xyz, offset, feats_down, xyz_down, offset_down = layer(feats, xyz, offset)
  File "/home/xxx/.conda/envs/s_transformer10/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/xxx/3dSegmentation/stratified_transformer/Stratified-Transformer-main/model/stratified_transformer.py", line 281, in forward
    v2p_map, p2v_map, counts = grid_sample(xyz, batch, window_size, start=None)
  File "/home/xxx/3dSegmentation/stratified_transformer/Stratified-Transformer-main/model/stratified_transformer.py", line 59, in grid_sample
    unique, cluster, counts = torch.unique(cluster, sorted=True, return_inverse=True, return_counts=True)
  File "/home/xxx/.conda/envs/s_transformer10/lib/python3.7/site-packages/torch/_jit_internal.py", line 421, in fn
    return if_true(*args, **kwargs)
  File "/home/xxx/.conda/envs/s_transformer10/lib/python3.7/site-packages/torch/_jit_internal.py", line 421, in fn
    return if_true(*args, **kwargs)
  File "/home/xxx/.conda/envs/s_transformer10/lib/python3.7/site-packages/torch/functional.py", line 769, in _unique_impl
    return_counts=return_counts,
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

Please give me some advice on how to resolve this.

Best,

Eric.

Difference between dense knn and knn

Hi @nicolas-chaulet, I really like this library; it is very useful for point clouds. I wonder what the difference is between dense knn and knn. Also, what is meant by a dense ball query?

Also, a brief document with sample code would be useful.

Numpy version

What's the reason for requiring numpy <1.20? Is it possible to support newer versions soon?

import torch_points_kernels.points_cpu as tpcpu ModuleNotFoundError: No module named 'torch_points_kernels.points_cpu'

Hello, thank you so much for your work; it has been really helpful. I am using the latest version of the repo as a submodule and built my Docker container. I have installed CUDA and my GPU is accessible.
My Ubuntu version: 22.04 LTS
CUDA version: 12.2
NVIDIA driver version: 535.171.04
echo $CPATH result: /usr/local/cuda/include
echo $PYTHONPATH result: the torch-points-kernels directory is added

from deepviewaggregation.torch_points3d.trainer import Trainer
  File "/home/developer/deepviewaggregation/torch_points3d/trainer.py", line 13, in <module>
    from torch_points3d.datasets.dataset_factory import instantiate_dataset
  File "/home/developer/deepviewaggregation/torch_points3d/datasets/__init__.py", line 2, in <module>
    from .segmentation import *
  File "/home/developer/deepviewaggregation/torch_points3d/datasets/segmentation/__init__.py", line 3, in <module>
    from .shapenet import ShapeNet, ShapeNetDataset
  File "/home/developer/deepviewaggregation/torch_points3d/datasets/segmentation/shapenet.py", line 13, in <module>
    from torch_points3d.core.data_transform.grid_transform import SaveOriginalPosId
  File "/home/developer/deepviewaggregation/torch_points3d/core/data_transform/__init__.py", line 5, in <module>
    from .transforms import *
  File "/home/developer/deepviewaggregation/torch_points3d/core/data_transform/transforms.py", line 16, in <module>
    from torch_points_kernels.points_cpu import ball_query
  File "/home/developer/torch-points-kernels/torch_points_kernels/__init__.py", line 1, in <module>
    from .torchpoints import *
  File "/home/developer/torch-points-kernels/torch_points_kernels/torchpoints.py", line 7, in <module>
    import torch_points_kernels.points_cpu as tpcpu
ModuleNotFoundError: No module named 'torch_points_kernels.points_cpu'

Please help.

Could not build wheels for torch-points-kernels which use PEP 517 and cannot be installed directly

I get the following error:

ERROR: Command errored out with exit status 1:
   command: /home/usr/anaconda3/envs/pointcloud/bin/python /home/usr/anaconda3/envs/pointcloud/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py build_wheel /tmp/tmpihwbdbxi
       cwd: /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095
  Complete output (132 lines):
  running bdist_wheel
  running build
  running build_py
  creating build
  creating build/lib.linux-x86_64-3.9
  creating build/lib.linux-x86_64-3.9/torch_points_kernels
  copying torch_points_kernels/__init__.py -> build/lib.linux-x86_64-3.9/torch_points_kernels
  copying torch_points_kernels/torchpoints.py -> build/lib.linux-x86_64-3.9/torch_points_kernels
  copying torch_points_kernels/knn.py -> build/lib.linux-x86_64-3.9/torch_points_kernels
  running build_ext
  building 'torch_points_kernels.points_cuda' extension
  creating /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9
  creating /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda
  creating /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src
  Emitting ninja build file /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/build.ninja...
  Compiling objects...
  Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
  [1/7] c++ -MMD -MF /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src/ball_query.o.d -pthread -B /home/usr/anaconda3/envs/pointcloud/compiler_compat -Wl,--sysroot=/ -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /home/usr/anaconda3/envs/pointcloud/include -fPIC -O2 -isystem /home/usr/anaconda3/envs/pointcloud/include -fPIC -Icuda/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/TH -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/usr/anaconda3/envs/pointcloud/include/python3.9 -c -c /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/cuda/src/ball_query.cpp -o /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src/ball_query.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
  FAILED: /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src/ball_query.o
  c++ -MMD -MF /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src/ball_query.o.d -pthread -B /home/usr/anaconda3/envs/pointcloud/compiler_compat -Wl,--sysroot=/ -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /home/usr/anaconda3/envs/pointcloud/include -fPIC -O2 -isystem /home/usr/anaconda3/envs/pointcloud/include -fPIC -Icuda/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/TH -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/usr/anaconda3/envs/pointcloud/include/python3.9 -c -c /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/cuda/src/ball_query.cpp -o /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src/ball_query.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
  /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/cuda/src/ball_query.cpp:1:10: fatal error: ball_query.h: No such file or directory
   #include "ball_query.h"
            ^~~~~~~~~~~~~~
  compilation terminated.
  [2/7] c++ -MMD -MF /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src/sampling.o.d -pthread -B /home/usr/anaconda3/envs/pointcloud/compiler_compat -Wl,--sysroot=/ -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /home/usr/anaconda3/envs/pointcloud/include -fPIC -O2 -isystem /home/usr/anaconda3/envs/pointcloud/include -fPIC -Icuda/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/TH -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/usr/anaconda3/envs/pointcloud/include/python3.9 -c -c /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/cuda/src/sampling.cpp -o /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src/sampling.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
  FAILED: /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src/sampling.o
  c++ -MMD -MF /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src/sampling.o.d -pthread -B /home/usr/anaconda3/envs/pointcloud/compiler_compat -Wl,--sysroot=/ -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /home/usr/anaconda3/envs/pointcloud/include -fPIC -O2 -isystem /home/usr/anaconda3/envs/pointcloud/include -fPIC -Icuda/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/TH -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/usr/anaconda3/envs/pointcloud/include/python3.9 -c -c /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/cuda/src/sampling.cpp -o /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src/sampling.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
  /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/cuda/src/sampling.cpp:1:10: fatal error: sampling.h: No such file or directory
   #include "sampling.h"
            ^~~~~~~~~~~~
  compilation terminated.
  [3/7] c++ -MMD -MF /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src/interpolate.o.d -pthread -B /home/usr/anaconda3/envs/pointcloud/compiler_compat -Wl,--sysroot=/ -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /home/usr/anaconda3/envs/pointcloud/include -fPIC -O2 -isystem /home/usr/anaconda3/envs/pointcloud/include -fPIC -Icuda/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/TH -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/usr/anaconda3/envs/pointcloud/include/python3.9 -c -c /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/cuda/src/interpolate.cpp -o /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src/interpolate.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
  FAILED: /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src/interpolate.o
  c++ -MMD -MF /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src/interpolate.o.d -pthread -B /home/usr/anaconda3/envs/pointcloud/compiler_compat -Wl,--sysroot=/ -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /home/usr/anaconda3/envs/pointcloud/include -fPIC -O2 -isystem /home/usr/anaconda3/envs/pointcloud/include -fPIC -Icuda/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/TH -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/usr/anaconda3/envs/pointcloud/include/python3.9 -c -c /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/cuda/src/interpolate.cpp -o /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src/interpolate.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
  /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/cuda/src/interpolate.cpp:1:10: fatal error: interpolate.h: No such file or directory
   #include "interpolate.h"
            ^~~~~~~~~~~~~~~
  compilation terminated.
  [4/7] c++ -MMD -MF /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src/bindings.o.d -pthread -B /home/usr/anaconda3/envs/pointcloud/compiler_compat -Wl,--sysroot=/ -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /home/usr/anaconda3/envs/pointcloud/include -fPIC -O2 -isystem /home/usr/anaconda3/envs/pointcloud/include -fPIC -Icuda/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/TH -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/usr/anaconda3/envs/pointcloud/include/python3.9 -c -c /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/cuda/src/bindings.cpp -o /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src/bindings.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
  FAILED: /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src/bindings.o
  c++ -MMD -MF /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src/bindings.o.d -pthread -B /home/usr/anaconda3/envs/pointcloud/compiler_compat -Wl,--sysroot=/ -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /home/usr/anaconda3/envs/pointcloud/include -fPIC -O2 -isystem /home/usr/anaconda3/envs/pointcloud/include -fPIC -Icuda/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/TH -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/usr/anaconda3/envs/pointcloud/include/python3.9 -c -c /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/cuda/src/bindings.cpp -o /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src/bindings.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
  /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/cuda/src/bindings.cpp:1:10: fatal error: ball_query.h: No such file or directory
   #include "ball_query.h"
            ^~~~~~~~~~~~~~
  compilation terminated.
  [5/7] /usr/local/cuda/bin/nvcc -Icuda/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/TH -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/usr/anaconda3/envs/pointcloud/include/python3.9 -c -c /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/cuda/src/sampling_gpu.cu -o /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src/sampling_gpu.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 -gencode=arch=compute_61,code=sm_61
  FAILED: /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src/sampling_gpu.o
  /usr/local/cuda/bin/nvcc -Icuda/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/TH -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/usr/anaconda3/envs/pointcloud/include/python3.9 -c -c /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/cuda/src/sampling_gpu.cu -o /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src/sampling_gpu.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 -gencode=arch=compute_61,code=sm_61
  /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/cuda/src/sampling_gpu.cu:4:10: fatal error: cuda_utils.h: No such file or directory
   #include "cuda_utils.h"
            ^~~~~~~~~~~~~~
  compilation terminated.
  [6/7] /usr/local/cuda/bin/nvcc -Icuda/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/TH -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/usr/anaconda3/envs/pointcloud/include/python3.9 -c -c /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/cuda/src/interpolate_gpu.cu -o /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src/interpolate_gpu.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 -gencode=arch=compute_61,code=sm_61
  FAILED: /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src/interpolate_gpu.o
  /usr/local/cuda/bin/nvcc -Icuda/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/TH -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/usr/anaconda3/envs/pointcloud/include/python3.9 -c -c /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/cuda/src/interpolate_gpu.cu -o /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src/interpolate_gpu.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 -gencode=arch=compute_61,code=sm_61
  /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/cuda/src/interpolate_gpu.cu:5:10: fatal error: cuda_utils.h: No such file or directory
   #include "cuda_utils.h"
            ^~~~~~~~~~~~~~
  compilation terminated.
  [7/7] /usr/local/cuda/bin/nvcc -Icuda/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/TH -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/usr/anaconda3/envs/pointcloud/include/python3.9 -c -c /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/cuda/src/ball_query_gpu.cu -o /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src/ball_query_gpu.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 -gencode=arch=compute_61,code=sm_61
  FAILED: /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src/ball_query_gpu.o
  /usr/local/cuda/bin/nvcc -Icuda/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/TH -I/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/usr/anaconda3/envs/pointcloud/include/python3.9 -c -c /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/cuda/src/ball_query_gpu.cu -o /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/build/temp.linux-x86_64-3.9/cuda/src/ball_query_gpu.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14 -gencode=arch=compute_61,code=sm_61
  /tmp/pip-install-tz64jo92/torch-points-kernels_8b6e97835125433ba4a7d359df438095/cuda/src/ball_query_gpu.cu:5:10: fatal error: cuda_utils.h: No such file or directory
   #include "cuda_utils.h"
            ^~~~~~~~~~~~~~
  compilation terminated.
  ninja: build stopped: subcommand failed.
  Traceback (most recent call last):
    File "/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1533, in _run_ninja_build
      subprocess.run(
    File "/home/usr/anaconda3/envs/pointcloud/lib/python3.9/subprocess.py", line 524, in run
      raise CalledProcessError(retcode, process.args,
  subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
  
  The above exception was the direct cause of the following exception:
  
  Traceback (most recent call last):
    File "/home/usr/anaconda3/envs/pointcloud/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 280, in <module>
      main()
    File "/home/usr/anaconda3/envs/pointcloud/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 263, in main
      json_out['return_val'] = hook(**hook_input['kwargs'])
    File "/home/usr/anaconda3/envs/pointcloud/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 204, in build_wheel
      return _build_backend().build_wheel(wheel_directory, config_settings,
    File "/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 216, in build_wheel
      return self._build_with_temp_dir(['bdist_wheel'], '.whl',
    File "/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 202, in _build_with_temp_dir
      self.run_setup()
    File "/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 145, in run_setup
      exec(compile(code, __file__, 'exec'), locals())
    File "setup.py", line 52, in <module>
      setup(
    File "/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/setuptools/__init__.py", line 153, in setup
      return distutils.core.setup(**attrs)
    File "/home/usr/anaconda3/envs/pointcloud/lib/python3.9/distutils/core.py", line 148, in setup
      dist.run_commands()
    File "/home/usr/anaconda3/envs/pointcloud/lib/python3.9/distutils/dist.py", line 966, in run_commands
      self.run_command(cmd)
    File "/home/usr/anaconda3/envs/pointcloud/lib/python3.9/distutils/dist.py", line 985, in run_command
      cmd_obj.run()
    File "/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/wheel/bdist_wheel.py", line 299, in run
      self.run_command('build')
    File "/home/usr/anaconda3/envs/pointcloud/lib/python3.9/distutils/cmd.py", line 313, in run_command
      self.distribution.run_command(command)
    File "/home/usr/anaconda3/envs/pointcloud/lib/python3.9/distutils/dist.py", line 985, in run_command
      cmd_obj.run()
    File "/home/usr/anaconda3/envs/pointcloud/lib/python3.9/distutils/command/build.py", line 135, in run
      self.run_command(cmd_name)
    File "/home/usr/anaconda3/envs/pointcloud/lib/python3.9/distutils/cmd.py", line 313, in run_command
      self.distribution.run_command(command)
    File "/home/usr/anaconda3/envs/pointcloud/lib/python3.9/distutils/dist.py", line 985, in run_command
      cmd_obj.run()
    File "/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/setuptools/command/build_ext.py", line 79, in run
      _build_ext.run(self)
    File "/home/usr/anaconda3/envs/pointcloud/lib/python3.9/distutils/command/build_ext.py", line 340, in run
      self.build_extensions()
    File "/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 670, in build_extensions
      build_ext.build_extensions(self)
    File "/home/usr/anaconda3/envs/pointcloud/lib/python3.9/distutils/command/build_ext.py", line 449, in build_extensions
      self._build_extensions_serial()
    File "/home/usr/anaconda3/envs/pointcloud/lib/python3.9/distutils/command/build_ext.py", line 474, in _build_extensions_serial
      self.build_extension(ext)
    File "/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/setuptools/command/build_ext.py", line 196, in build_extension
      _build_ext.build_extension(self, ext)
    File "/home/usr/anaconda3/envs/pointcloud/lib/python3.9/distutils/command/build_ext.py", line 529, in build_extension
      objects = self.compiler.compile(sources,
    File "/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 491, in unix_wrap_ninja_compile
      _write_ninja_file_and_compile_objects(
    File "/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1250, in _write_ninja_file_and_compile_objects
      _run_ninja_build(
    File "/tmp/pip-build-env-krppne2_/overlay/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1555, in _run_ninja_build
      raise RuntimeError(message) from e
  RuntimeError: Error compiling objects for extension
  ----------------------------------------
  ERROR: Failed building wheel for torch-points-kernels
Failed to build torch-points-kernels
ERROR: Could not build wheels for torch-points-kernels which use PEP 517 and cannot be installed directly

Any help will be appreciated... 😢

CPU only install

Is a CPU-only install possible? I only use knn.py, which seems to run on the CPU. :)

pip installation doesn't include cuda version

Hi,

I installed torch-points-kernels using the pip command:
pip install torch-points-kernels, but surprisingly, it doesn't install the CUDA version. In fact, import torch_points_kernels.points_cuda produces a ModuleNotFoundError.

I had to clone the git repo and build the code on my machine.
Is this a problem related to the installed versions of torch, CUDA, etc.?
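
A small diagnostic sketch for this situation (assumption: the CUDA extension is only compiled when a CUDA toolkit is visible at build time, so it is worth checking both the installed PyTorch and whether the extension module exists at all):

import importlib.util

import torch

print("PyTorch CUDA version:", torch.version.cuda)  # None for CPU-only builds
print("points_cuda extension:", importlib.util.find_spec("torch_points_kernels.points_cuda"))  # None if it was never built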

Both batches need to have the same number of samples

Hi!

Using torch-points-kernels via torch-points3d I stumbled upon this error:

  File "C:\Users\EAL\Anaconda3\envs\testenv6_git_inst\lib\site-packages\torch_points_kernels\torchpoints.py", line 168, in ball_query_partial_dense
    ind, dist = tpcpu.batch_ball_query(x, y, batch_x, batch_y, radius, nsample, mode=0, sorted=sort)
RuntimeError: Both batches need to have the same number of samples.

But when checking the shapes of the input tensors right before the function call, they are identical:

x.shape = torch.Size([2786, 3])
y.shape = torch.Size([2786, 3])
batch_x.shape = torch.Size([2786])
batch_y.shape = torch.Size([2786])

There must be something wrong but I can't see what or where...

The error always takes place at the same point during training (during the first epoch):

24%|████████████▎                                      | 915/3772 [09:15<28:53,  1.65it/s, data_loading=0.005, iteration=0.318, train_acc=50.96, train_loss_seg=1.279, train_macc=32.70, train_miou=12.57)]

Thank you for your feedback!
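
In case it helps with debugging, a small sketch of an additional check (my assumption about the message: even when batch_x and batch_y have the same overall length, the number of samples per batch index can still differ between the two clouds):

import torch

def compare_batch_counts(batch_x: torch.Tensor, batch_y: torch.Tensor) -> None:
    """Print the per-batch sample counts of both batch vectors."""
    print("batch_x counts:", torch.bincount(batch_x).tolist())
    print("batch_y counts:", torch.bincount(batch_y).tolist())

# Same total length (4), but different per-batch counts:
compare_batch_counts(torch.tensor([0, 0, 1, 1]), torch.tensor([0, 1, 1, 1]))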

ModuleNotFoundError: No module named 'torch_points_kernels.points_cuda'

Hi,

I am using your torch-points-kernels repo for a project I am working on.
The torchpoints.py file imports 'torch_points_kernels.points_cuda'; however, I get a ModuleNotFoundError because the points_cuda module doesn't exist. I checked the repo too, but I couldn't find it.
Do you know how I can resolve this error?

Thanks in advance.

Fails to build

OS: Windows 11 22H2
C++ compiler: MSVC 2022
Python: 3.8
CUDA Toolkit: release 11.8, V11.8.89
PyTorch: 2.0.1

I am trying to install torch-points-kernels 0.6.10 via pip,
and then I got the following:

No CUDA runtime is found, using CUDA_HOME='C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8'
      running bdist_wheel
      running build
      running build_py
      creating build
      creating build\lib.win32-cpython-38
      creating build\lib.win32-cpython-38\torch_points_kernels
      copying torch_points_kernels\chamfer_dist.py -> build\lib.win32-cpython-38\torch_points_kernels
      copying torch_points_kernels\cluster.py -> build\lib.win32-cpython-38\torch_points_kernels
      copying torch_points_kernels\cubic_feature_sampling.py -> build\lib.win32-cpython-38\torch_points_kernels
      copying torch_points_kernels\gridding.py -> build\lib.win32-cpython-38\torch_points_kernels
      copying torch_points_kernels\knn.py -> build\lib.win32-cpython-38\torch_points_kernels
      copying torch_points_kernels\metrics.py -> build\lib.win32-cpython-38\torch_points_kernels
      copying torch_points_kernels\torchpoints.py -> build\lib.win32-cpython-38\torch_points_kernels
      copying torch_points_kernels\__init__.py -> build\lib.win32-cpython-38\torch_points_kernels
      running build_ext
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "C:\Users\ASUS\AppData\Local\Temp\pip-install-9vtektg3\torch-points-kernels_43eb08ed507e4a51ac0e9e548f108dbd\setup.py", line 76, in <module>
          setup(
        File "C:\Users\ASUS\miniconda3\envs\p3d\lib\site-packages\setuptools\__init__.py", line 107, in setup
          return distutils.core.setup(**attrs)
        File "C:\Users\ASUS\miniconda3\envs\p3d\lib\site-packages\setuptools\_distutils\core.py", line 185, in setup
          return run_commands(dist)
        File "C:\Users\ASUS\miniconda3\envs\p3d\lib\site-packages\setuptools\_distutils\core.py", line 201, in run_commands
          dist.run_commands()
        File "C:\Users\ASUS\miniconda3\envs\p3d\lib\site-packages\setuptools\_distutils\dist.py", line 969, in run_commands
          self.run_command(cmd)
        File "C:\Users\ASUS\miniconda3\envs\p3d\lib\site-packages\setuptools\dist.py", line 1244, in run_command
          super().run_command(command)
        File "C:\Users\ASUS\miniconda3\envs\p3d\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
          cmd_obj.run()
        File "C:\Users\ASUS\miniconda3\envs\p3d\lib\site-packages\wheel\bdist_wheel.py", line 325, in run
          self.run_command("build")
        File "C:\Users\ASUS\miniconda3\envs\p3d\lib\site-packages\setuptools\_distutils\cmd.py", line 318, in run_command
          self.distribution.run_command(command)
        File "C:\Users\ASUS\miniconda3\envs\p3d\lib\site-packages\setuptools\dist.py", line 1244, in run_command
          super().run_command(command)
        File "C:\Users\ASUS\miniconda3\envs\p3d\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
          cmd_obj.run()
        File "C:\Users\ASUS\miniconda3\envs\p3d\lib\site-packages\setuptools\_distutils\command\build.py", line 131, in run
          self.run_command(cmd_name)
        File "C:\Users\ASUS\miniconda3\envs\p3d\lib\site-packages\setuptools\_distutils\cmd.py", line 318, in run_command
          self.distribution.run_command(command)
        File "C:\Users\ASUS\miniconda3\envs\p3d\lib\site-packages\setuptools\dist.py", line 1244, in run_command
          super().run_command(command)
        File "C:\Users\ASUS\miniconda3\envs\p3d\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
          cmd_obj.run()
        File "C:\Users\ASUS\miniconda3\envs\p3d\lib\site-packages\setuptools\command\build_ext.py", line 84, in run
          _build_ext.run(self)
        File "C:\Users\ASUS\miniconda3\envs\p3d\lib\site-packages\setuptools\_distutils\command\build_ext.py", line 345, in run
          self.build_extensions()
        File "C:\Users\ASUS\miniconda3\envs\p3d\lib\site-packages\torch\utils\cpp_extension.py", line 499, in build_extensions
          _check_cuda_version(compiler_name, compiler_version)
        File "C:\Users\ASUS\miniconda3\envs\p3d\lib\site-packages\torch\utils\cpp_extension.py", line 383, in _check_cuda_version
          torch_cuda_version = packaging.version.parse(torch.version.cuda)
        File "C:\Users\ASUS\miniconda3\envs\p3d\lib\site-packages\pkg_resources\_vendor\packaging\version.py", line 52, in parse
          return Version(version)
        File "C:\Users\ASUS\miniconda3\envs\p3d\lib\site-packages\pkg_resources\_vendor\packaging\version.py", line 195, in __init__
          match = self._regex.search(version)
      TypeError: expected string or bytes-like object
      [end of output]
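
A note on the Windows trace above: _check_cuda_version parses torch.version.cuda, which is None when a CPU-only build of PyTorch is installed, and packaging then raises the TypeError. A quick sanity check before rebuilding (a sketch, assuming the same p3d conda environment) is:

$ python -c "import torch; print(torch.__version__, torch.version.cuda)"
$ # if the CUDA version prints as None, install a CUDA-enabled PyTorch wheel first,
$ # for example (assuming a CUDA 11.3 toolkit; adjust the tag to your setup):
$ pip install torch==1.10.0+cu113 -f https://download.pytorch.org/whl/torch_stable.html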

Unable to build wheel for torch-points-kernels

Hello @nicolas-chaulet, thank you for the library. I'm attempting to install the kernels (as well as torch-points3d) across multiple servers and am running into installation issues with the kernels: I am unable to build the wheel, and the attempt produces an extremely long error trace.

Could you assist me here, or is my system configuration simply incompatible with the kernels?

System:

  • OS: Ubuntu 18.04
  • CUDA: 11.0
  • Python: 3.7
  • PyTorch: 1.7.1
  • GPU: GeForce RTX 3090

Before installation, I ensure that CUDA is configured properly and nvcc works.

$ export PATH=/usr/local/cuda/bin:$PATH
$ export CPATH=/usr/local/cuda/include:$CPATH
$ export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH

$ python -c "import torch; print(torch.version.cuda)"
11.0

$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Wed_Jul_22_19:09:09_PDT_2020
Cuda compilation tools, release 11.0, V11.0.221
Build cuda_11.0_bu.TC445_37.28845127_0

When installing torch-points-kernels, I get the following error trace (the root cause appears to be nvcc fatal : Unsupported gpu architecture 'compute_86'):

$ pip3 install torch-points-kernels
Collecting torch-points-kernels
  Downloading torch-points-kernels-0.7.0.tar.gz (44 kB)
     |████████████████████████████████| 44 kB 7.3 MB/s 
Requirement already satisfied: torch>=1.1.0 in /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages (from torch-points-kernels) (1.7.1)
Requirement already satisfied: numba in /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages (from torch-points-kernels) (0.49.1)
Requirement already satisfied: numpy<1.20 in /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages (from torch-points-kernels) (1.19.4)
Requirement already satisfied: scikit-learn in /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages (from torch-points-kernels) (0.23.2)
Requirement already satisfied: typing_extensions in /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages (from torch>=1.1.0->torch-points-kernels) (3.7.4.3)
Requirement already satisfied: llvmlite<=0.33.0.dev0,>=0.31.0.dev0 in /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages (from numba->torch-points-kernels) (0.32.1)
Requirement already satisfied: setuptools in /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages (from numba->torch-points-kernels) (52.0.0.post20210125)
Requirement already satisfied: scipy>=0.19.1 in /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages (from scikit-learn->torch-points-kernels) (1.5.4)
Requirement already satisfied: threadpoolctl>=2.0.0 in /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages (from scikit-learn->torch-points-kernels) (2.1.0)
Requirement already satisfied: joblib>=0.11 in /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages (from scikit-learn->torch-points-kernels) (0.17.0)
Building wheels for collected packages: torch-points-kernels
  Building wheel for torch-points-kernels (setup.py) ... error
  ERROR: Command errored out with exit status 1:
   command: /home/user/miniconda3/envs/pointcloud/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-3twio4ga/torch-points-kernels_4d1bdf9ec06f4aa8984512eb395e074d/setup.py'"'"'; __file__='"'"'/tmp/pip-install-3twio4ga/torch-points-kernels_4d1bdf9ec06f4aa8984512eb395e074d/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-mk73r6oo
       cwd: /tmp/pip-install-3twio4ga/torch-points-kernels_4d1bdf9ec06f4aa8984512eb395e074d/
  Complete output (126 lines):
  running bdist_wheel
  running build
  running build_py
  creating build
  creating build/lib.linux-x86_64-3.7
  creating build/lib.linux-x86_64-3.7/torch_points_kernels
  copying torch_points_kernels/knn.py -> build/lib.linux-x86_64-3.7/torch_points_kernels
  copying torch_points_kernels/cluster.py -> build/lib.linux-x86_64-3.7/torch_points_kernels
  copying torch_points_kernels/chamfer_dist.py -> build/lib.linux-x86_64-3.7/torch_points_kernels
  copying torch_points_kernels/gridding.py -> build/lib.linux-x86_64-3.7/torch_points_kernels
  copying torch_points_kernels/cubic_feature_sampling.py -> build/lib.linux-x86_64-3.7/torch_points_kernels
  copying torch_points_kernels/__init__.py -> build/lib.linux-x86_64-3.7/torch_points_kernels
  copying torch_points_kernels/torchpoints.py -> build/lib.linux-x86_64-3.7/torch_points_kernels
  copying torch_points_kernels/metrics.py -> build/lib.linux-x86_64-3.7/torch_points_kernels
  running build_ext
  building 'torch_points_kernels.points_cuda' extension
  creating build/temp.linux-x86_64-3.7
  creating build/temp.linux-x86_64-3.7/cuda
  creating build/temp.linux-x86_64-3.7/cuda/src
  gcc -pthread -B /home/user/miniconda3/envs/pointcloud/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Icuda/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/TH -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/user/miniconda3/envs/pointcloud/include/python3.7m -c cuda/src/interpolate.cpp -o build/temp.linux-x86_64-3.7/cuda/src/interpolate.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
  cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
  In file included from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/ATen/Parallel.h:149:0,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:12,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/extension.h:4,
                   from cuda/include/interpolate.h:3,
                   from cuda/src/interpolate.cpp:1:
  /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/ATen/ParallelOpenMP.h:84:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
   #pragma omp parallel for if ((end - begin) >= grain_size)
  
  gcc -pthread -B /home/user/miniconda3/envs/pointcloud/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Icuda/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/TH -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/user/miniconda3/envs/pointcloud/include/python3.7m -c cuda/src/cubic_feature_sampling.cpp -o build/temp.linux-x86_64-3.7/cuda/src/cubic_feature_sampling.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
  cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
  In file included from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/ATen/Parallel.h:149:0,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:12,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/extension.h:4,
                   from cuda/include/cubic_feature_sampling.h:4,
                   from cuda/src/cubic_feature_sampling.cpp:1:
  /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/ATen/ParallelOpenMP.h:84:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
   #pragma omp parallel for if ((end - begin) >= grain_size)
  
  gcc -pthread -B /home/user/miniconda3/envs/pointcloud/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Icuda/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/TH -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/user/miniconda3/envs/pointcloud/include/python3.7m -c cuda/src/ball_query.cpp -o build/temp.linux-x86_64-3.7/cuda/src/ball_query.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
  cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
  In file included from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/ATen/Parallel.h:149:0,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:12,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/extension.h:4,
                   from cuda/include/ball_query.h:2,
                   from cuda/src/ball_query.cpp:1:
  /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/ATen/ParallelOpenMP.h:84:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
   #pragma omp parallel for if ((end - begin) >= grain_size)
  
  gcc -pthread -B /home/user/miniconda3/envs/pointcloud/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Icuda/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/TH -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/user/miniconda3/envs/pointcloud/include/python3.7m -c cuda/src/bindings.cpp -o build/temp.linux-x86_64-3.7/cuda/src/bindings.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
  cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
  In file included from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/ATen/Parallel.h:149:0,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:12,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/extension.h:4,
                   from cuda/include/ball_query.h:2,
                   from cuda/src/bindings.cpp:1:
  /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/ATen/ParallelOpenMP.h:84:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
   #pragma omp parallel for if ((end - begin) >= grain_size)
  
  gcc -pthread -B /home/user/miniconda3/envs/pointcloud/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Icuda/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/TH -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/user/miniconda3/envs/pointcloud/include/python3.7m -c cuda/src/sampling.cpp -o build/temp.linux-x86_64-3.7/cuda/src/sampling.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
  cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
  In file included from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/ATen/Parallel.h:149:0,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:12,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/extension.h:4,
                   from cuda/include/sampling.h:2,
                   from cuda/src/sampling.cpp:1:
  /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/ATen/ParallelOpenMP.h:84:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
   #pragma omp parallel for if ((end - begin) >= grain_size)
  
  gcc -pthread -B /home/user/miniconda3/envs/pointcloud/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Icuda/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/TH -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/user/miniconda3/envs/pointcloud/include/python3.7m -c cuda/src/chamfer_dist.cpp -o build/temp.linux-x86_64-3.7/cuda/src/chamfer_dist.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
  cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
  In file included from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/ATen/Parallel.h:149:0,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:12,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/extension.h:4,
                   from cuda/include/chamfer_dist.h:1,
                   from cuda/src/chamfer_dist.cpp:1:
  /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/ATen/ParallelOpenMP.h:84:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
   #pragma omp parallel for if ((end - begin) >= grain_size)
  
  gcc -pthread -B /home/user/miniconda3/envs/pointcloud/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Icuda/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/TH -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/user/miniconda3/envs/pointcloud/include/python3.7m -c cuda/src/metrics.cpp -o build/temp.linux-x86_64-3.7/cuda/src/metrics.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
  cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
  In file included from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/ATen/Parallel.h:149:0,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:12,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/extension.h:4,
                   from cuda/include/metrics.h:2,
                   from cuda/src/metrics.cpp:1:
  /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/ATen/ParallelOpenMP.h:84:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
   #pragma omp parallel for if ((end - begin) >= grain_size)
  
  gcc -pthread -B /home/user/miniconda3/envs/pointcloud/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Icuda/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/TH -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/user/miniconda3/envs/pointcloud/include/python3.7m -c cuda/src/gridding.cpp -o build/temp.linux-x86_64-3.7/cuda/src/gridding.o -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
  cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
  In file included from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/ATen/Parallel.h:149:0,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:12,
                   from /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/extension.h:4,
                   from cuda/include/gridding.h:4,
                   from cuda/src/gridding.cpp:1:
  /home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/ATen/ParallelOpenMP.h:84:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
   #pragma omp parallel for if ((end - begin) >= grain_size)
  
  /usr/local/cuda/bin/nvcc -Icuda/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/TH -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/user/miniconda3/envs/pointcloud/include/python3.7m -c cuda/src/chamfer_dist_gpu.cu -o build/temp.linux-x86_64-3.7/cuda/src/chamfer_dist_gpu.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=sm_86 -std=c++14
  nvcc fatal   : Unsupported gpu architecture 'compute_86'
  error: command '/usr/local/cuda/bin/nvcc' failed with exit status 1
  ----------------------------------------
  ERROR: Failed building wheel for torch-points-kernels
  Running setup.py clean for torch-points-kernels
Failed to build torch-points-kernels
Installing collected packages: torch-points-kernels
    Running setup.py install for torch-points-kernels ... error
    ERROR: Command errored out with exit status 1:
     command: /home/user/miniconda3/envs/pointcloud/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-3twio4ga/torch-points-kernels_4d1bdf9ec06f4aa8984512eb395e074d/setup.py'"'"'; __file__='"'"'/tmp/pip-install-3twio4ga/torch-points-kernels_4d1bdf9ec06f4aa8984512eb395e074d/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-clyu4km7/install-record.txt --single-version-externally-managed --compile --install-headers /home/user/miniconda3/envs/pointcloud/include/python3.7m/torch-points-kernels
         cwd: /tmp/pip-install-3twio4ga/torch-points-kernels_4d1bdf9ec06f4aa8984512eb395e074d/
    Complete output (126 lines):
    running install
    [... identical compiler output to the bdist_wheel attempt above, ending with the same nvcc failure ...]
    /usr/local/cuda/bin/nvcc -Icuda/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/TH -I/home/user/miniconda3/envs/pointcloud/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/user/miniconda3/envs/pointcloud/include/python3.7m -c cuda/src/chamfer_dist_gpu.cu -o build/temp.linux-x86_64-3.7/cuda/src/chamfer_dist_gpu.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -O3 -DVERSION_GE_1_3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=points_cuda -DTORCH_EXTENSION_NAME=points_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=sm_86 -std=c++14
    nvcc fatal   : Unsupported gpu architecture 'compute_86'
    error: command '/usr/local/cuda/bin/nvcc' failed with exit status 1
    ----------------------------------------
ERROR: Command errored out with exit status 1: /home/user/miniconda3/envs/pointcloud/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-3twio4ga/torch-points-kernels_4d1bdf9ec06f4aa8984512eb395e074d/setup.py'"'"'; __file__='"'"'/tmp/pip-install-3twio4ga/torch-points-kernels_4d1bdf9ec06f4aa8984512eb395e074d/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-clyu4km7/install-record.txt --single-version-externally-managed --compile --install-headers /home/user/miniconda3/envs/pointcloud/include/python3.7m/torch-points-kernels Check the logs for full command output.
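
For the trace above, the underlying problem is that compute_86 (the RTX 3090's Ampere architecture) is only recognised by nvcc from CUDA 11.1 onwards, while the installed toolkit is 11.0; PyTorch's extension builder detects the GPU and emits -gencode=arch=compute_86,code=sm_86, which that nvcc rejects. Two common workarounds, sketched here rather than taken from the official instructions: upgrade the CUDA toolkit to 11.1 or later, or override the target architecture through TORCH_CUDA_ARCH_LIST so the build targets sm_80, which CUDA 11.0 supports and whose binaries still run on sm_86 hardware:

$ # keep the CUDA 11.0 toolkit, but compile for an architecture it knows about
$ export TORCH_CUDA_ARCH_LIST="8.0"
$ pip install torch-points-kernels --no-cache-dir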

You should probably update your installation tutorial.

I'm using a conda environment with Python 3.8 and CUDA 11.7. Installing torch-points-kernels fails both via conda and when building from source. The README suggests that PyTorch >= 1.10.0 is fine, but unfortunately PyTorch 2.0.0 does not appear to be compatible. Maybe the installation tutorial should be updated to state that PyTorch >= 1.10.0 and < 2.0.0 is recommended. Thanks!
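
A minimal sketch of a workaround while waiting for PyTorch 2.x support, assuming a CUDA 11.7 setup and the cu117 wheel index: pin torch below 2.0 before installing the kernels.

$ pip install "torch>=1.10,<2.0" --extra-index-url https://download.pytorch.org/whl/cu117
$ pip install torch-points-kernels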

ModuleNotFoundError: No module named 'torch_points_kernels.points_cuda'

I am running my code on Windows. The requirements include torch-points-kernels>=0.5.3, so I installed torch-points-kernels==0.7.0 with pip. The code runs fine with the CPU-only build of PyTorch. However, after installing a GPU version of PyTorch, running the code fails with: ModuleNotFoundError: No module named 'torch_points_kernels.points_cuda'. When I look inside the installed package, only points_cpu.pyd exists, while the required points_cuda extension is missing. How can I fix this? Thanks very much.
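
A likely explanation, hedged since it depends on the exact setup.py shipped in the sdist: the points_cuda extension is only compiled when a CUDA-enabled PyTorch is visible at install time, so installing the package while the CPU-only PyTorch was active leaves only points_cpu.pyd behind. A sketch of a fix, assuming the setup script honours the FORCE_CUDA environment variable, is to reinstall the kernels after switching to the GPU build of PyTorch:

> rem Windows cmd, run after the CUDA-enabled PyTorch is installed
> set FORCE_CUDA=1
> pip install --no-cache-dir --force-reinstall torch-points-kernels==0.7.0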
