neuralpoints's People

Contributors

rainbowrui, wanquanf

neuralpoints's Issues

What does the test data format look like?

Hello author, what does the test data format look like, and how should I load these files from the directory? For example, if I want to use this program to upsample some of my own data, how should I set it up? I look forward to your answer; thank you.

Some clarifications about metrics reported in Table 2

Hi, I am sorry to reiterate this point, but I am trying to reproduce your results and I have some questions about the metrics reported in Table 2 of your paper. I attach the table below:

[image: table_2_neuralpts]

In section 4.1, you state: "We train and test our model on Sketchfab dataset collected by PUGeo-Net". Then, later in section 4.3: "For fairness, we share the same settings (batch size, iterations, learning rate, etc) among all methods". However, the corresponding table in the official PUGeo-Net paper (see Table 1 here: https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123640732.pdf) reports significantly different values for all the methods included in both tables.

[image: table_1_pugeonet]

In particular, the Chamfer Distance (CD) metric is about two orders of magnitude smaller. More specifically:

  • PU-Net: 5.93 × 10^-5 vs 0.658 × 10^-2
  • PUGeo-Net: 2.28 × 10^-5 vs 0.558 × 10^-2
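The size of the gap can be checked with quick arithmetic on the CD values quoted above (a short Python sketch; the values are taken directly from the two tables):

```python
import math

# CD values as reported in the NeuralPoints paper (Table 2)
neuralpoints_cd = {"PU-Net": 5.93e-5, "PUGeo-Net": 2.28e-5}
# CD values as reported in the PUGeo-Net paper (Table 1)
pugeonet_cd = {"PU-Net": 0.658e-2, "PUGeo-Net": 0.558e-2}

for name in neuralpoints_cd:
    ratio = pugeonet_cd[name] / neuralpoints_cd[name]
    print(f"{name}: ratio = {ratio:.0f}x (~10^{math.log10(ratio):.1f})")
```

Both ratios land between 10^2 and 10^2.4, i.e. roughly two orders of magnitude, consistent with the observation above.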

To the best of my knowledge, your work and PUGeo-Net are the only published methods with metrics on the Sketchfab dataset, but neither of you has released the evaluation code, so it is hard to understand the reasons behind this difference. There is a recent arXiv paper with a similar table on the same dataset (see Table 2 here: https://arxiv.org/pdf/2107.05893.pdf), which reports values very similar to PUGeo-Net's, at least in the same order of magnitude. However, their code is not open source.

[image: table_2_puflow]

Therefore, I have three questions:

  1. Can you please explain these differences and what they might be related to?
  2. Can you please upload to this repository your evaluation code?
  3. Can you please make your dataset accessible with another cloud service, other than Baidu?

Thank you in advance for your clarifications.

About epochs

How can I calculate the number of epochs? It confuses me.

The number of points

Hi Wanquan,
In PUGeo-Net, during testing, the number of points is 5000, while in NeuralPoints it is 2000. Would this influence the results?

cannot compile pointnet2 successfully

Has anyone been able to run the code successfully? I found that an environment newly created according to these steps cannot compile pointnet2 normally; there are various bugs.

When running the test file, an error is reported: RuntimeError: Error building extension 'cd': [1/1] c++ chamfer_distance.o chamfer_distance.cuda.o

error:
RuntimeError: Error building extension 'cd': [1/1] c++ chamfer_distance.o chamfer_distance.cuda.o -shared -L/home/hay/Downloads/ENTER/envs/NePs/lib/python3.8/site-packages/torch/lib -lc10 -lc10_cuda -ltorch_cpu -ltorch_cuda -ltorch -ltorch_python -L/home/hay/Downloads/ENTER/envs/NePs/lib64 -lcudart -o cd.so
FAILED: cd.so
c++ chamfer_distance.o chamfer_distance.cuda.o -shared -L/home/hay/Downloads/ENTER/envs/NePs/lib/python3.8/site-packages/torch/lib -lc10 -lc10_cuda -ltorch_cpu -ltorch_cuda -ltorch -ltorch_python -L/home/hay/Downloads/ENTER/envs/NePs/lib64 -lcudart -o cd.so
/usr/bin/ld: cannot find -lcudart
collect2: error: ld returned 1 exit status
ninja: build stopped: subcommand failed.
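The failing line is "/usr/bin/ld: cannot find -lcudart": the -L directory passed to c++ (the conda env's lib64) evidently does not contain libcudart. A hedged sketch for locating it (the search paths are common CUDA install locations, not something this repo guarantees):

```python
import glob
import os

# Hypothetical helper: look for libcudart in common CUDA install locations.
# If the linker cannot find -lcudart, the directory that does contain it
# needs to be on the linker's search path.
def find_cudart(roots=("/usr/local/cuda/lib64",
                       "/usr/local/cuda/targets/x86_64-linux/lib",
                       "/usr/lib/x86_64-linux-gnu")):
    hits = []
    for root in roots:
        hits += glob.glob(os.path.join(root, "libcudart.so*"))
    return hits

# If this prints a non-empty list, add that directory before rebuilding,
# e.g.: export LIBRARY_PATH=/usr/local/cuda/lib64:$LIBRARY_PATH
print(find_cudart())
```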

Some questions about the paper

Thank you for your work. I have not read the code yet; I have only read your paper, and I have the following questions I hope you can help answer:
1. What is the relationship between the local neural fields and the local surface patches? When reading the paper I could not keep the two concepts apart, which was confusing.
2. What is the data structure of the learned global neural field, and how is it represented in the computer? (This one is my advisor's question.)

Thank you very much for your answer.

python setup.py install fails with error: command 'N:\\Microsoft Visual Stdio\\2019\\Community\\VC\\Tools\\MSVC\\14.29.30133\\bin\\HostX64\\x64\\link.exe' failed with exit status 1120

Thanks for your work.

Running python setup.py install produces the following error; how do I solve it?

(NePs) E:\A\论文\codes\NeuralPoints-main\model\conpu_v6\pointnet2>python setup.py install
running install
running bdist_egg
running egg_info
writing pointnet2.egg-info\PKG-INFO
writing dependency_links to pointnet2.egg-info\dependency_links.txt
writing top-level names to pointnet2.egg-info\top_level.txt
reading manifest file 'pointnet2.egg-info\SOURCES.txt'
writing manifest file 'pointnet2.egg-info\SOURCES.txt'
installing library code to build\bdist.win-amd64\egg
running install_lib
running build_ext
building 'pointnet2_cuda' extension
Emitting ninja build file E:\A\论文\codes\NeuralPoints-main\model\conpu_v6\pointnet2\build\temp.win-amd64-3.8\Release\build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
1.10.2
N:\Microsoft Visual Stdio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:D:\anaconda3
\envs\NePs\lib\site-packages\torch\lib "/LIBPATH:C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2\lib/x64" /LIBPATH:D:\anaconda3\envs\NePs\libs /LIBPATH:D:\anaconda3\envs\NePs\P
Cbuild\amd64 "/LIBPATH:N:\Microsoft Visual Stdio\2019\Community\VC\Tools\MSVC\14.29.30133\ATLMFC\lib\x64" "/LIBPATH:N:\Microsoft Visual Stdio\2019\Community\VC\Tools\MSVC\14.29.30133\li
b\x64" "/LIBPATH:N:\Windows Kits\10\lib\10.0.18362.0\ucrt\x64" "/LIBPATH:N:\Windows Kits\10\lib\10.0.18362.0\um\x64" c10.lib torch.lib torch_cpu.lib torch_python.lib cudart.lib c10_cuda
.lib torch_cuda.lib /EXPORT:PyInit_pointnet2_cuda E:\A\论文\codes\NeuralPoints-main\model\conpu_v6\pointnet2\build\temp.win-amd64-3.8\Release\src/pointnet2_api.obj E:\A\论文\codes\Neura
lPoints-main\model\conpu_v6\pointnet2\build\temp.win-amd64-3.8\Release\src/ball_query.obj E:\A\论文\codes\NeuralPoints-main\model\conpu_v6\pointnet2\build\temp.win-amd64-3.8\Release\src
/ball_query_gpu.obj E:\A\论文\codes\NeuralPoints-main\model\conpu_v6\pointnet2\build\temp.win-amd64-3.8\Release\src/group_points.obj E:\A\论文\codes\NeuralPoints-main\model\conpu_v6\poi
ntnet2\build\temp.win-amd64-3.8\Release\src/group_points_gpu.obj E:\A\论文\codes\NeuralPoints-main\model\conpu_v6\pointnet2\build\temp.win-amd64-3.8\Release\src/interpolate.obj E:\A\论
文\codes\NeuralPoints-main\model\conpu_v6\pointnet2\build\temp.win-amd64-3.8\Release\src/interpolate_gpu.obj E:\A\论文\codes\NeuralPoints-main\model\conpu_v6\pointnet2\build\temp.win-am
d64-3.8\Release\src/sampling.obj E:\A\论文\codes\NeuralPoints-main\model\conpu_v6\pointnet2\build\temp.win-amd64-3.8\Release\src/sampling_gpu.obj /OUT:build\lib.win-amd64-3.8\pointnet2_
cuda.cp38-win_amd64.pyd /IMPLIB:E:\A\论文\codes\NeuralPoints-main\model\conpu_v6\pointnet2\build\temp.win-amd64-3.8\Release\src\pointnet2_cuda.cp38-win_amd64.lib
Creating library E:\A\论文\codes\NeuralPoints-main\model\conpu_v6\pointnet2\build\temp.win-amd64-3.8\Release\src\pointnet2_cuda.cp38-win_amd64.lib and object E:\A\论文\codes\NeuralPoints-main\model\conpu_v6\pointnet2\build\temp.win-amd64-3.8\Release\src\pointnet2_cuda.cp38-win_amd64.exp
ball_query_gpu.obj : error LNK2001: unresolved external symbol "__declspec(dllimport) void __cdecl c10::detail::torchCheckFail(char const *,char const *,unsigned int,class std::basic_string<char,struct std::char_traits,class std::allocator > const &)" (_imp?torchCheckFail@detail@c10@@YAXPEBD0IAEBV?$basic_string@DU?$char_traits@D@std@@v?$allocator@D@2@@std@@@z)
group_points_gpu.obj : error LNK2001: unresolved external symbol "__declspec(dllimport) void __cdecl c10::detail::torchCheckFail(char const *,char const *,unsigned int,class std::basic_string<char,struct std::char_traits,class std::allocator > const &)" (_imp?torchCheckFail@detail@c10@@YAXPEBD0IAEBV?$basic_string@DU?$char_traits@D@std@@v?$allocator@D@2@@std@@@z)
interpolate_gpu.obj : error LNK2001: unresolved external symbol "__declspec(dllimport) void __cdecl c10::detail::torchCheckFail(char const *,char const *,unsigned int,class std::basic_string<char,struct std::char_traits,class std::allocator > const &)" (_imp?torchCheckFail@detail@c10@@YAXPEBD0IAEBV?$basic_string@DU?$char_traits@D@std@@v?$allocator@D@2@@std@@@z)
sampling_gpu.obj : error LNK2001: unresolved external symbol "__declspec(dllimport) void __cdecl c10::detail::torchCheckFail(char const *,char const *,unsigned int,class std::basic_string<char,struct std::char_traits,class std::allocator > const &)" (_imp?torchCheckFail@detail@c10@@YAXPEBD0IAEBV?$basic_string@DU?$char_traits@D@std@@v?$allocator@D@2@@std@@@z)
ball_query_gpu.obj : error LNK2001: unresolved external symbol "__declspec(dllimport) private: void __cdecl c10::IValue::destroy(void)" (_imp?destroy@IValue@c10@@AEAAXXZ)
group_points_gpu.obj : error LNK2001: unresolved external symbol "__declspec(dllimport) private: void __cdecl c10::IValue::destroy(void)" (_imp?destroy@IValue@c10@@AEAAXXZ)
interpolate_gpu.obj : error LNK2001: unresolved external symbol "__declspec(dllimport) private: void __cdecl c10::IValue::destroy(void)" (_imp?destroy@IValue@c10@@AEAAXXZ)
sampling_gpu.obj : error LNK2001: unresolved external symbol "__declspec(dllimport) private: void __cdecl c10::IValue::destroy(void)" (_imp?destroy@IValue@c10@@AEAAXXZ)
build\lib.win-amd64-3.8\pointnet2_cuda.cp38-win_amd64.pyd : fatal error LNK1120: 2 unresolved externals
error: command 'N:\Microsoft Visual Stdio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64\link.exe' failed with exit status 1120

About pointnet2

After I successfully compiled pointnet2 and installed the dependencies required by the project, this problem occurred:
Traceback (most recent call last):
File "train_view_toy.py", line 1, in
from pointnet2 import pointnet2_utils as pn2_utils
File "/home/suetme/NeuralPoints-main/model/conpu_v6/pointnet2/pointnet2_utils.py", line 7, in
import pointnet2_cuda as pointnet2
ImportError: /home/suetme/miniconda3/envs/NePs/lib/python3.8/site-packages/pointnet2-0.0.0-py3.8-linux-x86_64.egg/pointnet2_cuda.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZN3c105ErrorC1ENS_14SourceLocationESs
This occurred when executing train_script101_test.py. I think it may be caused by a GCC compilation mismatch. Could you tell me the specific environment you used?

No module named 'torch_scatter'

Running train_script101_test.py fails as in the title: No module named 'torch_scatter'.
Installing it with pip install torch-scatter -f https://data.pyg.org/whl/torch-1.6.0+cu102.html fails with the error:
报错ERROR: Command errored out with exit status 1: /home/user/.conda/envs/fangyi-NePs/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-ji2hjvud/torch-scatter_fe58236068334dc78028c5f89329631e/setup.py'"'"'; file='"'"'/tmp/pip-install-ji2hjvud/torch-scatter_fe58236068334dc78028c5f89329631e/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record /tmp/pip-record-rviv8pvl/install-record.txt --single-version-externally-managed --compile --install-headers /home/fangyi/.conda/envs/fangyi-NePs/include/python3.8/torch-scatter Check the logs for full command output.
How can I solve this?

Baidu can't be accessed in Europe

Hi, I'm trying to download your dataset but it is impossible to create a Baidu account from Europe, even with all the tricks you can find online. Could you please share the dataset through Google Drive or another more accessible cloud? Thank you in advance.

Hi, when I run train_script101_test.py, I meet this problem:

ImportError: /mnt/sda/*******/anaconda3/envs/NePs/lib/python3.8/site-packages/torch/lib/../../../.././libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by /mnt/sda/zhouzhiyang/anaconda3/envs/NePs/lib/python3.8/site-packages/scipy/fft/_pocketfft/pypocketfft.cpython-38-x86_64-linux-gnu.so)

Reproduction results

Hello, thank you very much for providing the code!
When reproducing the paper, my results did not match the evaluation metric values reported in the paper. I believe the cause is a difference in the computation method. Could you provide the evaluation code, or explain how to set the loss parameters in the code? I hope you can see this and reply. Thank you!

loss picture

I ran your test script; why is the generated loss picture blank? What could the problem be?

Input format

Hello, does your .bin input file contain the 3D points together with their normal vectors?
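If the file does store position plus normal per point, reading it could look like the sketch below. This assumes a float32 record layout of [x, y, z, nx, ny, nz], which is only a guess; the actual layout should be confirmed by the authors.

```python
import numpy as np

# Hypothetical loader, assuming the .bin stores float32 records of
# [x, y, z, nx, ny, nz] per point (position + normal). Not confirmed
# against the repo's actual format.
def load_points_with_normals(path):
    raw = np.fromfile(path, dtype=np.float32).reshape(-1, 6)
    xyz, normals = raw[:, :3], raw[:, 3:]
    return xyz, normals

# Round-trip example with synthetic data:
pts = np.random.rand(100, 6).astype(np.float32)
pts.tofile("/tmp/example.bin")
xyz, normals = load_points_with_normals("/tmp/example.bin")
print(xyz.shape, normals.shape)  # (100, 3) (100, 3)
```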

Segmentation fault (core dumped)

Thank you for sharing the code. I encountered a ‘Segmentation fault (core dumped)’ problem when reproducing the code. How can I solve it?

Some questions about loss function, dataset, pre-trained model

Hi Feng,
Your method has been a great success; I have doubts about some of the key steps.

  1. About the loss function: is the public code the final version? I observed that the loss function in the code differs from the one described in the paper, especially the integration quality loss.
  2. About the dataset: among the 13 test models there is a duplicate model; is it also used to compute the average evaluation metrics? The scale of the final generated point cloud differs from the size of the ground-truth mesh in PUGeo-Net, so how is the P2F distance computed? Can you share the code of the dataset-creation part?
  3. About the pretrained model: I used the pretrained model for testing; how do I evaluate the generated points against the 40000- and 160000-point ground truth, and how do I get 10000, 40000, and 160000 points for training and testing? Can you share the code of the evaluation strategy? How do I use the pretrained model to get the results in the paper?

I am very much looking forward to your answer; I may have missed some key information.

how to make the train and test datasets? and problem about 'index' file?

Hi Feng,
Thanks for your work. I wonder how you made your datasets, because something goes wrong when I run my own data.

I notice that your inputs are just two bin files; how can I turn the bins into what the paper calls '90 training and 13 testing models'?
I tried to use the 'index.txt' file to separate the points in the bin, but it does not work well. Also, it seems the two 'index.txt' files are not used in the code; why is that?

Is there some operation on the datasets that I missed?

Questions about quantitative results for upsampling

Hi, thanks for open-sourcing your work. I have a question about quantitative results in table 2.

As reported in section 4.3, I assume you re-trained each network to obtain those results. However, the metrics are significantly different from the ones originally reported in their respective papers. In particular, CD is 1-2 orders of magnitude smaller (you report a scale of 10^-5), while HD and P2F are even worse than the original ones.

It is also true that HD and P2F computation is known to be wrong in most previous works, so it might be possible that you finally got the correct numbers for all those networks. Would you mind sharing the evaluation code you used to compute those metrics?
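For concreteness, a minimal symmetric Chamfer Distance sketch is below. It illustrates one likely source of the scale differences: papers vary in whether they use squared or unsquared nearest-neighbor distances, and whether they average or sum. This is an assumption-laden sketch, not the authors' evaluation code.

```python
import numpy as np

# Minimal symmetric Chamfer Distance between point sets a (N, 3) and b (M, 3).
# The `squared` flag switches between two conventions seen in the literature;
# the choice alone can change reported values by orders of magnitude.
def chamfer_distance(a, b, squared=True):
    # pairwise squared distances, shape (N, M)
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    d_ab, d_ba = d2.min(1), d2.min(0)   # nearest neighbor each way
    if not squared:
        d_ab, d_ba = np.sqrt(d_ab), np.sqrt(d_ba)
    return d_ab.mean() + d_ba.mean()

a = np.zeros((4, 3))
b = np.ones((4, 3))
print(chamfer_distance(a, b))                 # 6.0 (squared: 3.0 each way)
print(chamfer_distance(a, b, squared=False))  # ~3.464 (sqrt(3) each way)
```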

Thank you in advance.

Evaluation code

Hi, could you please share your evaluation script? Thanks.
