autonomousvision / occupancy_networks

This repository contains the code for the paper "Occupancy Networks - Learning 3D Reconstruction in Function Space"

Home Page: https://avg.is.tuebingen.mpg.de/publications/occupancy-networks

License: MIT License

Languages: Python 57.79%, C++ 15.73%, CUDA 13.39%, C 9.22%, Mako 3.27%, Shell 0.60%

occupancy_networks's Introduction

Autonomous Vision Blog

This is the blog of the Autonomous Vision Group at MPI-IS Tübingen and University of Tübingen. You can visit our blog at https://autonomousvision.github.io. Also check out our website to learn more about our research.

Overview

Creating a blog post follows the usual git workflow:

  1. clone repository:

    git clone https://github.com/autonomousvision/autonomousvision.github.io.git
    
  2. create new branch for your post:

    git branch my-post
    git checkout my-post
    
  3. work on your branch; push the my-post branch if you want to collaborate

  4. rebase master on your branch and squash commits (note that all your commits to master will be visible in the git history):

    git checkout master
    git rebase -i my-post
    
  5. push master

    git push origin master
    
  6. delete your branch

    locally:

    git branch -d my-post
    

    and remotely if you pushed your branch in step 3:

    git push origin --delete my-post
    

Instructions for Authors

To write a new blog entry, first register yourself as an author in authors.yml. Here, you can also add your email address and links to your social media accounts etc.

You can then create a new blog post by adding a markdown or html file in the _posts folder. Please use the format YYYY-MM-DD-YOUR_TITLE.{md,html} for naming the file. You can then add a YAML header where you specify the author, the category of the post, tags, etc. For more information, take a look at existing posts and the Minimal Mistakes documentation.

If you want to include images or other assets, create a subfolder in the assets/posts folder with the same name as the filename of your blog post (without extension). You can then reference your assets in your post using {{ site.url }}/assets/posts/YYYY-MM-DD-YOUR_TITLE/ followed by the filename of the corresponding asset. Make sure that you don't forget the {{ site.url }}! While the post will be rendered correctly without it, the images in the newsfeed will break if you don't include it.

Please keep in mind that all your commits to master will appear in the git history. To keep this history clean, it might make sense to edit your post in a separate (private) branch and then merge this branch into master.

Offline editing

When you do offline editing, you probably want to build the website offline for a preview. To this end, you first have to install Ruby and Jekyll. Then, you have to install the dependencies (called Gems) for the website:

bundle

Now, you are ready to build and serve the website using

 bundle exec jekyll serve

Sometimes Jekyll hiccups over character encoding. In this case, try

 LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 bundle exec jekyll serve

If you encounter GemNotFoundException, try to remove

BUNDLED WITH
    2.0.1

from Gemfile.lock.

The serve command builds the website and serves it at http://localhost:4000. When you save changes, the website is automatically rebuilt in the background. Note, however, that changes to _config.yaml are not tracked, which means that you have to restart the Jekyll server after configuration changes.


occupancy_networks's People

Contributors

binbin-xu, lmescheder, m-niemeyer


occupancy_networks's Issues

What is voxel file used for during ONet training from pointcloud?

Hello,

Thank you for this great work and clean codebase!

I am trying to relate your training script to the CVPR paper, and I want to better understand the use of the voxel data (e.g. "model.binvox") when training from point cloud data. Is it used during training, or only at inference?

By extension, is the performance of ONet highly dependent on this voxel data? For example, does performance improve if the grid resolution is increased from 32x32x32 to 64x64x64?

undefined symbol: _ZN6caffe26detail37_typeMetaDataInstance_preallocated_32E

I constantly get the error below when I run python generate.py configs/demo.yaml on Ubuntu 18.04 with Anaconda 3 and CUDA 9.0.

Can anyone please help? Thanks!

(mesh_funcspace) user@user-FB-22866-One-Computer-Core-i5-46:~/occupancy_networks-master$ python generate.py configs/demo.yaml
Traceback (most recent call last):
  File "generate.py", line 10, in <module>
    from im2mesh import config
  File "/home/user/occupancy_networks-master/im2mesh/config.py", line 4, in <module>
    from im2mesh import onet, r2n2, psgn, pix2mesh, dmc
  File "/home/user/occupancy_networks-master/im2mesh/dmc/__init__.py", line 1, in <module>
    from im2mesh.dmc import (
  File "/home/user/occupancy_networks-master/im2mesh/dmc/config.py", line 2, in <module>
    from im2mesh.dmc import models, training, generation
  File "/home/user/occupancy_networks-master/im2mesh/dmc/models/__init__.py", line 2, in <module>
    from im2mesh.dmc.models import encoder, decoder
  File "/home/user/occupancy_networks-master/im2mesh/dmc/models/encoder.py", line 4, in <module>
    from im2mesh.dmc.ops.grid_pooling import GridPooling
  File "/home/user/occupancy_networks-master/im2mesh/dmc/ops/grid_pooling.py", line 6, in <module>
    from ._cuda_ext import grid_pooling_forward, grid_pooling_backward
ImportError: /home/user/occupancy_networks-master/im2mesh/dmc/ops/_cuda_ext.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN6caffe26detail37_typeMetaDataInstance_preallocated_32E
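
This class of undefined-symbol error usually means the compiled _cuda_ext extension and the PyTorch installation it is imported under come from different versions. A generic first check (a sketch, not from the repo):

    import torch

    # The extension must be built against the same PyTorch/CUDA combination
    # that is active when it is imported; a mismatch surfaces as undefined
    # C++ symbols like the caffe2 one above.
    print(torch.__version__)
    print(torch.version.cuda)   # None for CPU-only builds

If the versions differ from those used at build time, deleting the build artifacts and re-running python setup.py build_ext --inplace in the matching environment is the usual remedy.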

Testing on a point cloud

Hi @LMescheder, Thanks for the work.

I wanted to test whether the pre-trained network may work on the point cloud from ScanNet dataset.
So,

  • I parsed one .ply file of a desk from a ScanNet scene and converted it to .npy
  • I arranged the dataset in the layout that occupancy_networks expects as input

But I am not able to get a mesh; I am stuck on the following error:
(screenshot omitted: Screenshot from 2019-12-29 20-51-22)

What is the way to run the pretrained network on a single point cloud I have?

sample_mesh script not working with voxel option

When running the sample_mesh.py script with the voxelize flags, I get the following error:

AttributeError: module 'im2mesh.utils.voxels' has no attribute 'voxelize'

The offending call is in line 158 of sample_mesh.py:

voxels_occ = voxels.voxelize(mesh, res)

Looking at the module, I indeed see no voxelize function, only voxelize_fill, voxelize_mesh, etc. Am I missing something? What is the intended way of using this script for voxel generation?
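
Pending an authoritative answer, a hedged workaround is to voxelize with trimesh directly; this swaps in trimesh's voxelizer rather than the repo's voxelize_fill/voxelize_mesh helpers, and whether the fill semantics match the released data is an assumption to verify:

    import trimesh

    mesh = trimesh.load('model.off')   # assumes the file contains a single mesh
    res = 32
    # Pick the voxel pitch so the largest bounding-box extent spans `res` voxels.
    pitch = mesh.extents.max() / res
    vox = mesh.voxelized(pitch)        # surface voxelization
    vox = vox.fill()                   # fill the enclosed interior
    occ = vox.matrix                   # boolean occupancy grid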

Downloading the data

Hey there,

could you please tell me roughly how much time it takes to download your preprocessed data (the 70+ GB)?

I tried downloading it on one PC and got an eta of 23 days.

EDIT: turns out my connection was poor, sorry for that.

Best,
Matt

OSError: [Errno 99] Cannot assign requested address

When I execute the following command, an error occurs:
python generate.py configs/demo.yaml

Error:

https://s3.eu-central-1.amazonaws.com/avg-projects/occupancy_networks/models/onet_img2mesh_3-f786b04a.pt
=> Loading checkpoint from url...
Downloading: "https://s3.eu-central-1.amazonaws.com/avg-projects/occupancy_networks/models/onet_img2mesh_3-f786b04a.pt" to /home/fxru/.torch/models/onet_img2mesh_3-f786b04a.pt
Traceback (most recent call last):
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/urllib/request.py", line 1318, in do_open
    encode_chunked=req.has_header('Transfer-encoding'))
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/http/client.py", line 1239, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/http/client.py", line 1285, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/http/client.py", line 1234, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/http/client.py", line 1026, in _send_output
    self.send(msg)
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/http/client.py", line 964, in send
    self.connect()
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/http/client.py", line 1392, in connect
    super().connect()
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/http/client.py", line 936, in connect
    (self.host,self.port), self.timeout, self.source_address)
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/socket.py", line 724, in create_connection
    raise err
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/socket.py", line 713, in create_connection
    sock.connect(sa)
OSError: [Errno 99] Cannot assign requested address

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "generate.py", line 46, in <module>
    checkpoint_io.load(cfg['test']['model_file'])
  File "/home/fxru/tensorflow_learn/occupancy_networks-master/im2mesh/checkpoints.py", line 47, in load
    return self.load_url(filename)
  File "/home/fxru/tensorflow_learn/occupancy_networks-master/im2mesh/checkpoints.py", line 78, in load_url
    state_dict = model_zoo.load_url(url, progress=True)
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/site-packages/torch/utils/model_zoo.py", line 66, in load_url
    _download_url_to_file(url, cached_file, hash_prefix, progress=progress)
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/site-packages/torch/utils/model_zoo.py", line 76, in _download_url_to_file
    u = urlopen(url)
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/urllib/request.py", line 223, in urlopen
    return opener.open(url, data, timeout)
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/urllib/request.py", line 526, in open
    response = self._open(req, data)
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/urllib/request.py", line 544, in _open
    '_open', req)
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/urllib/request.py", line 504, in _call_chain
    result = func(*args)
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/urllib/request.py", line 1361, in https_open
    context=self._context, check_hostname=self._check_hostname)
  File "/home/fxru/anaconda3/envs/mesh_funcspace/lib/python3.6/urllib/request.py", line 1320, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [Errno 99] Cannot assign requested address>

Which dataset type to use with custom dataset

Hello and thank you for this intriguing work!

I want to use my own custom dataset with this and just want to better understand the use cases for the different dataset types in the config (["data"]["dataset"]).

I have a file structure like this:

Dataset -> { Data_Item_1...Data_Item_N } -> { [Images x n], model.binvox, points.npz }

from which I want to recover a mesh.

Should I use the ShapeNet dataset type or an Images dataset?

Many thanks for your insight!

Artifacts in Data Pipeline?

(attachment: buildpipeline.zip)

I'm trying to perform some post-processing on the ShapeNet meshes for follow-up research, so I wanted to get the normal pipeline up and running. I am seeing some artifacts being generated in the watertight stage of the process. I attached an example of the 1_scaled and 2_watertight stages for a chair model. Is there something wrong with the setup? Have you experienced this as well?


rendering & voxelizations

Where can I download the renderings & voxelizations from Choy et al. 2016, as mentioned in building the dataset?

Possibly missing package 'im2mesh.data'

Dear @LMescheder,

First, congrats on your CVPR paper and thanks for open-sourcing the code.

When I try to run the demo via python generate.py configs/demo.yaml, I get the error below:

Traceback (most recent call last):
  File "generate.py", line 10, in <module>
    from im2mesh import config
  File "/home/Occupancy-Networks/im2mesh/config.py", line 3, in <module>
    from im2mesh import data
ImportError: cannot import name 'data'

Is there a missing package under im2mesh?

Best,

eval

hello,
when I run eval_meshes.py, it keeps printing the same warning:
'warning: contains1 != contains2 for same points.'
I wonder if this warning will affect the results.

Unsupported gpu architecture 'compute_75' during installation

python3 setup.py build_ext --inplace
running build_ext
building 'im2mesh.dmc.ops.cuda_ext' extension
gcc -pthread -B /home/sunglyoung_119/miniconda3/envs/mesh_funcspace/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/sunglyoung_119/.local/lib/python3.6/site-packages/torch/include -I/home/sunglyoung_119/.local/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/sunglyoung_119/.local/lib/python3.6/site-packages/torch/include/TH -I/home/sunglyoung_119/.local/lib/python3.6/site-packages/torch/include/THC -I/home/sunglyoung_119/miniconda3/envs/mesh_funcspace/include/python3.6m -c im2mesh/dmc/ops/src/extension.cpp -o build/temp.linux-x86_64-3.6/im2mesh/dmc/ops/src/extension.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=cuda_ext -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
/usr/bin/nvcc -I/home/sunglyoung_119/.local/lib/python3.6/site-packages/torch/include -I/home/sunglyoung_119/.local/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/sunglyoung_119/.local/lib/python3.6/site-packages/torch/include/TH -I/home/sunglyoung_119/.local/lib/python3.6/site-packages/torch/include/THC -I/home/sunglyoung_119/miniconda3/envs/mesh_funcspace/include/python3.6m -c im2mesh/dmc/ops/src/curvature_constraint_kernel.cu -o build/temp.linux-x86_64-3.6/im2mesh/dmc/ops/src/curvature_constraint_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_cuda_ext -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -std=c++11
nvcc fatal : Unsupported gpu architecture 'compute_75'
error: command '/usr/bin/nvcc' failed with exit status 1

I am using an RTX 2080, so I suspect the CUDA 9.2 toolchain does not recognize the GPU. However, is there any way I can add 'compute_75' as a supported architecture?
If so, which file should I look into?

Noise level added to point clouds

Hello, thanks for your code and data.
As described in the paper, you "apply noise using a Gaussian distribution with zero mean and standard deviation 0.05 to the point clouds." However, I found in configs/pointcloud/onet.yaml that pointcloud_noise is set to 0.005. So I am wondering which noise level you used in the paper and in the pretrained model of configs/pointcloud/onet_pretrained.yaml.
Many thanks.
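
For reference, a minimal sketch of what the pointcloud_noise value plausibly controls during data loading; the exact code path in im2mesh/data is an assumption, and which sigma matches the paper is precisely the open question:

    import numpy as np

    def add_noise(points, sigma):
        # Zero-mean Gaussian noise, drawn independently per coordinate.
        return points + sigma * np.random.randn(*points.shape)

    pc = np.random.rand(300, 3) - 0.5   # stand-in point cloud in [-0.5, 0.5]^3
    noisy = add_noise(pc, 0.005)        # the value found in configs/pointcloud/onet.yaml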

Compile error

When I compile the extension modules, I get the following error:

running build_ext
building 'im2mesh.utils.libkdtree.pykdtree.kdtree' extension
gcc -pthread -B /home/fxru/anaconda3/envs/mesh_funcspace/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/fxru/anaconda3/envs/mesh_funcspace/include/python3.6m -c im2mesh/utils/libkdtree/pykdtree/kdtree.c -o build/temp.linux-x86_64-3.6/im2mesh/utils/libkdtree/pykdtree/kdtree.o -std=c99 -O3 -fopenmp -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=kdtree -D_GLIBCXX_USE_CXX11_ABI=0
im2mesh/utils/libkdtree/pykdtree/kdtree.c:525:31: fatal error: numpy/arrayobject.h: No such file or directory
compilation terminated.
error: command 'gcc' failed with exit status 1

How can I change input img_size from 224?

Hi.

When I changed the input image size in the config from the default 224 to 448, the error below occurred.

RuntimeError: size mismatch, m1: [32 x 131072], m2: [2048 x 256] at /opt/conda/conda-bld/pytorch_1544174967633/work/aten/src/THC/generic/THCTensorMathBlas.cu:266

How should I change the config from the default?
And can I expect better 3D reconstruction results from a larger input image size?

Thanks.
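
The mismatch arises because a fully-connected layer was sized for 224x224 feature maps (m2: [2048 x 256] suggests a ResNet-style encoder projecting 2048 features to a 256-dim code). A hedged sketch of one way to make such an encoder resolution-agnostic; this is an illustration, not the repo's actual encoder:

    import torch
    import torch.nn as nn
    from torchvision import models

    # Pool the conv features to a fixed 1x1 spatial size before the final
    # linear layer, so any input resolution yields the same feature length.
    resnet = models.resnet18(pretrained=False)
    resnet.avgpool = nn.AdaptiveAvgPool2d((1, 1))
    resnet.fc = nn.Linear(512, 256)     # 512 conv features -> 256-dim code

    x = torch.randn(2, 3, 448, 448)
    print(resnet(x).shape)              # torch.Size([2, 256]) at any resolution

Even with a size-agnostic encoder, the pretrained weights were fit at 224, so better results at 448 would likely require retraining rather than follow automatically.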

Build Script uses internal alias 'lsfilter'?

Hi,
I'm trying to build the dataset on Ubuntu, and the build.sh script is failing. It says that it cannot find the command lsfilter. Could it be that this is an alias for ls with certain flags that you are using locally?

Installation issue, pyembree?

I tried to install via conda with the yaml file but it cannot find the pyembree package. I tried installing it separately but I get the same issue. Any ideas?

Thanks

dataset

hi,
I preprocessed the dataset according to your instructions, but found that some of the processed samples have no point cloud.
It turns out the original ShapeNet dataset itself is missing these point clouds. How do you deal with such cases? Did you remove them directly from the train.lst file?
Thanks in advance!

Scale of the model

Hi there,

Thanks for releasing the code, it is amazing work! I have tried shape completion on point clouds sampled from the original ShapeNet models. It seems my data is not at the same scale as yours. Did you rescale the ShapeNet models? If so, can you provide the scale value? And for point cloud completion, how much do the scale (I guess a lot) and the viewpoint matter?

Thanks,
Ryan
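
Not an authoritative answer, but the preprocessing appears to normalize each mesh into a unit cube with a margin before sampling (see sample_mesh.py). A sketch of that convention, with the padding value an assumption to check against the script:

    import trimesh

    def normalize_to_unit_cube(mesh, padding=0.1):
        # Center the mesh and scale its longest bounding-box side so the
        # result fits in roughly [-0.5, 0.5]^3 with a margin of `padding`.
        lo, hi = mesh.bounds                  # (2, 3): bbox corners
        loc = (lo + hi) / 2.0
        scale = (hi - lo).max() / (1.0 - padding)
        mesh.apply_translation(-loc)
        mesh.apply_scale(1.0 / scale)
        return mesh, loc, scale

For completion on your own scans, matching this loc/scale convention is likely to matter more than the absolute scale of the raw models.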

Setting up the dataset for training

Can anyone please elaborate on what should be downloaded in the second step?

  • download the renderings and voxelizations from Choy et al. 2016 and unpack them in data/external/Choy2016

It would be useful if someone could provide a download link for this.

Error on running on subset of Shapenet Dataset

Hi,
I picked two classes of the ShapeNet pre-processed data and placed them in a custom folder, maintaining the directory structure.
I want to run the point cloud -> mesh generation on this network.
But when I run python generate.py configs/pointcloud/onet_pretrained.yaml I get the following error:
(screenshot omitted: Screenshot from 2020-01-18 01-56-15)

Any help is appreciated!

Windows support?

Hi,
I'm trying to get this project to run on windows.
I managed to compile the extensions (without dmc). When I run the sample command

python generate.py configs/demo.yaml

I get the following error:

Traceback (most recent call last):
  File "generate.py", line 145, in <module>
    out = generator.generate_mesh(data)
  File "D:\Programming\Projects\MSc\details\occupancy_networks\im2mesh\onet\generation.py", line 81, in generate_mesh
    mesh = self.generate_from_latent(z, c, stats_dict=stats_dict, **kwargs)
  File "D:\Programming\Projects\MSc\details\occupancy_networks\im2mesh\onet\generation.py", line 114, in generate_from_latent
    points = mesh_extractor.query()
  File "im2mesh\utils\libmise\mise.pyx", line 122, in im2mesh.utils.libmise.mise.MISE.query
    cdef long[:, :] points_view = points_np
ValueError: Buffer dtype mismatch, expected 'long' but got 'long long'

Any idea how to tackle this? (Windows 10, 64 bit)
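
A hedged diagnosis: on 64-bit Windows a C long is 32 bits, while the points array is allocated as int64 ('long long'), so the typed memoryview in mise.pyx rejects the buffer. The platform difference is easy to see:

    import numpy as np

    # NumPy dtype character codes: 'l' is C long, 'q' is C long long.
    print(np.dtype('l'))   # int32 on 64-bit Windows, int64 on 64-bit Linux
    print(np.dtype('q'))   # int64 on both

The commonly suggested fix, an assumption rather than a verified patch, is to edit im2mesh/utils/libmise/mise.pyx so the memoryview dtype matches the array (e.g. np.int64_t instead of long) and recompile the extension.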

metadata.yaml missing in dataset

Currently, the provided dataset is missing the metadata.yaml file. As a result, the evaluation is not printed on a per-class basis.

No module named 'im2mesh.utils.libkdtree.pykdtree.kdtree'

Hi, I have some problems with the code's configuration. I googled it but did not find a proper solution. I also tried 'conda env create -f environment.yaml', but there is something wrong with my network so I can't complete the download. Do you know how to fix this?

ModuleNotFoundError: No module named 'librender.pyrender'

When I try to run

bash dataset_shapenet/build.sh

I get this error:

Processing class 03001627
Converting meshes to OFF
dataset_shapenet/build.sh: line 21: parallel: command not found
Scaling meshes
Create depths maps
Traceback (most recent call last):
  File "../external/mesh-fusion/2_fusion.py", line 11, in <module>
    import librender
  File "/home/zash/Desktop/occupancy_networks-master/external/mesh-fusion/librender/__init__.py", line 6, in <module>
    from librender.pyrender import *
ModuleNotFoundError: No module named 'librender.pyrender'
Produce watertight meshes
Traceback (most recent call last):
  File "../external/mesh-fusion/2_fusion.py", line 11, in <module>
    import librender
  File "/home/zash/Desktop/occupancy_networks-master/external/mesh-fusion/librender/__init__.py", line 6, in <module>
    from librender.pyrender import *
ModuleNotFoundError: No module named 'librender.pyrender'

Problem

When I run the demo, there is a problem. What's wrong?
ValueError: numpy.ufunc has the wrong size, try recompiling. Expected 192, got 216

how can i train my own dataset?

hi
I'm working on my university project, for which I have to train on my own dataset and get 3D mesh files. My question is: how can I do the renderings and voxelizations of my own dataset, as mentioned in point 2 of building the dataset? I will really appreciate it if you could reply.

Training Problem

I downloaded the preprocessed data and unzipped it to the data/ShapeNet folder.
I met some problems, shown in the following messages, could you help me?
It seems that the data did not load.

File "/media/mickyv2/micky/occupancy_networks-master/train.py", line 70, in <module>
  data_vis = next(iter(vis_loader))
File "/home/mickyv2/anaconda3/envs/mesh_funcspace/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 615, in __next__
  batch = self.collate_fn([self.dataset[i] for i in indices])
File "/media/mickyv2/micky/occupancy_networks-master/im2mesh/data/core.py", line 169, in collate_remove_none
  return data.dataloader.default_collate(batch)
File "/home/mickyv2/anaconda3/envs/mesh_funcspace/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 200, in default_collate
  elem_type = type(batch[0])
IndexError: list index out of range
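
Judging by the traceback, collate_remove_none presumably drops samples that failed to load, so an all-None batch leaves default_collate with an empty list and batch[0] raises IndexError; in other words, the dataset path is probably wrong or empty. A sketch of what such a collate function typically looks like (an assumption, not the repo's exact code):

    from torch.utils.data.dataloader import default_collate

    def collate_remove_none(batch):
        # Drop samples whose loading failed (the dataset returned None).
        batch = [item for item in batch if item is not None]
        # If every sample failed, batch is empty and default_collate's
        # batch[0] access raises IndexError: list index out of range.
        return default_collate(batch)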

IndexError: list index out of range

While building the dataset with the command bash dataset_shapenet/build.sh I got an AssertionError, and when I checked the directory data/ShapeNet.build it had created a few folders, but unfortunately the folders were empty. @LMescheder, can you tell me what I have done wrong here?

Obtaining the "input" results for the voxel use case

Hi,

I'm doing follow-up research on the voxel use case of this paper, and am trying to reproduce the results of the paper before continuing.

I install the environment on ubuntu, and run the following:

python eval.py configs/voxels/onet_pretrained.yml

I obtain the following results (ran once on the chairs dataset only, and once on everything):

Chairs only:

                iou       iou_voxels  kl   loss      rec_error
    class name
    n/a         0.663234  0.659272    0.0  81.09695  81.09695
    mean        0.663234  0.659272    0.0  81.09695  81.09695

Everything:

                iou       iou_voxels  kl   loss      rec_error
    class name
    n/a         0.695912  0.68121     0.0  57.70389  57.70389
    mean        0.695912  0.68121     0.0  57.70389  57.70389

What is the difference between iou and iou_voxels?
The paper says:
Input IoU 0.631
ONet IoU 0.703
How was the 0.631 obtained, and is it possible to reproduce the ONet IoU with the supplied code?
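
For reference, volumetric IoU reduces to the same formula whether it is evaluated on sampled points (presumably the iou column) or on the 32^3 grid (presumably iou_voxels); the 'Input IoU' of 0.631 plausibly scores the raw input voxelization itself against the ground truth, though that reading is an inference. A minimal sketch:

    import numpy as np

    def occupancy_iou(occ1, occ2):
        # Intersection over union of two boolean occupancy fields.
        occ1 = np.asarray(occ1, dtype=bool)
        occ2 = np.asarray(occ2, dtype=bool)
        return np.logical_and(occ1, occ2).sum() / np.logical_or(occ1, occ2).sum()

    a = np.random.rand(32, 32, 32) > 0.5
    b = np.random.rand(32, 32, 32) > 0.5
    print(occupancy_iou(a, b))   # ~0.33 for two independent random grids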

Visualizing mesh and occupancy points

Hi, thank you for sharing your code.
Is there code in this repository to visualize a mesh along with its predicted occupancy points? If not, could you point me to how to get a visualization like the one in your presentation video?
(screenshot Selection_259 omitted)

time

hi,
when I run the demo, I get these timings:

    Timings [s]:
                mesh     time (encode inputs)  time (eval points)  time (marching cubes)  time (refine)  time (simplify)
    class name
    n/a         8.02325  0.015601              1.29366             1.439129               4.992291       0.249395
    mean        8.02325  0.015601              1.29366             1.439129               4.992291       0.249395

The paper says 'The inference time of our algorithm with simplification and refinement steps is about 3s / mesh'.
My GPU is a TITAN Xp, and I don't know why it takes so much longer.
Thanks in advance!

Custom dataset training

Hi @LMescheder, I was trying to train on my own point cloud dataset (in .npy format). What is the format of the points.npz file which we need to create for our own dataset? Do we have to concatenate all the point cloud data into it? By the way, I am using onet.yaml as the config file.
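
A hedged sketch of the expected layout, inferred from sample_mesh.py: one points.npz per object (point clouds are not concatenated across objects), with a 'points' array of query locations and bit-packed 'occupancies'; the dtypes and any extra keys such as loc/scale are assumptions to verify against the script:

    import numpy as np

    n = 100000
    points = (np.random.rand(n, 3) - 0.5).astype(np.float32)   # query points
    occupancies = np.packbits(np.random.rand(n) > 0.5)         # 1 bit per point
    np.savez('points.npz', points=points, occupancies=occupancies)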

Cannot run demo script

The model_file in configs/img/onet_pretrained.yaml cannot be downloaded while executing the demo script (python generate.py configs/demo.yaml).
Has anyone made it work?

Results not consistent with paper

Hi there,

I just tried testing your model on the single-view image reconstruction task, i.e. configs/img/onet_pretrained.yaml.

However, I noticed two results which are not consistent with the paper, actually worse. Did I use the wrong reconstruction method, or was the model retrained after publication? If so, any idea how to improve the quality?

Input 1: (input image 00_in and the output/paper comparison screenshots omitted)

Input 2: (input image 07_in and the output/paper comparison screenshots omitted)

Thanks,
Ryan

Error compiling im2mesh

Hi,
I am getting the following error while compiling im2mesh on Ubuntu 16.04:

(mesh_funcspace) giancos@PC-KW-60110:~/git/occupancy_networks$ python setup.py build_ext --inplace
running build_ext
building 'im2mesh.utils.libkdtree.pykdtree.kdtree' extension
gcc -pthread -B /home/giancos/anaconda3/envs/mesh_funcspace/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/giancos/anaconda3/envs/mesh_funcspace/include/python3.6m -c im2mesh/utils/libkdtree/pykdtree/kdtree.c -o build/temp.linux-x86_64-3.6/im2mesh/utils/libkdtree/pykdtree/kdtree.o -std=c99 -O3 -fopenmp -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=kdtree -D_GLIBCXX_USE_CXX11_ABI=0
im2mesh/utils/libkdtree/pykdtree/kdtree.c:525:31: fatal error: numpy/arrayobject.h: No such file or directory
compilation terminated.
error: command 'gcc' failed with exit status 1

Any advice?

Own dataset training(unconditional models)

I want to train on my own dataset.
After obtaining watertight meshes, I execute sample_mesh.py, which gives me point clouds, voxels, and occupancies.

After that, I execute train.py, but I get the following errors:

Error occured when loading field points of model 〇〇
or
Error occured when loading field voxels of model 〇〇

I think the point clouds and voxels were not created correctly.
Please suggest a solution.

(ShapeNet models can be trained.)

Are the TSDF fusion steps necessary for training a model?

Hi, thank you very much for your brilliant work; I have starred the repository.

I am now trying to train a model on a new 3D dataset for new research based on your code, and I have the original 2D images and 3D .off files. While running your code in /external/mesh-fusion to preprocess the 3D data, I ran into some trouble.

I finished the 'Installation' section; however, in the 'Usage' section I could only run the first command, which scales the models. The second and third commands fail to run. I have spent several days debugging but the bugs still exist, and I am really not willing to spend more days on it.

So I just wonder: could I skip the TSDF preprocessing steps and train the new model on my own dataset directly? I would much appreciate a reply.

The unit of Chamfer-L1 in the paper

Hi, sorry to bother you again.
I have run your code on the ShapeNet point cloud data and got the evaluation results. However, the Chamfer-L1 in my results is several orders of magnitude lower than that in Table 2. So I am wondering whether you multiplied the Chamfer-L1 by a large factor when reporting it in the paper, as is common in other papers using the Chamfer distance.
Best, Zhenxing
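
For anyone comparing numbers, a minimal sketch of the Chamfer-L1 metric (symmetric mean of unsquared nearest-neighbour distances); whether the paper scales it by a constant factor is exactly the question above:

    import numpy as np
    from scipy.spatial import cKDTree

    def chamfer_l1(a, b):
        # Mean nearest-neighbour distance in both directions, averaged.
        d_ab, _ = cKDTree(b).query(a)
        d_ba, _ = cKDTree(a).query(b)
        return 0.5 * (d_ab.mean() + d_ba.mean())

    a = np.random.rand(1000, 3)
    b = np.random.rand(1000, 3)
    print(chamfer_l1(a, b))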

watertight meshes

Hey there,

congrats on a great paper!

I saw that there are no watertight meshes in your "preprocessed data (73.4 GB)".

Would it be possible for you to upload the preprocessed, i.e. watertight, meshes as well?

compile

When I run python3 generate.py configs/demo.yaml
I get an import error: /im2mesh/dmc/ops/_cuda_ext.cpython-35m-x86_64-linux-gnu.so: undefined symbol: _ZN2atErrorCLENS_14SourceLocationERKSs

Pixel2mesh training loss plateaus

Hello,

First of all, thanks for your amazing work!
I am currently trying to train your pixel2mesh implementation, but I am not able to obtain decent results, and I was wondering if you changed something that is not tunable from its config file.

The code I am using doesn't have any modifications and I downloaded your pre-processed data. After epoch 8 the loss plateaus around 30 and oscillates around that value at least until epoch 50 (then I stopped the training because it didn't look like it was going to get better).

Here you can find the visualisation of a prediction and its GT at epoch 50.
(prediction and ground-truth images 000 and 000_gt omitted)

Thanks in advance for your time.

High frequency details

Hi, thanks for the great work! I have trained the network, generating the data with the pre-processing script, and the end result looks good but it lacks detail. Is there something I could/should do to improve the capture of details? Is there a parameter (or parameters) to change in the pre-processing step, or something of the like? Can the MISE parameters achieve more detail?
Thanks!
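
One knob worth checking, with key names assumed from the repo's generation configs rather than confirmed: MISE starts from an initial grid and doubles it on each upsampling step, so the final extraction resolution is resolution_0 * 2^upsampling_steps. Back-of-envelope:

    resolution_0 = 32
    upsampling_steps = 2
    final_resolution = resolution_0 * 2 ** upsampling_steps
    print(final_resolution)   # 128; one more step would extract at 256

Raising either value should capture finer surface detail at the cost of generation time, though detail lost in preprocessing (e.g. during watertighting) cannot be recovered at extraction.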
