raphaelsulzer / dgnn
[SGP 2021] Scalable Surface Reconstruction with Delaunay-Graph Neural Networks
License: MIT License
I'm trying to reconstruct a mesh from a custom dataset using the following command:
$ python run.py -i -c configs/custom.yaml
When the `has_label` attribute is 0 inside the `custom.yaml` file, the following exception is raised:
READ CONFIG FROM configs/custom.yaml
SAVE CONFIG TO /home/p/src/dgnn/data/models/kf96/config.yaml
######## START INFERENCE OF 1 FILES ########
LOAD MODEL /home/p/src/dgnn/data/models/kf96/model_best.ptm
0%| | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/opt/anaconda3/envs/dgnn/lib/python3.9/site-packages/munch/__init__.py", line 103, in __getattr__
return object.__getattribute__(self, k)
AttributeError: 'Munch' object has no attribute 'results'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/anaconda3/envs/dgnn/lib/python3.9/site-packages/munch/__init__.py", line 106, in __getattr__
return self[k]
KeyError: 'results'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/p/src/dgnn/run.py", line 321, in <module>
inference(clf)
File "/home/p/src/dgnn/run.py", line 173, in inference
prediction = trainer.inference(data, subgraph_sampler, clf)
File "/home/p/src/dgnn/learning/runModel.py", line 414, in inference
clf.results.OA_test = 0.0
File "/opt/anaconda3/envs/dgnn/lib/python3.9/site-packages/munch/__init__.py", line 108, in __getattr__
raise AttributeError(k)
AttributeError: results
It seems `results` is never assigned; replacing `results` with `results_df` triggers another exception. `results` is only used when the `has_label` attribute is 0:
```python
if(clf.inference.has_label):
    clf.inference.metrics = Metrics()
    data_inference.batch_x = data_inference.x
    data_inference.batch_gt = data_inference.y
    self.calcLossAndOA(logits_cell, logits_edge, data_inference, clf, clf.inference.metrics)
else:
    clf.results.OA_test = 0.0
```
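The crash can be reproduced outside the project: an attribute-style namespace such as Munch raises `AttributeError` when a nested member is read before it is ever assigned. A minimal stand-alone sketch, using `SimpleNamespace` as a stand-in for the Munch-based `clf` object (the pre-initialization workaround is my assumption, not a confirmed fix):

```python
from types import SimpleNamespace

# Stand-in for the Munch-based `clf` config object; same attribute semantics.
clf = SimpleNamespace()

try:
    clf.results.OA_test = 0.0  # reading the unset `results` member fails
except AttributeError as err:
    print(err)  # message names the missing 'results' attribute

# Hypothetical workaround: initialize the nested namespace before inference.
clf.results = SimpleNamespace()
clf.results.OA_test = 0.0
```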
Note: in my environment, I successfully reconstructed the reconbench dataset. Also, in the `reconbench.yaml` file, `has_label` is set to 1.
I'm attempting to process one of the provided point clouds into a mesh via dgnn as a sanity check before trying my own numpy output. The features seem fine (see log below); however, `run.py` gives an error (also shown below). My steps:
1. Set the data path in `configs/custom.yaml`, e.g.: `data: /home/kevin/dgnn/data/sample`
2. Run the provided `feat` build: `./utils/feat -w /home/kevin/dgnn/data/sample/ -i anchor_0 -s npz`
3. `python run.py -i -c configs/custom.yaml`
Traceback (most recent call last):
File "/home/kevin/anaconda3/envs/dgnn/lib/python3.9/site-packages/munch/__init__.py", line 103, in __getattr__
return object.__getattribute__(self, k)
AttributeError: 'Munch' object has no attribute 'metrics'
The feature extraction log follows:
-----FEATURE EXTRACTION-----
Working dir set to:
-/home/kevin/dgnn/data/sample/
Read NPZ points...
-from anchor_0
-3110 points read
-with sensor
-with normal
Export points...
-to anchor_0.ply
-3110 points
Delaunay triangulation...
-3110 input points
-3110 output points
-39342 finite facets
-19520(+604) cells
-done in 0s
d parameter of Labatu set to -1
Consider turning on scaling with --sn, if your learning data is not yet scaled to a unit cube!
Start learning ray tracing...
-Trace 1 ray(s) to every point
-Ray tracing done in 0s
Export graph...
-to gt/anchor_0_X.npz
-Exported 20124 nodes and 80496 edges
-in 1s
Export 3DT...
-to gt/anchor_0_3dt.npz
-3110 vertices
-39342 facets
-19520 tetrahedra
Export interface...
-to anchor_0_initial.ply
-Before pruning (points, facets): (3110, 39342)
-After pruning (points, facets): (304, 604)
-done in 0s
-----FEATURE EXTRACTION FINISHED in 1s -----
Hello, thank you for your project.
During compilation of the Cython extension I got an error:
`python utils/setup_libmesh_convonet.py build_ext --inplace`
Here is the output:
Traceback (most recent call last):
File "/media/maxim/big-500/dgnn/utils/setup_libmesh_convonet.py", line 88, in <module>
ext_modules=cythonize(ext_modules),
File "/media/maxim/big-500/envs/dgnn/lib/python3.9/site-packages/Cython/Build/Dependencies.py", line 965, in cythonize
module_list, module_metadata = create_extension_list(
File "/media/maxim/big-500/envs/dgnn/lib/python3.9/site-packages/Cython/Build/Dependencies.py", line 815, in create_extension_list
for file in nonempty(sorted(extended_iglob(filepattern)), "'%s' doesn't match any files" % filepattern):
File "/media/maxim/big-500/envs/dgnn/lib/python3.9/site-packages/Cython/Build/Dependencies.py", line 114, in nonempty
raise ValueError(error_msg)
ValueError: 'libmesh/triangle_hash.pyx' doesn't match any files
To compile it, I changed the `setup_libmesh_convonet.py` file:
```python
triangle_hash_module = Extension(
    'utils.libmesh.triangle_hash',
    sources=[
        'utils/libmesh/triangle_hash.pyx'
    ],
    libraries=['m'],  # Unix-like specific
    include_dirs=[numpy_include_dir]
)
```
I don't know whether this change affects the rest of the project.
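An alternative to hard-coding the path relative to the working directory is to resolve the `.pyx` source relative to the setup script itself. A sketch (my own suggestion, not the project's code; `pyx_source` is a hypothetical helper):

```python
import os

def pyx_source(setup_file):
    """Build the triangle_hash.pyx path relative to the setup script,
    so the build works whether it is invoked from the repo root or
    from inside utils/."""
    script_dir = os.path.dirname(os.path.abspath(setup_file))
    return os.path.join(script_dir, "libmesh", "triangle_hash.pyx")

print(pyx_source("/repo/utils/setup_libmesh_convonet.py"))
```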
In your code I see a few flags for mesh coloring via Open3D; however, for some of the ETH data processed in your paper, you briefly note that the textured results come from Let there be color! (WAECHTER M., MOEHRLE N., GOESELE M.). Which approach do you favor for your mesh output?
In my tests I'm using your provided `ply2npz.py` to compile input, which ignores per-vertex color in the source files. Is there another supported route that preserves color data to produce a colored mesh, e.g. a naïve per-face color interpolated from the vertex RGB?
I see the provided `/utils/scan` binary supports colmap sources, as shown below. This suggests the binary may take as input the colmap sparse-reconstruction outputs `cameras.bin`, `images.bin`, and `points3D.bin`.
Is this correct, and could you suggest a sample invocation?
On a connected point: if we begin with a point cloud, can you confirm that we need to provide a starting mesh to `utils/scan` via `-i mesh.off`? That is, `/utils/scan` reports that it requires (1) an input mesh (as .off), which is a bit confusing, since the utility outputs (2) a Delaunay triangulation via virtual scanning as a prerequisite to (3) the final dgnn mesh.
-s [ --source ] arg (=ply) Data source and options:
-ply
-npz
-colmap
-omvs
-scan,#points,#cameras,std_noise,%outliers
-tt,#scans
-eth
Hi,
I'm trying to evaluate some custom point clouds and had a couple of questions.
Thanks in advance, and great research!
Hello,
I have been following the installation instructions step by step, but when I reached the 3rd step,
`python utils/setup_libmesh_convonet.py build_ext --inplace`
it gives me this error: `ValueError: 'libmesh\triangle_hash.pyx' doesn't match any files`
even though the file is there just fine. Is there something I should have installed before this?
Thank you.
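The backslash in the reported pattern (`libmesh\triangle_hash.pyx`) suggests a Windows path-separator mismatch in the glob pattern handed to `cythonize`. One guess at a workaround (unverified) is normalizing the pattern to forward slashes, which also work as separators on Windows:

```python
from pathlib import PureWindowsPath

# Normalize a Windows-style pattern to forward slashes before passing it
# to cythonize; forward slashes are valid separators on Windows as well.
pattern = PureWindowsPath(r"libmesh\triangle_hash.pyx").as_posix()
print(pattern)  # libmesh/triangle_hash.pyx
```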