
disn's People

Contributors

laughtervv, xharlie

disn's Issues

Space when calculating CD

Hey,
I would like to ask about the coordinate space you use when computing CD in the evaluation: are the point clouds normalized to a unit cube, or something else? Thanks.
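For reference, here is a minimal sketch of the protocol I am currently assuming (both point clouds rescaled to a unit cube before a symmetric Chamfer distance); the function and variable names are my own, so please correct me if your evaluation differs:

import numpy as np

def normalize_to_unit_cube(pts):
    # Center the (N, 3) point cloud on its bounding-box center and scale the longest edge to 1.
    bb_min, bb_max = pts.min(axis=0), pts.max(axis=0)
    center = (bb_min + bb_max) / 2.0
    scale = (bb_max - bb_min).max()
    return (pts - center) / scale

def chamfer_distance(a, b):
    # Symmetric Chamfer distance: mean squared nearest-neighbor distance in both directions.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise distances
    return (d.min(axis=1) ** 2).mean() + (d.min(axis=0) ** 2).mean()

# Placeholder clouds; in practice these would be sampled from the predicted and ground-truth meshes.
pred_pts = np.random.rand(1024, 3).astype(np.float32)
gt_pts = np.random.rand(1024, 3).astype(np.float32)
print(chamfer_distance(normalize_to_unit_cube(pred_pts), normalize_to_unit_cube(gt_pts)))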
Best,
Yiheng

expand_rate and centroid in normalized mesh.

Hi, is there any particular reason why expand_rate is set to 1.2 and the mesh is normalized around the centroid (the mean of all vertex positions), while the original .obj models are already centered on their bounding-box center at (0, 0, 0)?
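For clarity, this is roughly the normalization I am describing; it is my own paraphrase of the preprocessing step rather than a copy of the repo's code, and verts here is just a placeholder (N, 3) vertex array:

import numpy as np

expand_rate = 1.2

# verts: (N, 3) vertices of a loaded .obj model (placeholder data here).
verts = np.random.rand(1000, 3).astype(np.float32)

centroid = verts.mean(axis=0)                        # mean of all vertex positions, not the bbox center
centered = verts - centroid
radius = np.linalg.norm(centered, axis=1).max()
normalized = centered / (radius * expand_rate)       # extra margin so samples stay inside the unit region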

error while loading shared libraries: libmkl_intel_lp64.so

Hi,

Thank you for releasing the code. I ran into a problem with the marching cubes step at the end of the demo; the error is:

./isosurface/computeMarchingCubes: error while loading shared libraries: libmkl_intel_lp64.so: cannot open shared object file: No such file or directory

However, I've tried several things: reinstalling MKL, adding this file to the library path, and even moving it into the same folder as computeMarchingCubes, but none of them work.
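For what it's worth, this is the small check I used to see whether the library can even be found from my environment (just a diagnostic, not a fix):

import ctypes
import os

# Try to dlopen the MKL library with the same search rules (LD_LIBRARY_PATH, ldconfig cache)
# that the computeMarchingCubes binary relies on at load time.
print("LD_LIBRARY_PATH =", os.environ.get("LD_LIBRARY_PATH", "<unset>"))
try:
    ctypes.CDLL("libmkl_intel_lp64.so")
    print("libmkl_intel_lp64.so can be loaded from this environment")
except OSError as exc:
    print("dlopen failed:", exc)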

Any idea on solving this would be appreciated.

Thanks!

create_img_h5 creating h5 file

python -u preprocessing/create_img_h5.py

The script above is expected to create the .h5 files, but it instead expects an .h5 file to already exist, as the following line shows:
sdf_fl = os.path.join(sdf_dir, vals, obj, "ori_sample.h5")

I need a little guidance on what is missing, since my sdf_dir contains .sdf files, not .h5 files.

demo.py error

Greetings, thank you for the code release, much appreciated!
I attempted to run the demo with the command from README.md as is, but the following error occurs:

Traceback (most recent call last):
  File "demo/demo.py", line 398, in <module>
    create()
  File "demo/demo.py", line 175, in create
    test_one_epoch(sess, ops, batch_data)
  File "demo/demo.py", line 292, in test_one_epoch
    extra_pts = np.zeros((1, SPLIT_SIZE * NUM_SAMPLE_POINTS - TOTAL_POINTS, 3), dtype=np.float32)
ValueError: negative dimensions are not allowed

Any ideas on why this occurs? Thanks in advance.

Unable to reproduce results

Firstly, big thanks to @weiyuewang @Xharlie @laughtervv for pushing the frontier of 3D reconstruction even further. I really like your proposed SDF-based representation and the strategy of combining global and local features to recover fine details. These are truly brilliant ideas!

I have been trying to reproduce some of the outputs in your paper using an online product image, such as the one below.
[input product image]
I am able to use the exact same image, but I cannot reproduce a mesh of the same quality as the one in the paper:
[my reconstructed mesh vs. the result in the paper]

Here is the command I run. The only change I have made is the image size (changed to 224x224). I have also tried the default value of 137, but the result doesn't change much.

python -u demo/demo.py \
--cam_est \
--log_dir checkpoint/SDF_DISN \
--cam_log_dir cam_est/checkpoint/cam_DISN \
--img_feat_twostream \
--img_h 224 --img_w 224 \
--sdf_res 64

Did I miss anything?

About the resolution of marching cube

Hi! Thanks for releasing this great work!
Could I ask what marching cubes resolution is used in the evaluation section (4.1) of the paper? It would be very helpful for us in following up on this work.
Thank you very much!

Would you please explain this magic number?

In posenet.py:
pred_translation += tf.constant([-0.00193892, 0.00169222, 1.3949631], dtype=tf.float32)

I'm curious about this constant vector. Would you mind explaining what it represents?
Thank you very much, and feel free to ignore this if it's a silly question.

Demo.py: Negative Dimensions are Not Allowed

When I run the demo, it gives this error on the following line:
extra_pts = np.zeros((1, SPLIT_SIZE * NUM_SAMPLE_POINTS - TOTAL_POINTS, 3), dtype=np.float32)

SPLIT_SIZE comes out to 80, NUM_SAMPLE_POINTS is 212182, and TOTAL_POINTS is 16974593.

Where do these numbers come from, and why does this error occur?
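Plugging the numbers in shows why the dimension goes negative (my own arithmetic from the values above, not from reading the code):

SPLIT_SIZE = 80
NUM_SAMPLE_POINTS = 212182
TOTAL_POINTS = 16974593                               # equals 257**3, presumably the full SDF grid

print(SPLIT_SIZE * NUM_SAMPLE_POINTS)                 # 16974560
print(SPLIT_SIZE * NUM_SAMPLE_POINTS - TOTAL_POINTS)  # -33, hence the negative dimension
print(-(-TOTAL_POINTS // NUM_SAMPLE_POINTS))          # 81 splits (rounding up) would cover the grid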

Can't use computeMarchingCubes

sh: ./external/isosurface/computeMarchingCubes: cannot execute binary file: Exec format error

Maybe the isosurface binaries need to be recompiled for my platform.

How to calculate the sdf value?

If I understand correctly, the function that calculates the SDF value for a point should be in ./isosurface/computeDistanceField, but it is only provided as a binary executable. Could I get access to its source code?
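In the meantime, for anyone who just needs raw SDF values, here is a minimal stand-in using trimesh; this is my own substitute, not the repo's computeDistanceField, so the values will not match its output exactly, and model.obj is just a placeholder path:

import numpy as np
import trimesh

# Load a (preferably watertight) mesh and query signed distances at arbitrary points.
mesh = trimesh.load("model.obj", force="mesh")
points = np.random.uniform(-1.0, 1.0, size=(2048, 3))

# trimesh convention: positive inside the surface, negative outside.
sdf = trimesh.proximity.signed_distance(mesh, points)
print(sdf.shape, sdf.min(), sdf.max())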

Problem in running code

I am trying to run demo.py. I have installed all the libraries mentioned in requirements.txt, but I get the following error. Please advise.
InvalidArgumentError (see above for traceback): No OpKernel was registered to support Op 'Resampler' with these attrs. Registered devices: [CPU], Registered kernels:

[[Node: resampler/Resampler = Resampler[T=DT_FLOAT](ResizeBilinear_1, Minimum)]]
Thanks!

Training from Scratch

Hi there,

Thanks for releasing the codes, it is amazing work!
I tried to train the network from scratch, following all the steps mentioned in the README, but I couldn't match the results of the pretrained model.

I was wondering which hyperparameters were used for the pretrained model. Are they the same as the defaults in train_sdf.py?
How many epochs did you train to reach the best accuracy?
Also, which dataset was used for training: the old one or the new one mentioned in the README?

Unable to run demo.py

Is there a requirements file that specifies which versions of Python and TensorFlow the demo code runs on?

If I enable --cam_est as mentioned in the README, TensorFlow fails to load the checkpoint even though I downloaded the data from the linked location. The following is the error I get, and later on it fails on session.run:
"Fail to load overall modelfile: cam_est/checkpoint/cam_DISN/latest.ckpt"

If I don't enable --cam_est, I get the same issue loading the SDF_DISN checkpoint. Has anyone else faced this, or does anyone have a clue why it happens?

I am using TensorFlow 1.15.
