
alignsdf's People

Contributors

zerchen

alignsdf's Issues

occur "do not support renderer in this machine" problem

When I run the code, I get the message "do not support renderer in this machine". I found that your code references this package in networks/model.py:
try:
    import soft_renderer as sr
    import soft_renderer.functional as srf
except:
    print('do not support renderer in this machine')
I would like to ask how to install the "soft_renderer" package.
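For reference, soft_renderer appears to be the SoftRas package, and the guarded import means the rest of the code can run without it as long as no rendering-based term is needed. Below is a sketch of that pattern with an explicit flag; the HAS_RENDERER variable is illustrative, not the repository's actual code:
try:
    import soft_renderer as sr
    import soft_renderer.functional as srf
    HAS_RENDERER = True
except ImportError:
    # The package is optional here: record its absence instead of failing.
    sr, srf = None, None
    HAS_RENDERER = False
    print('do not support renderer in this machine')

# Any rendering-based loss can then be wrapped in `if HAS_RENDERER: ...`.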

Question about the training epochs

Hello, thank you for your excellent work. I have a question about the training epochs.

You set the total number of training epochs to 1600 for the Obman dataset. I wonder if the model really needs so many epochs to converge to a satisfactory state. By any chance, have you tried training with fewer epochs or checking the validation error?

Thank you very much in advance; I look forward to your reply :D
Best.

Training

How do I train with the sdf_hand_mini and sdf_obj_mini folders that you uploaded?
I think some .npz files are missing because I am using the mini version.

(alignsdf) MS-7B23:~/mount4t/AlignSDF$ CUDA_VISIBLE_DEVICES=0 bash dist_train.sh 4 6666 -e experiments/obman/30k_1e2d_mlp5.json
do not support renderer in this machine
DeepSdf - INFO - Added key: store_based_barrier_key:1 to store for rank: 0
DeepSdf - INFO - Training in distributed mode, 1 GPU per process. Process 0, total 1.
DeepSdf - INFO - Experiment description:
3D hand reconstruction on the mini obman dataset.
Hand branch: True
Object branch: True
Mano branch: False
Depth branch: False
Classifier Weight: 0
Penetration Loss: False
Penetration Loss Weight: 0
Additional Loss start at epoch: 1201
Contact Loss: False
Contact Loss Weight: 0
Contact Loss Sigma (m): 0.005
Independent Obj Scale: False
Ignore other: False
nb_label_class: 6
Image encoder, the branch has latent size 256
DeepSdf - INFO - Finish constructing the dataset
DeepSdf - INFO - start_epoch:1, current_rank:0
DeepSdf - INFO - epoch:1, current_rank:0
Traceback (most recent call last):
File "train.py", line 715, in
main_function(exp_cfg, args.continue_from, args.local_rank, args.opt_level, args.slurm)
File "train.py", line 465, in main_function
for i, (input_iter, label_iter, meta_iter) in enumerate(sdf_loader):
File "/home/gaeun/anaconda3/envs/alignsdf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 517, in next
data = self._next_data()
File "/home/gaeun/anaconda3/envs/alignsdf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1199, in _next_data
return self._process_data(data)
File "/home/gaeun/anaconda3/envs/alignsdf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1225, in _process_data
data.reraise()
File "/home/gaeun/anaconda3/envs/alignsdf/lib/python3.8/site-packages/torch/_utils.py", line 429, in reraise
raise self.exc_type(msg)
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/gaeun/anaconda3/envs/alignsdf/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
data = fetcher.fetch(index)
File "/home/gaeun/anaconda3/envs/alignsdf/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/gaeun/anaconda3/envs/alignsdf/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/gaeun/mount4t/AlignSDF/utils/data.py", line 162, in getitem
hand_samples, hand_labels = unpack_sdf_samples(self.data_source, data_key, num_sample, hand=True, clamp=self.clamp, filter_dist=self.filter_dist)
File "/home/gaeun/mount4t/AlignSDF/utils/sdf_utils.py", line 172, in unpack_sdf_samples
npz = np.load(npz_path)
File "/home/gaeun/anaconda3/envs/alignsdf/lib/python3.8/site-packages/numpy/lib/npyio.py", line 405, in load
fid = stack.enter_context(open(os_fspath(file), "rb"))
FileNotFoundError: [Errno 2] No such file or directory: 'data/obman/train/sdf_hand/00018168.npz'

Killing subprocess 12576
Traceback (most recent call last):
File "/home/gaeun/anaconda3/envs/alignsdf/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/gaeun/anaconda3/envs/alignsdf/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/gaeun/anaconda3/envs/alignsdf/lib/python3.8/site-packages/torch/distributed/launch.py", line 340, in
main()
File "/home/gaeun/anaconda3/envs/alignsdf/lib/python3.8/site-packages/torch/distributed/launch.py", line 326, in main
sigkill_handler(signal.SIGTERM, None) # not coming back
File "/home/gaeun/anaconda3/envs/alignsdf/lib/python3.8/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/gaeun/anaconda3/envs/alignsdf/bin/python', '-u', 'train.py', '--local_rank=0', '-e', 'experiments/obman/30k_1e2d_mlp5.json']' returned non-zero exit status 1.
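One possible workaround while using the mini subset is to restrict the training split to the samples whose SDF files actually exist, since the FileNotFoundError above points at a full-dataset sample. The sketch below assumes the split is a plain list of sample ids and that the mini folders simply contain fewer per-sample .npz files; the file names split_ids.txt and split_ids_mini.txt are hypothetical, not files shipped with the repository:
import os

split_file = 'data/obman/train/split_ids.txt'        # one sample id per line (assumed format)
sdf_hand_dir = 'data/obman/train/sdf_hand_mini'
sdf_obj_dir = 'data/obman/train/sdf_obj_mini'

with open(split_file) as f:
    all_ids = [line.strip() for line in f if line.strip()]

# Keep only ids that have both hand and object SDF samples in the mini folders.
kept = [i for i in all_ids
        if os.path.isfile(os.path.join(sdf_hand_dir, i + '.npz'))
        and os.path.isfile(os.path.join(sdf_obj_dir, i + '.npz'))]

print(f'{len(kept)}/{len(all_ids)} samples available in the mini subset')
with open('data/obman/train/split_ids_mini.txt', 'w') as f:
    f.write('\n'.join(kept))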

Question Regarding the SDF Sampling

Hi there, thanks a lot for your amazing work.

I have a question regarding the DeepSDF part. Although it's not directly relevant to this paper, I also checked the original repo, and many of its issues are about the compilation process.

I would like to know which version of Pangolin you used, as well as your GCC and G++ versions when compiling DeepSDF.

The reason I ask is that I've tried many settings but still get an error at the end. I hope this doesn't bother you too much :)

Thanks again

The difference between sdf_hand and sdf_hand_mini

I found that the dataset you provided contains both sdf_hand and sdf_hand_mini. I would like to know the difference between them.
Also, when I run the command "CUDA_VISIBLE_DEVICES=0,1 bash dist_train.sh 2 6666 -e experiments/obman/30k_1e2d_mlp5.json --mano --obj_pose --point_size 9 --encode both --ocrw 0" to train the model, is the mini version of the dataset used?
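One quick way to check the relationship between the two folders yourself is to compare their contents directly; a minimal sketch, with folder paths assumed to follow the layout seen in the error messages above:
import os

full_dir = 'data/obman/train/sdf_hand'
mini_dir = 'data/obman/train/sdf_hand_mini'

full_ids = {f for f in os.listdir(full_dir) if f.endswith('.npz')}
mini_ids = {f for f in os.listdir(mini_dir) if f.endswith('.npz')}

print(f'full: {len(full_ids)} files, mini: {len(mini_ids)} files')
print('mini is a subset of full:', mini_ids <= full_ids)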

Question Regarding the SDF Scale Factor

Hi Zerui,

Thanks for your work. I have a question regarding the SdfScaleFactor in the config files of the two datasets. I would like to know what it refers to and how to calculate it for different custom datasets.

Thanks
Eric
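If SdfScaleFactor follows the usual DeepSDF-style convention of mapping the metric hand-object region into the unit sampling cube (this is an assumption, not something confirmed by the authors), a comparable factor for a new dataset could be estimated from the mesh extents. A hedged sketch with trimesh; the paths and padding value are illustrative:
import glob
import numpy as np
import trimesh

# Assumption: the factor maps the largest hand/object extent (measured from the
# sample's origin) into the unit sampling cube, with a small padding margin.
max_extent = 0.0
for path in glob.glob('data/mydataset/train/mesh_hand/*.obj'):  # include mesh_obj the same way
    mesh = trimesh.load(path, force='mesh', process=False)
    max_extent = max(max_extent, float(np.abs(mesh.vertices).max()))

padding = 1.03  # illustrative margin
scale = 1.0 / (max_extent * padding)
# Depending on the convention, SdfScaleFactor may be this value or its inverse.
print('candidate scale:', scale, 'inverse:', 1.0 / scale)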

Preprocessed files are generated only in norm, not in sdf_hand and sdf_obj

I tried to preprocess one sample mesh to get the SDF values. After running prep_obman.py, I only get one npz file under norm, but nothing under sdf_hand and sdf_obj.
The terminal outputs the error below:
OpenGL Error 500: GL_INVALID_ENUM: An unacceptable value is specified for an enumerated argument. In: /usr/local/include/pangolin/gl/gl.hpp, line 205
Here, people said they could still get results despite this error. Do you know how to solve this?
Thanks.
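To see what was actually written despite the OpenGL warning, it can help to inspect whichever file did appear under norm; a quick sketch (the sample id in the path is illustrative):
import numpy as np

npz = np.load('data/obman/train/norm/00018168.npz')
for key in npz.files:
    print(key, npz[key].shape, npz[key].dtype)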

Issue related to DexYCB dataset cropping

Hello, thank you for your great work and patient responses.
I noticed that you guide the cropping of 480x480 images by the position of the hand wrist, as described on this site. However, the wrist is not at the center of the image in Fig. 2, Fig. 7, Fig. C.2 and Fig. C.3, so I wonder how you designed this cropping method. Thanks.
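One plausible explanation (an assumption, not the authors' confirmed implementation) is that the 480x480 window is centred on the wrist but clamped to the image borders, so the wrist drifts off-centre whenever it is close to an edge. A small sketch of such a clamped crop:
import numpy as np

def crop_around_wrist(image, wrist_uv, crop_size=480):
    """Crop a crop_size x crop_size window roughly centred on the wrist,
    clamped so the window stays inside the image (illustrative only)."""
    h, w = image.shape[:2]
    u, v = int(round(wrist_uv[0])), int(round(wrist_uv[1]))
    left = min(max(u - crop_size // 2, 0), max(w - crop_size, 0))
    top = min(max(v - crop_size // 2, 0), max(h - crop_size, 0))
    return image[top:top + crop_size, left:left + crop_size]

# Example on a dummy 480x640 image with the wrist near the right border.
dummy = np.zeros((480, 640, 3), dtype=np.uint8)
crop = crop_around_wrist(dummy, wrist_uv=(620, 240))
print(crop.shape)  # (480, 480, 3); the wrist is not at the crop centre here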

Dex_YCB dataset organization

Hello, thank you for your excellent work and contribution to the community.

I noticed that you only provided norm, sdf_hand and sdf_obj for the dex_ycb dataset on Google Drive. I'm wondering if you could also provide the meta folder or a script to generate meta.pkl.
I also noticed that you cropped/resized the dex_ycb images; could you also provide some information about that? Thanks!

mesh_hand and mesh_obj folders

@zerchen

"I first use the tool to generate the mesh_hand and mesh_obj folders"

If I understand correctly, these folders contain the ground-truth posed hand and object meshes?

Could you please explain in a bit more detail how the .obj files in the folder were exported for DexYCB?

How to generate mesh_hand and mesh_obj folders

Hello, I'm interested in your work and am trying to run it on my computer.
However, I don't know how to generate the mesh_hand and mesh_obj folders in the dataset preprocessing step. Which part of the code in tool should I use?
Thanks!
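It is unclear which script in tool produces these folders, but conceptually the mesh_obj files should just be the canonical object models transformed by the ground-truth 6D pose. A hedged sketch with trimesh, where the model path, the 4x4 pose matrix and the output file name are all placeholders:
import os
import numpy as np
import trimesh

# Placeholders: the canonical (rest-pose) object model and its ground-truth pose.
obj_model_path = 'models/021_bleach_cleanser/textured_simple.obj'
pose = np.eye(4)  # replace with the 4x4 object-to-camera transform for the frame

mesh = trimesh.load(obj_model_path, force='mesh', process=False)
mesh.apply_transform(pose)            # move the rest-pose mesh into the scene
os.makedirs('mesh_obj', exist_ok=True)
mesh.export('mesh_obj/00000000.obj')  # output naming is an assumption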

Issue related to the result of Ote

Hello, thank you for your great work and patient responses.
I reproduced the code; the DexYCB results are mostly consistent with the paper, but the Ote metric is several times (2x ~ 3x) higher than the results in the paper. Can you give me some advice? Thank you.

Test the model on novel images

Hello @zerchen.
Recently I verified the performance of your model on the obman dataset and it works really well.
However, I have a question and hope to obtain your guidance.

I want to evaluate the model's performance on other images. How can I generate data in the same format as the obman test folder on Google Cloud?

Thanks in advance!
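While the exact folder format is best confirmed by the authors, the image side is straightforward to reproduce: crop the input around the hand and resize it to the network's input resolution. Everything below (the 256x256 size, the rgb/ subfolder, the zero-padded file name) is an assumption about the expected layout rather than a confirmed specification:
import os
from PIL import Image

src = 'my_images/photo.jpg'        # your own image
out_dir = 'data/custom/test/rgb'   # assumed layout, mirroring the obman test folder
os.makedirs(out_dir, exist_ok=True)

img = Image.open(src).convert('RGB')
# Centre crop to a square, then resize to the (assumed) network input size.
side = min(img.size)
left = (img.width - side) // 2
top = (img.height - side) // 2
img = img.crop((left, top, left + side, top + side)).resize((256, 256), Image.BILINEAR)
img.save(os.path.join(out_dir, '00000000.jpg'))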

Dataset download

Thanks for sharing your work. I have a problem: I cannot download the training dataset from Google Cloud because of restrictions on the number of visits to the page. Could you provide another download link for the training dataset?

occur "do not support renderer in this machine" problem

When i run the code, I occur the "do not support renderer in this machine". I find your code referencing this package in your networks/model.py.
try: import soft_renderer as sr import soft_renderer.functional as srf except: print('do not support renderer in this machine')
I want to consult how to install this package "soft_renderer".

Missing test/mesh_obj_rest/ for obman test

Hi, thanks for sharing such great work. When I tried to evaluate the model, it seems that part of the test data (the test/mesh_obj_rest folder) is missing. After I changed the path to /test/mesh_obj, I get warnings like "WARNING:root:Cannot reconstruct mesh". Could you please tell me how to solve this?

How to generate dexycb split names?

Hello,

I see that in your code you use "00141909" to index dexycb RGB images, but in the original dataset the image index is "20200820-subject-03/20200820_135323/932122062010/color_000000.jpg". How do you create your own dexycb dataset?

I followed the official dexycb toolkit and ran create_dataset.py; it shows that the size of the S1 train split is 407088, but in your code it's 148414.
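Zero-padded indices like this usually come from enumerating a filtered split and storing the original paths in order, so that "00141909" is just a position in that list. The sketch below only illustrates the idea; the filtering rule that reduces 407088 frames to 148414 is not known here and is left as a placeholder:
# Sketch: build a sequential-index -> original-path mapping for a filtered split.
original_paths = [
    '20200820-subject-03/20200820_135323/932122062010/color_000000.jpg',
    # ... all frames of the S1 train split, in a fixed order
]

def keep(path):
    return True  # placeholder for the real filtering criterion

filtered = [p for p in original_paths if keep(p)]
index_to_path = {f'{i:08d}': p for i, p in enumerate(filtered)}
print(index_to_path['00000000'])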

Download source

Downloading the data from Google Drive keeps getting interrupted.
Could you provide a Baidu Netdisk download link?
Thanks.

Hi, when I try to run the code, I run into some problems.

Hi, sorry to ask so many questions.
1: When I try to run the code, obj_corners_3d and obj_rest_corners_3d are not available, although your code uses them. Could you explain what they mean, so that I can use trimesh to get them from the mesh (see the sketch after this list)?
2: The corner loss is not described in the paper.
3: I'm not sure whether the code is the final version; is it only a reference rather than runnable code?
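If obj_corners_3d and obj_rest_corners_3d are simply the eight axis-aligned bounding-box corners of the posed and rest-pose object meshes (a guess, not something stated in the paper), trimesh can produce them directly; the paths below are placeholders:
import trimesh

rest_mesh = trimesh.load('mesh_obj_rest/00000000.obj', force='mesh', process=False)
posed_mesh = trimesh.load('mesh_obj/00000000.obj', force='mesh', process=False)

# Eight corners of the axis-aligned bounding box of each mesh, shape (8, 3).
obj_rest_corners_3d = trimesh.bounds.corners(rest_mesh.bounds)
obj_corners_3d = trimesh.bounds.corners(posed_mesh.bounds)
print(obj_corners_3d.shape, obj_rest_corners_3d.shape)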

best
xinkang

Trained model

Hello.
This is very nice work.
Could you please provide your trained model?

Recommend Projects

  • React photo React

    A declarative, efficient, and flexible JavaScript library for building user interfaces.

  • Vue.js photo Vue.js

    🖖 Vue.js is a progressive, incrementally-adoptable JavaScript framework for building UI on the web.

  • Typescript photo Typescript

    TypeScript is a superset of JavaScript that compiles to clean JavaScript output.

  • TensorFlow photo TensorFlow

    An Open Source Machine Learning Framework for Everyone

  • Django photo Django

    The Web framework for perfectionists with deadlines.

  • D3 photo D3

    Bring data to life with SVG, Canvas and HTML. 📊📈🎉

Recommend Topics

  • javascript

    JavaScript (JS) is a lightweight interpreted programming language with first-class functions.

  • web

    Some thing interesting about web. New door for the world.

  • server

    A server is a program made to process requests and deliver data to clients.

  • Machine learning

    Machine learning is a way of modeling and interpreting data that allows a piece of software to respond intelligently.

  • Game

    Some thing interesting about game, make everyone happy.

Recommend Org

  • Facebook photo Facebook

    We are working to build community through open source technology. NB: members must have two-factor auth.

  • Microsoft photo Microsoft

    Open source projects and samples from Microsoft.

  • Google photo Google

    Google ❤️ Open Source for everyone.

  • D3 photo D3

    Data-Driven Documents codes.