zerchen / alignsdf
AlignSDF: Pose-Aligned Signed Distance Fields for Hand-Object Reconstruction, ECCV 2022
Hello, thank you for your excellent work and contribution to the community.
I noticed that you only provided norm, sdf_hand and sdf_obj for the dex_ycb dataset in the Google Drive. I'm wondering if you could also provide the meta folder, or a script to generate meta.pkl.
I also noticed that you cropped/resized the dex_ycb images; could you share some details about that? Thanks!
Hi, thanks for sharing such great work. When I tried to evaluate the model, it seems that some parts of the test data (the test/mesh_obj_rest folder) are missing. After I changed the path to test/mesh_obj, I get warnings like "WARNING:root:Cannot reconstruct mesh". Could you please tell me how to solve this?
Hello.
This is very nice work.
Could you please provide your trained model?
Hello @zerchen.
Recently I verified the performance of your model on the ObMan dataset, and it works really well.
But I have a question and hope to obtain your guidance.
I want to evaluate the model on other images. How can I generate data in the same format as the ObMan test folder on Google Drive?
Thanks in advance!
Where can I find the pre-trained weights of AlignSDF model that is reported in the ECCV 2022 paper?
Hello, thank you for your excellent work. I have a question about the number of training epochs.
You set the total number of training epochs to 1600 for the ObMan dataset. I wonder if the model really needs that many epochs to converge to a satisfactory state. By any chance, have you tried training with fewer epochs, or checked the validation error?
Thank you very much in advance and look forward to your reply :D
Best.
Hi Zerui,
Thanks for your work. I have a question about the SdfScaleFactor in the config files of the two datasets. I would like to know what it refers to, and how to compute the corresponding value for a different, personalized dataset.
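In case it helps others while waiting for an answer: a minimal sketch of the DeepSDF-style normalization that a factor like SdfScaleFactor usually plugs into. The exact semantics in AlignSDF's configs are not confirmed here; the function name and the example numbers are assumptions.

```python
import numpy as np

# Assumed semantics (not confirmed by the authors): the scale factor maps
# metric-space SDF samples into the normalized space the network is trained
# in. Both the xyz coordinates and the signed distances must be multiplied
# by the SAME factor, otherwise the field is no longer a valid SDF.
def normalize_sdf_samples(points, sdf, scale_factor):
    return points * scale_factor, sdf * scale_factor

# Example: samples spanning roughly +/-0.2 m; a factor of 2.5 fits them
# inside a half-unit box around the origin.
pts = np.random.uniform(-0.2, 0.2, size=(1024, 3))
dist = np.random.uniform(-0.05, 0.05, size=(1024,))
npts, ndist = normalize_sdf_samples(pts, dist, 2.5)
print(np.abs(npts).max() <= 0.5)  # True
```

If this reading is right, a per-dataset factor would be chosen so that the largest hand-object extent in that dataset still fits inside the network's normalized volume.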
Thanks
Eric
Thanks for sharing your work. I have a problem: I cannot download the training dataset from Google Cloud due to restrictions on the number of visits to the page. Could you provide an alternative download link for the training dataset?
Hello,
I tried to reproduce your work, and I found that 'obj_corners_3d' and 'obj_rest_corners_3d' are not in the obman meta file. How do you generate them?
When I run the code, I get the message "do not support renderer in this machine". I found that your networks/model.py references this package:

try:
    import soft_renderer as sr
    import soft_renderer.functional as srf
except:
    print('do not support renderer in this machine')

Could you tell me how to install the "soft_renderer" package?
I tried to preprocess one sample mesh to get the SDF values. After running prep_obman.py, I only get one .npz file under norm, but nothing under sdf_hand or sdf_obj.
The terminal outputs the following error:
OpenGL Error 500: GL_INVALID_ENUM: An unacceptable value is specified for an enumerated argument. In: /usr/local/include/pangolin/gl/gl.hpp, line 205
In here, people said they could still get the results despite this error. Do you know how to solve it?
Thanks.
Hi, sorry to ask so many questions.
1. I tried to run the code, but obj_corners_3d and obj_rest_corners_3d are not available, even though your code uses them. Could you explain what they mean, so that I can use trimesh to compute them from the mesh?
2. The corner loss does not appear in the paper.
3. Is this code the final version, or is it only a reference rather than runnable code?
best
xinkang
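For anyone else blocked on the missing fields: under the assumption that obj_corners_3d are simply the eight axis-aligned bounding-box corners of the object mesh (and obj_rest_corners_3d the same for the rest-pose mesh), they can be recomputed from the vertices. This is a guess that the author would need to confirm, and the corner ordering below is arbitrary.

```python
import numpy as np

# Hypothetical reconstruction: treat obj_corners_3d as the 8 axis-aligned
# bounding-box corners of the object mesh vertices. The corner ordering
# expected by AlignSDF is an assumption here.
def bbox_corners(vertices):
    vmin, vmax = vertices.min(axis=0), vertices.max(axis=0)
    # enumerate all 8 (min|max) combinations per axis
    return np.array([[(vmax if bx else vmin)[0],
                      (vmax if by else vmin)[1],
                      (vmax if bz else vmin)[2]]
                     for bx in (0, 1) for by in (0, 1) for bz in (0, 1)])

verts = np.array([[0.0, 0.0, 0.0], [1.0, 2.0, 3.0], [0.5, 1.0, -1.0]])
corners = bbox_corners(verts)
print(corners.shape)  # (8, 3)
```

With a mesh loaded via trimesh, trimesh.bounds.corners(mesh.bounds) returns the same eight corners directly, though possibly in a different order.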
Hi there, really thanks for your amazing work.
I have a question about the DeepSDF part. Although it's not directly relevant to this paper, I also checked the original DeepSDF repo, and many issues there concern the compilation process.
Could you tell me which version of Pangolin you used, as well as your GCC and G++ versions when compiling DeepSDF?
I ask because I have tried many configurations and still get an error at the end. I hope this doesn't bother you too much :)
Thanks again
Downloading the data from Google Drive keeps getting interrupted.
Could you provide a Baidu Netdisk download link?
Thanks.
I see that the dataset you provided contains both sdf_hand_mini and sdf_hand. I would like to know the difference between them.
Also, when I run the command "CUDA_VISIBLE_DEVICES=0,1 bash dist_train.sh 2 6666 -e experiments/obman/30k_1e2d_mlp5.json --mano --obj_pose --point_size 9 --encode both --ocrw 0" to train the model, is the mini version of the dataset used?
Hello, thank you for your great work and patient responses.
I reproduced the code; the DexYCB results are mostly consistent with the paper, but the Ote metric is several times (2x to 3x) the value reported in the paper. Could you give me some advice? Thank you.
Hello, I'm interested in your work and am trying to run it on my computer.
But I don't know how to generate the mesh_hand and mesh_obj folders in the dataset preprocessing step. Which part of the code in tool should I use?
Thanks!
How do I train with the sdf_hand_mini and sdf_obj_mini data that you uploaded?
I think some .npz files don't exist because I am using the mini version.
(alignsdf) MS-7B23:~/mount4t/AlignSDF$ CUDA_VISIBLE_DEVICES=0 bash dist_train.sh 4 6666 -e experiments/obman/30k_1e2d_mlp5.json
do not support renderer in this machine
DeepSdf - INFO - Added key: store_based_barrier_key:1 to store for rank: 0
DeepSdf - INFO - Training in distributed mode, 1 GPU per process. Process 0, total 1.
DeepSdf - INFO - Experiment description:
3D hand reconstruction on the mini obman dataset.
Hand branch: True
Object branch: True
Mano branch: False
Depth branch: False
Classifier Weight: 0
Penetration Loss: False
Penetration Loss Weight: 0
Additional Loss start at epoch: 1201
Contact Loss: False
Contact Loss Weight: 0
Contact Loss Sigma (m): 0.005
Independent Obj Scale: False
Ignore other: False
nb_label_class: 6
Image encoder, the branch has latent size 256
DeepSdf - INFO - Finish constructing the dataset
DeepSdf - INFO - start_epoch:1, current_rank:0
DeepSdf - INFO - epoch:1, current_rank:0
Traceback (most recent call last):
File "train.py", line 715, in <module>
main_function(exp_cfg, args.continue_from, args.local_rank, args.opt_level, args.slurm)
File "train.py", line 465, in main_function
for i, (input_iter, label_iter, meta_iter) in enumerate(sdf_loader):
File "/home/gaeun/anaconda3/envs/alignsdf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
data = self._next_data()
File "/home/gaeun/anaconda3/envs/alignsdf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1199, in _next_data
return self._process_data(data)
File "/home/gaeun/anaconda3/envs/alignsdf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1225, in _process_data
data.reraise()
File "/home/gaeun/anaconda3/envs/alignsdf/lib/python3.8/site-packages/torch/_utils.py", line 429, in reraise
raise self.exc_type(msg)
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/gaeun/anaconda3/envs/alignsdf/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
data = fetcher.fetch(index)
File "/home/gaeun/anaconda3/envs/alignsdf/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/gaeun/anaconda3/envs/alignsdf/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/gaeun/mount4t/AlignSDF/utils/data.py", line 162, in __getitem__
hand_samples, hand_labels = unpack_sdf_samples(self.data_source, data_key, num_sample, hand=True, clamp=self.clamp, filter_dist=self.filter_dist)
File "/home/gaeun/mount4t/AlignSDF/utils/sdf_utils.py", line 172, in unpack_sdf_samples
npz = np.load(npz_path)
File "/home/gaeun/anaconda3/envs/alignsdf/lib/python3.8/site-packages/numpy/lib/npyio.py", line 405, in load
fid = stack.enter_context(open(os_fspath(file), "rb"))
FileNotFoundError: [Errno 2] No such file or directory: 'data/obman/train/sdf_hand/00018168.npz'
Killing subprocess 12576
Traceback (most recent call last):
File "/home/gaeun/anaconda3/envs/alignsdf/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/gaeun/anaconda3/envs/alignsdf/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/gaeun/anaconda3/envs/alignsdf/lib/python3.8/site-packages/torch/distributed/launch.py", line 340, in <module>
main()
File "/home/gaeun/anaconda3/envs/alignsdf/lib/python3.8/site-packages/torch/distributed/launch.py", line 326, in main
sigkill_handler(signal.SIGTERM, None) # not coming back
File "/home/gaeun/anaconda3/envs/alignsdf/lib/python3.8/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/gaeun/anaconda3/envs/alignsdf/bin/python', '-u', 'train.py', '--local_rank=0', '-e', 'experiments/obman/30k_1e2d_mlp5.json']' returned non-zero exit status 1.
Hello,
I see that in your code you use "00141909" to index DexYCB RGB images, but in the dataset the image index looks like "20200820-subject-03/20200820_135323/932122062010/color_000000.jpg". How did you create your own DexYCB dataset?
I followed the official DexYCB toolkit and ran create_dataset.py; it reports that the S1 train split has 407088 samples, but in your code it is 148414.
I just ran the code, and it seems the evaluation results use a different metric from Table 4 in the ECCV 2022 paper. Could you give some insight on how to evaluate as in Table 4 of the paper?
Hello, thank you for your great work and patient responses.
I noticed from this site that you guide the cropping of the 480x480 images by the position of the hand wrist. But the wrist is not at the center of the image in Fig. 2, Fig. 7, Fig. C.2, and Fig. C.3. So I wonder how you designed this cropping method. Thanks.
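(One plausible reading, not confirmed by the paper: the window is anchored on the wrist, but its center is offset or randomly jittered, which would explain why the wrist is not at the image center in the figures. A sketch of such a crop; the function name, jitter range, and clamping behavior are all assumptions:)

```python
import numpy as np

# Hypothetical wrist-anchored crop: center the square window on the wrist
# plus a random offset (augmentation), then clamp it to the image bounds.
def crop_around_wrist(img, wrist_xy, size=480, jitter=40, rng=None):
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    cx = int(wrist_xy[0]) + int(rng.integers(-jitter, jitter + 1))
    cy = int(wrist_xy[1]) + int(rng.integers(-jitter, jitter + 1))
    half = size // 2
    # keep the window fully inside the image
    cx = min(max(cx, half), w - half)
    cy = min(max(cy, half), h - half)
    return img[cy - half:cy + half, cx - half:cx + half]

frame = np.zeros((720, 1280, 3), dtype=np.uint8)
patch = crop_around_wrist(frame, (640, 360), size=480)
print(patch.shape)  # (480, 480, 3)
```

With any jitter or clamping in play, the wrist lands near, but not exactly at, the crop center, which matches what the figures show.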