qianlim / cape

Official implementation of CVPR2020 paper "Learning to Dress 3D People in Generative Clothing" https://arxiv.org/abs/1907.13615

License: Other

Python 100.00%
smpl-model smpl-body mesh-generation vae-gan graph-convolutional-networks clothing cvpr2020 cvpr cvpr-2020

cape's Issues

RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.

Hi, Qian~
I really appreciate your great work!
I wanted to run the demo, but ran into a problem:

When I ran the command:
python main.py --config configs/CAPE-affineconv_nz64_pose32_clotype32_male.yaml --mode demo

the terminal log was:

Pre-computing mesh pooling matrices ..

loading pre-saved transform matrices...
Building model graph...

WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
https://github.com/tensorflow/addons
If you depend on functionality not listed there, please file an issue.

condition_pose_fc1: (126, 63)
condition_pose_fc2: (63, 32)
condition_clo_label_fc1: (4, 32)
condition_pose_fc1: (126, 63)
condition_pose_fc2: (63, 32)
condition_clo_label_fc1: (4, 32)

------------[Generator]------------
------------Encoder------------
encoder_conv1: (6890, 64), K=2
encoder_conv2: (3445, 64), K=2
encoder_conv3: (3445, 128), K=2
encoder_conv4: (1723, 128), K=2
encoder_conv5: (1723, 256), K=2
encoder_conv6: (862, 256), K=2
encoder_conv7: (862, 512), K=2
encoder_conv8: (862, 512), K=2
encoder_1x1conv: (862, 64), K=1
encoder_fc_mean: (55168, 64)
encoder_fc_logvar: (55168, 64)
------------Decoder------------
decoder_fc1: (128, 55168)
decoder_1x1conv: (862, 512), K=1
decoder_resblock_affine1: (862, 256), K=2
decoder_resblock_affine2: (862, 256), K=2
decoder_resblock_affine3: (1723, 128), K=2
decoder_resblock_affine4: (1723, 128), K=2
decoder_resblock_affine5: (3445, 64), K=2
decoder_resblock_affine6: (3445, 64), K=2
decoder_resblock_affine7: (6890, 32), K=2
decoder_resblock_affine8: (6890, 32), K=2
decoder_output: (6890, 3), K=2

----------[Discriminator]----------
conv1: (3445, 64), K=3
conv2: (1723, 64), K=3
conv3: (862, 128), K=3
conv4: (431, 128), K=3
pred_map: (431, 1), K=3

----------[Discriminator]----------
conv1: (3445, 64), K=3
conv2: (1723, 64), K=3
conv3: (862, 128), K=3
conv4: (431, 128), K=3
pred_map: (431, 1), K=3

For generative experiments:
condition_pose_fc1: (126, 63)
condition_pose_fc2: (63, 32)
condition_clo_label_fc1: (4, 32)
------------Encoder------------
encoder_conv1: (6890, 64), K=2
encoder_conv2: (3445, 64), K=2
encoder_conv3: (3445, 128), K=2
encoder_conv4: (1723, 128), K=2
encoder_conv5: (1723, 256), K=2
encoder_conv6: (862, 256), K=2
encoder_conv7: (862, 512), K=2
encoder_conv8: (862, 512), K=2
encoder_1x1conv: (862, 64), K=1
encoder_fc_mean: (55168, 64)
encoder_fc_logvar: (55168, 64)
------------Decoder------------
decoder_fc1: (128, 55168)
decoder_1x1conv: (862, 512), K=1
decoder_resblock_affine1: (862, 256), K=2
decoder_resblock_affine2: (862, 256), K=2
decoder_resblock_affine3: (1723, 128), K=2
decoder_resblock_affine4: (1723, 128), K=2
decoder_resblock_affine5: (3445, 64), K=2
decoder_resblock_affine6: (3445, 64), K=2
decoder_resblock_affine7: (6890, 32), K=2
decoder_resblock_affine8: (6890, 32), K=2
decoder_output: (6890, 3), K=2

2021-03-17 12:20:43.953960: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2021-03-17 12:20:44.184974: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x558d71c02640 executing computations on platform CUDA. Devices:
2021-03-17 12:20:44.185006: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): TITAN X (Pascal), Compute Capability 6.1
2021-03-17 12:20:44.185014: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (1): TITAN X (Pascal), Compute Capability 6.1
2021-03-17 12:20:44.203509: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3598130000 Hz
2021-03-17 12:20:44.204166: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x558d71c74fd0 executing computations on platform Host. Devices:
2021-03-17 12:20:44.204198: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): ,
2021-03-17 12:20:44.204352: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: TITAN X (Pascal) major: 6 minor: 1 memoryClockRate(GHz): 1.531
pciBusID: 0000:01:00.0
totalMemory: 11.91GiB freeMemory: 11.12GiB
2021-03-17 12:20:44.204409: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 1 with properties:
name: TITAN X (Pascal) major: 6 minor: 1 memoryClockRate(GHz): 1.531
pciBusID: 0000:03:00.0
totalMemory: 11.91GiB freeMemory: 11.77GiB
2021-03-17 12:20:44.206252: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0, 1
2021-03-17 12:20:44.209304: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-03-17 12:20:44.209335: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0 1
2021-03-17 12:20:44.209349: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N Y
2021-03-17 12:20:44.209360: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 1: Y N
2021-03-17 12:20:44.209479: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10813 MB memory) -> physical GPU (device: 0, name: TITAN X (Pascal), pci bus id: 0000:01:00.0, compute capability: 6.1)
2021-03-17 12:20:44.209972: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 11446 MB memory) -> physical GPU (device: 1, name: TITAN X (Pascal), pci bus id: 0000:03:00.0, compute capability: 6.1)
2021-03-17 12:20:45.017010: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library libcublas.so.10.0 locally

=============== Running demo: fix z, clotype, change pose ===============

Found 6 different pose, for each we generate 5 samples

2021-03-17 12:20:45.231372: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0, 1
2021-03-17 12:20:45.231460: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-03-17 12:20:45.231470: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0 1
2021-03-17 12:20:45.231477: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N Y
2021-03-17 12:20:45.231483: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 1: Y N
2021-03-17 12:20:45.231553: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10813 MB memory) -> physical GPU (device: 0, name: TITAN X (Pascal), pci bus id: 0000:01:00.0, compute capability: 6.1)
2021-03-17 12:20:45.231726: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 11446 MB memory) -> physical GPU (device: 1, name: TITAN X (Pascal), pci bus id: 0000:03:00.0, compute capability: 6.1)
saving results as .obj files to /home/ang/CAPE-master/results/CAPE-affineconv_nz64_pose32_clotype32_male/sample_vary_pose...
Traceback (most recent call last):
File "main.py", line 109, in
demos.run()
File "/home/ang/CAPE-master/demos.py", line 335, in run
self.sample_vary_pose()
File "/home/ang/CAPE-master/demos.py", line 164, in sample_vary_pose
save_obj=self.save_obj, obj_dir=obj_dir)
File "/home/ang/CAPE-master/demos.py", line 324, in pose_result_onepose_multisample
self.smpl_model.body_pose[:] = torch.from_numpy(pose_params[i][3:])
RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.

I've searched for solutions on Google, but still cannot solve it.
I hope you can give me some advice. Thanks a lot!

Best wishes
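For anyone hitting the same error: this PyTorch message means an in-place write into a leaf tensor that requires grad while autograd is tracking. A minimal sketch of a common workaround, with body_pose standing in for the smpl_model.body_pose parameter from the traceback:

import numpy as np
import torch

# Stand-in for smpl_model.body_pose: a leaf Parameter with requires_grad=True.
body_pose = torch.nn.Parameter(torch.zeros(69))
new_pose = np.zeros(69, dtype=np.float32)

# body_pose[:] = torch.from_numpy(new_pose)   # raises the RuntimeError above

# Doing the copy under no_grad() tells autograd not to track the in-place write:
with torch.no_grad():
    body_pose[:] = torch.from_numpy(new_pose)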

Failed to run the demo due to missing data files

Traceback (most recent call last):
File "main.py", line 27, in
reference_mesh_file=reference_mesh_file)
File "/home/xxx/CAPE/lib/load_data.py", line 51, in init
self.load()
File "/home/xxx/CAPE/lib/load_data.py", line 62, in load
vertices_train = np.load(self.train_mesh_fn)
File "/home/xxx/anaconda3/envs/tf1.13/lib/python3.6/site-packages/numpy/lib/npyio.py", line 416, in load
fid = stack.enter_context(open(os_fspath(file), "rb"))
FileNotFoundError: [Errno 2] No such file or directory: '/home/xxx/CAPE/data/datasets/dataset_male_4clotypes/train/train_disp.npy'

How can I fix this? Thanks!

Two different group norm weights with the same name in decoders

When I list the variables of the pretrained model using tf.train.list_variables, I get the results below. There are two different group-norm weights with the same block name in the decoder, e.g. generator_1/decoder/decoder_resblock_cmr1/group_norm/beta and generator/decoder/decoder_resblock_cmr1/group_norm/beta. I want to transfer the pretrained model from TensorFlow to PyTorch, so I am not sure which of the two I should transfer (see the sketch after the listing for how the two scopes compare).

[('condition_clo_label/fc1/dense/bias', [8]),
('condition_clo_label/fc1/dense/bias/Momentum', [8]),
('condition_clo_label/fc1/dense/kernel', [4, 8]),
('condition_clo_label/fc1/dense/kernel/Momentum', [4, 8]),
('condition_pose/fc1/dense/bias', [63]),
('condition_pose/fc1/dense/bias/Momentum', [63]),
('condition_pose/fc1/dense/kernel', [126, 63]),
('condition_pose/fc1/dense/kernel/Momentum', [126, 63]),
('condition_pose/fc2/dense/bias', [24]),
('condition_pose/fc2/dense/bias/Momentum', [24]),
('condition_pose/fc2/dense/kernel', [63, 24]),
('condition_pose/fc2/dense/kernel/Momentum', [63, 24]),
('discriminator/prediction_map/weights', [256, 1]),
('discriminator/prediction_map/weights/Momentum', [256, 1]),
('discriminator/shared/conv1/bias', [1, 1, 64]),
('discriminator/shared/conv1/bias/Momentum', [1, 1, 64]),
('discriminator/shared/conv1/weights', [105, 64]),
('discriminator/shared/conv1/weights/Momentum', [105, 64]),
('discriminator/shared/conv2/bias', [1, 1, 64]),
('discriminator/shared/conv2/bias/Momentum', [1, 1, 64]),
('discriminator/shared/conv2/weights', [192, 64]),
('discriminator/shared/conv2/weights/Momentum', [192, 64]),
('discriminator/shared/conv3/bias', [1, 1, 128]),
('discriminator/shared/conv3/bias/Momentum', [1, 1, 128]),
('discriminator/shared/conv3/weights', [192, 128]),
('discriminator/shared/conv3/weights/Momentum', [192, 128]),
('discriminator/shared/conv4/bias', [1, 1, 128]),
('discriminator/shared/conv4/bias/Momentum', [1, 1, 128]),
('discriminator/shared/conv4/weights', [384, 128]),
('discriminator/shared/conv4/weights/Momentum', [384, 128]),
('generator/decoder/1x1-conv/weights', [64, 512]),
('generator/decoder/1x1-conv/weights/Momentum', [64, 512]),
('generator/decoder/decoder_resblock_cmr1/graph_conv/weights', [512, 256]),
('generator/decoder/decoder_resblock_cmr1/graph_conv/weights/Momentum',
[512, 256]),
('generator/decoder/decoder_resblock_cmr1/graph_linear_1/weights', [544, 256]),
('generator/decoder/decoder_resblock_cmr1/graph_linear_1/weights/Momentum',
[544, 256]),
('generator/decoder/decoder_resblock_cmr1/graph_linear_2/weights', [256, 512]),
('generator/decoder/decoder_resblock_cmr1/graph_linear_2/weights/Momentum',
[256, 512]),
('generator/decoder/decoder_resblock_cmr1/graph_linear_input/weights',
[544, 512]),
('generator/decoder/decoder_resblock_cmr1/graph_linear_input/weights/Momentum',
[544, 512]),
('generator/decoder/decoder_resblock_cmr1/group_norm/beta', [544]),
('generator/decoder/decoder_resblock_cmr1/group_norm/beta/Momentum', [544]),
('generator/decoder/decoder_resblock_cmr1/group_norm/gamma', [544]),
('generator/decoder/decoder_resblock_cmr1/group_norm/gamma/Momentum', [544]),
('generator/decoder/decoder_resblock_cmr1/group_norm_1/beta', [256]),
('generator/decoder/decoder_resblock_cmr1/group_norm_1/beta/Momentum', [256]),
('generator/decoder/decoder_resblock_cmr1/group_norm_1/gamma', [256]),
('generator/decoder/decoder_resblock_cmr1/group_norm_1/gamma/Momentum', [256]),
('generator/decoder/decoder_resblock_cmr1/group_norm_2/beta', [256]),
('generator/decoder/decoder_resblock_cmr1/group_norm_2/beta/Momentum', [256]),
('generator/decoder/decoder_resblock_cmr1/group_norm_2/gamma', [256]),
('generator/decoder/decoder_resblock_cmr1/group_norm_2/gamma/Momentum', [256]),
('generator/decoder/decoder_resblock_cmr2/graph_conv/weights', [512, 256]),
('generator/decoder/decoder_resblock_cmr2/graph_conv/weights/Momentum',
[512, 256]),
('generator/decoder/decoder_resblock_cmr2/graph_linear_1/weights', [544, 256]),
('generator/decoder/decoder_resblock_cmr2/graph_linear_1/weights/Momentum',
[544, 256]),
('generator/decoder/decoder_resblock_cmr2/graph_linear_2/weights', [256, 512]),
('generator/decoder/decoder_resblock_cmr2/graph_linear_2/weights/Momentum',
[256, 512]),
('generator/decoder/decoder_resblock_cmr2/graph_linear_input/weights',
[544, 512]),
('generator/decoder/decoder_resblock_cmr2/graph_linear_input/weights/Momentum',
[544, 512]),
('generator/decoder/decoder_resblock_cmr2/group_norm/beta', [544]),
('generator/decoder/decoder_resblock_cmr2/group_norm/beta/Momentum', [544]),
('generator/decoder/decoder_resblock_cmr2/group_norm/gamma', [544]),
('generator/decoder/decoder_resblock_cmr2/group_norm/gamma/Momentum', [544]),
('generator/decoder/decoder_resblock_cmr2/group_norm_1/beta', [256]),
('generator/decoder/decoder_resblock_cmr2/group_norm_1/beta/Momentum', [256]),
('generator/decoder/decoder_resblock_cmr2/group_norm_1/gamma', [256]),
('generator/decoder/decoder_resblock_cmr2/group_norm_1/gamma/Momentum', [256]),
('generator/decoder/decoder_resblock_cmr2/group_norm_2/beta', [256]),
('generator/decoder/decoder_resblock_cmr2/group_norm_2/beta/Momentum', [256]),
('generator/decoder/decoder_resblock_cmr2/group_norm_2/gamma', [256]),
('generator/decoder/decoder_resblock_cmr2/group_norm_2/gamma/Momentum', [256]),
('generator/decoder/decoder_resblock_cmr3/graph_conv/weights', [256, 128]),
('generator/decoder/decoder_resblock_cmr3/graph_conv/weights/Momentum',
[256, 128]),
('generator/decoder/decoder_resblock_cmr3/graph_linear_1/weights', [544, 128]),
('generator/decoder/decoder_resblock_cmr3/graph_linear_1/weights/Momentum',
[544, 128]),
('generator/decoder/decoder_resblock_cmr3/graph_linear_2/weights', [128, 256]),
('generator/decoder/decoder_resblock_cmr3/graph_linear_2/weights/Momentum',
[128, 256]),
('generator/decoder/decoder_resblock_cmr3/graph_linear_input/weights',
[544, 256]),
('generator/decoder/decoder_resblock_cmr3/graph_linear_input/weights/Momentum',
[544, 256]),
('generator/decoder/decoder_resblock_cmr3/group_norm/beta', [544]),
('generator/decoder/decoder_resblock_cmr3/group_norm/beta/Momentum', [544]),
('generator/decoder/decoder_resblock_cmr3/group_norm/gamma', [544]),
('generator/decoder/decoder_resblock_cmr3/group_norm/gamma/Momentum', [544]),
('generator/decoder/decoder_resblock_cmr3/group_norm_1/beta', [128]),
('generator/decoder/decoder_resblock_cmr3/group_norm_1/beta/Momentum', [128]),
('generator/decoder/decoder_resblock_cmr3/group_norm_1/gamma', [128]),
('generator/decoder/decoder_resblock_cmr3/group_norm_1/gamma/Momentum', [128]),
('generator/decoder/decoder_resblock_cmr3/group_norm_2/beta', [128]),
('generator/decoder/decoder_resblock_cmr3/group_norm_2/beta/Momentum', [128]),
('generator/decoder/decoder_resblock_cmr3/group_norm_2/gamma', [128]),
('generator/decoder/decoder_resblock_cmr3/group_norm_2/gamma/Momentum', [128]),
('generator/decoder/decoder_resblock_cmr4/graph_conv/weights', [256, 128]),
('generator/decoder/decoder_resblock_cmr4/graph_conv/weights/Momentum',
[256, 128]),
('generator/decoder/decoder_resblock_cmr4/graph_linear_1/weights', [288, 128]),
('generator/decoder/decoder_resblock_cmr4/graph_linear_1/weights/Momentum',
[288, 128]),
('generator/decoder/decoder_resblock_cmr4/graph_linear_2/weights', [128, 256]),
('generator/decoder/decoder_resblock_cmr4/graph_linear_2/weights/Momentum',
[128, 256]),
('generator/decoder/decoder_resblock_cmr4/graph_linear_input/weights',
[288, 256]),
('generator/decoder/decoder_resblock_cmr4/graph_linear_input/weights/Momentum',
[288, 256]),
('generator/decoder/decoder_resblock_cmr4/group_norm/beta', [288]),
('generator/decoder/decoder_resblock_cmr4/group_norm/beta/Momentum', [288]),
('generator/decoder/decoder_resblock_cmr4/group_norm/gamma', [288]),
('generator/decoder/decoder_resblock_cmr4/group_norm/gamma/Momentum', [288]),
('generator/decoder/decoder_resblock_cmr4/group_norm_1/beta', [128]),
('generator/decoder/decoder_resblock_cmr4/group_norm_1/beta/Momentum', [128]),
('generator/decoder/decoder_resblock_cmr4/group_norm_1/gamma', [128]),
('generator/decoder/decoder_resblock_cmr4/group_norm_1/gamma/Momentum', [128]),
('generator/decoder/decoder_resblock_cmr4/group_norm_2/beta', [128]),
('generator/decoder/decoder_resblock_cmr4/group_norm_2/beta/Momentum', [128]),
('generator/decoder/decoder_resblock_cmr4/group_norm_2/gamma', [128]),
('generator/decoder/decoder_resblock_cmr4/group_norm_2/gamma/Momentum', [128]),
('generator/decoder/decoder_resblock_cmr5/graph_conv/weights', [128, 64]),
('generator/decoder/decoder_resblock_cmr5/graph_conv/weights/Momentum',
[128, 64]),
('generator/decoder/decoder_resblock_cmr5/graph_linear_1/weights', [288, 64]),
('generator/decoder/decoder_resblock_cmr5/graph_linear_1/weights/Momentum',
[288, 64]),
('generator/decoder/decoder_resblock_cmr5/graph_linear_2/weights', [64, 128]),
('generator/decoder/decoder_resblock_cmr5/graph_linear_2/weights/Momentum',
[64, 128]),
('generator/decoder/decoder_resblock_cmr5/graph_linear_input/weights',
[288, 128]),
('generator/decoder/decoder_resblock_cmr5/graph_linear_input/weights/Momentum',
[288, 128]),
('generator/decoder/decoder_resblock_cmr5/group_norm/beta', [288]),
('generator/decoder/decoder_resblock_cmr5/group_norm/beta/Momentum', [288]),
('generator/decoder/decoder_resblock_cmr5/group_norm/gamma', [288]),
('generator/decoder/decoder_resblock_cmr5/group_norm/gamma/Momentum', [288]),
('generator/decoder/decoder_resblock_cmr5/group_norm_1/beta', [64]),
('generator/decoder/decoder_resblock_cmr5/group_norm_1/beta/Momentum', [64]),
('generator/decoder/decoder_resblock_cmr5/group_norm_1/gamma', [64]),
('generator/decoder/decoder_resblock_cmr5/group_norm_1/gamma/Momentum', [64]),
('generator/decoder/decoder_resblock_cmr5/group_norm_2/beta', [64]),
('generator/decoder/decoder_resblock_cmr5/group_norm_2/beta/Momentum', [64]),
('generator/decoder/decoder_resblock_cmr5/group_norm_2/gamma', [64]),
('generator/decoder/decoder_resblock_cmr5/group_norm_2/gamma/Momentum', [64]),
('generator/decoder/decoder_resblock_cmr6/graph_conv/weights', [128, 64]),
('generator/decoder/decoder_resblock_cmr6/graph_conv/weights/Momentum',
[128, 64]),
('generator/decoder/decoder_resblock_cmr6/graph_linear_1/weights', [160, 64]),
('generator/decoder/decoder_resblock_cmr6/graph_linear_1/weights/Momentum',
[160, 64]),
('generator/decoder/decoder_resblock_cmr6/graph_linear_2/weights', [64, 128]),
('generator/decoder/decoder_resblock_cmr6/graph_linear_2/weights/Momentum',
[64, 128]),
('generator/decoder/decoder_resblock_cmr6/graph_linear_input/weights',
[160, 128]),
('generator/decoder/decoder_resblock_cmr6/graph_linear_input/weights/Momentum',
[160, 128]),
('generator/decoder/decoder_resblock_cmr6/group_norm/beta', [160]),
('generator/decoder/decoder_resblock_cmr6/group_norm/beta/Momentum', [160]),
('generator/decoder/decoder_resblock_cmr6/group_norm/gamma', [160]),
('generator/decoder/decoder_resblock_cmr6/group_norm/gamma/Momentum', [160]),
('generator/decoder/decoder_resblock_cmr6/group_norm_1/beta', [64]),
('generator/decoder/decoder_resblock_cmr6/group_norm_1/beta/Momentum', [64]),
('generator/decoder/decoder_resblock_cmr6/group_norm_1/gamma', [64]),
('generator/decoder/decoder_resblock_cmr6/group_norm_1/gamma/Momentum', [64]),
('generator/decoder/decoder_resblock_cmr6/group_norm_2/beta', [64]),
('generator/decoder/decoder_resblock_cmr6/group_norm_2/beta/Momentum', [64]),
('generator/decoder/decoder_resblock_cmr6/group_norm_2/gamma', [64]),
('generator/decoder/decoder_resblock_cmr6/group_norm_2/gamma/Momentum', [64]),
('generator/decoder/decoder_resblock_cmr7/graph_conv/weights', [64, 32]),
('generator/decoder/decoder_resblock_cmr7/graph_conv/weights/Momentum',
[64, 32]),
('generator/decoder/decoder_resblock_cmr7/graph_linear_1/weights', [160, 32]),
('generator/decoder/decoder_resblock_cmr7/graph_linear_1/weights/Momentum',
[160, 32]),
('generator/decoder/decoder_resblock_cmr7/graph_linear_2/weights', [32, 64]),
('generator/decoder/decoder_resblock_cmr7/graph_linear_2/weights/Momentum',
[32, 64]),
('generator/decoder/decoder_resblock_cmr7/graph_linear_input/weights',
[160, 64]),
('generator/decoder/decoder_resblock_cmr7/graph_linear_input/weights/Momentum',
[160, 64]),
('generator/decoder/decoder_resblock_cmr7/group_norm/beta', [160]),
('generator/decoder/decoder_resblock_cmr7/group_norm/beta/Momentum', [160]),
('generator/decoder/decoder_resblock_cmr7/group_norm/gamma', [160]),
('generator/decoder/decoder_resblock_cmr7/group_norm/gamma/Momentum', [160]),
('generator/decoder/decoder_resblock_cmr7/group_norm_1/beta', [32]),
('generator/decoder/decoder_resblock_cmr7/group_norm_1/beta/Momentum', [32]),
('generator/decoder/decoder_resblock_cmr7/group_norm_1/gamma', [32]),
('generator/decoder/decoder_resblock_cmr7/group_norm_1/gamma/Momentum', [32]),
('generator/decoder/decoder_resblock_cmr7/group_norm_2/beta', [32]),
('generator/decoder/decoder_resblock_cmr7/group_norm_2/beta/Momentum', [32]),
('generator/decoder/decoder_resblock_cmr7/group_norm_2/gamma', [32]),
('generator/decoder/decoder_resblock_cmr7/group_norm_2/gamma/Momentum', [32]),
('generator/decoder/decoder_resblock_cmr8/graph_conv/weights', [64, 32]),
('generator/decoder/decoder_resblock_cmr8/graph_conv/weights/Momentum',
[64, 32]),
('generator/decoder/decoder_resblock_cmr8/graph_linear_1/weights', [96, 32]),
('generator/decoder/decoder_resblock_cmr8/graph_linear_1/weights/Momentum',
[96, 32]),
('generator/decoder/decoder_resblock_cmr8/graph_linear_2/weights', [32, 64]),
('generator/decoder/decoder_resblock_cmr8/graph_linear_2/weights/Momentum',
[32, 64]),
('generator/decoder/decoder_resblock_cmr8/graph_linear_input/weights',
[96, 64]),
('generator/decoder/decoder_resblock_cmr8/graph_linear_input/weights/Momentum',
[96, 64]),
('generator/decoder/decoder_resblock_cmr8/group_norm/beta', [96]),
('generator/decoder/decoder_resblock_cmr8/group_norm/beta/Momentum', [96]),
('generator/decoder/decoder_resblock_cmr8/group_norm/gamma', [96]),
('generator/decoder/decoder_resblock_cmr8/group_norm/gamma/Momentum', [96]),
('generator/decoder/decoder_resblock_cmr8/group_norm_1/beta', [32]),
('generator/decoder/decoder_resblock_cmr8/group_norm_1/beta/Momentum', [32]),
('generator/decoder/decoder_resblock_cmr8/group_norm_1/gamma', [32]),
('generator/decoder/decoder_resblock_cmr8/group_norm_1/gamma/Momentum', [32]),
('generator/decoder/decoder_resblock_cmr8/group_norm_2/beta', [32]),
('generator/decoder/decoder_resblock_cmr8/group_norm_2/beta/Momentum', [32]),
('generator/decoder/decoder_resblock_cmr8/group_norm_2/gamma', [32]),
('generator/decoder/decoder_resblock_cmr8/group_norm_2/gamma/Momentum', [32]),
('generator/decoder/fc1/dense/bias', [55168]),
('generator/decoder/fc1/dense/bias/Momentum', [55168]),
('generator/decoder/fc1/dense/kernel', [50, 55168]),
('generator/decoder/fc1/dense/kernel/Momentum', [50, 55168]),
('generator/decoder/outputs/bias', [1, 6890, 3]),
('generator/decoder/outputs/bias/Momentum', [1, 6890, 3]),
('generator/decoder/outputs/weights', [192, 3]),
('generator/decoder/outputs/weights/Momentum', [192, 3]),
('generator/encoder/1x1-conv/weights', [512, 64]),
('generator/encoder/1x1-conv/weights/Momentum', [512, 64]),
('generator/encoder/encoder_conv1/bias', [1, 1, 64]),
('generator/encoder/encoder_conv1/bias/Momentum', [1, 1, 64]),
('generator/encoder/encoder_conv1/weights', [6, 64]),
('generator/encoder/encoder_conv1/weights/Momentum', [6, 64]),
('generator/encoder/encoder_conv2/bias', [1, 1, 64]),
('generator/encoder/encoder_conv2/bias/Momentum', [1, 1, 64]),
('generator/encoder/encoder_conv2/weights', [128, 64]),
('generator/encoder/encoder_conv2/weights/Momentum', [128, 64]),
('generator/encoder/encoder_conv3/bias', [1, 1, 128]),
('generator/encoder/encoder_conv3/bias/Momentum', [1, 1, 128]),
('generator/encoder/encoder_conv3/weights', [128, 128]),
('generator/encoder/encoder_conv3/weights/Momentum', [128, 128]),
('generator/encoder/encoder_conv4/bias', [1, 1, 128]),
('generator/encoder/encoder_conv4/bias/Momentum', [1, 1, 128]),
('generator/encoder/encoder_conv4/weights', [256, 128]),
('generator/encoder/encoder_conv4/weights/Momentum', [256, 128]),
('generator/encoder/encoder_conv5/bias', [1, 1, 256]),
('generator/encoder/encoder_conv5/bias/Momentum', [1, 1, 256]),
('generator/encoder/encoder_conv5/weights', [256, 256]),
('generator/encoder/encoder_conv5/weights/Momentum', [256, 256]),
('generator/encoder/encoder_conv6/bias', [1, 1, 256]),
('generator/encoder/encoder_conv6/bias/Momentum', [1, 1, 256]),
('generator/encoder/encoder_conv6/weights', [512, 256]),
('generator/encoder/encoder_conv6/weights/Momentum', [512, 256]),
('generator/encoder/encoder_conv7/bias', [1, 1, 512]),
('generator/encoder/encoder_conv7/bias/Momentum', [1, 1, 512]),
('generator/encoder/encoder_conv7/weights', [512, 512]),
('generator/encoder/encoder_conv7/weights/Momentum', [512, 512]),
('generator/encoder/encoder_conv8/bias', [1, 1, 512]),
('generator/encoder/encoder_conv8/bias/Momentum', [1, 1, 512]),
('generator/encoder/encoder_conv8/weights', [1024, 512]),
('generator/encoder/encoder_conv8/weights/Momentum', [1024, 512]),
('generator/encoder/fc_mean/dense/bias', [18]),
('generator/encoder/fc_mean/dense/bias/Momentum', [18]),
('generator/encoder/fc_mean/dense/kernel', [55168, 18]),
('generator/encoder/fc_mean/dense/kernel/Momentum', [55168, 18]),
('generator/encoder/fc_var/dense/bias', [18]),
('generator/encoder/fc_var/dense/bias/Momentum', [18]),
('generator/encoder/fc_var/dense/kernel', [55168, 18]),
('generator/encoder/fc_var/dense/kernel/Momentum', [55168, 18]),
('generator_1/decoder/decoder_resblock_cmr1/group_norm/beta', [544]),
('generator_1/decoder/decoder_resblock_cmr1/group_norm/gamma', [544]),
('generator_1/decoder/decoder_resblock_cmr1/group_norm_1/beta', [256]),
('generator_1/decoder/decoder_resblock_cmr1/group_norm_1/gamma', [256]),
('generator_1/decoder/decoder_resblock_cmr1/group_norm_2/beta', [256]),
('generator_1/decoder/decoder_resblock_cmr1/group_norm_2/gamma', [256]),
('generator_1/decoder/decoder_resblock_cmr2/group_norm/beta', [544]),
('generator_1/decoder/decoder_resblock_cmr2/group_norm/gamma', [544]),
('generator_1/decoder/decoder_resblock_cmr2/group_norm_1/beta', [256]),
('generator_1/decoder/decoder_resblock_cmr2/group_norm_1/gamma', [256]),
('generator_1/decoder/decoder_resblock_cmr2/group_norm_2/beta', [256]),
('generator_1/decoder/decoder_resblock_cmr2/group_norm_2/gamma', [256]),
('generator_1/decoder/decoder_resblock_cmr3/group_norm/beta', [544]),
('generator_1/decoder/decoder_resblock_cmr3/group_norm/gamma', [544]),
('generator_1/decoder/decoder_resblock_cmr3/group_norm_1/beta', [128]),
('generator_1/decoder/decoder_resblock_cmr3/group_norm_1/gamma', [128]),
('generator_1/decoder/decoder_resblock_cmr3/group_norm_2/beta', [128]),
('generator_1/decoder/decoder_resblock_cmr3/group_norm_2/gamma', [128]),
('generator_1/decoder/decoder_resblock_cmr4/group_norm/beta', [288]),
('generator_1/decoder/decoder_resblock_cmr4/group_norm/gamma', [288]),
('generator_1/decoder/decoder_resblock_cmr4/group_norm_1/beta', [128]),
('generator_1/decoder/decoder_resblock_cmr4/group_norm_1/gamma', [128]),
('generator_1/decoder/decoder_resblock_cmr4/group_norm_2/beta', [128]),
('generator_1/decoder/decoder_resblock_cmr4/group_norm_2/gamma', [128]),
('generator_1/decoder/decoder_resblock_cmr5/group_norm/beta', [288]),
('generator_1/decoder/decoder_resblock_cmr5/group_norm/gamma', [288]),
('generator_1/decoder/decoder_resblock_cmr5/group_norm_1/beta', [64]),
('generator_1/decoder/decoder_resblock_cmr5/group_norm_1/gamma', [64]),
('generator_1/decoder/decoder_resblock_cmr5/group_norm_2/beta', [64]),
('generator_1/decoder/decoder_resblock_cmr5/group_norm_2/gamma', [64]),
('generator_1/decoder/decoder_resblock_cmr6/group_norm/beta', [160]),
('generator_1/decoder/decoder_resblock_cmr6/group_norm/gamma', [160]),
('generator_1/decoder/decoder_resblock_cmr6/group_norm_1/beta', [64]),
('generator_1/decoder/decoder_resblock_cmr6/group_norm_1/gamma', [64]),
('generator_1/decoder/decoder_resblock_cmr6/group_norm_2/beta', [64]),
('generator_1/decoder/decoder_resblock_cmr6/group_norm_2/gamma', [64]),
('generator_1/decoder/decoder_resblock_cmr7/group_norm/beta', [160]),
('generator_1/decoder/decoder_resblock_cmr7/group_norm/gamma', [160]),
('generator_1/decoder/decoder_resblock_cmr7/group_norm_1/beta', [32]),
('generator_1/decoder/decoder_resblock_cmr7/group_norm_1/gamma', [32]),
('generator_1/decoder/decoder_resblock_cmr7/group_norm_2/beta', [32]),
('generator_1/decoder/decoder_resblock_cmr7/group_norm_2/gamma', [32]),
('generator_1/decoder/decoder_resblock_cmr8/group_norm/beta', [96]),
('generator_1/decoder/decoder_resblock_cmr8/group_norm/gamma', [96]),
('generator_1/decoder/decoder_resblock_cmr8/group_norm_1/beta', [32]),
('generator_1/decoder/decoder_resblock_cmr8/group_norm_1/gamma', [32]),
('generator_1/decoder/decoder_resblock_cmr8/group_norm_2/beta', [32]),
('generator_1/decoder/decoder_resblock_cmr8/group_norm_2/gamma', [32]),
('loss/total_loss/add_4/ExponentialMovingAverage', []),
('loss/total_loss/add_5/ExponentialMovingAverage', []),
('training/global_step', [])]
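A note for anyone comparing the scopes: in the listing above, the generator_1/ scope contains only group-norm parameters, and none of them has a /Momentum optimizer slot, unlike their generator/ counterparts. A small sketch to extract and compare the two scopes; tf.train.list_variables is the TF1-style checkpoint API, and the checkpoint path here is a hypothetical stand-in:

import tensorflow as tf

# Hypothetical checkpoint path; substitute the actual pretrained model directory.
ckpt = "checkpoints/CAPE-affineconv_nz64_pose32_clotype32_male"
variables = tf.train.list_variables(ckpt)

# Collect decoder group-norm variable names per top-level scope, dropping the
# optimizer's Momentum slots so the two sets are directly comparable.
def group_norm_names(prefix):
    return {name.split("/", 1)[1]
            for name, _ in variables
            if name.startswith(prefix)
            and "group_norm" in name
            and not name.endswith("Momentum")}

shared = group_norm_names("generator/") & group_norm_names("generator_1/")
print(sorted(shared))   # the duplicated group-norm parameters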

smpl_model_folder

Great work! To run the demo, we need to set the path to the SMPL model folder:

python main.py --config configs/config.yaml --mode demo --vis_demo 1 --smpl_model_folder <path to SMPL model folder>

However, I couldn't find the SMPL model files in this repo (I may have missed them). Could you provide the files or links to download them? Thanks!

Error while installing requirements on Colab

I am facing this error and am unable to run the demo because of it. I looked through the closed issues to make sure I am not reporting a duplicate; please let me know if I missed anything.

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
torchvision 0.13.1+cu113 requires torch==1.12.1, but you have torch 1.2.0 which is incompatible.
torchtext 0.13.1 requires torch==1.12.1, but you have torch 1.2.0 which is incompatible.
torchaudio 0.12.1+cu113 requires torch==1.12.1, but you have torch 1.2.0 which is incompatible.
fastai 2.7.9 requires torch<1.14,>=1.7, but you have torch 1.2.0 which is incompatible.

RuntimeError: size of dimension does not match previous size, operand 1, dim 2

I followed all the README steps to build the project.
Using any of your pre-trained models and the corresponding config files, I get the RuntimeError:
[screenshot of the traceback]
The error happens on this line of code:
[screenshot of the failing line]
where the shapes of betas and shape_disps are (1, 10) and (6890, 3, 300), respectively.
I'd like to know what causes this error. Thanks a lot!
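For anyone debugging the same crash: the reported shapes are consistent with the blend-shape einsum used in common PyTorch SMPL implementations, where the model file ships 300 shape components but only 10 betas are passed. A minimal reproduction under that assumption, with one possible fix (truncating the shape basis); this is a guess at the failing line, not a confirmed diagnosis:

import torch

betas = torch.zeros(1, 10)               # (batch, num_betas), as in the report
shape_disps = torch.zeros(6890, 3, 300)  # (vertices, xyz, shape components)

# torch.einsum('bl,mkl->bmk', [betas, shape_disps])  # dim 'l' mismatch: 10 vs 300

# Truncating the shape basis to the number of betas in use makes the dims agree:
blend = torch.einsum('bl,mkl->bmk', [betas, shape_disps[:, :, :betas.shape[1]]])
print(blend.shape)   # torch.Size([1, 6890, 3])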

Pytorch in requirements

Hi! Great work, and thank you for making the code public.
I think the pinned torch==1.2.0 is unnecessary overhead in the requirements.txt file; it would be helpful for end users if it could be removed.
TIA

No module named 'psbody'

WoW! Awesome work!
I have some trouble with your released code:

  1. ModuleNotFoundError: No module named 'psbody'. I didn't find a psbody.py anywhere:
  File "main.py", line 8, in <module>
    from psbody.mesh import Mesh
ModuleNotFoundError: No module named 'psbody'
  2. I'm pretty interested in SMPL and CAPE, so could you send me a copy of the SMPL code in TensorFlow? It would be important for me in learning CAPE.
    Thanks for sharing the great work!
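For anyone with the same import error: psbody.mesh is not a file in this repo; it is provided by the MPI-IS mesh package (commonly installed as psbody-mesh). Once installed, a trivial triangle mesh can confirm the import works; a minimal check:

import numpy as np
from psbody.mesh import Mesh

# A single-triangle mesh, just to verify that the installation imports and works.
verts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
faces = np.array([[0, 1, 2]])
mesh = Mesh(v=verts, f=faces)
print(mesh.v.shape, mesh.f.shape)   # (3, 3) (1, 3)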

Custom 3D data

Hi,
I have some 3D scans of my own clothes and want to extract z parameters for them. From a bit of research I found that my garment scans must first be registered to your data. May I get some help with this registration task? How can I register my own scans against any publicly available dataset? Sample code would be much appreciated (see the sketch below for a starting point). If that isn't possible, could you outline the steps people use for this non-rigid mesh registration task?

Thanks.
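For anyone attempting this: full non-rigid registration is beyond a short snippet, but the usual pipeline begins with a rigid Procrustes alignment of the scan to the template using a handful of corresponding landmarks, followed by non-rigid refinement. A minimal sketch of that first step; scan_landmarks and template_landmarks are hypothetical inputs:

import numpy as np

def procrustes_align(src, dst):
    """Similarity transform (scale s, rotation R, translation t) that best maps
    the rows of src onto the corresponding rows of dst (Kabsch/Umeyama)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - mu_s, dst - mu_d
    U, sig, Vt = np.linalg.svd(S.T @ D)
    R = U @ Vt
    if np.linalg.det(R) < 0:      # fix an improper rotation (reflection)
        U[:, -1] *= -1
        sig[-1] *= -1
        R = U @ Vt
    s = sig.sum() / (S ** 2).sum()
    t = mu_d - s * mu_s @ R
    return s, R, t

# Usage with hypothetical landmark correspondences picked on both meshes:
# s, R, t = procrustes_align(scan_landmarks, template_landmarks)
# scan_verts_aligned = s * scan_verts @ R + t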

Mapping 3D T-shirt to Z variable

How can we map an unknown T-shirt style to the z variable?
Basically, instead of sampling, I want to dress a person in a specific 3D garment (say, a T-shirt). Is this possible with CAPE?

Demo can save the results of obj, but it cannot display the results

Hi, I can run your program and save the resulting meshes, but it doesn't seem to be able to visualize them: the display is a black background with no rendered model. It may be a problem with MeshViewers in psbody; I tried different versions of psbody, but the problem is still not solved. More specifically, it seems the MeshViewers window is never initialized, but it is not clear how to fix it.
[screenshot of the black viewer window]

viewer = MeshViewers(shape=(1, 2), titlebar=titlebar):
[screenshot of the code]
