layumi / person-reid-3d
TNNLS'22 :statue_of_liberty: Parameter-Efficient Person Re-identification in the 3D Space :statue_of_liberty:
Home Page: https://arxiv.org/abs/2006.04569
License: MIT License
Hello, I have been reading your paper recently and would like to follow along with a demo. According to your README.md there should be a pre-trained model, but I cannot find it in the repo?
As the title says: I only have a 30-series GPU, which apparently requires CUDA 11.1 or later, and CUDA 11.1 is only supported from PyTorch 1.7.1 onwards. The homepage lists pytorch=1.4, so I would like to ask whether this code can also run with PyTorch 1.8. Many thanks.
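For what it's worth, a quick way to check whether the installed PyTorch build matches a 30-series card is to inspect the versions at runtime. A minimal sketch (`torch_cuda_report` is a hypothetical helper name, not part of this repo):

```python
import importlib.util

def torch_cuda_report():
    """Return (torch_version, cuda_version, gpu_available) if PyTorch
    is installed, else None. Ampere (30-series) GPUs need a CUDA >= 11.1
    build, which first shipped with the PyTorch 1.7.1 binaries, so the
    reported CUDA version should be 11.1 or newer on such cards."""
    if importlib.util.find_spec("torch") is None:
        return None
    import torch
    return (torch.__version__, torch.version.cuda,
            torch.cuda.is_available())

print(torch_cuda_report())
```

If the reported CUDA version is older than 11.1 (or the GPU is not visible), the binaries rather than the repo code are likely the problem.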
Thanks for your excellent work. When I generate the 3D dataset with your code, I get the following output and then the program stops. Have you encountered this case?
Restoring checkpoint /home/yinjunhui/per-id/3d/hmr/src/models/model.ckpt-667589..
WARNING:tensorflow:From /home/yinjunhui/anaconda3/envs/hmr/lib/python2.7/site-packages/tensorflow/python/training/saver.py:1276: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
(OG) francisjiang@francisjiang:~/desktop/person-reid-3d$ python train_M.py --batch-size 30 --name Market_Efficient_ALL_2SDDense_b30_lr6_flip_slim0.5_warm10_scale_e0_d7+bg_adam_init768_clusterXYZRGB_e1000_id2_bn_k9_conv2_balance --id_skip 2 --slim 0.5 --flip --scale --lrRate 6e-4 --gpu_ids 0 --warm_epoch 10 --erase 0 --droprate 0.7 --use_dense --bg 1 --adam --init 768 --cluster xyzrgb --train_all --num-epoch 1000 --feature_dims 48,96,96,192,192,384,384 --efficient --k 9 --num_conv 2 --dataset 2DMarket --balance --gem --norm_layer bn2 --circle --amsgrad --gamma 64
/home/francisjiang/anaconda3/envs/OG/lib/python3.7/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: libtorch_cpu.so: cannot open shared object file: No such file or directory
warn(f"Failed to load image Python extension: {e}")
Traceback (most recent call last):
File "train_M.py", line 6, in <module>
from market3d import Market3D
File "/home/francisjiang/desktop/person-reid-3d/market3d.py", line 1, in <module>
from torchvision import datasets
File "/home/francisjiang/anaconda3/envs/OG/lib/python3.7/site-packages/torchvision/__init__.py", line 7, in <module>
from torchvision import models
File "/home/francisjiang/anaconda3/envs/OG/lib/python3.7/site-packages/torchvision/models/__init__.py", line 2, in <module>
from .convnext import *
File "/home/francisjiang/anaconda3/envs/OG/lib/python3.7/site-packages/torchvision/models/convnext.py", line 8, in <module>
from ..ops.misc import Conv2dNormActivation, Permute
File "/home/francisjiang/anaconda3/envs/OG/lib/python3.7/site-packages/torchvision/ops/__init__.py", line 2, in <module>
from .boxes import (
File "/home/francisjiang/anaconda3/envs/OG/lib/python3.7/site-packages/torchvision/ops/boxes.py", line 78, in <module>
@torch.jit._script_if_tracing
AttributeError: module 'torch.jit' has no attribute '_script_if_tracing'
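This AttributeError usually indicates a torch/torchvision version mismatch: the installed torchvision (with `convnext.py`, it is a 0.12-era build) expects a newer torch than 1.4, and `torch.jit._script_if_tracing` appears to have been introduced around torch 1.6. A sketch of a lookup over a few known-compatible release pairs (an illustrative subset; verify against the torchvision README compatibility table):

```python
# A few torch/torchvision release pairs (assumption: taken from the
# torchvision README compatibility table; an illustrative subset,
# not an exhaustive list).
COMPATIBLE = {
    "1.4.0": "0.5.0",
    "1.7.1": "0.8.2",
    "1.8.0": "0.9.0",
}

def expected_torchvision(torch_version):
    """Return the matching torchvision release for a torch release,
    or None if it is not in the table above."""
    return COMPATIBLE.get(torch_version)
```

Reinstalling the torchvision release that matches the pinned torch (e.g. `pip install torchvision==0.5.0` for torch 1.4) should make the import succeed.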
Hello, Dr. Zheng. Thank you very much for your excellent work. I would like to know how the 8192 points for each human body are set. Where can this part of the code be found in the HMR project? Thank you very much for your reply.
Really nice work!
Could you please merge your model with a person detection model to build people-tracking software like DeepSort (https://github.com/Qidian213/deep_sort_yolov3)?
Thank you.
output log:
ERROR: Failed building wheel for pointnet2-ops
Running setup.py clean for pointnet2-ops
Failed to build pointnet2-ops
In 'train_M.py', line 9 has 'from model_efficient2 import ModelE_dense2', but there is no such file in the repo. Could you upload it?
Could you share the generated 3D data for the Duke and MSMT17 datasets? Thank you.
Hello, I ran into a problem when I tried to run train_M.py without any modification to the code, as shown below:
Using backend: pytorch ModelE_dense( (nng): KNNGraphE() (conv): ModuleList( (0): Conv2d(6, 64, kernel_size=(1, 1), stride=(1, 1)) (1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1)) (2): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1)) (3): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1)) ) (conv_s1): ModuleList() (conv_s2): ModuleList() (bn): ModuleList( (0): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (1): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (sa): ModuleList( (0): PointnetSAModuleMSG( (groupers): ModuleList( (0): QueryAndGroup() (1): QueryAndGroup() (2): QueryAndGroup() ) (mlps): ModuleList( (0): Sequential( (0): Conv2d(67, 32, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): SE( (se_reduce): Conv2d(32, 1, kernel_size=(1, 1), stride=(1, 1)) (se_expand): Conv2d(1, 32, kernel_size=(1, 1), stride=(1, 1)) (swish): MemoryEfficientSwish() ) (3): Conv2d(32, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace=True) ) (1): Sequential( (0): Conv2d(67, 32, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): SE( (se_reduce): Conv2d(32, 1, kernel_size=(1, 1), stride=(1, 1)) (se_expand): Conv2d(1, 32, kernel_size=(1, 1), stride=(1, 1)) (swish): MemoryEfficientSwish() ) (3): Conv2d(32, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace=True) ) (2): Sequential( (0): Conv2d(67, 32, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(32, 
eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): SE( (se_reduce): Conv2d(32, 1, kernel_size=(1, 1), stride=(1, 1)) (se_expand): Conv2d(1, 32, kernel_size=(1, 1), stride=(1, 1)) (swish): MemoryEfficientSwish() ) (3): Conv2d(32, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace=True) ) ) ) (1): PointnetSAModuleMSG( (groupers): ModuleList( (0): QueryAndGroup() (1): QueryAndGroup() (2): QueryAndGroup() ) (mlps): ModuleList( (0): Sequential( (0): Conv2d(131, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): SE( (se_reduce): Conv2d(64, 2, kernel_size=(1, 1), stride=(1, 1)) (se_expand): Conv2d(2, 64, kernel_size=(1, 1), stride=(1, 1)) (swish): MemoryEfficientSwish() ) (3): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace=True) ) (1): Sequential( (0): Conv2d(131, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): SE( (se_reduce): Conv2d(64, 2, kernel_size=(1, 1), stride=(1, 1)) (se_expand): Conv2d(2, 64, kernel_size=(1, 1), stride=(1, 1)) (swish): MemoryEfficientSwish() ) (3): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace=True) ) (2): Sequential( (0): Conv2d(131, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): SE( (se_reduce): Conv2d(64, 2, kernel_size=(1, 1), stride=(1, 1)) (se_expand): Conv2d(2, 64, kernel_size=(1, 1), stride=(1, 1)) (swish): MemoryEfficientSwish() ) (3): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (4): 
BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace=True) ) ) ) (2): PointnetSAModuleMSG( (groupers): ModuleList( (0): QueryAndGroup() (1): QueryAndGroup() (2): QueryAndGroup() ) (mlps): ModuleList( (0): Sequential( (0): Conv2d(259, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): SE( (se_reduce): Conv2d(128, 5, kernel_size=(1, 1), stride=(1, 1)) (se_expand): Conv2d(5, 128, kernel_size=(1, 1), stride=(1, 1)) (swish): MemoryEfficientSwish() ) (3): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace=True) ) (1): Sequential( (0): Conv2d(259, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): SE( (se_reduce): Conv2d(128, 5, kernel_size=(1, 1), stride=(1, 1)) (se_expand): Conv2d(5, 128, kernel_size=(1, 1), stride=(1, 1)) (swish): MemoryEfficientSwish() ) (3): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace=True) ) (2): Sequential( (0): Conv2d(259, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): SE( (se_reduce): Conv2d(128, 5, kernel_size=(1, 1), stride=(1, 1)) (se_expand): Conv2d(5, 128, kernel_size=(1, 1), stride=(1, 1)) (swish): MemoryEfficientSwish() ) (3): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace=True) ) ) ) (3): PointnetSAModuleMSG( (groupers): ModuleList( (0): QueryAndGroup() (1): QueryAndGroup() (2): QueryAndGroup() ) (mlps): ModuleList( (0): Sequential( (0): Conv2d(515, 256, 
kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): SE( (se_reduce): Conv2d(256, 10, kernel_size=(1, 1), stride=(1, 1)) (se_expand): Conv2d(10, 256, kernel_size=(1, 1), stride=(1, 1)) (swish): MemoryEfficientSwish() ) (3): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace=True) ) (1): Sequential( (0): Conv2d(515, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): SE( (se_reduce): Conv2d(256, 10, kernel_size=(1, 1), stride=(1, 1)) (se_expand): Conv2d(10, 256, kernel_size=(1, 1), stride=(1, 1)) (swish): MemoryEfficientSwish() ) (3): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace=True) ) (2): Sequential( (0): Conv2d(515, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): SE( (se_reduce): Conv2d(256, 10, kernel_size=(1, 1), stride=(1, 1)) (se_expand): Conv2d(10, 256, kernel_size=(1, 1), stride=(1, 1)) (swish): MemoryEfficientSwish() ) (3): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (4): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (5): ReLU(inplace=True) ) ) ) ) (embs): ModuleList( (0): Linear(in_features=1024, out_features=512, bias=False) ) (bn_embs): ModuleList( (0): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (dropouts): ModuleList( (0): Dropout(p=0.7, inplace=True) ) (partpool): AdaptiveAvgPool1d(output_size=1) (proj_output): Linear(in_features=512, out_features=751, bias=True) ) torch.Size([1, 4096, 6]) Traceback (most recent call last): File 
"/home/uisee/anaconda3/envs/OG/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/uisee/anaconda3/envs/OG/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/home/uisee/.vscode-server/extensions/ms-python.python-2021.10.1365161279/pythonFiles/lib/python/debugpy/__main__.py", line 45, in <module> cli.main() File "/home/uisee/.vscode-server/extensions/ms-python.python-2021.10.1365161279/pythonFiles/lib/python/debugpy/../debugpy/server/cli.py", line 444, in main run() File "/home/uisee/.vscode-server/extensions/ms-python.python-2021.10.1365161279/pythonFiles/lib/python/debugpy/../debugpy/server/cli.py", line 285, in run_file runpy.run_path(target_as_str, run_name=compat.force_str("__main__")) File "/home/uisee/anaconda3/envs/OG/lib/python3.7/runpy.py", line 263, in run_path pkg_name=pkg_name, script_name=fname) File "/home/uisee/anaconda3/envs/OG/lib/python3.7/runpy.py", line 96, in _run_module_code mod_name, mod_spec, pkg_name, script_name) File "/home/uisee/anaconda3/envs/OG/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/home/uisee/yongtao/proj/person-reid-3d/train_M.py", line 362, in <module> macs, params = get_model_complexity_info(model.cuda(), batch0.cuda(), ((round(6890*opt.slim), 3) ), as_strings=True, print_per_layer_stat=False, verbose=True) TypeError: get_model_complexity_info() got multiple values for argument 'print_per_layer_stat'
I think there is something wrong with the input to get_model_complexity_info(). Do you know how to fix it?
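The TypeError is consistent with an extra positional argument: both `model.cuda()` and `batch0.cuda()` are passed before the input-size tuple, so the tuple lands in the slot of `print_per_layer_stat`, which is then also supplied by keyword. A minimal reproduction with a hypothetical stand-in function (not the real ptflops implementation; it only mirrors the assumed parameter order `model, input_res, print_per_layer_stat, as_strings`):

```python
def get_info(model, input_res, print_per_layer_stat=True, as_strings=True):
    # Stand-in mirroring the assumed ptflops parameter order:
    # model first, then ONE input-size tuple.
    return model, input_res, print_per_layer_stat, as_strings

# Buggy call: both the batch tensor and the size tuple are passed
# positionally, so the tuple lands in print_per_layer_stat's slot,
# which is then also given by keyword -> TypeError.
raised = False
try:
    get_info("model", "batch0", (2048, 3),
             as_strings=True, print_per_layer_stat=False)
except TypeError as exc:
    raised = "multiple values" in str(exc)

# Likely fix (assumption): drop the extra batch argument and pass
# only the input-size tuple.
ok = get_info("model", (2048, 3),
              as_strings=True, print_per_layer_stat=False)
```

If this diagnosis is right, removing `batch0.cuda()` from the call in train_M.py should clear the error, though the intended call signature should be checked against the ptflops documentation.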
cv2.error: OpenCV(4.5.4-dev) error: (-5:Bad argument) in function 'circle'
Overload resolution failed:
- Scalar value for argument 'color' is not numeric
- Scalar value for argument 'color' is not numeric
I ran into this error. Could you please tell me how to solve it? Many thanks for your help.
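For reference, this overload-resolution error commonly appears when the `color` passed to cv2.circle contains numpy scalar types rather than plain Python numbers. A hedged helper sketch (`as_cv_color` is a hypothetical name; shown without cv2 so it stays self-contained):

```python
def as_cv_color(color):
    """Coerce a BGR color to plain Python ints. OpenCV drawing
    functions (e.g. cv2.circle) can reject numpy scalar types for
    the 'color' argument, which is one common cause of the
    'Scalar value for argument color is not numeric' error."""
    return tuple(int(c) for c in color)

# Usage sketch: cv2.circle(img, center, radius, as_cv_color(color), 2)
```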
Hello, I ran the code at https://github.com/layumi/hmr.
When generating the visualized 3D data,
it reports: tensorflow.python.framework.errors_impl.InternalError: <exception str() failed>
Thank you for your help; I have been stuck on this for three or four days.
When I test the trained model using 'test_M.py', it requires 'evaluate_gpu.py' at the end, but this Python file is not included in the repository. Where can I get it?
Hi, I did everything according to the instructions, trained the model, and got the results. Everything is OK. Could you please tell me how I can test my own pictures, or pictures of people taken from video? Is that possible, and how?
Hello, sorry to bother you. Is pointnet2_ops_lib an environment package? I was able to install it on Ubuntu, but the installation failed on Windows. Do I need to download something extra on Windows? Is there a Windows build of this package? Thanks for your answer.
Hi,
Thank you for sharing your work.
I ran into an issue running train_M.sh on the supplied generated 3D data of the Market-1501 dataset:
Number of training parameters: 2.34 M
Epoch #0 Validating
/ichec/home/users/niallomahony/.conda/envs/tfgpu/lib/python3.6/site-packages/numpy/core/fromnumeric.py:3335: RuntimeWarning: Mean of empty slice.
out=out, **kwargs)
/ichec/home/users/niallomahony/.conda/envs/tfgpu/lib/python3.6/site-packages/numpy/core/_methods.py:154: RuntimeWarning: invalid value encountered in true_divide
ret, rcount, out=ret, casting='unsafe', subok=False)
/ichec/home/users/niallomahony/.conda/envs/tfgpu/lib/python3.6/site-packages/numpy/core/fromnumeric.py:3335: RuntimeWarning: Mean of empty slice.
out=out, **kwargs)
/ichec/home/users/niallomahony/.conda/envs/tfgpu/lib/python3.6/site-packages/numpy/core/_methods.py:154: RuntimeWarning: invalid value encountered in true_divide
ret, rcount, out=ret, casting='unsafe', subok=False)
0%| | 0/1617 [00:00<?, ?it/s]
Traceback (most recent call last):
File "train_M.py", line 298, in <module>
train(model, optimizer, scheduler, train_loader, dev, epoch)
File "train_M.py", line 129, in train
logits = model(xyz.detach(), rgb.detach(), istrain=True)
File "/ichec/home/users/niallomahony/.conda/envs/tfgpu/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/ichec/home/users/niallomahony/.conda/envs/tfgpu/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
return self.module(*inputs[0], **kwargs[0])
File "/ichec/home/users/niallomahony/.conda/envs/tfgpu/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/ichec/work/iecom001b/person-reid-3d/model.py", line 171, in forward
g = self.nng(xyz, istrain=istrain and self.graph_jitter)
File "/ichec/home/users/niallomahony/.conda/envs/tfgpu/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/ichec/work/iecom001b/person-reid-3d/KNNGraphE.py", line 102, in forward
return knn_graphE(x, self.k, istrain)
File "/ichec/work/iecom001b/person-reid-3d/KNNGraphE.py", line 51, in knn_graphE
k_indices = F.argtopk(dist, k, 2, descending=False)
File "/ichec/home/users/niallomahony/.conda/envs/tfgpu/lib/python3.6/site-packages/dgl/backend/pytorch/tensor.py", line 132, in argtopk
return th.topk(input, k, dim, largest=descending)[1]
RuntimeError: invalid argument 5: k not in range for dimension at /opt/conda/conda-bld/pytorch_1579027003190/work/aten/src/THC/generic/THCTensorTopK.cu:23
I followed all the installation steps but had to use CUDA 10.0 (and cudatoolkit 10.0 and dgl-cu100), as that is what is available on the HPC.
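The RuntimeError means `torch.topk` was asked for more neighbors than the point set has along that dimension, e.g. a sampled cloud with fewer than k points. The usual workaround is to clamp k before the call; a pure-Python analogue of that fix (`safe_topk` is a hypothetical helper, lists standing in for tensors):

```python
def safe_topk(values, k):
    """Take the k smallest distances, clamping k to the number of
    available values first. This mirrors the usual fix for torch.topk's
    'k not in range for dimension' error: with tensors, clamp
    k = min(k, x.size(dim)) before calling topk."""
    k = min(k, len(values))
    return sorted(values)[:k]
```

Ascending order matches the kNN use here (the source calls argtopk with descending=False, i.e. nearest neighbors first).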
Hello. I downloaded the data that you provided on Google Drive.
But when I opened the 3DMarket+bg obj file, the result came out like the above image.
I opened it with MeshLab on Windows 10.
I am not sure what the problem is, since I downloaded it from the Prepare Data section.
Could you give me some help?
Thanks in advance.