
pifuhd's Introduction

report Open In Colab

News:

  • [2020/06/15] Demo with Google Colab (incl. visualization) is available! Please check out #pifuhd on Twitter for many results tested by users!

This repository contains a PyTorch implementation of "Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization".

Teaser Image

This codebase provides:

  • test code
  • visualization code

See our blog post to learn more about our work at CVPR2020!

Demo on Google Colab

In case you don't have an environment with GPUs to run PIFuHD, we offer a Google Colab demo. You can also upload your own images and reconstruct the 3D geometry together with a visualization. Try our Colab demo using the following notebook:
Open In Colab

Requirements

  • Python 3
  • PyTorch (tested on 1.4.0 and 1.5.0)
  • json
  • PIL
  • skimage
  • tqdm
  • cv2

For visualization

  • trimesh with pyembree
  • PyOpenGL
  • freeglut (use sudo apt-get install freeglut3-dev for Ubuntu users)
  • ffmpeg

Note: At least 8GB of GPU memory is recommended to run the PIFuHD model.

Run the following code to install all pip packages:

pip install -r requirements.txt 

Download Pre-trained model

Run the following script to download the pretrained model. The checkpoint is saved under ./checkpoints/.

sh ./scripts/download_trained_model.sh

Quick Testing

To process images under ./sample_images, run the following code:

sh ./scripts/demo.sh

The resulting obj files and renderings will be saved in ./results. You may use MeshLab (http://www.meshlab.net/) to visualize the 3D mesh output (obj file).
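
If you prefer a quick look from Python instead, here is a minimal preview sketch using trimesh (already listed under the visualization requirements); the obj path below is just an example demo output:

import trimesh

# Load one of the demo outputs and open an interactive preview window.
# The path is an example; substitute any obj produced under ./results.
mesh = trimesh.load('./results/pifuhd_final/recon/result_test_512.obj')
mesh.show()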

Testing

  1. Run the following script to get joints for each image (the joints are used for image cropping only). Make sure you correctly set the location of the OpenPose binary. Alternatively, the Colab demo provides a more lightweight cropping-rectangle estimation that does not require OpenPose.
python apps/batch_openpose.py -d {openpose_root_path} -i {path_of_images} -o {path_of_images}
  2. Run the following script to run the reconstruction code. Make sure to set --input_path to path_of_images, --out_path to where you want to dump the results, and --ckpt_path to the checkpoint. Note that unlike PIFu, PIFuHD doesn't require a segmentation mask as input, but if you observe severe artifacts, you may try removing the background with off-the-shelf tools such as removebg. If you have {image_name}_rect.txt instead of {image_name}_keypoints.json, add the --use_rect flag. For reference, you can take a look at the Colab demo.
python -m apps.simple_test
  3. Optionally, you can also remove artifacts by keeping only the biggest connected component from the mesh reconstruction with the following script. (Warning: the script will overwrite the original obj files.) An end-to-end example of these three steps is sketched after the list.
python apps/clean_mesh.py -f {path_of_objs}
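
Putting the steps together, a hedged end-to-end sketch; all paths are placeholders, the flag names follow the descriptions above, and the output folder assumes the default layout seen in the demo:

# Step 1: keypoints for cropping (OpenPose root and image folder are placeholder paths).
python apps/batch_openpose.py -d ~/openpose -i ./my_images -o ./my_images
# Step 2: reconstruction, with the flags described above.
python -m apps.simple_test --input_path ./my_images --out_path ./results --ckpt_path ./checkpoints/pifuhd.pt
# Step 3 (optional): keep only the largest connected component; overwrites the obj files.
python apps/clean_mesh.py -f ./results/pifuhd_final/recon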

Visualization

To render results with a turntable, run the following code. The rendered animation (.mp4) will be stored under {path_of_objs}.

python -m apps.render_turntable -f {path_of_objs} -ww {rendering_width} -hh {rendering_height} 
# add -g for geometry rendering. default is normal visualization.

Citation

@inproceedings{saito2020pifuhd,
  title={PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization},
  author={Saito, Shunsuke and Simon, Tomas and Saragih, Jason and Joo, Hanbyul},
  booktitle={CVPR},
  year={2020}
}

Relevant Projects

Monocular Real-Time Volumetric Performance Capture (ECCV 2020)
Ruilong Li*, Yuliang Xiu*, Shunsuke Saito, Zeng Huang, Kyle Olszewski, Hao Li

The first real-time PIFu by accelerating reconstruction and rendering!!

PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization (ICCV 2019)
Shunsuke Saito*, Zeng Huang*, Ryota Natsume*, Shigeo Morishima, Angjoo Kanazawa, Hao Li

The original work of Pixel-Aligned Implicit Function for geometry and texture reconstruction, unifying single-view and multi-view methods.

Learning to Infer Implicit Surfaces without 3d Supervision (NeurIPS 2019)
Shichen Liu, Shunsuke Saito, Weikai Chen, Hao Li

We answer the question: "how can we learn an implicit function if we don't have 3D ground truth?"

SiCloPe: Silhouette-Based Clothed People (CVPR 2019, best paper finalist)
Ryota Natsume*, Shunsuke Saito*, Zeng Huang, Weikai Chen, Chongyang Ma, Hao Li, Shigeo Morishima

Our first attempt to reconstruct 3D clothed human body with texture from a single image!

Other Relevant Works

ARCH: Animatable Reconstruction of Clothed Humans (CVPR 2020)
Zeng Huang, Yuanlu Xu, Christoph Lassner, Hao Li, Tony Tung

Learning PIFu in canonical space for animatable avatar generation!

Robust 3D Self-portraits in Seconds (CVPR 2020)
Zhe Li, Tao Yu, Chuanyu Pan, Zerong Zheng, Yebin Liu

They extend PIFu to RGB-D and introduce "PIFusion", utilizing PIFu reconstruction for non-rigid fusion.

Deep Volumetric Video from Very Sparse Multi-view Performance Capture (ECCV 2018)
Zeng Huang, Tianye Li, Weikai Chen, Yajie Zhao, Jun Xing, Chloe LeGendre, Linjie Luo, Chongyang Ma, Hao Li

Implicit surface learning for sparse-view human performance capture!

License

CC-BY-NC 4.0. See the LICENSE file.

pifuhd's People

Contributors

10xjschad, benknight135, iamoracle, ipsavitsky, jfcostta, kinivi, preetamsantosh12, shunsukesaito


pifuhd's Issues

dataset

Could you tell me which training dataset you used, i.e. the dataset the released pifuhd.pt checkpoint was trained on? Thanks

CUDA out of memory when training with 24 GB of GPU memory

The number of network parameters in PIFuHD is 387 million, which is 25 times that of PIFu. Can you share the hardware requirements of your setup when training PIFuHD, such as GPU memory, type, and number of GPUs? The GPU memory I used for training was 24 GB with batch_size=1, but I still get CUDA out of memory.

Pixellation

PIFuHD's output seems to always be incredibly pixelated:
image
Is this caused by the marching cubes sampling?

Training source code

Congratulations on your great work!

If I want to train PIFuHD on another 3D dataset, where should I start? I couldn't find instructions on how to train the model. Perhaps they are not released yet?

Thank you very much!

How to train this?

How to train this model? Can you provide code for training? And is there a public dataset that can train this model? Thank you!

Access Voxel Data

Is there a way to access the data of every voxel from the reconstruction, for example each voxel's depth and SDF value?
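
One hedged workaround is to derive per-voxel data from the exported obj with trimesh (already a visualization dependency here) rather than from the network's raw field; the path and pitch below are example values:

import trimesh

# Hedged sketch: voxelize an exported reconstruction and query occupancy and
# signed distance per voxel. Path and pitch are illustrative example values;
# signed_distance can be slow for large grids.
mesh = trimesh.load('./results/pifuhd_final/recon/result_test_512.obj')
voxels = mesh.voxelized(pitch=0.01)       # occupancy grid around the mesh
occupancy = voxels.matrix                 # dense boolean numpy array
centers = voxels.points                   # centers of the filled voxels
sdf = trimesh.proximity.signed_distance(mesh, centers)  # positive inside the surface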

Resolution effect

Will a higher resolution increase the detail in the person's face?

tex-pifu

Hi, does the codebase provide Tex-PIFu? How can we get the mesh with RGB color?

NotImplementedError: Input Error: Only 3D, 4D and 5D input Tensors supported (got 3D) for the modes: nearest | linear | bilinear | bicubic | trilinear (got bicubic)

Hi @shunsukesaito

I could run the code on Colab with my test data and it worked fine. I am trying to run it on a GPU machine and am getting the following error.

Traceback (most recent call last):
  File "/home/hamid_farhid/anaconda3/envs/PIFuHD/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/hamid_farhid/anaconda3/envs/PIFuHD/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/hamid_farhid/PIFuHD/pifuhd/apps/simple_test.py", line 30, in <module>
    reconWrapper(cmd, args.use_rect)
  File "/home/hamid_farhid/PIFuHD/pifuhd/apps/recon.py", line 220, in reconWrapper
    recon(opt, use_rect)
  File "/home/hamid_farhid/PIFuHD/pifuhd/apps/recon.py", line 210, in recon
    gen_mesh(opt.resolution, netMR, cuda, test_data, save_path, components=opt.use_compose)
  File "/home/hamid_farhid/PIFuHD/pifuhd/apps/recon.py", line 38, in gen_mesh
    net.filter_global(image_tensor_global)
  File "/home/hamid_farhid/PIFuHD/pifuhd/lib/model/HGPIFuMRNet.py", line 83, in filter_global
    self.netG.filter(images)
  File "/home/hamid_farhid/PIFuHD/pifuhd/lib/model/HGPIFuNetwNML.py", line 134, in filter
    self.im_feat_list, self.normx = self.image_filter(images)
  File "/home/hamid_farhid/anaconda3/envs/PIFuHD/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/hamid_farhid/PIFuHD/pifuhd/lib/model/HGFilters.py", line 195, in forward
    hg = self._modules['m' + str(i)]
  File "/home/hamid_farhid/anaconda3/envs/PIFuHD/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/hamid_farhid/PIFuHD/pifuhd/lib/model/HGFilters.py", line 117, in forward
    return self._forward(self.depth, x)
  File "/home/hamid_farhid/PIFuHD/pifuhd/lib/model/HGFilters.py", line 103, in _forward
    low2 = self._forward(level - 1, low1)
  File "/home/hamid_farhid/PIFuHD/pifuhd/lib/model/HGFilters.py", line 111, in _forward
    up2 = F.interpolate(low3, scale_factor=2, mode='bicubic', align_corners=True)
  File "/home/hamid_farhid/anaconda3/envs/PIFuHD/lib/python3.7/site-packages/torch/nn/functional.py", line 2459, in interpolate
    " (got {})".format(input.dim(), mode))
NotImplementedError: Input Error: Only 3D, 4D and 5D input Tensors supported (got 4D) for the modes: nearest | linear | bilinear | trilinear (got bicubic)
freeglut (foo): failed to open display ''

Referring to this: Error, it seems we need to make some changes to the input tensor. Any thoughts on this one?
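
The supported-modes list in the message (nearest | linear | bilinear | trilinear) lacks bicubic, which suggests a PyTorch build older than the versions the README tests (1.4.0/1.5.0). Besides upgrading PyTorch, a hedged local workaround is to change the upsampling mode at the quoted line in lib/model/HGFilters.py; this may slightly change output quality:

# Hedged workaround for older PyTorch builds without 2D bicubic interpolation:
# fall back to bilinear upsampling at the line quoted in the traceback.
up2 = F.interpolate(low3, scale_factor=2, mode='bilinear', align_corners=True)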

hand-painted character

I want to generate a 3D model from a hand-painted character. Please tell me how to do it, thank you!

1024 resolution

Hey,
I couldn't find any documentation on this. Is it possible? I tried using this command:

python -m apps.simple_test --resolution=1024

But it didn't give any output obj, and the output png only used the top left corner of the input image.
Thanks

CUDA Out of memory, even after changing default

After changing the default in this line:
parser.add_argument('-r', '--resolution', type=int, default=100)
from 512 to 100, I am still getting this error:

image

My GPU is a GTX 1060 6 GB. Is there any way to make the program more economical on my GPU?
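
Since the -r/--resolution flag quoted above is exposed on the command line, one hedged option is to pass a lower value at invocation instead of editing the source. This mainly coarsens the reconstruction grid and may still not fit in 6 GB, given the 8 GB recommendation in the README:

# Hedged example: request a coarser reconstruction grid (the value is illustrative).
python -m apps.simple_test -r 256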

openpose on windows

Which file from the OpenPose binaries should I use on Windows 10? The file (./build/examples/openpose/openpose.bin) referenced in batch_openpose.py does not exist in the Windows 10 binaries for OpenPose version 1.6.0.

By the way, in the documentation the command (python apps/process_openpose.py .....) is wrong.
It has to be (python apps/batch_openpose.py .....), I guess.

How to get a bigger model

Hi, thank you for sharing.
How can I get a better, bigger model? Can you give me some advice? Thank you for your reply.

color mesh is weird

Thank you for your great work

but I am having trouble getting a color mesh.

I changed get_mesh to get_mesh_img in recon.py and ran it. The result is the same on the front and back, as in the pictures below. Is there any additional part that I need to set up?

Thank you

Untitled
Untitled (1)

Render for BUFF dataset

Hi, since both the PIFuHD paper and the earlier PIFu paper report quantitative results on the BUFF dataset, did you train a new model on BUFF for that evaluation?
I would also like to know if you can share the rendering code for the BUFF dataset's ply files, because even the evaluation on BUFF requires projection images of the 3D models.

Trying to run it locally on a non-CUDA gpu or just CPU

I'm trying to run it on a local machine instead of the Google Colab demo.

I have set up my local env with all required packages installed (and made sure to get PyTorch/torchvision builds without CUDA), since I don't have an NVIDIA GPU but rather an AMD Vega 64.

I got this error initially:

Resuming from  ./checkpoints/pifuhd.pt
Traceback (most recent call last):
  File "/home/sunbath171/miniconda3/envs/my_env/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/sunbath171/miniconda3/envs/my_env/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/mnt/c/Users/Some Guy AB/Downloads/pifuhd-master/apps/simple_test.py", line 30, in <module>
    reconWrapper(cmd, args.use_rect)
  File "/mnt/c/Users/Some Guy AB/Downloads/pifuhd-master/apps/recon.py", line 220, in reconWrapper
    recon(opt, use_rect)
  File "/mnt/c/Users/Some Guy AB/Downloads/pifuhd-master/apps/recon.py", line 148, in recon
    state_dict = torch.load(state_dict_path)
  File "/home/sunbath171/.local/lib/python3.8/site-packages/torch/serialization.py", line 593, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/home/sunbath171/.local/lib/python3.8/site-packages/torch/serialization.py", line 773, in _legacy_load
    result = unpickler.load()
  File "/home/sunbath171/.local/lib/python3.8/site-packages/torch/serialization.py", line 729, in persistent_load
    deserialized_objects[root_key] = restore_location(obj, location)
  File "/home/sunbath171/.local/lib/python3.8/site-packages/torch/serialization.py", line 178, in default_restore_location
    result = fn(storage, location)
  File "/home/sunbath171/.local/lib/python3.8/site-packages/torch/serialization.py", line 154, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "/home/sunbath171/.local/lib/python3.8/site-packages/torch/serialization.py", line 138, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
Linker failure:

Linker failure:

After that I changed this in recon.py:
state_dict = torch.load(state_dict_path, map_location=torch.device('cpu'))

But I still get this error:

Resuming from  ./checkpoints/pifuhd.pt
Warning: opt is overwritten.
test data size:  1
initialize network with normal
Traceback (most recent call last):
  File "/home/sunbath171/miniconda3/envs/my_env/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/sunbath171/miniconda3/envs/my_env/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/mnt/c/Users/Some Guy AB/Downloads/pifuhd-master/apps/simple_test.py", line 30, in <module>
    reconWrapper(cmd, args.use_rect)
  File "/mnt/c/Users/Some Guy AB/Downloads/pifuhd-master/apps/recon.py", line 220, in reconWrapper
    recon(opt, use_rect)
  File "/mnt/c/Users/Some Guy AB/Downloads/pifuhd-master/apps/recon.py", line 176, in recon
    netG = HGPIFuNetwNML(opt_netG, projection_mode).to(device=cuda)
  File "/home/sunbath171/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 443, in to
    return self._apply(convert)
  File "/home/sunbath171/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 203, in _apply
    module._apply(fn)
  File "/home/sunbath171/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 203, in _apply
    module._apply(fn)
  File "/home/sunbath171/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 225, in _apply
    param_applied = fn(param)
  File "/home/sunbath171/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 441, in convert
    return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
  File "/home/sunbath171/.local/lib/python3.8/site-packages/torch/cuda/__init__.py", line 149, in _lazy_init
    _check_driver()
  File "/home/sunbath171/.local/lib/python3.8/site-packages/torch/cuda/__init__.py", line 47, in _check_driver
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
Linker failure:

Linker failure:

Need help figuring out the rest. Ideally I would want to run it on my GPU, but as a last resort I would be okay with running it on my CPU (i5 4670K), which is much, much slower.
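
A heavily hedged sketch for forcing CPU execution, assuming recon.py selects its device via torch.device as the tracebacks suggest; the names below follow the tracebacks, but verify them against your checkout:

import torch

# Hypothetical edit in apps/recon.py: fall back to CPU when CUDA is absent.
# 'opt.gpu_id' and 'state_dict_path' are taken from the tracebacks above.
cuda = torch.device('cuda:%d' % opt.gpu_id) if torch.cuda.is_available() else torch.device('cpu')
state_dict = torch.load(state_dict_path, map_location=cuda)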

Testing with perspective camera mode failed

After downloading the provided testing model, I ran both the orthogonal and perspective camera modes with
sh ./scripts/demo.sh
I changed the camera mode by modifying apps/recon.py lines 168-170:

if use_rect:
   test_dataset = EvalDataset(opt)
else:
   test_dataset = EvalWPoseDataset(opt)

to

if use_rect:
   test_dataset = EvalDataset(opt, 'perspective')
else:
   test_dataset = EvalWPoseDataset(opt, 'perspective')

However, while the results of the orthogonal mode look great, the perspective mode fails as below:
image
I printed the value of the calib data at lib/data/EvalWPoseDataset.py, line 270:

calib_data = {'proj':projection_matrix, 'instr':intrinsic}
proj =
1 0 0 0
0 -1 0 0
0 0 1 0
0 0 0 1

instr =
1.084745762711864 0 0 0.190677966101695
0 1.084745762711864 0 0.025423728813559
0 0 1.084745762711864 0
0 0 0 1.000000000000000

Do you have any idea how to fix this problem? Is it related to this instr matrix?
Looking forward to your early reply. Thanks!

Cuda OOM in Win10

I have a 2080 card with 8 GB of memory but still ran into the following OOM problem. The env is PyTorch 1.5.1 on Windows 10. I closed pretty much all other apps when running this.

Error msg:
CUDA out of memory. Tried to allocate 782.00 MiB (GPU 0; 8.00 GiB total capacity; 4.43 GiB already allocated; 610.25 MiB free; 5.29 GiB reserved in total by PyTorch)

Can anyone shed some light on how to make more memory available?
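
If the inference path does not already do so, a hedged mitigation is to run forward passes under torch.no_grad() so activations are not retained for autograd, and to release cached blocks between images; run_reconstruction below is a hypothetical stand-in, not a function in this repo:

import torch

with torch.no_grad():            # avoid keeping activations for backprop
    run_reconstruction()         # hypothetical stand-in for the inference call
torch.cuda.empty_cache()         # return cached blocks to the driver between runs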

Visualization error

I want to render the 3D mesh into a video, but I get the following error when running the visualization command:

File "C:\Users\hamid.farhidzadeh\Anaconda3\envs\pytorch\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\hamid.farhidzadeh\Anaconda3\envs\pytorch\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\hamid.farhidzadeh\Documents\pifuhd\apps\render_turntable.py", line 69, in
renderer = ColorRender(width=args.width, height=args.height)
File "C:\Users\hamid.farhidzadeh\Documents\pifuhd\lib\render\gl\color_render.py", line 34, in init
CamRender.init(self, width, height, name, program_files=program_files)
File "C:\Users\hamid.farhidzadeh\Documents\pifuhd\lib\render\gl\cam_render.py", line 32, in init
Render.init(self, width, height, name, program_files, color_size, ms_rate)
File "C:\Users\hamid.farhidzadeh\Documents\pifuhd\lib\render\gl\render.py", line 45, in init
_glut_window = glutCreateWindow("My Render .")
File "C:\Users\hamid.farhidzadeh\Anaconda3\envs\pytorch\lib\site-packages\OpenGL\GLUT\special.py", line 73, in glutCreateWindow
return __glutCreateWindowWithExit(title, _exitfunc)
ctypes.ArgumentError: argument 1: <class 'TypeError'>: wrong type

From searching, I found out I need to modify
https://github.com/facebookresearch/pifuhd/blob/master/lib/render/gl/render.py#L45
by adding a b prefix to the string:
_glut_window = glutCreateWindow(b"My Render .")

After fixing this I got the new error below and couldn't find any solution for it. Any input is appreciated.

freeglut (foo): fgInitGL2: fghGenBuffers is NULL
Traceback (most recent call last):
  File "C:\Users\hamid.farhidzadeh\Anaconda3\envs\pytorch\lib\runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\hamid.farhidzadeh\Anaconda3\envs\pytorch\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\hamid.farhidzadeh\Documents\pifuhd\apps\render_turntable.py", line 69, in <module>
    renderer = ColorRender(width=args.width, height=args.height)
  File "C:\Users\hamid.farhidzadeh\Documents\pifuhd\lib\render\gl\color_render.py", line 34, in __init__
    CamRender.__init__(self, width, height, name, program_files=program_files)
  File "C:\Users\hamid.farhidzadeh\Documents\pifuhd\lib\render\gl\cam_render.py", line 32, in __init__
    Render.__init__(self, width, height, name, program_files, color_size, ms_rate)
  File "C:\Users\hamid.farhidzadeh\Documents\pifuhd\lib\render\gl\render.py", line 50, in __init__
    glClampColor(GL_CLAMP_READ_COLOR, GL_FALSE)
  File "C:\Users\hamid.farhidzadeh\Anaconda3\envs\pytorch\lib\site-packages\OpenGL\platform\baseplatform.py", line 405, in __call__
    raise error.NullFunctionError(
OpenGL.error.NullFunctionError: Attempt to call an undefined function glClampColor, check for bool(glClampColor) before calling
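
The exception message itself suggests the fix; a hedged patch around the failing call in lib/render/gl/render.py:

# Hedged patch: test the GL entry point before calling it, as the error
# message suggests; skips color clamping when the driver lacks glClampColor.
if bool(glClampColor):
    glClampColor(GL_CLAMP_READ_COLOR, GL_FALSE)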

Training data list

Could the authors kindly clarify which dataset(s) the released checkpoint was trained on? RenderPeople, a-xyz, BUFF, etc.; one of the above, a combination, etc.

model from multple images

Great tool for model creation! I would like to create a human model from side and front images; please explain how that is possible. Is there a Colab for it?

How to improve mesh accuracy

Hi! Thanks for your great contribution.
I tested a set of dance frames offline (using OpenPose body25), but the results are not as good as the paper and the provided samples. My test result looks like this:
result

result_00200_512

Could you kindly offer some ideas on how to improve mesh accuracy? Do I need to train on my own dataset of the specific motion? How should I deal with the noise?

Thanks.

Issues about the inputs of HGPIFuMRNet

When I read the code of HGPIFuMRNet.py, I'm not clear on the meaning of images_local ([B1, B2, C, H, W]) and images_global ([B1, C, H, W]). What is the difference between images_local and images_global, and what do B1 and B2 mean?
And for points_nml and labels_nml, are they loaded from the .obj file or calculated?
@shunsukesaito Thank you!

About RenderPeople

Hi, in the paper you mention that the model was trained on the RenderPeople dataset. Does the dataset provide real 2D photos of each subject, or did you manually create photo-like images from the scanned models? And if the latter is the case, are there any techniques you applied to deal with the discrepancy between real photos and the photo-like images?

Openpose docker guide

As an inexperienced Linux user, I prefer to use OpenPose from the Docker container.
Perhaps this instruction will be useful for someone like me.

After loading the Docker image, you need to add a new apps/batch_docker_openpose.py with the following code:

# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
import os
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-i', '--input_root', type=str, required=True)

args = parser.parse_args()
input_path = args.input_root

cmd = "docker run -v {0}/:/openpose/data/ --net=host -e DISPLAY --runtime=nvidia exsidius/openpose ./build/examples/openpose/openpose.bin --image_dir ./data --display 0 --write_json ./data --render_pose 0 --face --hand".format(input_path)
print(cmd)
os.system(cmd)

Then, run:

python apps/batch_docker_openpose.py -i {$path_to_folder}

connect pifuhd to webcam

Is there a way to connect PIFuHD to a computer camera and generate near-real-time 3D models of a human being?

The PIFuHD code takes around 5-15 minutes on Google Colab to generate one 3D model from one picture.
Can we feed the picture from a webcam instead of uploading it, and save the 3D models automatically on our local machine?

If this works, how can we run it in a loop where Google Colab takes a snap from the webcam, generates a 3D model, and downloads it automatically, continuously, until you personally stop it?
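
For a purely local setup, a hedged sketch of a capture-then-reconstruct loop with OpenCV; apps.simple_test and ./sample_images are the repo's own entry points, everything else is illustrative, and each frame still needs the cropping info (keypoints or rect file) described in the Testing section:

import os
import cv2

# Hedged local loop: save webcam frames into the sample folder and rerun the
# demo entry point. Paths and the single-frame cadence are illustrative only.
cap = cv2.VideoCapture(0)
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite('./sample_images/webcam.png', frame)
        os.system('python -m apps.simple_test')
finally:
    cap.release()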

error CuDNN

I got this error

(base) D:\pifuhd>python -m apps.simple_test
Resuming from ./checkpoints/pifuhd.pt
Warning: opt is overwritten.
test data size: 1
initialize network with normal
initialize network with normal
generate mesh (test) ...
0%| | 0/1 [00:00<?, ?it/s]./results/pifuhd_final/recon/result_test_512.obj
0%| | 0/1 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "D:\Mini-tutorial\miniconda\lib\runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "D:\Mini-tutorial\miniconda\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "D:\pifuhd\apps\simple_test.py", line 30, in <module>
    reconWrapper(cmd, args.use_rect)
  File "D:\pifuhd\apps\recon.py", line 220, in reconWrapper
    recon(opt, use_rect)
  File "D:\pifuhd\apps\recon.py", line 210, in recon
    gen_mesh(opt.resolution, netMR, cuda, test_data, save_path, components=opt.use_compose)
  File "D:\pifuhd\apps\recon.py", line 38, in gen_mesh
    net.filter_global(image_tensor_global)
  File "D:\pifuhd\lib\model\HGPIFuMRNet.py", line 83, in filter_global
    self.netG.filter(images)
  File "D:\pifuhd\lib\model\HGPIFuNetwNML.py", line 122, in filter
    self.nmlF = self.netF.forward(images).detach()
  File "D:\pifuhd\lib\networks.py", line 163, in forward
    return self.model(input)
  File "D:\Mini-tutorial\miniconda\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "D:\Mini-tutorial\miniconda\lib\site-packages\torch\nn\modules\container.py", line 117, in forward
    input = module(input)
  File "D:\Mini-tutorial\miniconda\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "D:\Mini-tutorial\miniconda\lib\site-packages\torch\nn\modules\conv.py", line 419, in forward
    return self._conv_forward(input, self.weight)
  File "D:\Mini-tutorial\miniconda\lib\site-packages\torch\nn\modules\conv.py", line 415, in _conv_forward
    return F.conv2d(input, weight, self.bias, self.stride,
RuntimeError: cuDNN error: CUDNN_STATUS_ALLOC_FAILED

(base) D:\pifuhd>python -m apps.render_turntable -f ./results/pifuhd_final/recon -ww 512 -hh 512
Traceback (most recent call last):
  File "D:\Mini-tutorial\miniconda\lib\runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "D:\Mini-tutorial\miniconda\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "D:\pifuhd\apps\render_turntable.py", line 69, in <module>
    renderer = ColorRender(width=args.width, height=args.height)
  File "D:\pifuhd\lib\render\gl\color_render.py", line 34, in __init__
    CamRender.__init__(self, width, height, name, program_files=program_files)
  File "D:\pifuhd\lib\render\gl\cam_render.py", line 32, in __init__
    Render.__init__(self, width, height, name, program_files, color_size, ms_rate)
  File "D:\pifuhd\lib\render\gl\render.py", line 41, in __init__
    glutInit()
  File "D:\Mini-tutorial\miniconda\lib\site-packages\OpenGL\GLUT\special.py", line 333, in glutInit
    _base_glutInit( ctypes.byref(count), holder )
  File "D:\Mini-tutorial\miniconda\lib\site-packages\OpenGL\platform\baseplatform.py", line 423, in __call__
    raise error.NullFunctionError(
OpenGL.error.NullFunctionError: Attempt to call an undefined function glutInit, check for bool(glutInit) before calling

Not compiled with GPU support

from lib.colab_util import generate_video_from_obj, set_renderer, video
renderer = set_renderer()
generate_video_from_obj(obj_path, out_img_path, video_path, renderer)

# we cannot play a mp4 video generated by cv2
!ffmpeg -i $video_path -vcodec libx264 $video_display_path -y -loglevel quiet
video(video_display_path)

result:

0%
0/90 [00:00<?, ?it/s]

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-35-22614b7ef849> in <module>
      2 
      3 renderer = set_renderer()
----> 4 generate_video_from_obj(obj_path, out_img_path, video_path, renderer)
      5 
      6 # we cannot play a mp4 video generated by cv2

~/Downloads/3Dmodel/pifuhd/lib/colab_util.py in generate_video_from_obj(obj_path, image_path, video_path, renderer)
    127     # create VideoWriter
    128     # print(':)')
--> 129         fourcc = cv2.VideoWriter_fourcc(*'MP4V')
    130     out = cv2.VideoWriter(video_path, fourcc, 20.0, (1024,512))
    131 

~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    548             result = self._slow_forward(*input, **kwargs)
    549         else:
--> 550             result = self.forward(*input, **kwargs)
    551         for hook in self._forward_hooks.values():
    552             hook_result = hook(self, input, result)

~/anaconda3/lib/python3.7/site-packages/pytorch3d/renderer/mesh/renderer.py in forward(self, meshes_world, **kwargs)
     49         the range for the corresponding face.
     50         """
---> 51         fragments = self.rasterizer(meshes_world, **kwargs)
     52         raster_settings = kwargs.get("raster_settings", self.rasterizer.raster_settings)
     53         if raster_settings.blur_radius > 0.0:

~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    548             result = self._slow_forward(*input, **kwargs)
    549         else:
--> 550             result = self.forward(*input, **kwargs)
    551         for hook in self._forward_hooks.values():
    552             hook_result = hook(self, input, result)

~/anaconda3/lib/python3.7/site-packages/pytorch3d/renderer/mesh/rasterizer.py in forward(self, meshes_world, **kwargs)
    126             max_faces_per_bin=raster_settings.max_faces_per_bin,
    127             perspective_correct=raster_settings.perspective_correct,
--> 128             cull_backfaces=raster_settings.cull_backfaces,
    129         )
    130         return Fragments(

~/anaconda3/lib/python3.7/site-packages/pytorch3d/renderer/mesh/rasterize_meshes.py in rasterize_meshes(meshes, image_size, blur_radius, faces_per_pixel, bin_size, max_faces_per_bin, perspective_correct, cull_backfaces)
    143         max_faces_per_bin,
    144         perspective_correct,
--> 145         cull_backfaces,
    146     )
    147 

~/anaconda3/lib/python3.7/site-packages/pytorch3d/renderer/mesh/rasterize_meshes.py in forward(ctx, face_verts, mesh_to_face_first_idx, num_faces_per_mesh, image_size, blur_radius, faces_per_pixel, bin_size, max_faces_per_bin, perspective_correct, cull_backfaces)
    195             max_faces_per_bin,
    196             perspective_correct,
--> 197             cull_backfaces,
    198         )
    199         ctx.save_for_backward(face_verts, pix_to_face)

RuntimeError: Not compiled with GPU support

but when I try torch.cuda.is_available(), it returns True.
Can anybody help? Thanks

No module named apps!

Hello, when I execute the command
sh ./scripts/demo.sh
the following error appears:
/usr/bin/python: No module named apps

I reviewed all the documentation, but it did not work.
Any idea how to solve it?
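
The apps package lives inside the repository, so python -m apps.simple_test (which demo.sh invokes) only resolves when Python is launched from the repository root; a hedged check:

# Launch from the repository root so the 'apps' package is importable.
cd pifuhd
sh ./scripts/demo.sh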
