peizhuoli / neural-blend-shapes
An end-to-end library for automatic character rigging, skinning, and blend shapes generation, as well as a visualization tool [SIGGRAPH 2021]
License: Other
Thanks @PeizhuoLi for this amazing work.
I tried this repo and it works fine with the sample character provided here.
But when I try a custom normalized humanoid character mesh in T-pose, it produces
weird results.
I first tried:
python demo.py --obj_path samples/model1.obj --result_path samples/output/ --normalize=1 --animated_bvh=1 --obj_output=0
Then I exported the .fbx animation using the provided script. I also checked skeleton.bvh
and it does not include the correct animation.
Then I tried without --normalize=1,
since my mesh already looks normalized:
python demo.py --obj_path samples/model1.obj --result_path samples/output/ --animated_bvh=1 --obj_output=0
In this case skeleton.bvh
includes the correct animation except for a slightly bent leg.
But the exported .fbx file is not correct, as its hand is extended weirdly.
Here is the reference output:
And here is the custom mesh:
https://drive.google.com/file/d/1550yYhpFnil3XixcEtoKrSVUJYdGH41l/view?usp=share_link
Please suggest some way out.
Thanks.
sudo apt install -y -q xvfb unzip wget flatpak
sudo flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
sudo flatpak install flathub org.blender.Blender -y
sudo flatpak override org.blender.Blender --talk-name=org.freedesktop.Flatpak
DRI_PRIME=0 xvfb-run --auto-servernum flatpak run org.blender.Blender --background --python `pwd`/run.py -- `pwd`
Tested on Ubuntu 18.04
Hello, I am a student at USTB. Thanks for this exciting project. I want to apply your project to Maya, but I can't understand the rig. I made an FBX file and ran this code to get blendshape values, but with the wrong rig the results are a bit bad. Have you made an FBX file?
Hello, I was trying to understand the forward kinematics implementation and was looking at this blog post for clarification: https://www.alecjacobson.com/weblog/?p=3763. The implementation doesn't seem to match up for non-root bones.
```python
for i, p in enumerate(self.parents):
    if i != 0:
        rots[:, i] = torch.matmul(rots[:, p], rots[:, i])
        pos[:, i] = torch.matmul(rots[:, p], offsets[:, i].unsqueeze(-1)).squeeze(-1) + pos[:, p]
        rest_pos[:, i] = rest_pos[:, p] + offsets[:, i]
    res[:, i, :3, :3] = rots[:, i]
    res[:, i, :, 3] = torch.matmul(rots[:, i], -rest_pos[:, i].unsqueeze(-1)).squeeze(-1) + pos[:, i]
```
Suppose I have two bones located at b1 and b2, with m1 and m2 as their local transformation matrices, and bone 1 is bone 2's parent. According to the implementation in this codebase, the global translation for bone 2 is m1 * m2 * (-b1 - b2) + (m1 * b2 + b1).
However, according to the formula in the blog post, it should be m1 * m2 * (-b2) + m1 * (b1 + b2) + b1.
I'm wondering if you know why there is this difference in the implementation?
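For anyone comparing the two, here is a minimal sketch (assuming m1, m2 are rotation matrices and b1, b2 the offsets as used above) that simply transcribes both expressions from this question and evaluates them numerically; it makes no claim about which convention is the intended one:

```python
import torch

def random_rotation() -> torch.Tensor:
    # QR of a random matrix gives an orthonormal 3x3 matrix (rotation up to reflection).
    q, _ = torch.linalg.qr(torch.randn(3, 3))
    return q

m1, m2 = random_rotation(), random_rotation()
b1, b2 = torch.randn(3), torch.randn(3)

# Expression derived from the code in this repository (as stated in the question).
repo_expr = m1 @ m2 @ (-(b1 + b2)) + (m1 @ b2 + b1)
# Expression from the blog post (as quoted in the question).
blog_expr = m1 @ m2 @ (-b2) + m1 @ (b1 + b2) + b1

print(repo_expr)
print(blog_expr)
print("equal:", torch.allclose(repo_expr, blog_expr))
```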
Hello, I encountered this problem when running demo.py. I want to know the structure of args.txt. Thank you.
Great work!
When I try to visualize the animation via Blender following this, an error occurred.
Saved: '../demo/images/0000.png'
Time: 00:00.28 (Saving: 00:00.25)
Traceback (most recent call last):
File "/home/yusun/Documents/GitHub/neural-blend-shapes/blender_scripts/render_mesh.py", line 96, in <module>
batch_render(args.obj_path)
File "/home/yusun/Documents/GitHub/neural-blend-shapes/blender_scripts/render_mesh.py", line 56, in batch_render
render_obj(os.path.join(dir, file))
File "/home/yusun/Documents/GitHub/neural-blend-shapes/blender_scripts/render_mesh.py", line 42, in render_obj
obj.select_set(True)
AttributeError: 'Object' object has no attribute 'select_set'
I am using Blender v2.91a. Any suggestion?
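For reference, a hedged workaround sketch, assuming the error means an older Blender Python API is actually in use (Object.select_set only exists from the 2.80 API onwards; earlier builds use the select attribute). The select_object helper is a hypothetical name, not part of render_mesh.py:

```python
import bpy

def select_object(obj: bpy.types.Object, state: bool = True) -> None:
    """Select an object in a way that works across Blender Python API versions."""
    if hasattr(obj, "select_set"):   # Blender 2.80+ API
        obj.select_set(state)
    else:                            # Blender 2.79 and earlier
        obj.select = state
```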
Is there a recommended way to convert an arbitrarily posed mesh to a T-posed mesh?
Side question: since you are able to calculate the skinning weights of a mesh, why isn't it possible to run neural blend shapes on an arbitrarily posed mesh?
Hi! When I reproduced this great work and trained from scratch, the skeleton produced some weird results, as shown in the picture. The left is the result I reproduced, and the right is the result produced by the pre-trained model. I think it may be because the end effectors were set wrongly, but I didn't change any code related to them. Could you please help me with this problem? Looking forward to your reply!
Hi Peizhuo:
I used the code to evaluate the RigNet test set and found the following issue, which I want to confirm:
neural-blend-shapes/models/measurement.py
Line 83 in 6d10830
should be
res[i][j] = segment2segment(a[i][0], a[i][0] + a[i][1], b[j][0], b[j][0] + b[j][1])
i.e., b[i] should be b[j]. Also, the evaluation result does not match the paper.
For training models other than humans, I think a custom SMPL is needed which can generate deformed shapes of your desired model (which is very difficult to find). And once I have that I will need to define the bone hierarchy.
Is it correct or did I miss anything?
Hello! After running the command python demo.py --animated_bvh=0 --obj_output=0, then cd blender_scripts and blender -b -P nbs_fbx_output.py -- --input ../demo --output ../demo/output.fbx,
I got a T-pose FBX file. But when I import it into Unity, I find that the coordinate frames of almost all bones do not coincide with Unity's, whereas the initial bone coordinate frames of the standard SMPL are all parallel to the Unity coordinate system. As a result, I cannot drive the skeleton rigged with neural-blend-shapes using SMPL pose parameters. Is there a way to align all bone coordinate frames with Unity without changing the T-pose?
This is the standard SMPL:
This is the one rigged with neural-blend-shapes:
These are the launch arguments:
I am trying to run this on custom meshes and I am getting this weird artifact. I am using this model from SketchFab - https://sketchfab.com/3d-models/trackerman-5b04147173fd41f19ccd4fe42f549a15 - and also a custom mesh. I have triangulated the mesh and am calling it with normalization: python demo.py --pose_file=.\eval_constant\sequences\greeting.npy --obj_path=.\eval_constant\meshes\trackerman\trackerman.obj --animated_bvh=1 --obj_output=0 --normalize=1
Any idea why this could be happening? Any suggestions/filters to avoid it would be greatly appreciated.
miko_siggraph_neural_blend_shapes.zip
python demo.py --pose_file=./eval_constant/sequences/house-dance.npy --obj_path=./eval_constant/meshes/miko.obj --normalize=1 --obj_output=0 --animated_bvh=1
Thank you for your great work. After reading the paper, I still have some questions, which I hope you or anyone else can answer if possible.
In the paper, the Residual Deformation Branch learns to predict blendshapes for each individual character. I'm wondering how these blendshapes are defined.
I'm not familiar with body blendshapes but as far as I know, blendshapes for facial expressions like jawOpen, eyeBlinkLeft, smileRight, etc., are semantically defined. In the paper[1], personalized facial blendshapes are learned via blendshape gradient loss function, which forces each of the generated blendshapes to have specific semantic meaning.
Another way to use blendshapes for facial expression is like what was done in MetaHuman[2] (Unreal Engine), where expressions are produced by bones. Blendshapes (called morph targets in Unreal Engine) are used to refine the face, which add more details. I think this is more similar to your work.
So I would like to know some details on your blendshapes: how they are defined, how they are learned, etc.
I would really appreciate it if you could answer my questions.
Ref:
[1] Personalized Face Modeling for Improved Face Reconstruction and Motion Retargeting
[2] MetaHuman
After the pose transfer there is no displacement. How should I handle this?
Similar to the current CPU version, please provide an environment.yml for CUDA-based PyTorch.
It is non-trivial to switch between GPU and CPU: https://github.com/pytorch/pytorch/issues/30664
First, thanks for your great work.
Just wondering whether exporting an animated format (such as BVH or FBX) is supported?
It looks like demo.py only exports a T-pose skeleton.
I've tried to combine all the outputs into FBX format but had trouble animating the character with a given pose (in Blender).
I used the pose data of shape (FrameN, Joint_Num, 3) as Euler-angle input, but it doesn't work.
Any suggestions?
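For reference, a minimal sketch of the kind of approach I'm attempting in bpy, assuming the imported armature object is named "Armature", the pose-bone order matches the joint order in the array, and the angles are radians in XYZ order (all of these are guesses, not the repo's documented format):

```python
import bpy
import numpy as np

# Load the pose sequence; shape assumed to be (frames, joints, 3) Euler angles.
poses = np.load("pose.npy")                          # placeholder path
armature = bpy.data.objects["Armature"]              # assumed object name
bone_names = [b.name for b in armature.pose.bones]   # assumed to match joint order

for frame, joint_eulers in enumerate(poses):
    for bone_name, euler in zip(bone_names, joint_eulers):
        pbone = armature.pose.bones[bone_name]
        pbone.rotation_mode = 'XYZ'                  # assumed rotation order
        pbone.rotation_euler = euler.tolist()        # assumed radians
        pbone.keyframe_insert(data_path="rotation_euler", frame=frame)
```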
Hi @PeizhuoLi, thanks for sharing this wonderful work. I have tried some OBJ files with different shapes, but encountered a penetration problem.
It seems this situation isn't discussed in the paper; can you give some advice? Thanks!
(neural-blend-shapes) D:\CG_Source\NeRFs\3D_Avatar_Pipeline\neural-blend-shapes>python demo.py --pose_file=.\eval_constant\sequences\greeting.npy --obj_path=.\eval_constant\meshes\test_remesh.obj --animated_bvh=1 --obj_output=0 --normalize=1
Traceback (most recent call last):
File "demo.py", line 150, in <module>
main()
File "demo.py", line 132, in main
env_model, res_model = load_model(device, model_args, topo_loader, args.model_path)
File "demo.py", line 59, in load_model
geo, att, gen = create_envelope_model(device, model_args, topo_loader, is_train=False, parents=parent_smpl)
File "D:\CG_Source\NeRFs\3D_Avatar_Pipeline\neural-blend-shapes\architecture\__init__.py", line 34, in create_envelope_model
save_freq=args.save_freq)
File "D:\CG_Source\NeRFs\3D_Avatar_Pipeline\neural-blend-shapes\models\networks.py", line 54, in __init__
topo_loader, requires_recorder, is_cont, save_freq)
File "D:\CG_Source\NeRFs\3D_Avatar_Pipeline\neural-blend-shapes\models\meshcnn_base.py", line 56, in __init__
val.extend([1 / len(neighbors)] * len(neighbors))
ZeroDivisionError: division by zero
I get this error every time I try your model with custom meshes. I checked to make sure they are triangulated and called the demo with --normalize=1, but this happens for all the meshes I have. Is there a MeshLab filter or script I could apply to clean my custom mesh before sending it to your model, or a simple fix for this issue? Thank you for your amazing work!
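Not a confirmed fix, but one guess is that the division by zero comes from a vertex with no neighbors in the topology (e.g. unreferenced vertices or degenerate faces). A hedged cleanup sketch using trimesh, with placeholder file names:

```python
import trimesh

# Placeholder file names; adjust to your mesh.
mesh = trimesh.load("custom_mesh.obj", force="mesh", process=True)

mesh.remove_degenerate_faces()       # drop zero-area faces
mesh.remove_duplicate_faces()        # drop repeated faces
mesh.remove_unreferenced_vertices()  # drop vertices used by no face
trimesh.repair.fill_holes(mesh)      # best-effort hole filling

print("watertight:", mesh.is_watertight)
mesh.export("custom_mesh_clean.obj")
```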
Python 3.8.5 is the default in environment.yaml, but bpy does not support Python 3.8. How can this be solved?
Hi,
Amazing work. Could you please add a license to the repo?
Thanks!!
I ran into issues similar to those mentioned before. I'm not sure if I missed a step in preparing the mesh. I tried to align it and fix the non-manifold issues, but the mesh is still broken after I run the demo. I've attached my files; the archive contains the mesh I use as a reference (maynard:Mesh) and the one I'm running the demo on (castle:Castleguard1), if you can take a look. Thanks!
How different is it from a skeletal-transform representation of the animation on the standard rig you chose?
Traceback (most recent call last):
  File "", line 1, in <module>
  File "D:\360Downloads\neural_blend_shapes_main\blender_scripts\render_mesh.py", line 96, in <module>
    batch_render(args.obj_path)
  File "D:\360Downloads\neural_blend_shapes_main\blender_scripts\render_mesh.py", line 56, in batch_render
    render_obj(os.path.join(dir, file))
  File "D:\360Downloads\neural_blend_shapes_main\blender_scripts\render_mesh.py", line 33, in render_obj
    add_material_for_objs([obj], 'character')
  File "D:\360Downloads\neural_blend_shapes_main\blender_scripts\render_mesh.py", line 28, in add_material_for_objs
    raise Exception("This line shouldn't be reached")
Exception: This line shouldn't be reached
I don't know why this happens.
My command is: blender -P render_mesh.py -b
Is the lack of fingers a limitation of the algorithm?
How can I change the Blender script to get an animated FBX that moves, rather than one fixed at a single point?
Hi.
Could you tell us how the sequence pose files (located in ./eval_constant/sequences/*.npy) were generated?
It seems those files were converted from Mixamo data, but I couldn't find a clue about how to create them.
If you don't mind, could you let us know the process?
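In the meantime, a small sketch that may help reverse-engineer the expected layout by inspecting one of the provided sequences (path assumed relative to the repo root):

```python
import numpy as np

# Inspect a provided sequence to infer its layout and units (radians vs. degrees).
poses = np.load("eval_constant/sequences/greeting.npy")
print("shape:", poses.shape)   # expected (frames, joints, 3) per the discussion above
print("dtype:", poses.dtype)
print("value range:", float(poses.min()), float(poses.max()))
```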