
peizhuoli / neural-blend-shapes


An end-to-end library for automatic character rigging, skinning, and blend shapes generation, as well as a visualization tool [SIGGRAPH 2021]

License: Other

Python 100.00%
computer-graphics computer-animation deep-learning character-animation rigging-framework


neural-blend-shapes's Issues

Issue with custom mesh

Thanks @PeizhuoLi for this amazing work.
I tried this repo and it works fine with the sample character provided here. But when I try a custom normalized humanoid character mesh in T-pose, it produces weird results.

I first tried:
python demo.py --obj_path samples/model1.obj --result_path samples/output/ --normalize=1 --animated_bvh=1 --obj_output=0

And then I exported the .fbx animation using the provided script. I also checked skeleton.bvh, and it does not contain the correct animation.

Then I tried without --normalize=1, as my mesh already looks normalized:
python demo.py --obj_path samples/model1.obj --result_path samples/output/ --animated_bvh=1 --obj_output=0

In this case skeleton.bvh contains the correct animation except for a slightly bent leg, but the exported .fbx file is not correct: its hand is extended weirdly.

Here is the reference output:

[video: Kazam_screencast_00011.mp4]

And here is the custom mesh:
https://drive.google.com/file/d/1550yYhpFnil3XixcEtoKrSVUJYdGH41l/view?usp=share_link

Please suggest some way out.
Thanks.

Instructions to run Blender Eevee on a headless machine

  1. sudo apt install -y -q xvfb unzip wget flatpak
  2. sudo flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
  3. sudo flatpak install flathub org.blender.Blender -y
  4. sudo flatpak override org.blender.Blender --talk-name=org.freedesktop.Flatpak
  5. DRI_PRIME=0 xvfb-run --auto-servernum flatpak run org.blender.Blender --background --python `pwd`/run.py -- `pwd`

Tested on Ubuntu 18.04

Maya

Hello, I am a student at USTB. Thanks for this exciting project. I want to apply your project to Maya, but I don't understand the rig. I made an FBX file and ran this code to get the blend shape values, but with the wrong rig the results are a bit bad. Have you already made an FBX file?

forward kinematics implementation

Hello, I was trying to understand the forward kinematics implementation and was looking at this blog post for clarification: https://www.alecjacobson.com/weblog/?p=3763. The implementation doesn't seem to match it for non-root bones.

    for i, p in enumerate(self.parents):
        if i != 0:
            rots[:, i] = torch.matmul(rots[:, p], rots[:, i])
            pos[:, i] = torch.matmul(rots[:, p], offsets[:, i].unsqueeze(-1)).squeeze(-1) + pos[:, p]
            rest_pos[:, i] = rest_pos[:, p] + offsets[:, i]

        res[:, i, :3, :3] = rots[:, i]
        res[:, i, :, 3] = torch.matmul(rots[:, i], -rest_pos[:, i].unsqueeze(-1)).squeeze(-1) + pos[:, i]
Suppose I have two bones located at b1 and b2, with m1 and m2 as their local transformation matrices, where b1 is b2's parent bone. According to the implementation in this codebase, the global translation for bone 2 is m1 * m2 * (-b1-b2) + (m1 * b2 + b1).

However, according to the formula in the blog post, it should be m1 * m2 * (-b2) + m1 * (b1+b2) + b1.

I'm wondering if you know why the two differ?
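For what it's worth, the two expressions can be checked numerically. A minimal sketch of mine (not from the repo) that evaluates both translation formulas with random rotations m1, m2 and offsets b1, b2:

    import numpy as np

    def random_rotation():
        # QR of a random matrix yields an orthonormal matrix
        q, _ = np.linalg.qr(np.random.randn(3, 3))
        return q

    m1, m2 = random_rotation(), random_rotation()
    b1, b2 = np.random.randn(3), np.random.randn(3)

    # global translation for bone 2 as I read it from this codebase
    t_code = m1 @ m2 @ (-(b1 + b2)) + (m1 @ b2 + b1)
    # global translation for bone 2 as I read it from the blog post
    t_blog = m1 @ m2 @ (-b2) + m1 @ (b1 + b2) + b1

    print(np.allclose(t_code, t_blog))  # False in general, so the two disagree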

Error when trying to show the animation

Great work!
When I try to visualize the animation via Blender following this, an error occurred.

Saved: '../demo/images/0000.png'
 Time: 00:00.28 (Saving: 00:00.25)

Traceback (most recent call last):
  File "/home/yusun/Documents/GitHub/neural-blend-shapes/blender_scripts/render_mesh.py", line 96, in <module>
    batch_render(args.obj_path)
  File "/home/yusun/Documents/GitHub/neural-blend-shapes/blender_scripts/render_mesh.py", line 56, in batch_render
    render_obj(os.path.join(dir, file))
  File "/home/yusun/Documents/GitHub/neural-blend-shapes/blender_scripts/render_mesh.py", line 42, in render_obj
    obj.select_set(True)
AttributeError: 'Object' object has no attribute 'select_set'

I am using Blender v2.91a. Any suggestions?
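In case it helps others: Blender 2.80+ uses obj.select_set(True), while 2.7x exposes a plain select attribute instead. A small version-agnostic helper (my own sketch, not part of the repo's scripts):

    def select_obj(obj, state=True):
        # Blender 2.80+ replaced `obj.select = ...` with `obj.select_set(...)`
        if hasattr(obj, "select_set"):
            obj.select_set(state)  # 2.80 and newer
        else:
            obj.select = state     # 2.7x and older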

Something weird with the skeleton

Hi! When I reproduced this great work and trained from scratch, the skeleton produced some weird results, as shown in the picture: the left is the result I reproduced, and the right is the result produced by the pre-trained model. I think it may be because the end effectors were set wrongly, but I didn't change any code related to them. Could you please help me with this problem? Looking forward to your reply!
[screenshot: 2021-10-28]

evaluation CD-B2B error

Hi Peizhuo,

I used the code to evaluate the RigNet test set and found the following error, which I want to confirm:

    res[i][j] = segment2segment(a[i][0], a[i][0] + a[i][1], b[i][0], b[i][0] + b[i][1])

should be

    res[i][j] = segment2segment(a[i][0], a[i][0] + a[i][1], b[j][0], b[j][0] + b[j][1])

where b[i] should be b[j]. Also, the evaluation result does not match the paper.
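For clarity, my reconstruction of the corrected double loop (the surrounding loop structure is my assumption; only the single line above is from the repo):

    # pairwise bone-to-bone distances between skeletons a and b (assumed context)
    for i in range(len(a)):
        for j in range(len(b)):
            res[i][j] = segment2segment(a[i][0], a[i][0] + a[i][1],
                                        b[j][0], b[j][0] + b[j][1])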

Other than human models

For training models other than humans, I think a custom SMPL is needed that can generate deformed shapes of the desired model (which is very difficult to find). And once I have that, I will need to define the bone hierarchy.

Is this correct, or did I miss anything?

Bone coordinate-frame problem

Hello! When I run the commands

    python demo.py --animated_bvh=0 --obj_output=0
    cd blender_scripts
    blender -b -P nbs_fbx_output.py -- --input ../demo --output ../demo/output.fbx

I get the T-pose FBX file. But when I import it into Unity, I find that almost none of the bones' coordinate frames coincide with Unity's, whereas the initial bone frames of standard SMPL are all parallel to the Unity coordinate system. This prevents me from driving the skeleton rigged by neural-blend-shapes with SMPL pose parameters. Is there a way to align all bone frames with Unity without changing the T-pose?
Here is the standard SMPL:
[screenshot: 2023-02-04 11-29-37]
Here is the result rigged with neural-blend-shapes:
[screenshot: 2023-02-04 11-35-27]
These are the launch arguments:
[screenshot: 2023-02-04 11-32-32]

Weird artifacts while running on custom mesh

I am trying to run this on custom meshes and I am getting weird artifacts. I am using this model from Sketchfab - https://sketchfab.com/3d-models/trackerman-5b04147173fd41f19ccd4fe42f549a15 - and also a custom mesh. I have triangulated the mesh and I am calling the demo with normalization:

    python demo.py --pose_file=.\eval_constant\sequences\greeting.npy --obj_path=.\eval_constant\meshes\trackerman\trackerman.obj --animated_bvh=1 --obj_output=0 --normalize=1

Any idea why this could be happening? Any suggestions or filters to avoid it would be greatly appreciated.

[screenshots: artifacts_error1, artifacts_error2]

Questions regarding blendshapes

Thank you for your great work. After reading the paper, I still have some questions. So I hope you or anyone else can answer me if possible.

In the paper, the Residual Deformation Branch learns to predict blendshapes for each individual character. I'm wondering how these blendshapes are defined.

I'm not familiar with body blendshapes, but as far as I know, blendshapes for facial expressions like jawOpen, eyeBlinkLeft, smileRight, etc. are semantically defined. In [1], personalized facial blendshapes are learned via a blendshape gradient loss function, which forces each generated blendshape to have a specific semantic meaning.

Another way to use blendshapes for facial expression is what was done in MetaHuman [2] (Unreal Engine), where expressions are produced by bones and blendshapes (called morph targets in Unreal Engine) are used to refine the face and add more detail. I think this is more similar to your work.
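(For concreteness, the generic corrective-blendshape setup I have in mind is V'(θ) = LBS(V + Σ_k c_k(θ) B_k, θ): pose-dependent coefficients c_k blend learned offset shapes B_k onto the rest mesh V before standard linear blend skinning. Whether this matches your exact formulation is my assumption.)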

So I would like to know some details on your blendshapes: how they are defined, how they are learned, etc.

I would really appreciate it if you could answer my questions.

Ref:
[1] Personalized Face Modeling for Improved Face Reconstruction and Motion Retargeting
[2] MetaHuman

About data preprocessing

Hi, I tested some of my own models with normalize=1 and found a gap in skeleton scale, so the resulting skinning weights are quite poor. I have a few questions:

  1. Regarding the triangulated model's "consistent upright and front-facing orientation": does this just mean that the orientation on import (e.g. axis_forward='-Z', axis_up='Y' in the FBX script) is consistent and the model is in T-pose? Or does it carry some additional meaning for the triangle coordinates?
  2. To align with smpl_std.obj, is it enough to directly use the normalization function you provide, or are there more detailed requirements?
    [screenshot]

Results not matching

The results provided in the paper do not match mine. I'm getting the following results after training with the train script and evaluating with the given script:

[results screenshot]

Provide instructions for GPU-based PyTorch

Similar to the current CPU version, provide an environment.yml for CUDA-based PyTorch.

It is non-trivial to switch between GPU and CPU: https://github.com/pytorch/pytorch/issues/30664.
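As a side note, whichever build ends up installed can be sanity-checked with standard PyTorch calls:

    import torch
    # e.g. "1.13.1 11.7 True" on a CUDA build; torch.version.cuda is None on CPU-only builds
    print(torch.__version__, torch.version.cuda, torch.cuda.is_available())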

Export Animated bvh or FBX?

First, thanks for your great work.

Just wondering, is exporting an animated format (such as BVH or FBX) supported? It looks like demo.py only exports a T-pose skeleton. I've tried combining all the outputs into FBX format but had trouble animating the character with a given pose (in Blender). I used the pose data of shape (FrameN, Joint_Num, 3) as Euler-angle input, but it doesn't work.
Any suggestions?

Penetration occurs when using a fat person model

Hi @PeizhuoLi, thanks for sharing this wonderful work. I have tried some OBJ files with different body shapes but encountered a penetration problem. This situation doesn't seem to be discussed in the paper; can you give some advice? Thanks!

Error: ZeroDivisionError: division by zero when importing custom mesh

(neural-blend-shapes) D:\CG_Source\NeRFs\3D_Avatar_Pipeline\neural-blend-shapes>python demo.py --pose_file=.\eval_constant\sequences\greeting.npy --obj_path=.\eval_constant\meshes\test_remesh.obj --animated_bvh=1 --obj_output=0 --normalize=1
Traceback (most recent call last):
  File "demo.py", line 150, in <module>
    main()
  File "demo.py", line 132, in main
    env_model, res_model = load_model(device, model_args, topo_loader, args.model_path)
  File "demo.py", line 59, in load_model
    geo, att, gen = create_envelope_model(device, model_args, topo_loader, is_train=False, parents=parent_smpl)
  File "D:\CG_Source\NeRFs\3D_Avatar_Pipeline\neural-blend-shapes\architecture\__init__.py", line 34, in create_envelope_model
    save_freq=args.save_freq)
  File "D:\CG_Source\NeRFs\3D_Avatar_Pipeline\neural-blend-shapes\models\networks.py", line 54, in __init__
    topo_loader, requires_recorder, is_cont, save_freq)
  File "D:\CG_Source\NeRFs\3D_Avatar_Pipeline\neural-blend-shapes\models\meshcnn_base.py", line 56, in __init__
    val.extend([1 / len(neighbors)] * len(neighbors))
ZeroDivisionError: division by zero

I get this error every time I try your model with custom meshes. I checked to make sure they are triangulated and called the script with --normalize=1, but this happens for all the meshes I have. Is there some MeshLab filter or script I can apply to my custom mesh before sending it to your model, or a simple fix for this issue? Thank you for your amazing work!
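If it helps: the division by zero happens when a vertex has zero neighbors, so my guess is that the mesh contains vertices referenced by no face. A cleanup sketch using the third-party trimesh library (paths are placeholders):

    import trimesh

    mesh = trimesh.load('input.obj', process=False)  # placeholder path
    mesh.remove_unreferenced_vertices()              # drop vertices used by no face
    mesh.export('clean.obj')                         # placeholder path

In MeshLab, the equivalent should be Filters > Cleaning and Repairing > Remove Unreferenced Vertices.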

bpy with python 3.8.5

Python 3.8.5 is the default setup in environment.yaml, but bpy does not support Python 3.8. How can this be solved?

License

Hi,

Amazing work. Could you please add a license to the repo?

Thanks!!

broken custom mesh after running demo

I met a similar issue to those mentioned before. I'm not sure if I missed a step in preparing the mesh. I tried to align it and fix the non-manifold issues, but the mesh is still broken after I run the demo. I've attached my files; they contain the one I use as a reference (maynard:Mesh) and the one I'm running the demo on (castle:Castleguard1), if you could take a look. Thanks!
[screenshot]

[attachment: confusion.fbx]

What is in a pose file?

How different is it from a skeletal-transform representation of the animation on the standard rig you chose?

When I run the command 'blender ......', I get this

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "D:\360Downloads\neural_blend_shapes_main\blender_scripts\render_mesh.py", line 96, in <module>
    batch_render(args.obj_path)
  File "D:\360Downloads\neural_blend_shapes_main\blender_scripts\render_mesh.py", line 56, in batch_render
    render_obj(os.path.join(dir, file))
  File "D:\360Downloads\neural_blend_shapes_main\blender_scripts\render_mesh.py", line 33, in render_obj
    add_material_for_objs([obj], 'character')
  File "D:\360Downloads\neural_blend_shapes_main\blender_scripts\render_mesh.py", line 28, in add_material_for_objs
    raise Exception("This line shouldn't be reached")
Exception: This line shouldn't be reached

I don't know why this happens.
My command is 'blender -P render_mesh.py -b'.

Workflow to generate custom pose (e.g. from Mixamo fbx)

Hi.
Could you tell us how the sequence pose files (located in ./eval_constant/sequences/*.npy) are generated? It seems those files were converted from Mixamo files, but I couldn't find a clue on how to create them. If you don't mind, could you share the process?
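In case it helps, here is how I would try it myself, under heavy assumptions: that the .npy files are simply per-frame Euler angles of shape (frames, joints, 3), as the FBX-export issue above suggests, and that the joint order and angle convention match the repo's; neither is confirmed. The sketch reads a BVH with the third-party bvh package:

    import numpy as np
    from bvh import Bvh  # pip install bvh

    with open('mixamo_clip.bvh') as f:  # placeholder path
        mocap = Bvh(f.read())

    joints = mocap.get_joints_names()
    frames = np.array([
        [mocap.frame_joint_channels(t, j, ['Xrotation', 'Yrotation', 'Zrotation'])
         for j in joints]
        for t in range(mocap.nframes)
    ], dtype=np.float32)                # (frames, joints, 3)

    np.save('pose.npy', frames)         # placeholder output name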
