opentalker / DPE
[CVPR 2023] DPE: Disentanglement of Pose and Expression for General Video Portrait Editing
Home Page: https://carlyx.github.io/DPE/
License: MIT License
Dear author, hello. I am a graduate student doing research on pose transfer. I have read your article and found it very helpful, so I would like to reproduce your experiments as a basis for further research. The difficulty I have encountered is that I have not been able to obtain your data. Could you send me your VoxCeleb data and the processed data, so that I can do further research?
Hello,
Thanks for your great work. I tried your code to perform pose transfer with the command
python run_demo.py --s_path video.mp4 --d_path stable3.mp4 --model_path .\checkpoints\dpe.pt --face pose
But the result is quite weird (see attachment). Is there any setting to improve the result?
https://github.com/OpenTalker/DPE/assets/109195411/1fc0ee68-bcf0-4450-a78a-24c7a49d03d6
The video is down for some reason. Please see https://www.bilibili.com/video/BV1844y1F7v1/?vd_source=68fd0a3864408b733915dd2c8b2676f7.
Could you tell me whether it is possible to input a single image and drive it with two videos? Thanks.
Hi, I really appreciate your work; it's very interesting. However, when I tried demo_paste and examined its contents, I found that the "--face" argument was set to the default value of "exp". Since I wanted to try transferring the pose from a driving video, I changed it to "pose", but the masking result wasn't accurate. Can you help me with this?
On line 327 of run_demo_paste I changed this
output_dict = self.gen(img_source, img_target, 'exp')
into this
output_dict = self.gen(img_source, img_target, 'pose')
In the code of the generator, stage 'pose' uses mlp_exp and stage 'exp' uses mlp_pose. Why so?
https://github.com/OpenTalker/DPE/blob/main/networks/generator.py#L134-L140
elif stage == 'exp':
    directions_expD = self.mlp_pose(directions_D)
elif stage == 'pose':
    directions_poseD = self.mlp_exp(directions_D)
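For readers following along, a self-contained sketch of the dispatch being asked about; the mlp bodies here are placeholder functions, not the repo's real MLP modules:

```python
# Stand-in transforms, only to make the sketch runnable.
def mlp_pose(directions):
    return [x * 2 for x in directions]

def mlp_exp(directions):
    return [x + 1 for x in directions]

def edit_directions(directions_D, stage):
    # Note the cross-naming the issue points out: the 'exp' stage
    # routes through mlp_pose and the 'pose' stage through mlp_exp,
    # exactly as in generator.py#L134-L140.
    if stage == 'exp':
        return mlp_pose(directions_D)
    elif stage == 'pose':
        return mlp_exp(directions_D)
    raise ValueError(f"unknown stage: {stage}")

print(edit_directions([1.0, 2.0], 'exp'))   # routed through mlp_pose
print(edit_directions([1.0, 2.0], 'pose'))  # routed through mlp_exp
```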
There are 4 different loss functions in the paper, while only 3 of them are found in the Trainer module:
g_loss = vgg_loss + l1_loss + gan_g_loss + vgg_loss_mid + rec_loss
where is the expression loss?
Sincerely looking forward to your reply.
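A minimal sketch of how an expression term could be added to the total quoted above, assuming scalar loss values; the name exp_loss and the weight lambda_exp are hypothetical, not taken from the repo:

```python
# Hypothetical assembly of the generator loss with an added
# expression term (Eq. 10 in the paper). All values and the
# exp_loss / lambda_exp names are placeholders for illustration.
def total_g_loss(vgg_loss, l1_loss, gan_g_loss, vgg_loss_mid,
                 rec_loss, exp_loss, lambda_exp=1.0):
    # Same five terms as in the Trainer, plus a weighted
    # expression-consistency term.
    return (vgg_loss + l1_loss + gan_g_loss + vgg_loss_mid
            + rec_loss + lambda_exp * exp_loss)

print(total_g_loss(0.5, 0.1, 0.2, 0.3, 0.4, 0.25))  # 1.75
```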
We are considering, and have prototyped, the use of DPE as part of a video editing pipeline in a potential commercial project, and are excited about its results. However, the license is not clear to us. The GitHub page says it has an MIT license, as included in the source, yet a note in the README specifies research and non-commercial use only. Could you clarify whether commercial use is OK?
Hi, thanks for this great project.
How did you generate the first source video on the homepage? (this one)
Was it from SadTalker?
thanks
In LIA, the "output video" contains the same proportion of body and head as the "source image". But in DPE, the proportion of body and head in the "output video" is intuitively determined by the "driving video", similar to a cropping process.
Is this result caused by the pre-trained model? Can I control the percentage of cropping?
Great work! Can you share an example of one-shot driving?
Sorry, I didn't see the expression loss (Eq. 10 in the paper) implemented in the training code.
Does this file need to be generated by myself?
The Colab link is not working.
Hi, just wondering when the video editing mode for a single source image and two driving videos will be released?
When will you release the Gradio demo?
After using run_demo_paste.py, there is ghosting around the lips. How should I solve this?
Hi, I ran the command:
python run_demo_single.py --s_path ./data/s.jpg \
    --pose_path ./data/pose.mp4 \
    --exp_path ./data/exp.mp4 \
    --model_path ./checkpoints/dpe.pt \
    --face both \
    --output_folder ./res
The resulting video is very blurry. What could be the cause of this?
Many thanks to the author for this open-source project.
Following the steps, I cropped the head region s.mp4 out of the original video origin.mp4, then ran run_demo.py driven by another video d.mp4.
I found that the head's proportion and position in edit.mp4 match d.mp4 rather than s.mp4, so pasting edit.mp4 directly back into origin.mp4 does not line up.
Am I doing something wrong?
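A minimal sketch of the paste-back step this issue describes, assuming the crop box (x, y, w, h) used to cut s.mp4 out of origin.mp4 was saved; the frame shapes and box below are hypothetical:

```python
import numpy as np

def paste_back(origin_frame, edited_frame, box):
    # box = (x, y, w, h): the crop used to extract the head region
    # from the original frame. The edited head must be resized back
    # to that exact box before pasting; otherwise it will not line
    # up, which matches the misalignment described above.
    x, y, w, h = box
    out = origin_frame.copy()
    # Nearest-neighbour resize via index mapping (a dependency-free
    # stand-in for cv2.resize).
    eh, ew = edited_frame.shape[:2]
    rows = np.arange(h) * eh // h
    cols = np.arange(w) * ew // w
    out[y:y + h, x:x + w] = edited_frame[rows][:, cols]
    return out

origin = np.zeros((8, 8, 3), dtype=np.uint8)
edited = np.full((4, 4, 3), 255, dtype=np.uint8)
result = paste_back(origin, edited, (2, 2, 4, 4))
print(result[3, 3, 0], result[0, 0, 0])  # 255 0
```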