seanseattle / styleswap
StyleSwap: Style-Based Generator Empowers Robust Face Swapping (ECCV 2022)
License: Apache License 2.0
Oh my god! I love this
Just looking at the preview, this looks good! It is better than SimSwap, and far friendlier than DeepFaceLab! I am still waiting for this, but I'm honestly blown away.
I would love to see more research like this, and also to see whether it's possible to manipulate the lips of the face! Technology has come so far!!
After all this time, it would be a logical step to publish the pretrained models, after having posted some simple test code. Or to publish the training code. Please give at least some feedback.
You should reference your code on https://paperswithcode.com
Currently only your paper is referenced there, not the GitHub repository: https://paperswithcode.com/paper/styleswap-style-based-generator-empowers
I tried to run test.py and got the following error:
Namespace(source_img_path='./examples/images/2.png', target_img_path='./examples/images/12012.png', output_path='results/test.png', size=512, align_source=False, align_target=False)
W1107 11:21:27.417249 22688 device_context.cc:447] Please NOTE: device: 0, GPU Compute Capability: 8.6, Driver API Version: 11.6, Runtime API Version: 11.2
W1107 11:21:27.535109 22688 device_context.cc:465] device: 0, cuDNN Version: 8.1.
.../anaconda3/envs/paddlepaddle/lib/python3.9/site-packages/paddle/tensor/creation.py:130: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
if data.dtype == np.object:
Traceback (most recent call last):
File ".../StyleSwap/test.py", line 76, in <module>
generator.set_dict(paddle.load('./checkpoints/styleswap512.pdparams'))
File ".../anaconda3/envs/paddlepaddle/lib/python3.9/site-packages/paddle/framework/io.py", line 985, in load
load_result = _legacy_load(path, **configs)
File ".../anaconda3/envs/paddlepaddle/lib/python3.9/site-packages/paddle/framework/io.py", line 1003, in _legacy_load
model_path, config = _build_load_path_and_config(path, config)
File ".../anaconda3/envs/paddlepaddle/lib/python3.9/site-packages/paddle/framework/io.py", line 161, in _build_load_path_and_config
raise ValueError(error_msg % path)
ValueError: The ``path`` (./checkpoints/styleswap512.pdparams) to load model not exists.
Can we download the missing checkpoints? If so, how?
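Until the checkpoints are published, a small guard around the load call at least gives a clearer message than the ValueError above. This is only a sketch: it assumes the same ./checkpoints/styleswap512.pdparams path that test.py uses, and the wrapped paddle.load call is shown in a comment so the snippet runs without PaddlePaddle installed.

```python
from pathlib import Path

# Same checkpoint path that test.py tries to load.
CKPT = Path("./checkpoints/styleswap512.pdparams")

def checkpoint_message(path: Path) -> str:
    """Return a clearer message than paddle.load's ValueError."""
    if path.exists():
        return f"loading {path}"
    return (f"checkpoint {path} not found; place the pretrained weights "
            f"there once the authors release them")

print(checkpoint_message(CKPT))
# In test.py this guard would wrap:
#     generator.set_dict(paddle.load(str(CKPT)))
```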
Thanks for your wonderful work! Recently I have been working on another face-swap network, but I ran into difficulties when trying to reproduce the expression error reported in the paper. I use "A Compact Embedding for Facial Expression Similarity", referred to in your paper, to extract expression embeddings. The code I used is AmirSh15/FECNet, which is the only PyTorch implementation I could find. But when I compute the L2 distance between the 16-dim embeddings of the target face and the swapped face, the result is always around 0.5 (I tested DeepFakes, FaceShifter, etc.), which differs greatly from the result in your paper. Could you please give some details about how you compute the expression error? I would appreciate it if you could share your expression-error code at your convenience.
Hello, thank you for your efforts. I tested similar code and found that the biggest problem with StyleGAN is that face reconstruction is not accurate. Rather than a reconstruction, StyleGAN regenerates a picture that is merely similar to the source face. Even with some optimizations, the generated image is only more similar to the source image, not identical. If I want to use StyleGAN2 to generate exactly the same face as the source image, what do I need to do? Looking forward to your reply.
Hi thank you for the incredible work.
I'm confused by a mismatch between the paper and the code.
In the code you follow GPEN, which is one of your reference papers: features from the encoder are concatenated not before the upsample, but after the two convolutions in the building block. Is there a reason for that?
It confused me, because the paper's architecture is new to me, but the implementation is not.
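To make the question concrete, here is a toy sketch of the two orderings, with plain NumPy arrays standing in for (C, H, W) feature maps, an identity function standing in for a convolution, and nearest-neighbour repeat standing in for the upsample. All names and shapes are my paraphrase of the question, not the repo's actual code.

```python
import numpy as np

# Hypothetical stand-ins: identity "conv" and nearest-neighbour 2x upsample.
def conv(x):
    return x

def upsample(x):
    return x.repeat(2, axis=1).repeat(2, axis=2)

dec = np.zeros((32, 4, 4))     # decoder feature entering the block
enc_lo = np.zeros((32, 4, 4))  # encoder skip at the block's input resolution
enc_hi = np.zeros((32, 8, 8))  # encoder skip at the block's output resolution

# Ordering I read from the paper: concatenate, then upsample and convolve.
paper_out = conv(conv(upsample(np.concatenate([dec, enc_lo], axis=0))))

# Ordering I see in the code (following GPEN): upsample and convolve first,
# then concatenate the encoder feature.
code_out = np.concatenate([conv(conv(upsample(dec))), enc_hi], axis=0)

print(paper_out.shape, code_out.shape)  # both (64, 8, 8)
```

Both orderings produce the same output shape, which is why the difference is easy to miss when reading the code against the paper's figure.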
Very cool! Do you know how we can preserve the source's expression?
Hi, thanks for sharing this wonderful work. I wonder when will you release the code.
Please remove this repo as you have no intention to deliver on your promise. Thanks.
The owner said they aren't releasing the model; no need to check up on the repo.