stylegan_latenteditor's Introduction
stylegan_latenteditor's People
Forkers
renmengyuan michaelpdu kohei-kubota frank199584921 qianmingduowan blandocs tonghoameo berylsheep-up thestarboy longmarch7 satya-umd vladimirgl mtlong drminix tomokipun andersonfaaria rohitsaha minha12 liuguoyou rhange bruinxiong takayuki-suzuki aledala dream-fu ndb796 cwilliamjay shoutoutyangjie sarrbranka eurekayuan zhenglisec nirvanalan imthebilliejoe arthas-ice israrbacha ivanlen oshertidhar cinemaker123 tauber yueyedeai fooping-tech faisalbi lenkerr fyushan ikki-hanada miftahulridwan vitochien chaofeibu 41xu bohub12 andrmoura yugong123 bossunwang
stylegan_latenteditor's Issues
facial.exchange.py
Hello, where is style transfer implemented? Is it the same as the file facial_exchange.py? Can you tell me?
Thank you very much!
Minor fix in facial_exchange.py
Hello. Thank you for a cool work! With the latest PyTorch, running facial_exchange.py gives an error about in-place variable modification during the style-loss backward pass (line 95).
The code may need a minor fix:
- Move `synth_img_a = g_synthesis(dlatent_a)` and `synth_img_a = (synth_img_a + 1.0) / 2.0` below `loss_1.backward()`.
- The code may also require `optimizer.zero_grad()` after the loss_1 optimizer step.
How to change default image?
Can you please let me know how to change the default image (0.png)?
cannot encode images at any resolution other than 1024x1024
StyleGAN_LatentEditor/encode_image.py
Line 33 in 3695fb4
I tried to use my own image, a cat at 512x512 resolution. I didn't run the align_images command because I don't want my original image to be resized, and the human face detector cannot detect the cat face anyway. But when I set args.resolution=512, g_all.load_state_dict() fails. The PyTorch weight file is still ./karras2019stylegan-ffhq-1024x1024.pt, the same one I run with at resolution 1024x1024.
Below is the error:
python encode_image.py --src_im sample.png --iteration 10 --resolution 512
Traceback (most recent call last):
  File "encode_image.py", line 123, in <module>
    main()
  File "encode_image.py", line 37, in main
    g_all.load_state_dict(torch.load(args.weight_file, map_location=device))
  File "/home/beryl/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 839, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for Sequential:
Unexpected key(s) in state_dict: "g_synthesis.blocks.1024x1024.conv0_up.weight", "g_synthesis.blocks.1024x1024.conv0_up.bias", "g_synthesis.blocks.1024x1024.conv0_up.intermediate.kernel", "g_synthesis.blocks.1024x1024.epi1.top_epi.noise.weight", "g_synthesis.blocks.1024x1024.epi1.style_mod.lin.weight", "g_synthesis.blocks.1024x1024.epi1.style_mod.lin.bias", "g_synthesis.blocks.1024x1024.conv1.weight", "g_synthesis.blocks.1024x1024.conv1.bias", "g_synthesis.blocks.1024x1024.epi2.top_epi.noise.weight", "g_synthesis.blocks.1024x1024.epi2.style_mod.lin.weight", "g_synthesis.blocks.1024x1024.epi2.style_mod.lin.bias".
size mismatch for g_synthesis.torgb.weight: copying a param with shape torch.Size([3, 16, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 32, 1, 1]).
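The unexpected keys are exactly the 1024x1024 synthesis blocks, which the 512-resolution model doesn't define. One workaround (a sketch, not code from this repo) is to drop all block keys above the target resolution before loading; the helper name below is hypothetical:

```python
import re

def filter_state_dict(state_dict, max_res):
    """Drop g_synthesis block keys whose resolution exceeds max_res.

    E.g. with max_res=512, keys under "blocks.1024x1024." are removed
    so load_state_dict no longer reports them as unexpected.
    """
    kept = {}
    for key, value in state_dict.items():
        match = re.search(r"blocks\.(\d+)x\d+\.", key)
        if match and int(match.group(1)) > max_res:
            continue  # belongs to a higher-resolution block
        kept[key] = value
    return kept
```

Note this alone does not solve the `torgb` size mismatch: the 1024 checkpoint's final toRGB layer has a different channel count than the 512 model's, so you would still need `load_state_dict(..., strict=False)` plus a toRGB layer trained at the target resolution, or better, a checkpoint converted at that resolution in the first place.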
karras2019stylegan-ffhq-1024x1024.pt
Can anyone share the weight files? The link provided by the author cannot be downloaded at this time.
about the generated synth_img in encode_image.py
StyleGAN_LatentEditor/encode_image.py
Line 64 in 3695fb4
I am confused about the operation on synth_img. Why not just pass synth_img directly to calculate the loss? Can anyone tell me?
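The operation in question appears to be the same rescaling used in facial_exchange.py, `(synth_img + 1.0) / 2.0`: the generator's output is roughly in [-1, 1], while image losses and image saving expect values in [0, 1]. A tiny illustrative helper:

```python
def to_unit_range(x):
    # StyleGAN's synthesis output lives roughly in [-1, 1];
    # map it to [0, 1] before computing image losses or saving.
    return (x + 1.0) / 2.0
```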
weight_convert.py error assert state["version"] in [2, 3] AssertionError
I am trying to convert my trained model from TensorFlow to the PyTorch version using weight_convert.py; however, it shows the error below:
Traceback (most recent call last):
  File "weight_convert.py", line 37, in <module>
    weights = pickle.load(open(tensorflow_dir+weight_name+".pkl", 'rb'))
  File "/home/ahmed/Documents/3DAI/StyleGAN_LatentEditor/dnnlib/tflib/network.py", line 279, in __setstate__
    assert state["version"] in [2, 3]
AssertionError
Thank you in advance for your reply.
error in facial_exchange.py
Hello, when I run this .py file an error happens. Can you tell me how to solve it? Thank you!
"RuntimeError: No default TensorFlow session found. Please call dnnlib.tflib.init_tf()."
The complete code in dnnlib/tflib/tfutil.py:
def assert_tf_initialized():
    """Check that TensorFlow session has been initialized."""
    if tf.get_default_session() is None:
        raise RuntimeError("No default TensorFlow session found. Please call dnnlib.tflib.init_tf().")
style transfer
What is the implementation of style transfer? I want to run a test. Thanks!
crossover operation
In the style transfer section of the paper, it is mentioned that the calculation uses a crossover operation. Is this roughly the same as crossover.py in your project files?
If not, what is the correct operation?
The version
What versions of PyTorch and TensorFlow are required? And Python?
There might be mistakes in weight_convert.py
Hi, thanks for your repo.
I have downloaded the official TensorFlow model from https://drive.google.com/uc?id=1MEGjdvVpUsu1jB4zrXZN7Y4kBBOzizDQ.
Then I successfully ran weight_convert.py.
However, when I try to run the code, there are errors when loading the model:
`Missing key(s) in state_dict: "g_mapping.dense0.weight", "g_mapping.dense0.bias", "g_mapping.dense1.weight", "g_mapping.dense1.bias", "g_mapping.dense2.weight", "g_mapping.dense2.bias", "g_mapping.dense3.weight", "g_mapping.dense3.bias", "g_mapping.dense4.weight", "g_mapping.dense4.bias", "g_mapping.dense5.weight", "g_mapping.dense5.bias", "g_mapping.dense6.weight", "g_mapping.dense6.bias", "g_mapping.dense7.weight", "g_mapping.dense7.bias", "g_synthesis.torgb.weight", "g_synthesis.torgb.bias", "g_synthesis.blocks.4x4.const", "g_synthesis.blocks.4x4.bias", "g_synthesis.blocks.4x4.epi1.top_epi.noise.weight", "g_synthesis.blocks.4x4.epi1.style_mod.lin.weight", "g_synthesis.blocks.4x4.epi1.style_mod.lin.bias", "g_synthesis.blocks.4x4.conv.weight", "g_synthesis.blocks.4x4.conv.bias", "g_synthesis.blocks.4x4.epi2.top_epi.noise.weight", "g_synthesis.blocks.4x4.epi2.style_mod.lin.weight", "g_synthesis.blocks.4x4.epi2.style_mod.lin.bias", "g_synthesis.blocks.8x8.conv0_up.weight", "g_synthesis.blocks.8x8.conv0_up.bias", "g_synthesis.blocks.8x8.conv0_up.intermediate.kernel", "g_synthesis.blocks.8x8.epi1.top_epi.noise.weight", "g_synthesis.blocks.8x8.epi1.style_mod.lin.weight", "g_synthesis.blocks.8x8.epi1.style_mod.lin.bias", "g_synthesis.blocks.8x8.conv1.weight", "g_synthesis.blocks.8x8.conv1.bias", "g_synthesis.blocks.8x8.epi2.top_epi.noise.weight", "g_synthesis.blocks.8x8.epi2.style_mod.lin.weight", "g_synthesis.blocks.8x8.epi2.style_mod.lin.bias", "g_synthesis.blocks.16x16.conv0_up.weight", "g_synthesis.blocks.16x16.conv0_up.bias", "g_synthesis.blocks.16x16.conv0_up.intermediate.kernel", "g_synthesis.blocks.16x16.epi1.top_epi.noise.weight", "g_synthesis.blocks.16x16.epi1.style_mod.lin.weight", "g_synthesis.blocks.16x16.epi1.style_mod.lin.bias", "g_synthesis.blocks.16x16.conv1.weight", "g_synthesis.blocks.16x16.conv1.bias", "g_synthesis.blocks.16x16.epi2.top_epi.noise.weight", "g_synthesis.blocks.16x16.epi2.style_mod.lin.weight", 
"g_synthesis.blocks.16x16.epi2.style_mod.lin.bias", "g_synthesis.blocks.32x32.conv0_up.weight", "g_synthesis.blocks.32x32.conv0_up.bias", "g_synthesis.blocks.32x32.conv0_up.intermediate.kernel", "g_synthesis.blocks.32x32.epi1.top_epi.noise.weight", "g_synthesis.blocks.32x32.epi1.style_mod.lin.weight", "g_synthesis.blocks.32x32.epi1.style_mod.lin.bias", "g_synthesis.blocks.32x32.conv1.weight", "g_synthesis.blocks.32x32.conv1.bias", "g_synthesis.blocks.32x32.epi2.top_epi.noise.weight", "g_synthesis.blocks.32x32.epi2.style_mod.lin.weight", "g_synthesis.blocks.32x32.epi2.style_mod.lin.bias", "g_synthesis.blocks.64x64.conv0_up.weight", "g_synthesis.blocks.64x64.conv0_up.bias", "g_synthesis.blocks.64x64.conv0_up.intermediate.kernel", "g_synthesis.blocks.64x64.epi1.top_epi.noise.weight", "g_synthesis.blocks.64x64.epi1.style_mod.lin.weight", "g_synthesis.blocks.64x64.epi1.style_mod.lin.bias", "g_synthesis.blocks.64x64.conv1.weight", "g_synthesis.blocks.64x64.conv1.bias", "g_synthesis.blocks.64x64.epi2.top_epi.noise.weight", "g_synthesis.blocks.64x64.epi2.style_mod.lin.weight", "g_synthesis.blocks.64x64.epi2.style_mod.lin.bias", "g_synthesis.blocks.128x128.conv0_up.weight", "g_synthesis.blocks.128x128.conv0_up.bias", "g_synthesis.blocks.128x128.conv0_up.intermediate.kernel", "g_synthesis.blocks.128x128.epi1.top_epi.noise.weight", "g_synthesis.blocks.128x128.epi1.style_mod.lin.weight", "g_synthesis.blocks.128x128.epi1.style_mod.lin.bias", "g_synthesis.blocks.128x128.conv1.weight", "g_synthesis.blocks.128x128.conv1.bias", "g_synthesis.blocks.128x128.epi2.top_epi.noise.weight", "g_synthesis.blocks.128x128.epi2.style_mod.lin.weight", "g_synthesis.blocks.128x128.epi2.style_mod.lin.bias", "g_synthesis.blocks.256x256.conv0_up.weight", "g_synthesis.blocks.256x256.conv0_up.bias", "g_synthesis.blocks.256x256.conv0_up.intermediate.kernel", "g_synthesis.blocks.256x256.epi1.top_epi.noise.weight", "g_synthesis.blocks.256x256.epi1.style_mod.lin.weight", 
"g_synthesis.blocks.256x256.epi1.style_mod.lin.bias", "g_synthesis.blocks.256x256.conv1.weight", "g_synthesis.blocks.256x256.conv1.bias", "g_synthesis.blocks.256x256.epi2.top_epi.noise.weight", "g_synthesis.blocks.256x256.epi2.style_mod.lin.weight", "g_synthesis.blocks.256x256.epi2.style_mod.lin.bias", "g_synthesis.blocks.512x512.conv0_up.weight", "g_synthesis.blocks.512x512.conv0_up.bias", "g_synthesis.blocks.512x512.conv0_up.intermediate.kernel", "g_synthesis.blocks.512x512.epi1.top_epi.noise.weight", "g_synthesis.blocks.512x512.epi1.style_mod.lin.weight", "g_synthesis.blocks.512x512.epi1.style_mod.lin.bias", "g_synthesis.blocks.512x512.conv1.weight", "g_synthesis.blocks.512x512.conv1.bias", "g_synthesis.blocks.512x512.epi2.top_epi.noise.weight", "g_synthesis.blocks.512x512.epi2.style_mod.lin.weight", "g_synthesis.blocks.512x512.epi2.style_mod.lin.bias", "g_synthesis.blocks.1024x1024.conv0_up.weight", "g_synthesis.blocks.1024x1024.conv0_up.bias", "g_synthesis.blocks.1024x1024.conv0_up.intermediate.kernel", "g_synthesis.blocks.1024x1024.epi1.top_epi.noise.weight", "g_synthesis.blocks.1024x1024.epi1.style_mod.lin.weight", "g_synthesis.blocks.1024x1024.epi1.style_mod.lin.bias", "g_synthesis.blocks.1024x1024.conv1.weight", "g_synthesis.blocks.1024x1024.conv1.bias", "g_synthesis.blocks.1024x1024.epi2.top_epi.noise.weight", "g_synthesis.blocks.1024x1024.epi2.style_mod.lin.weight", "g_synthesis.blocks.1024x1024.epi2.style_mod.lin.bias".
Unexpected key(s) in state_dict: "fromrgb.weight", "fromrgb.bias", "1024x1024.conv0.weight", "1024x1024.conv0.bias", "1024x1024.blur.kernel", "1024x1024.conv1_down.weight", "1024x1024.conv1_down.bias", "1024x1024.conv1_down.downscale.blur.kernel", "512x512.conv0.weight", "512x512.conv0.bias", "512x512.blur.kernel", "512x512.conv1_down.weight", "512x512.conv1_down.bias", "512x512.conv1_down.downscale.blur.kernel", "256x256.conv0.weight", "256x256.conv0.bias", "256x256.blur.kernel", "256x256.conv1_down.weight", "256x256.conv1_down.bias", "256x256.conv1_down.downscale.blur.kernel", "128x128.conv0.weight", "128x128.conv0.bias", "128x128.blur.kernel", "128x128.conv1_down.weight", "128x128.conv1_down.bias", "128x128.conv1_down.downscale.blur.kernel", "64x64.conv0.weight", "64x64.conv0.bias", "64x64.blur.kernel", "64x64.conv1_down.weight", "64x64.conv1_down.bias", "64x64.conv1_down.downscale.blur.kernel", "32x32.conv0.weight", "32x32.conv0.bias", "32x32.blur.kernel", "32x32.conv1_down.weight", "32x32.conv1_down.bias", "32x32.conv1_down.downscale.blur.kernel", "16x16.conv0.weight", "16x16.conv0.bias", "16x16.blur.kernel", "16x16.conv1_down.weight", "16x16.conv1_down.bias", "16x16.conv1_down.downscale.blur.kernel", "8x8.conv0.weight", "8x8.conv0.bias", "8x8.blur.kernel", "8x8.conv1_down.weight", "8x8.conv1_down.bias", "8x8.conv1_down.downscale.blur.kernel", "4x4.conv.weight", "4x4.conv.bias", "4x4.dense0.weight", "4x4.dense0.bias", "4x4.dense1.weight", "4x4.dense1.bias".
`
I wonder whether there might be a mismatch between the "karras2019stylegan-ffhq-1024x1024.pkl" I downloaded from the Google link and the expected model.
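The unexpected keys in the error ("fromrgb.*", "*conv1_down*", "4x4.dense0.*") look like StyleGAN discriminator weights, while the missing "g_mapping.*"/"g_synthesis.*" keys belong to the generator. This suggests the conversion may have exported the wrong network from the pickle (the official StyleGAN .pkl unpickles to a (G, D, Gs) tuple). A quick sanity check on a converted state dict might look like this (helper name is illustrative):

```python
def looks_like_discriminator(state_dict):
    """Heuristic: keys such as "fromrgb.*" and "*conv1_down*" belong to the
    StyleGAN discriminator; the generator's keys start with "g_mapping." or
    "g_synthesis." instead."""
    return any(k.startswith("fromrgb") or "conv1_down" in k
               for k in state_dict.keys())
```

If this returns True on the converted checkpoint, re-run the conversion making sure it operates on Gs (the averaged generator), not D.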
boundary directory is missing
StyleGAN_LatentEditor/semantic_edit.py
Line 58 in 3695fb4
Hello! I found that the boundary directory and its corresponding boundary files are missing from this GitHub repo. Will you provide them for us?
failed to locate the G_synthesis function
I have TensorFlow 2 installed on my machine, so I changed the TensorFlow imports to the following:
"import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()"
When I try to run weight_convert.py, I get the error shown in the screengrab.
I tried but failed to locate the G_synthesis function. Can you help me out? Thanks!
ST
Can you tell me which ".py" file does the style transfer?
Thank you!
Expression transfer
I want to know whether facial_exchange.py corresponds to style transfer or expression transfer in the paper. I think the code is similar to style transfer, but in that case, which file corresponds to expression transfer? Hope you can resolve my doubts. Thanks!
About noise optimize in image2styleGAN++
Hi, thank you for your good work. I have a question: in the Image2StyleGAN++ paper, the authors optimize both w and n (noise), but in your code I only find w and nothing about the noise optimization process.
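For reference, the two-phase scheme described in Image2StyleGAN++ (first optimize the latent w, then freeze it and optimize the per-layer noise maps n) could be sketched roughly as below. All names are illustrative; `generate(w, noise)` stands in for the repo's g_synthesis, and a plain MSE replaces the paper's full loss:

```python
import torch

def two_phase_embed(generate, target, w_init, noise_init,
                    steps_w=100, steps_n=100, lr=0.01):
    """Sketch of alternating w / noise optimization (Image2StyleGAN++ style).

    Phase 1 updates only w; phase 2 keeps w fixed and updates the noise maps.
    """
    w = w_init.clone().requires_grad_(True)
    noise = [n.clone().requires_grad_(True) for n in noise_init]

    # Phase 1: optimize the latent code w.
    opt_w = torch.optim.Adam([w], lr=lr)
    for _ in range(steps_w):
        opt_w.zero_grad()
        loss = ((generate(w, noise) - target) ** 2).mean()
        loss.backward()
        opt_w.step()

    # Phase 2: optimize only the noise maps, w stays fixed.
    opt_n = torch.optim.Adam(noise, lr=lr)
    for _ in range(steps_n):
        opt_n.zero_grad()
        loss = ((generate(w, noise) - target) ** 2).mean()
        loss.backward()
        opt_n.step()

    return w.detach(), [n.detach() for n in noise]
```

The paper additionally restricts the phase-2 objective to a pixel-wise term so the noise restores high-frequency detail; the sketch above only shows the alternating structure.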
About the Style Transfer
There is an application called Style Transfer in the paper "Image2StyleGAN: How to Embed Images Into the StyleGAN Latent Space?". Has anyone tried to accomplish it? My results are bad. I have tried a loss function combining perceptual loss + MSE loss (the paper's method), only the style loss from the conv4_2 layer of VGG-16, and the style loss from conv4_2 of VGG-16 + MSE loss.
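For comparison, the "perceptual + MSE" combination mentioned above can be written generically as below. This is a sketch, not the paper's exact formulation: `features` is any callable returning a list of feature maps (e.g. activations from a pretrained VGG-16), and `lam` weights the perceptual term:

```python
import torch

def embedding_loss(features, synth, target, lam=1.0):
    """Pixel-wise MSE plus a perceptual term over VGG-style feature maps.

    features(x) -> list of tensors; each pair of synth/target feature maps
    contributes a mean-squared difference to the perceptual term.
    """
    mse = ((synth - target) ** 2).mean()
    percep = sum(((f_s - f_t) ** 2).mean()
                 for f_s, f_t in zip(features(synth), features(target)))
    return mse + lam * percep
```

If the results look washed out, the balance `lam` between the two terms and the choice of VGG layers are usually the first things worth tuning.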