genforce / sefa
[CVPR 2021] Closed-Form Factorization of Latent Semantics in GANs
Home Page: https://genforce.github.io/sefa/
License: MIT License
Thanks for the great work. Could you please let us know the indices of the closed-form directions corresponding to Smiling, Pose, Eyeglasses, Age, and Gender obtained from PGGAN on the CelebA-HQ dataset?
Thanks for your excellent work! The mapping net consists of 8 FC layers with 'leaky_relu' in between, but the code seems to concatenate the weight of each layer into the final weight while ignoring the 'leaky_relu' layers. Why are they ignored?
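For readers following along, here is a minimal sketch of the distinction this question draws: a stack of FC layers only collapses into a single weight matrix if the activations between them are dropped. All names, shapes, and the layer count below are illustrative assumptions, not the repo's code.

```python
import torch
import torch.nn.functional as F

# Illustrative 8-layer FC stack (like a StyleGAN mapping net), with and
# without the leaky_relu between layers. Shapes are assumptions.
weights = [torch.randn(512, 512) * 0.02 for _ in range(8)]

def with_activations(z):
    for w in weights:
        z = F.leaky_relu(z @ w.T, negative_slope=0.2)
    return z

def linear_only(z):
    # Without activations, the whole stack is equivalent to one matrix product.
    combined = weights[0]
    for w in weights[1:]:
        combined = w @ combined
    return z @ combined.T
```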
I ran streamlit run interface.py, but got an error like this:
Traceback (most recent call last):
File "d:\program files\python36\lib\site-packages\streamlit\script_runner.py", line 354, in _run_script
exec(code, module.__dict__)
File "D:\code\sefa-master\interface.py", line 129, in
main()
File "D:\code\sefa-master\interface.py", line 75, in main
layers, boundaries, eigen_values = factorize_model(model, layer_idx)
File "d:\program files\python36\lib\site-packages\streamlit\caching.py", line 545, in wrapped_func
return get_or_create_cached_value()
File "d:\program files\python36\lib\site-packages\streamlit\caching.py", line 527, in get_or_create_cached_value
return_value = func(*args, **kwargs)
File "D:\code\sefa-master\interface.py", line 26, in factorize_model
return factorize_weight(model, layer_idx)
File "D:\code\sefa-master\utils.py", line 187, in factorize_weight
weight = generator.synthesis.__getattr__(layer_name).style.weight.T
AttributeError: 'Parameter' object has no attribute 'T'
I changed this line to weight = generator.synthesis.__getattr__(layer_name).style.weight.t()
Then I still got an error like this:
Traceback (most recent call last):
File "d:\program files\python36\lib\site-packages\streamlit\script_runner.py", line 354, in _run_script
exec(code, module.__dict__)
File "D:\code\sefa-master\interface.py", line 129, in
main()
File "D:\code\sefa-master\interface.py", line 124, in main
image = synthesize(model, gan_type, code)
File "d:\program files\python36\lib\site-packages\streamlit\caching.py", line 545, in wrapped_func
return get_or_create_cached_value()
File "d:\program files\python36\lib\site-packages\streamlit\caching.py", line 527, in get_or_create_cached_value
return_value = func(*args, **kwargs)
File "D:\code\sefa-master\interface.py", line 54, in synthesize
image = model.synthesis(to_tensor(code))['image']
File "d:\program files\python36\lib\site-packages\torch\nn\modules\module.py", line 493, in call
result = self.forward(*input, **kwargs)
File "D:\code\sefa-master\models\stylegan_generator.py", line 471, in forward
if wp.ndim != 3 or wp.shape[1:] != (self.num_layers, self.w_space_dim):
AttributeError: 'Tensor' object has no attribute 'ndim'
Is something wrong with my PyTorch version or settings?
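It does look like a PyTorch version issue: Parameter.T and Tensor.ndim were only added in later releases (around 1.2, if I remember correctly), so older installs hit exactly these AttributeErrors. A hedged compatibility sketch follows; the helper names are mine, not the repo's, and simply upgrading PyTorch is probably the cleaner fix.

```python
import torch

# Hedged workarounds for older PyTorch builds; helper names are hypothetical.
def transpose_2d(t):
    # .T on a 2-D tensor is equivalent to .t() on older versions.
    return t.t()

def num_dims(t):
    # .ndim is equivalent to the length of the shape tuple (or t.dim()).
    return len(t.shape)
```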
Hi,
I have a question: can this method be applied to any layer of the generator? Have you tried any experiments to explore this?
In the paper, you only show a theoretical derivation for the mapping from the latent code to the first linear layer of the generator, ignoring the following non-linear and linear layers.
Thanks!
Is there a pretrained StyleGAN2 model on a streetscape dataset?
Hi, thanks for your great work and easy-to-use Streamlit app. The currently available checkpoints have impressed me a lot; however, I find it hard to use other checkpoints from NVlabs' official repositories such as stylegan2-ada-pytorch. Details follow:
1. The checkpoints I downloaded from the model zoo end with .pth and load through torch.load().
2. My trained checkpoint ends with .pkl. It seems to be saved in NVlabs' own format.
3. Directly using torch.load() on that checkpoint fails.
4. I searched the genforce repository issues for a way to convert the .pkl checkpoint, but failed to find a solution (see the loading sketch below).
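Not an official answer, but for anyone else stuck here: NVlabs' .pkl checkpoints are plain Python pickles that reference classes from dnnlib/torch_utils, so they can only be unpickled with those packages importable; even then, the parameter names follow NVlabs' layout rather than genforce's, so a key-remapping step would still be needed before this repo accepts them. A rough inspection sketch, with placeholder paths:

```python
import pickle

# Requires the stylegan2-ada-pytorch repo (dnnlib, torch_utils) on sys.path,
# since the pickle references classes defined there.
with open('network-snapshot-000000.pkl', 'rb') as f:   # placeholder path
    data = pickle.load(f)

print(list(data.keys()))          # typically contains 'G', 'D', 'G_ema'
G_ema = data['G_ema']
state_dict = G_ema.state_dict()   # NVlabs-style parameter names, not genforce's
print(len(state_dict))
```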
Hi, first I'd like to thank you for your great research and the generous release of code.
I'm using your approach in research based on StyleGAN.
While looking at your code, I realized that when concatenating weights to build A from the paper,
the last entry comes from an output layer (like synthesis.output7.style.weight),
unlike the others, which come from style weight layers (like synthesis.layer14.style.weight).
I was wondering why the last output layer was included in constructing A,
as some unofficial implementations that do not include the output layer seem to work as well.
If there is a specific reason why the output layer is included, I'd like to know.
(I understand this curiosity may come from my lack of knowledge of GAN structure;
in that case, I would be grateful if you pointed it out.)
Thank you for your kind response!
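For concreteness, here is a hedged sketch of the two collection strategies being contrasted in the question above. The name filter follows this repo's parameter layout, and whether the ToRGB ("output*") style weights belong in A is exactly the design choice being asked about; this is not an answer from the authors.

```python
# Hedged sketch: collect style-projection weight names with or without the
# ToRGB ("output*") layers. 'generator' is assumed to be this repo's generator.
def collect_style_weight_names(generator, include_output=True):
    names = [n for n, _ in generator.named_parameters() if n.endswith('style.weight')]
    if not include_output:
        names = [n for n in names if '.output' not in n]
    return names
```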
When I load weights in the genforce format, it reports the following error:
RuntimeError: Error(s) in loading state_dict for StyleGAN2Generator:
size mismatch for synthesis.layer7.weight: copying a param with shape torch.Size([256, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for synthesis.layer7.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for synthesis.layer8.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for synthesis.layer8.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for synthesis.layer8.style.weight: copying a param with shape torch.Size([256, 512]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for synthesis.layer8.style.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for synthesis.output4.weight: copying a param with shape torch.Size([3, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 512, 1, 1]).
size mismatch for synthesis.output4.style.weight: copying a param with shape torch.Size([256, 512]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for synthesis.output4.style.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for synthesis.layer9.weight: copying a param with shape torch.Size([128, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 512, 3, 3]).
size mismatch for synthesis.layer9.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for synthesis.layer9.style.weight: copying a param with shape torch.Size([256, 512]) from checkpoint, the shape in current model is torch.Size([512, 512]).
size mismatch for synthesis.layer9.style.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for synthesis.layer10.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for synthesis.layer10.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for synthesis.layer10.style.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([256, 512]).
size mismatch for synthesis.layer10.style.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for synthesis.output5.weight: copying a param with shape torch.Size([3, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 256, 1, 1]).
size mismatch for synthesis.output5.style.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([256, 512]).
size mismatch for synthesis.output5.style.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for synthesis.layer11.weight: copying a param with shape torch.Size([64, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 256, 3, 3]).
size mismatch for synthesis.layer11.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for synthesis.layer11.style.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([256, 512]).
size mismatch for synthesis.layer11.style.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for synthesis.layer12.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for synthesis.layer12.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for synthesis.layer12.style.weight: copying a param with shape torch.Size([64, 512]) from checkpoint, the shape in current model is torch.Size([128, 512]).
size mismatch for synthesis.layer12.style.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for synthesis.output6.weight: copying a param with shape torch.Size([3, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([3, 128, 1, 1]).
size mismatch for synthesis.output6.style.weight: copying a param with shape torch.Size([64, 512]) from checkpoint, the shape in current model is torch.Size([128, 512]).
size mismatch for synthesis.output6.style.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
After checking stylegan2_generator.py, I found that the default value of 'fmaps_base' is documented as 16 << 10, but it is actually 32 << 10 in the Python code. When I change 'fmaps_base' to 16 << 10, the code works.
However, in genforce (where I obtained the model weights), 'fmaps_base' is 32 << 10! It seems the same parameter value does not work for the same model.
Can anyone tell me what is happening? Thank you very much!
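For what it's worth, the mismatched shapes are consistent with the two fmaps_base values: a fmaps_base-style setting usually scales per-layer channel counts as something like min(fmaps_base // resolution, fmaps_max), so halving it halves the width of the higher-resolution layers. A rough sketch below; the exact formula in the repo may differ, so treat this as an assumption.

```python
# Assumed channel rule; check the repo's stylegan2_generator.py for the real one.
def num_channels(resolution, fmaps_base, fmaps_max=512):
    return min(fmaps_base // resolution, fmaps_max)

for res in [32, 64, 128, 256]:
    print(res, num_channels(res, 16 << 10), num_channels(res, 32 << 10))
# 32:  512 vs 512
# 64:  256 vs 512
# 128: 128 vs 256
# 256:  64 vs 128  -> matches the 256-vs-512 / 128-vs-256 / 64-vs-128 mismatches above
```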
I find the paper really innovative and want to try building something on top of it.
Can I ask about the estimated release date of the code?
Thank you a lot.
Hi, I want to do some experiments on disentanglement, but I am confused about the metric.
Suppose G(z) is the original image and d is the direction; how do I select k for re-scoring G(z + kd)?
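While waiting for the authors, here is a hedged sketch of one common way a re-scoring check is set up: move a batch of samples along the direction by a fixed step k, re-run an attribute predictor, and average the score change. The step size k, the sample count, and the predictor itself are assumptions here, not values from the paper.

```python
import numpy as np

# G: latent code -> image, predictor: image -> attribute score (both assumed).
def rescoring_delta(G, predictor, d, k=3.0, num_samples=500, latent_dim=512):
    d = d / np.linalg.norm(d)                 # use a unit-length direction
    deltas = []
    for _ in range(num_samples):
        z = np.random.randn(1, latent_dim)
        deltas.append(predictor(G(z + k * d)) - predictor(G(z)))
    return float(np.mean(deltas))             # larger shift -> stronger effect on the attribute
```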
Hi,
In factorize_weight, what does generator.synthesis.__getattr__(layer_name).style.weight correspond to exactly?
I am trying to reimplement your paper; my StyleGAN is based on this version and I can't figure out the exact correspondence.
Thanks in advance for your help!
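While waiting for an authoritative answer: in this repo, style.weight is the weight of the per-layer affine that projects the latent code to that layer's style (the "A" blocks in StyleGAN). In rosinality-style implementations, the analogous tensor is usually the modulation linear layer inside each modulated conv (e.g. conv.conv.modulation.weight), but please verify against your own model. Below is a hedged sketch of the factorization itself, applied to whichever per-layer weights you collect; shapes and normalization are assumptions modeled on the paper's description, not a copy of the repo's code.

```python
import torch

# Hedged sketch of the closed-form factorization: stack the per-layer style
# projection weights into A and take eigenvectors of A^T A.
def factorize(weights):
    # Each weight is assumed to have shape [out_dim, latent_dim].
    A = torch.cat(weights, dim=0)
    A = A / A.norm(dim=1, keepdim=True)                   # normalize each row
    eigen_values, eigen_vectors = torch.linalg.eigh(A.T @ A)
    order = torch.argsort(eigen_values, descending=True)  # most significant first
    return eigen_vectors[:, order].T, eigen_values[order] # directions as rows
```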
Dear GenForce team,
I really like your work; it is a very nice paper.
In Figure 1, you show manipulation with a StyleGAN2 LSUN bedroom model, but this model is not in the model zoo. Would you mind sharing it with us?
I know the model zoo has bedroom models for PGGAN and StyleGAN1, but I want to try it with StyleGAN2.
Thank you very much for your help.
Best Wishes,
Alex
EOFError: Ran out of input
Traceback:
File "c:\users\a\anaconda3\lib\site-packages\streamlit\script_runner.py", line 332, in _run_script
exec(code, module.__dict__)
File "C:\Users\A\Desktop\sefa-master\interface.py", line 128, in
main()
File "C:\Users\A\Desktop\sefa-master\interface.py", line 69, in main
model = get_model(model_name)
File "c:\users\a\anaconda3\lib\site-packages\streamlit\caching.py", line 606, in wrapped_func
return get_or_create_cached_value()
File "c:\users\a\anaconda3\lib\site-packages\streamlit\caching.py", line 588, in get_or_create_cached_value
return_value = func(*args, **kwargs)
File "C:\Users\A\Desktop\sefa-master\interface.py", line 19, in get_model
return load_generator(model_name)
File "C:\Users\A\Desktop\sefa-master\utils.py", line 85, in load_generator
checkpoint = torch.load(checkpoint_path, map_location='cpu')
File "c:\users\a\anaconda3\lib\site-packages\torch\serialization.py", line 585, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "c:\users\a\anaconda3\lib\site-packages\torch\serialization.py", line 755, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
How??
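Most likely the checkpoint file on disk is empty or only partially downloaded: torch.load raises "Ran out of input" when pickle hits end-of-file immediately. A quick sanity check, with a placeholder path:

```python
import os

path = 'checkpoints/stylegan_ffhq256.pth'   # placeholder; use your model path
print(os.path.getsize(path))                # 0 bytes (or a tiny HTML error page) means re-download
```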
Hi, thanks for the release, I've been waiting for it.
I have a StyleGAN2 model converted from TF to PyTorch; how would I go about using it?
I am getting a KeyError when loading the converted .pt model:
Traceback (most recent call last):
File "sefa.py", line 145, in <module>
main()
File "sefa.py", line 70, in main
generator = load_generator(args.model_name)
File "J:\sefa-master\utils.py", line 101, in load_generator
generator.load_state_dict(checkpoint['generator'])
KeyError: 'generator'
Printing the keys of my model yields dict_keys(['g_ema', 'latent_avg'])
but your pretrained models yield dict_keys(['generator', 'discriminator', 'generator_smooth'])
What do I still need to do? I am able to run a closed-form factorization implementation with my converted model from the stylegan2-pytorch repo, but how do I do it in this one?
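Not an official answer, but the KeyError itself is only about the top-level layout: this repo's loader looks for checkpoint['generator'], while rosinality-style conversions store 'g_ema'. Renaming the key is trivial, although the inner parameter names also differ between the two layouts, so a full remapping would still be needed. A hedged sketch of the key rename only, with placeholder paths:

```python
import torch

ckpt = torch.load('converted.pt', map_location='cpu')   # placeholder path
new_ckpt = {'generator': ckpt['g_ema']}                  # fixes only the KeyError;
torch.save(new_ckpt, 'converted_genforce_key.pth')      # inner names still follow the
                                                         # rosinality layout, not genforce's
```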
This is really interesting work, thanks for publishing.
Do you have plans to support StyleGAN3 in this repository?
Dear @ShenYujun :
Thanks for your excellent work. It is very useful for my research and I will cite it in my study. However, I have some issues with the re-scoring analysis since the attribute predictor is missing (the same issue exists for InterFaceGAN). Would you mind releasing your pre-trained attribute predictor's code and model?
Thank you again.
Best wishes.
We have a model stored as a .pkl file, trained with NVIDIA's StyleGAN. Since its extension is .pkl and your code expects a .pth file, is there any way to convert our .pkl file to a .pth file? Or is there any way to use your system with a .pkl file?
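Related to the .pkl question earlier on this page: once the pickle has been unpickled (with NVlabs' dnnlib/torch_utils importable, see the earlier sketch) and its parameter names remapped to this repo's layout, re-saving in .pth form is just a torch.save call. The remapping itself is the hard part and is not shown here; the dict below is a hypothetical placeholder.

```python
import torch

# Placeholder: fill with a state dict whose keys already follow this repo's naming.
remapped_state_dict = {}
torch.save({'generator': remapped_state_dict}, 'converted.pth')
```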
Hi, thanks for your great research. May I have the seed used to generate the source StyleGAN image in Figure 5 of the paper, and also the indices of the sorted eigenvalues corresponding to the "Pose", "Eyeglasses", and "Smile" manipulations, so that I can reproduce the results in Figure 5?
By the way, were the InterFaceGAN results in the qualitative comparison produced in W space or W+ space?
Thanks.
I trained a couple of models with the PyTorch version of StyleGAN2-ADA and want to use them with SeFa. Most of them run fine, but models based on the transfer-learning source networks provided in the official ADA repo result in the following error:
RuntimeError: Error(s) in loading state_dict for StyleGAN2Generator:
Missing key(s) in state_dict: "mapping.dense2.weight", "mapping.dense2.bias", "mapping.dense3.weight", "mapping.dense3.bias", "mapping.dense4.weight", "mapping.dense4.bias", "mapping.dense5.weight", "mapping.dense5.bias", "mapping.dense6.weight", "mapping.dense6.bias", "mapping.dense7.weight", "mapping.dense7.bias".
Unexpected key(s) in state_dict: "synthesis.layer15.weight", "synthesis.layer15.bias", "synthesis.layer15.noise_strength", "synthesis.layer15.noise", "synthesis.layer15.filter.kernel", "synthesis.layer15.style.weight", "synthesis.layer15.style.bias", "synthesis.layer16.weight", "synthesis.layer16.bias", "synthesis.layer16.noise_strength", "synthesis.layer16.noise", "synthesis.layer16.style.weight", "synthesis.layer16.style.bias", "synthesis.output8.weight", "synthesis.output8.bias", "synthesis.output8.style.weight", "synthesis.output8.style.bias".
I know that those models have a different layer structure, but is there any way to run them with this code anyway?
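No official workaround that I know of, but the missing and unexpected keys point at construction-time settings rather than anything fixable at load time: missing mapping.dense2..7 suggests the checkpoint was trained with fewer mapping layers, and the extra layer15/16/output8 entries suggest a higher output resolution than the generator was built for. A hedged diagnostic sketch; the checkpoint and generator objects passed in are assumptions about your setup.

```python
def diff_keys(checkpoint_state, model_state):
    """Compare a checkpoint's parameter names against the constructed model's."""
    ckpt_keys, model_keys = set(checkpoint_state), set(model_state)
    print('missing from checkpoint:', sorted(model_keys - ckpt_keys))
    print('unexpected in checkpoint:', sorted(ckpt_keys - model_keys))

# Usage (assumed objects): diff_keys(checkpoint['generator'], generator.state_dict())
```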
Hi,
Would it be possible to release the model used to produce the BigGAN results in the paper?
Thanks,
Sagie
I edited faces in the Eyeglasses, Gender, Hair Color, Pose, and Smile directions with MODEL_ZOO/StyleGAN2/ffhq-1024x1024.
It shows that some attributes/directions are highly correlated with each other.
Hi,
I couldn't find the posted labeling for the semantic indices.
Could you share the posted annotations for StyleGAN and StyleGAN2 on CelebA/FFHQ?
Thanks