
Comments (20)

aparafita commented on July 30, 2024

I think I've found a difference with the official implementation.

In the StyledConvBlock, the noise is injected after the AdaIN operation, whereas the official implementation does it just after the conv, before the AdaIN operation. Could this be the reason for the difference in results?


I'm trying to get the parameters from the official pretrained model (in TensorFlow) and load them into your network to see if I get the same results. I'll edit this point in my forked repository and get back here if I notice any more differences.
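
For clarity, here is a minimal sketch of the two orderings. The module names (NoiseInjection, AdaIN) and channel counts are illustrative assumptions, not taken from this repo or the official code:

import torch
from torch import nn

class NoiseInjection(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # learned per-channel scale for the noise (zeros at init)
        self.weight = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x, noise):
        return x + self.weight * noise

class AdaIN(nn.Module):
    def __init__(self, channels, style_dim):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels)
        # affine layer predicting per-channel scale and bias from the style vector
        self.affine = nn.Linear(style_dim, channels * 2)

    def forward(self, x, style):
        scale, bias = self.affine(style).view(x.size(0), -1, 1, 1).chunk(2, dim=1)
        return (1 + scale) * self.norm(x) + bias

conv = nn.Conv2d(64, 64, 3, padding=1)
noise_inject, adain = NoiseInjection(64), AdaIN(64, 512)
x, style = torch.randn(1, 64, 16, 16), torch.randn(1, 512)
noise = torch.randn(1, 1, 16, 16)

# official StyleGAN ordering: conv -> noise -> AdaIN, so AdaIN's per-channel
# scaling also rescales the injected noise
out_official = adain(noise_inject(conv(x), noise), style)

# ordering in this repo before the fix: conv -> AdaIN -> noise, so the noise
# bypasses the style-dependent scaling
out_old = noise_inject(adain(conv(x), style), noise)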


rosinality commented on July 30, 2024
  1. You can add more layers and extend the model to higher resolutions.
  2. I think I matched almost all details in the paper. I haven't checked every detail of the official implementation, but both look almost similar. Some details differ slightly: I used native bilinear interpolation, whereas the official implementation uses a binomial filter, and this implementation uses a learning rate of 1e-3 (same as the progressive GAN paper) while the official implementation uses 1.5e-3.


rosinality commented on July 30, 2024

It's my mistake. Thanks! Changed in 24896bb


cientgu commented on July 30, 2024

Thanks! I will try higher resolutions.


zhuhaozh commented on July 30, 2024

I think I found another difference too.
In https://github.com/rosinality/style-based-gan-pytorch/blob/master/model.py#L266, you only apply to_rgb() when i == step, while in the official implementation, they apply torgb in all blocks.

The same problem exists in the Discriminator.


rosinality commented on July 30, 2024

Hmm, but wouldn't lerp_clip make the model ignore the previous torgbs?
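
For context, lerp_clip in the official TF code is a clipped linear interpolation; a paraphrase in plain Python (a sketch, not the official source verbatim):

def lerp_clip(a, b, t):
    # clamp t to [0, 1]; once the fade-in coefficient reaches 1 the result is
    # just b, so the images from earlier torgb layers no longer contribute
    t = min(max(t, 0.0), 1.0)
    return a + (b - a) * t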


mileslefttogo commented on July 30, 2024

Something I noticed was that here:
https://github.com/rosinality/style-based-gan-pytorch/blob/master/model.py#L270

You are sending the upsampled activations through the previous step's toRGB because this line executes first:
https://github.com/rosinality/style-based-gan-pytorch/blob/master/model.py#L259
and then you are interpolating.

Whereas in the official implementation, the activations of each step are run through the corresponding torgb layer, and the resulting output image is then upsampled to do the interpolation:
https://github.com/NVlabs/stylegan/blob/master/training/networks_stylegan.py#L542

Was this intentional?


rosinality commented on July 30, 2024

Both will be almost similar. But using torgb before upsampling will be more efficient, as it reduces the channels first.
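
A sketch of the two skip-connection orderings being compared here, with hypothetical layer names and channel counts (not the repo's actual variables):

import torch
import torch.nn.functional as F
from torch import nn

to_rgb_prev = nn.Conv2d(64, 3, 1)        # toRGB of the previous (lower-resolution) step
to_rgb_curr = nn.Conv2d(32, 3, 1)        # toRGB of the current step
feat_prev = torch.randn(1, 64, 16, 16)   # activations leaving the previous block
feat_curr = torch.randn(1, 32, 32, 32)   # activations leaving the current block
alpha = 0.5                              # fade-in coefficient

def up(x):
    return F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)

# this repo: upsample the previous activations first, then convert them to RGB
skip_repo = to_rgb_prev(up(feat_prev))

# official StyleGAN: convert to RGB first, then upsample the 3-channel image
# (cheaper, since only 3 channels are interpolated)
skip_official = up(to_rgb_prev(feat_prev))

# either way, the output is an interpolation between the skip image and the
# current step's toRGB output
out = (1 - alpha) * skip_official + alpha * to_rgb_curr(feat_curr)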


voa18105 commented on July 30, 2024

@rosinality @aparafita guys, am I correct that these before/after changes do not require retraining? It feels like this only affects inference.


rosinality commented on July 30, 2024

Unfortunately this will require retraining, as the noise term interacts with the adaptive instance norm.


aparafita commented on July 30, 2024

@voa18105 The function will be affected for sure. The AdaIN changes the scale of each channel, so if the noise comes before it, the scale of the noise is also affected. In that sense, the official implementation makes sense and the noise should be injected before the AdaIN, but it's hard to say how important it'd be to the overall result.


voa18105 commented on July 30, 2024

oh no, 3 days retrain... again...


zxch3n commented on July 30, 2024

In the official implementation, they use blur after the upscale conv.

But this repo does not use the upscale conv when upscaling the image.

if i > 0 and step > 0:
    upsample = F.interpolate(out, scale_factor=2, mode='bilinear', align_corners=False)
    # upsample = self.blur(upsample)
    out = conv(upsample, style_step, noise[i])

Did I miss something here?


mileslefttogo commented on July 30, 2024

> In the official implementation, they use blur after the upscale conv.
>
> But this repo does not use the upscale conv when upscaling the image.
>
> style-based-gan-pytorch/model.py, lines 258 to 261 in 24896bb:
>
>     if i > 0 and step > 0:
>         upsample = F.interpolate(out, scale_factor=2, mode='bilinear', align_corners=False)
>         # upsample = self.blur(upsample)
>         out = conv(upsample, style_step, noise[i])
>
> Did I miss something here?

Bilinear upsampling is taking the place of the conv up + blur; since in PyTorch the upscaling uses interpolate anyway, the bilinear filtering on the way up is essentially the same as the blur.

It is slightly different, but I changed it to match exactly and it didn't make a noticeable difference, qualitatively or in FID score. The StyleGAN paper also mentions they tried bilinear upsampling and it made a small improvement, although I didn't see it in the code.


zxch3n commented on July 30, 2024

@mileslefttogo What I don't understand here is why the conv up layer can be replaced as well, since one is trainable while the other is not.


rosinality commented on July 30, 2024

The official implementation uses upscale -> conv -> blur; my implementation uses upscale (bilinear) -> conv. So yes, the order is different. (Upscale & blur works similarly to bilinear interpolation except at the edges, as @mileslefttogo said. I used bilinear interpolation due to speed concerns.) I don't know whether it will make much difference, but maybe you can try changing the ordering.
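
A sketch of the binomial blur versus plain bilinear upsampling; the depthwise-convolution formulation below is an assumption, not code copied from either repository:

import torch
import torch.nn.functional as F

def binomial_blur(x):
    # separable 3x3 binomial ([1, 2, 1]) kernel, applied per channel (depthwise)
    k = torch.tensor([1.0, 2.0, 1.0])
    kernel = k[:, None] * k[None, :]
    kernel = kernel / kernel.sum()
    kernel = kernel.expand(x.size(1), 1, 3, 3)
    return F.conv2d(x, kernel, padding=1, groups=x.size(1))

x = torch.randn(1, 64, 16, 16)

# official pipeline is upscale -> conv -> blur; the conv is omitted here to
# isolate the upscale + blur part
upscaled_blur = binomial_blur(F.interpolate(x, scale_factor=2, mode='nearest'))

# this repo: a single bilinear interpolation before the conv
upscaled_bilinear = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)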


zxch3n commented on July 30, 2024

Now I got it. Thanks


Cold-Winter commented on July 30, 2024

@rosinality @aparafita @voa18105 Hi guys, have you trained a new model with the bug-fix version (commit 24896bb)? I would appreciate it if any of you could provide a more advanced pre-trained model on FFHQ. A further question: have any of you trained a model for generating high-resolution images?


voa18105 commented on July 30, 2024

@Cold-Winter As I understand it, this implementation does not support HQ. Also, I don't have FFHQ.


rosinality commented on July 30, 2024

@Cold-Winter I don't know whether I can get enough computing resources to train a high-resolution model in a reasonable time... but I will revise the code to allow training models at higher resolutions.

