
lotayou / face-renovation


Official repository of the paper "HiFaceGAN: Face Renovation via Collaborative Suppression and Replenishment".

Home Page: https://arxiv.org/abs/2005.05005

License: GNU General Public License v3.0

Languages: Python 98.86%, Dockerfile 0.48%, Shell 0.66%
Topics: benchmark, sota, image-restoration

face-renovation's Introduction






Face-Renovation

HiFaceGAN: Face Renovation via Collaborative Suppression and Replenishment

Lingbo Yang, Chang Liu, Pan Wang, Shanshe Wang, Peiran Ren, Siwei Ma, Wen Gao

Update 20201026: Pretrained checkpoints released to facilitate reproduction.

Update 20200911: Please find video restoration results at this repo!

Update: This paper is accepted at ACM Multimedia 2020.

(Teaser image: Stunner)

Contents

  1. Usage
  2. Benchmark
  3. Remarks
  4. License
  5. Citation
  6. Acknowledgements

Usage

Environment

  • Ubuntu/CentOS
  • PyTorch 1.0+
  • CUDA 10.1
  • Python packages: opencv-python
  • Data augmentation tool: imgaug
  • Face Recognition Toolkit for evaluation
  • tqdm to make you less anxious when testing :)

Dataset Preparation

Download FFHQ, resize the images to 512x512, and reserve ids [65000, 70000) for testing. We only use the first 10000 images for training, which takes 2~3 days on a P100 GPU; training with the full FFHQ is possible, but could take weeks.
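
For reference, here is a minimal sketch of this preparation step, assuming FFHQ has been downloaded as individual id-named PNG files; the folder names and the use of OpenCV are my own assumptions, only the 512x512 size and the split indices come from the instructions above.

# Hypothetical preparation script: resize FFHQ to 512x512 and split off
# ids [65000, 70000) for testing; folder names are placeholders.
import os
import cv2

src_dir, train_dir, test_dir = 'ffhq_raw', 'ffhq_train', 'ffhq_test'
os.makedirs(train_dir, exist_ok=True)
os.makedirs(test_dir, exist_ok=True)

for name in sorted(os.listdir(src_dir)):
    idx = int(os.path.splitext(name)[0])   # assumes FFHQ files are named by image id
    if idx < 10000:
        out_dir = train_dir                # first 10000 images used for training
    elif 65000 <= idx < 70000:
        out_dir = test_dir                 # held-out test split
    else:
        continue                           # remaining ids unused in this setup
    img = cv2.imread(os.path.join(src_dir, name))
    img = cv2.resize(img, (512, 512), interpolation=cv2.INTER_AREA)
    cv2.imwrite(os.path.join(out_dir, name), img)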

After that, run degrade.py to acquire paired images for training. You need to specify the degradation type and input root in the script first.
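
A rough sketch of the pairing idea is given below, assuming imgaug is used for the degradation and that a paired sample is the degraded and clean images concatenated side by side into a 512x1024 image (as described in the issues further down this page). Only the 4x~8x random downsampling comes from the degrade.py snippet quoted below; the real script mixes in other degradations, and the directory names and left/right order here are guesses.

# Hypothetical pairing sketch, not the actual degrade.py.
import os
import cv2
import numpy as np
import imgaug.augmenters as iaa

degrade = iaa.Sequential([
    iaa.Resize((0.125, 0.25)),                   # random downsample between 4x and 8x ...
    iaa.Resize({'height': 512, 'width': 512}),   # ... and back to 512x512
])

hq_dir, pair_dir = 'ffhq_train', 'ffhq_train_pairs'
os.makedirs(pair_dir, exist_ok=True)
for name in os.listdir(hq_dir):
    hq = cv2.imread(os.path.join(hq_dir, name))
    lq = degrade(image=hq)
    # left/right order of the pair is a guess; check degrade.py for the real layout
    cv2.imwrite(os.path.join(pair_dir, name), np.concatenate([lq, hq], axis=1))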

Configurations

The configuration is stored in options/config_hifacegan.py. The options should be self-explanatory, but feel free to open an issue anytime.

Training and Testing

python train.py            # A fool-proof training script
python test.py             # Test on synthetic dataset
python test_nogt.py        # Test on real-world images
python two_source_test.py  # Visualization of Fig 5

Pretrained Models

Download, unzip, and put the checkpoints under ./checkpoints, then change the names in the configuration file accordingly.

BaiduNetDisk (extraction code: cxp0)

YandexDisk

Note:

  • These checkpoints work best on the synthetic degradations prescribed in degrade.py; don't expect them to handle real-world LQ face images. You can try fine-tuning them with additionally collected samples, though.
  • There are two face_renov checkpoints trained under different degradation mixtures. Unfortunately I've forgotten which one I used for our paper, so just try both and select the better one. Also, this could give you a hint about how our model behaves under a different degradation setting :)
  • You may need to set netG=lipspade and ngf=48 inside the configuration file (see the sketch below). In case of loading failure, don't hesitate to submit an issue or email me.
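
As a concrete but unverified illustration, the relevant fields for loading a released checkpoint might look like the snippet below. The field names mirror the TrainOptions excerpt and test code quoted in the issues further down this page, and the checkpoint name '4xsr' is simply the folder name used in one of those issues; treat this as an assumption, not the canonical options/config_hifacegan.py.

# Hypothetical excerpt of options/config_hifacegan.py for testing a released checkpoint.
class TestOptions(object):
    name = '4xsr'                     # folder under ./checkpoints containing latest_net_G.pth
    checkpoints_dir = './checkpoints'
    which_epoch = 'latest'            # loads latest_net_G.pth
    netG = 'lipspade'                 # the released generators expect the LIPSPADE variant
    ngf = 48                          # must match the checkpoint, otherwise load_state_dict fails
    crop_size = 512
    isTrain = False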

Evaluation

Please find in metrics_package folder:

  • main.py: GPU-based PSNR, SSIM, MS-SSIM, FID
  • face_dist.py: CPU-based face embedding distance (FED) and landmark localization error (LLE).
  • PerceptualSimilarity\main.py: GPU-based LPIPS
  • niqe\niqe.py: NIQE, CPU-based, no reference

Note:

  • Read the scripts and modify the result folder path(s) before testing (do not add a trailing /); the results will be displayed on screen and saved to a txt file. Example invocations are sketched after these notes.
  • At least 10 GB of GPU memory is required for main.py. If this is too heavy for you, reduce bs=250 at line 79.
  • Initializing the Inception V3 model for FID could take several minutes; just be patient. If you find a faster solution, please submit a PR.
  • By default the face_dist.py script runs with 8 parallel subprocesses, which could cause errors on certain environments. In that case, just disable the multiprocessing and replace it with a for loop (this would take 2~3 hours for 5k images; you may want to wrap the loop in tqdm to reduce your anxiety).
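
For example, after editing the folder variables inside each script (the paths appear to be hard-coded rather than passed as flags; see the FileNotFoundError issue below), the metrics can be run in the same style as the training commands above. These command lines are a hypothetical illustration, not verified against the scripts:

python metrics_package/main.py                         # GPU-based PSNR, SSIM, MS-SSIM, FID
python metrics_package/face_dist.py                    # CPU-based FED and LLE
python metrics_package/PerceptualSimilarity/main.py    # GPU-based LPIPS
python metrics_package/niqe/niqe.py                    # CPU-based NIQE, no reference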

Benchmark

Please refer to benchmark.md for benchmark experimental settings and performance comparison.

Memory Cost: The default model is designed to fit in a P100 card with 16 GB memory. For a Titan X or 1080 Ti card with 12 GB memory, you can reduce ngf to 48, or further set batchSize=1, without a significant performance drop.

Inference Speed: Currently the inference script is single-threaded and runs at about 5 fps. To further increase the inference speed, possible options are using a multi-threaded dataloader, batch inference, and fusing normalization and convolution operations; a sketch of this direction is given below.
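
A minimal sketch of batched, multi-worker inference, assuming data.create_dataloader, Pix2PixModel, and the 'inference2' forward mode behave as in the code quoted in the issues further down this page. The batchSize/nThreads values, the output value range, and the save loop are placeholder assumptions, not the repository's actual test script.

# Hypothetical batched inference loop with a multi-worker dataloader.
import os
import cv2
import numpy as np
import torch

import data
from options.config_hifacegan import TestOptions
from models.pix2pix_model import Pix2PixModel

opt = TestOptions()
opt.batchSize = 4            # amortize per-image overhead across a batch
opt.nThreads = 4             # parallel workers for image loading/decoding

dataloader = data.create_dataloader(opt)
model = Pix2PixModel(opt)
model.netG.train()           # see the eval()/train() discussion in the issues below

os.makedirs('results', exist_ok=True)
with torch.no_grad():
    for i, batch in enumerate(dataloader):
        generated = model(batch, mode='inference2')   # mode string taken from the issue code
        for j, img in enumerate(generated.cpu().numpy()):
            # assumes outputs in [0, 1] and BGR channel order; adjust to the actual model
            out = (img.transpose(1, 2, 0) * 255).clip(0, 255).astype(np.uint8)
            cv2.imwrite(os.path.join('results', '%d_%d.png' % (i, j)), out)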

Remarks

Face Renovation is not designed to create a perfect specimen OUT OF you, but to bring out the best WITHIN you.

License

Copyright © 2020, Alibaba Group. All rights reserved. This code is intended for academic and educational use only; any commercial usage without authorization is strictly prohibited.

Citation

Please kindly cite our paper when using this project for your research.

@article{Yang2020HiFaceGANFR,
  title={HiFaceGAN: Face Renovation via Collaborative Suppression and Replenishment},
  author={Lingbo Yang and C. Liu and P. Wang and Shanshe Wang and P. Ren and Siwei Ma and W. Gao},
  journal={Proceedings of the 28th ACM International Conference on Multimedia},
  year={2020}
}

Acknowledgements

The replenishment module borrows the implementation of SPADE.

face-renovation's People

Contributors

lotayou


face-renovation's Issues

Questions about the experimental part

For the quantitative comparisons, how did you obtain the numerical scores (e.g., PSNR, SSIM) for the other methods? Did you re-train all the methods on your own training data, or just use their pretrained models?

Custom dataset

Hi, first of all, thanks for sharing this great research.

Are there any guidelines for training on a custom dataset?

test data size is 0

(fr) PS E:\Face-Renovation-master> python test_nogt.py
dataset [TestDataset] of size 0 was created
Network [LIPSPADEGenerator] was created. Total number of parameters: 72.2 million. To see the architecture, do print(network).
Load checkpoint from path: ./checkpoints\face_renov_2\latest_net_G.pth
0it [00:00, ?it/s]
(fr) PS E:\Face-Renovation-master>

Any clue why this is happening? The dataset created has size 0, but I have specified the directory in the config.

Training error

Hello again, I got this training error when running train.py. How can I solve it?

(hiface) G:\HiFaceGAN\Face-Renovation-master>python train.py
train.py
dataset [TrainDataset] of size 7 was created
Network [HiFaceGANGenerator] was created. Total number of parameters: 128.0 million. To see the architecture, do print(network).
Network [MultiscaleDiscriminator] was created. Total number of parameters: 5.5 million. To see the architecture, do print(network).
create web directory ./checkpoints\exp1\web...
Traceback (most recent call last):
  File "train.py", line 93, in <module>
    main()
  File "train.py", line 52, in main
    trainer.run_generator_one_step(data_i)
  File "G:\HiFaceGAN\Face-Renovation-master\trainers\pix2pix_trainer.py", line 34, in run_generator_one_step
    g_losses, generated = self.pix2pix_model(data, mode='generator')
  File "E:\Anaconda3\envs\hiface\lib\site-packages\torch\nn\modules\module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "E:\Anaconda3\envs\hiface\lib\site-packages\torch\nn\parallel\data_parallel.py", line 153, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "E:\Anaconda3\envs\hiface\lib\site-packages\torch\nn\modules\module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "G:\HiFaceGAN\Face-Renovation-master\models\pix2pix_model.py", line 47, in forward
    g_loss, generated = self.compute_generator_loss(input_semantics, real_image)
  File "G:\HiFaceGAN\Face-Renovation-master\models\pix2pix_model.py", line 74, in compute_generator_loss
    fake_image = self.generate_fake(input_semantics)
  File "G:\HiFaceGAN\Face-Renovation-master\models\pix2pix_model.py", line 120, in generate_fake
    fake_image = self.netG(input_semantics)
  File "E:\Anaconda3\envs\hiface\lib\site-packages\torch\nn\modules\module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "G:\HiFaceGAN\Face-Renovation-master\models\networks\generator.py", line 238, in forward
    x = self.head_0(x, xs[0])
  File "E:\Anaconda3\envs\hiface\lib\site-packages\torch\nn\modules\module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "G:\HiFaceGAN\Face-Renovation-master\models\networks\architecture.py", line 55, in forward
    dx = self.conv_0(self.actvn(self.norm_0(x, seg)))
  File "E:\Anaconda3\envs\hiface\lib\site-packages\torch\nn\modules\module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "G:\HiFaceGAN\Face-Renovation-master\models\networks\normalization.py", line 100, in forward
    actv = self.mlp_shared(segmap)
  File "E:\Anaconda3\envs\hiface\lib\site-packages\torch\nn\modules\module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "E:\Anaconda3\envs\hiface\lib\site-packages\torch\nn\modules\container.py", line 100, in forward
    input = module(input)
  File "E:\Anaconda3\envs\hiface\lib\site-packages\torch\nn\modules\module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "E:\Anaconda3\envs\hiface\lib\site-packages\torch\nn\modules\conv.py", line 353, in forward
    return self._conv_forward(input, self.weight)
  File "E:\Anaconda3\envs\hiface\lib\site-packages\torch\nn\modules\conv.py", line 350, in _conv_forward
    self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size [128, 768, 3, 3], expected input[2, 1024, 4, 4] to have 768 channels, but got 1024 channels instead

I've done exactly what you described for degrade.py: input a 512x512 image and produce a paired image of size 512x1024.

Here's the training config

class TrainOptions(object):
    dataroot = './training_t_full/'
    dataroot_assist = ''
    name = 'exp1'
    crop_size = 512

    gpu_ids = [0]  # set to [] for CPU-only training (not tested)
    gan_mode = 'ls'

    continue_train = False
    which_epoch = 'latest'

    D_steps_per_G = 1
    aspect_ratio = 1.0
    batchSize = 2
    beta1 = 0.0
    beta2 = 0.9
    cache_filelist_read = True
    cache_filelist_write = True
    checkpoints_dir = './checkpoints'
    choose_pair = [0, 1]
    coco_no_portraits = False
    contain_dontcare_label = False

    dataset_mode = 'train'
    debug = False
    display_freq = 100
    display_winsize = 256
    print_freq = 100
    save_epoch_freq = 1
    save_latest_freq = 5000

    init_type = 'xavier'
    init_variance = 0.02
    isTrain = True
    is_test = False

    semantic_nc = 3
    label_nc = 3
    output_nc = 3
    lambda_feat = 10.0
    lambda_kld = 0.05
    lambda_vgg = 10.0
    load_from_opt_file = False
    lr = 0.0002
    max_dataset_size = sys.maxsize
    model = 'pix2pix'
    nThreads = 2

    n_layers_D = 4
    num_D = 2
    ndf = 64
    nef = 16
    netD = 'multiscale'
    netD_subarch = 'n_layer'
    netG = 'hifacegan'  # spade, lipspade
    ngf = 64  # set to 48 for Titan X 12GB card
    niter = 30
    niter_decay = 20
    no_TTUR = False
    no_flip = False
    no_ganFeat_loss = False
    no_html = False
    no_instance = True
    no_pairing_check = False
    no_vgg_loss = False

    norm_D = 'spectralinstance'
    norm_E = 'spectralinstance'
    norm_G = 'spectralspadesyncbatch3x3'

    num_upsampling_layers = 'normal'
    optimizer = 'adam'
    phase = 'train'
    prd_resize = 512
    preprocess_mode = 'resize_and_crop'

    serial_batches = False
    tf_log = False
    train_phase = 3  # progressive training disabled (set initial phase to 0 to enable it)
    # 20200211
    #max_train_phase = 2 # default 3 (4x)
    max_train_phase = 3
    # training 1024*1024 is also possible, just turning this to 4 and add more layers in generator.
    upsample_phase_epoch_fq = 5
    use_vae = False
    z_dim = 256

thank you!

How to launch training on 1070

Hi. Can you tell me how I can launch the training process on a GeForce 1070 (8 GB)?
When I changed ngf to 48 in the config file (ngf = 48 # set to 48 for Titan X 12GB card),
it displays this message: RuntimeError: CUDA out of memory. Tried to allocate 192.00 MiB (GPU 0; 8.00 GiB total capacity; 6.05 GiB already allocated; 95.55 MiB free; 108.68 MiB cached)
What can I do to reduce the model size and start training?

Color shift in generated images

Hi author, when training with your training script on the degraded images you provided (mainly the blurred FFHQ set produced by your degradation script for the deblurring task), I found that besides colored artifacts there is also a fairly severe color shift. Nothing else in the script was changed. Did you observe this color shift during your own training?
(attached image: epoch-train-600-020)
The middle image is the generated result, which shows an obvious color shift, and this happens quite often. What is strange is that I did not modify the code at all; I used your code and degradation method directly.
Could this be related to my setting ngf to 32? (Not enough GPU memory; I have to use 32 to run batch=2.)

Image artifacts

Hi,
I followed your instructions to train the network. However, there are some artifacts in the generated images.
These artifacts usually appear where relatively bright (white) pixels are located, such as the eyes and the tip of the nose.
Here are some pictures for details:
(attached images: epoch023_iter227400_input_label, epoch023_iter227400_real_image, epoch023_iter227400_synthesized_image, epoch023_iter228800_synthesized_image)

Can I ask why these artifacts happen, and how I can solve them?
Thank you!

Pretrained checkpoint load problem

Hello, I tried to load the pretrained checkpoints and ran into a problem of weight mismatch. Code:

import os, torch
from collections import OrderedDict

import data
# change config file for ablation study...
from options.config_hifacegan import TestOptions
from models.pix2pix_model import Pix2PixModel
from util.visualizer import Visualizer
from util import html
import numpy as np
import cv2
from tqdm import tqdm

os.environ['CUDA_VISIBLE_DEVICES'] = '0'

torch.backends.cudnn.benchmark = True

opt = TestOptions()
opt.name='4xsr'
# opt.checkpoints_dir='checkpoints/4xsr/'

# dataloader = data.create_dataloader(opt)

model = Pix2PixModel(opt)
### 20200218 Critical Bug
# When model is set to eval mode, the generated image
# is not enhanced whatsoever, with almost 0 residual
# when turned to training mode, it behaves as expected.
###
#model.eval()
#model.netG.eval()
model.netG.train()

error:
Network [HiFaceGANGenerator] was created. Total number of parameters: 130.6 million. To see the architecture, do print(network).

RuntimeError Traceback (most recent call last)
in
22 # dataloader = data.create_dataloader(opt)
23
---> 24 model = Pix2PixModel(opt)
25 ### 20200218 Critical Bug
26 # When model is set to eval mode, the generated image

~/git/Face-Renovation/models/pix2pix_model.py in init(self, opt)
24 else torch.ByteTensor
25
---> 26 self.netG, self.netD, self.netE = self.initialize_networks(opt)
27
28 # set loss functions

~/git/Face-Renovation/models/pix2pix_model.py in initialize_networks(self, opt)
186
187 if not opt.isTrain or opt.continue_train:
--> 188 netG = util.load_network(netG, 'G', opt.which_epoch, opt)
189 if opt.isTrain:
190 netD = util.load_network(netD, 'D', opt.which_epoch, opt)

~/git/Face-Renovation/util/util.py in load_network(net, label, epoch, opt)
207 save_path = os.path.join(save_dir, save_filename)
208 weights = torch.load(save_path)
--> 209 net.load_state_dict(weights)
210 print('Load checkpoint from path: ', save_path)
211 return net

~/.conda/envs/face-renovation/lib/python3.7/site-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict)
1043 0, 'Unexpected key(s) in state_dict: {}. '.format(
1044 ', '.join('"{}"'.format(k) for k in unexpected_keys)))
-> 1045 if len(missing_keys) > 0:
1046 error_msgs.insert(
1047 0, 'Missing key(s) in state_dict: {}. '.format(

RuntimeError: Error(s) in loading state_dict for HiFaceGANGenerator:
Missing key(s) in state_dict: "encoder.head.0.weight", "encoder.encoder_0.0.logit.0.weight", "encoder.encoder_0.0.logit.1.weight", "encoder.encoder_0.0.logit.1.bias", "encoder.encoder_0.1.weight", "encoder.encoder_0.1.bias", "encoder.encoder_1.0.logit.0.weight", "encoder.encoder_1.0.logit.1.weight", "encoder.encoder_1.0.logit.1.bias", "encoder.encoder_1.1.weight", "encoder.encoder_1.1.bias", "encoder.encoder_2.0.logit.0.weight", "encoder.encoder_2.0.logit.1.weight", "encoder.encoder_2.0.logit.1.bias", "encoder.encoder_2.1.weight", "encoder.encoder_2.1.bias", "encoder.encoder_3.0.logit.0.weight", "encoder.encoder_3.0.logit.1.weight", "encoder.encoder_3.0.logit.1.bias", "encoder.encoder_3.1.weight", "encoder.encoder_3.1.bias", "encoder.encoder_4.0.logit.0.weight", "encoder.encoder_4.0.logit.1.weight", "encoder.encoder_4.0.logit.1.bias", "encoder.encoder_4.1.weight", "encoder.encoder_4.1.bias".
size mismatch for fc.weight: copying a param with shape torch.Size([768, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 3, 3, 3]).
size mismatch for fc.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for head_0.conv_0.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for head_0.conv_0.weight_orig: copying a param with shape torch.Size([768, 768, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 1024, 3, 3]).
size mismatch for head_0.conv_0.weight_u: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for head_0.conv_0.weight_v: copying a param with shape torch.Size([6912]) from checkpoint, the shape in current model is torch.Size([9216]).
size mismatch for head_0.conv_1.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for head_0.conv_1.weight_orig: copying a param with shape torch.Size([768, 768, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 1024, 3, 3]).
size mismatch for head_0.conv_1.weight_u: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for head_0.conv_1.weight_v: copying a param with shape torch.Size([6912]) from checkpoint, the shape in current model is torch.Size([9216]).
size mismatch for head_0.norm_0.param_free_norm.running_mean: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for head_0.norm_0.param_free_norm.running_var: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for head_0.norm_0.mlp_shared.0.weight: copying a param with shape torch.Size([128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 1024, 3, 3]).
size mismatch for head_0.norm_0.mlp_gamma.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for head_0.norm_0.mlp_beta.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for head_0.norm_1.param_free_norm.running_mean: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for head_0.norm_1.param_free_norm.running_var: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for head_0.norm_1.mlp_shared.0.weight: copying a param with shape torch.Size([128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 1024, 3, 3]).
size mismatch for head_0.norm_1.mlp_gamma.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for head_0.norm_1.mlp_beta.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for G_middle_0.conv_0.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_0.conv_0.weight_orig: copying a param with shape torch.Size([768, 768, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 1024, 3, 3]).
size mismatch for G_middle_0.conv_0.weight_u: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_0.conv_0.weight_v: copying a param with shape torch.Size([6912]) from checkpoint, the shape in current model is torch.Size([9216]).
size mismatch for G_middle_0.conv_1.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_0.conv_1.weight_orig: copying a param with shape torch.Size([768, 768, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 1024, 3, 3]).
size mismatch for G_middle_0.conv_1.weight_u: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_0.conv_1.weight_v: copying a param with shape torch.Size([6912]) from checkpoint, the shape in current model is torch.Size([9216]).
size mismatch for G_middle_0.norm_0.param_free_norm.running_mean: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_0.norm_0.param_free_norm.running_var: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_0.norm_0.mlp_shared.0.weight: copying a param with shape torch.Size([128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 1024, 3, 3]).
size mismatch for G_middle_0.norm_0.mlp_gamma.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for G_middle_0.norm_0.mlp_beta.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for G_middle_0.norm_1.param_free_norm.running_mean: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_0.norm_1.param_free_norm.running_var: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_0.norm_1.mlp_shared.0.weight: copying a param with shape torch.Size([128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 1024, 3, 3]).
size mismatch for G_middle_0.norm_1.mlp_gamma.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for G_middle_0.norm_1.mlp_beta.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for G_middle_1.conv_0.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_1.conv_0.weight_orig: copying a param with shape torch.Size([768, 768, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 1024, 3, 3]).
size mismatch for G_middle_1.conv_0.weight_u: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_1.conv_0.weight_v: copying a param with shape torch.Size([6912]) from checkpoint, the shape in current model is torch.Size([9216]).
size mismatch for G_middle_1.conv_1.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_1.conv_1.weight_orig: copying a param with shape torch.Size([768, 768, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 1024, 3, 3]).
size mismatch for G_middle_1.conv_1.weight_u: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_1.conv_1.weight_v: copying a param with shape torch.Size([6912]) from checkpoint, the shape in current model is torch.Size([9216]).
size mismatch for G_middle_1.norm_0.param_free_norm.running_mean: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_1.norm_0.param_free_norm.running_var: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_1.norm_0.mlp_shared.0.weight: copying a param with shape torch.Size([128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 1024, 3, 3]).
size mismatch for G_middle_1.norm_0.mlp_gamma.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for G_middle_1.norm_0.mlp_beta.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for G_middle_1.norm_1.param_free_norm.running_mean: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_1.norm_1.param_free_norm.running_var: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_1.norm_1.mlp_shared.0.weight: copying a param with shape torch.Size([128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 1024, 3, 3]).
size mismatch for G_middle_1.norm_1.mlp_gamma.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for G_middle_1.norm_1.mlp_beta.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for ups.0.conv_0.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for ups.0.conv_0.weight_orig: copying a param with shape torch.Size([384, 768, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 1024, 3, 3]).
size mismatch for ups.0.conv_0.weight_u: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for ups.0.conv_0.weight_v: copying a param with shape torch.Size([6912]) from checkpoint, the shape in current model is torch.Size([9216]).
size mismatch for ups.0.conv_1.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for ups.0.conv_1.weight_orig: copying a param with shape torch.Size([384, 384, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for ups.0.conv_1.weight_u: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for ups.0.conv_1.weight_v: copying a param with shape torch.Size([3456]) from checkpoint, the shape in current model is torch.Size([4608]).
size mismatch for ups.0.conv_s.weight_orig: copying a param with shape torch.Size([384, 768, 1, 1]) from checkpoint, the shape in current model is torch.Size([512, 1024, 1, 1]).
size mismatch for ups.0.conv_s.weight_u: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for ups.0.conv_s.weight_v: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for ups.0.norm_0.param_free_norm.running_mean: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for ups.0.norm_0.param_free_norm.running_var: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for ups.0.norm_0.mlp_shared.0.weight: copying a param with shape torch.Size([128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 512, 3, 3]).
size mismatch for ups.0.norm_0.mlp_gamma.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for ups.0.norm_0.mlp_beta.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for ups.0.norm_1.param_free_norm.running_mean: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for ups.0.norm_1.param_free_norm.running_var: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for ups.0.norm_1.mlp_shared.0.weight: copying a param with shape torch.Size([128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 512, 3, 3]).
size mismatch for ups.0.norm_1.mlp_gamma.weight: copying a param with shape torch.Size([384, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 128, 3, 3]).
size mismatch for ups.0.norm_1.mlp_beta.weight: copying a param with shape torch.Size([384, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 128, 3, 3]).
size mismatch for ups.0.norm_s.param_free_norm.running_mean: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for ups.0.norm_s.param_free_norm.running_var: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for ups.0.norm_s.mlp_shared.0.weight: copying a param with shape torch.Size([128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 512, 3, 3]).
size mismatch for ups.0.norm_s.mlp_gamma.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for ups.0.norm_s.mlp_beta.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for ups.1.conv_0.bias: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ups.1.conv_0.weight_orig: copying a param with shape torch.Size([192, 384, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 512, 3, 3]).
size mismatch for ups.1.conv_0.weight_u: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ups.1.conv_0.weight_v: copying a param with shape torch.Size([3456]) from checkpoint, the shape in current model is torch.Size([4608]).
size mismatch for ups.1.conv_1.bias: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ups.1.conv_1.weight_orig: copying a param with shape torch.Size([192, 192, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for ups.1.conv_1.weight_u: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ups.1.conv_1.weight_v: copying a param with shape torch.Size([1728]) from checkpoint, the shape in current model is torch.Size([2304]).
size mismatch for ups.1.conv_s.weight_orig: copying a param with shape torch.Size([192, 384, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 512, 1, 1]).
size mismatch for ups.1.conv_s.weight_u: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ups.1.conv_s.weight_v: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for ups.1.norm_0.param_free_norm.running_mean: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for ups.1.norm_0.param_free_norm.running_var: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for ups.1.norm_0.mlp_shared.0.weight: copying a param with shape torch.Size([128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 256, 3, 3]).
size mismatch for ups.1.norm_0.mlp_gamma.weight: copying a param with shape torch.Size([384, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 128, 3, 3]).
size mismatch for ups.1.norm_0.mlp_beta.weight: copying a param with shape torch.Size([384, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 128, 3, 3]).
size mismatch for ups.1.norm_1.param_free_norm.running_mean: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ups.1.norm_1.param_free_norm.running_var: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ups.1.norm_1.mlp_shared.0.weight: copying a param with shape torch.Size([128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 256, 3, 3]).
size mismatch for ups.1.norm_1.mlp_gamma.weight: copying a param with shape torch.Size([192, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 128, 3, 3]).
size mismatch for ups.1.norm_1.mlp_beta.weight: copying a param with shape torch.Size([192, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 128, 3, 3]).
size mismatch for ups.1.norm_s.param_free_norm.running_mean: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for ups.1.norm_s.param_free_norm.running_var: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for ups.1.norm_s.mlp_shared.0.weight: copying a param with shape torch.Size([128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 256, 3, 3]).
size mismatch for ups.1.norm_s.mlp_gamma.weight: copying a param with shape torch.Size([384, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 128, 3, 3]).
size mismatch for ups.1.norm_s.mlp_beta.weight: copying a param with shape torch.Size([384, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 128, 3, 3]).
size mismatch for ups.2.conv_0.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.2.conv_0.weight_orig: copying a param with shape torch.Size([96, 192, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 256, 3, 3]).
size mismatch for ups.2.conv_0.weight_u: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.2.conv_0.weight_v: copying a param with shape torch.Size([1728]) from checkpoint, the shape in current model is torch.Size([2304]).
size mismatch for ups.2.conv_1.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.2.conv_1.weight_orig: copying a param with shape torch.Size([96, 96, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for ups.2.conv_1.weight_u: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.2.conv_1.weight_v: copying a param with shape torch.Size([864]) from checkpoint, the shape in current model is torch.Size([1152]).
size mismatch for ups.2.conv_s.weight_orig: copying a param with shape torch.Size([96, 192, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 256, 1, 1]).
size mismatch for ups.2.conv_s.weight_u: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.2.conv_s.weight_v: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ups.2.norm_0.param_free_norm.running_mean: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ups.2.norm_0.param_free_norm.running_var: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ups.2.norm_0.mlp_shared.0.weight: copying a param with shape torch.Size([128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for ups.2.norm_0.mlp_gamma.weight: copying a param with shape torch.Size([192, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 128, 3, 3]).
size mismatch for ups.2.norm_0.mlp_beta.weight: copying a param with shape torch.Size([192, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 128, 3, 3]).
size mismatch for ups.2.norm_1.param_free_norm.running_mean: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.2.norm_1.param_free_norm.running_var: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.2.norm_1.mlp_shared.0.weight: copying a param with shape torch.Size([96, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for ups.2.norm_1.mlp_shared.0.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.2.norm_1.mlp_gamma.weight: copying a param with shape torch.Size([96, 96, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for ups.2.norm_1.mlp_beta.weight: copying a param with shape torch.Size([96, 96, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for ups.2.norm_s.param_free_norm.running_mean: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ups.2.norm_s.param_free_norm.running_var: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ups.2.norm_s.mlp_shared.0.weight: copying a param with shape torch.Size([128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for ups.2.norm_s.mlp_gamma.weight: copying a param with shape torch.Size([192, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 128, 3, 3]).
size mismatch for ups.2.norm_s.mlp_beta.weight: copying a param with shape torch.Size([192, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 128, 3, 3]).
size mismatch for ups.3.conv_0.bias: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for ups.3.conv_0.weight_orig: copying a param with shape torch.Size([48, 96, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 128, 3, 3]).
size mismatch for ups.3.conv_0.weight_u: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for ups.3.conv_0.weight_v: copying a param with shape torch.Size([864]) from checkpoint, the shape in current model is torch.Size([1152]).
size mismatch for ups.3.conv_1.bias: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for ups.3.conv_1.weight_orig: copying a param with shape torch.Size([48, 48, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for ups.3.conv_1.weight_u: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for ups.3.conv_1.weight_v: copying a param with shape torch.Size([432]) from checkpoint, the shape in current model is torch.Size([576]).
size mismatch for ups.3.conv_s.weight_orig: copying a param with shape torch.Size([48, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 128, 1, 1]).
size mismatch for ups.3.conv_s.weight_u: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for ups.3.conv_s.weight_v: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.3.norm_0.param_free_norm.running_mean: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.3.norm_0.param_free_norm.running_var: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.3.norm_0.mlp_shared.0.weight: copying a param with shape torch.Size([96, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 64, 3, 3]).
size mismatch for ups.3.norm_0.mlp_shared.0.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.3.norm_0.mlp_gamma.weight: copying a param with shape torch.Size([96, 96, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for ups.3.norm_0.mlp_beta.weight: copying a param with shape torch.Size([96, 96, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for ups.3.norm_1.param_free_norm.running_mean: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for ups.3.norm_1.param_free_norm.running_var: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for ups.3.norm_1.mlp_shared.0.weight: copying a param with shape torch.Size([48, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for ups.3.norm_1.mlp_shared.0.bias: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for ups.3.norm_1.mlp_gamma.weight: copying a param with shape torch.Size([48, 48, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for ups.3.norm_1.mlp_beta.weight: copying a param with shape torch.Size([48, 48, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for ups.3.norm_s.param_free_norm.running_mean: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.3.norm_s.param_free_norm.running_var: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.3.norm_s.mlp_shared.0.weight: copying a param with shape torch.Size([96, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 64, 3, 3]).
size mismatch for ups.3.norm_s.mlp_shared.0.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.3.norm_s.mlp_gamma.weight: copying a param with shape torch.Size([96, 96, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for ups.3.norm_s.mlp_beta.weight: copying a param with shape torch.Size([96, 96, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for to_rgbs.0.weight: copying a param with shape torch.Size([3, 384, 3, 3]) from checkpoint, the shape in current model is torch.Size([3, 512, 3, 3]).
size mismatch for to_rgbs.1.weight: copying a param with shape torch.Size([3, 192, 3, 3]) from checkpoint, the shape in current model is torch.Size([3, 256, 3, 3]).
size mismatch for to_rgbs.2.weight: copying a param with shape torch.Size([3, 96, 3, 3]) from checkpoint, the shape in current model is torch.Size([3, 128, 3, 3]).
size mismatch for to_rgbs.3.weight: copying a param with shape torch.Size([3, 48, 3, 3]) from checkpoint, the shape in current model is torch.Size([3, 64, 3, 3]).

Tested with torch 1.8.1, 1.6, 1.5

How to load and use the pretrained model for image 4x super resolution?

I tried the following code to do 4x super-resolution on an input image,
but the output has very bad quality.
What is the right way to do super-resolution?
Many thanks.

import os
import cv2
import torch
import imageio
import numpy as np
import matplotlib.pyplot as plt

from tqdm import tqdm
from models.pix2pix_model import Pix2PixModel
from options.config_hifacegan import TestOptions


os.environ['CUDA_VISIBLE_DEVICES'] = '0'
torch.backends.cudnn.benchmark = True

opt = TestOptions()
opt.name = '4xsr'
opt.checkpoints_dir = './checkpoints'
opt.netG = 'spade' # lipspade, spade, hifacegan
opt.ngf = 48       # 64, 48
opt.num_upsampling_layers = 'more'

model = Pix2PixModel(opt)
model.netG.train()

# src_image: [512, 512, 3]
src_image = imageio.imread('image.png')
image = src_image.transpose(2, 0, 1) / 255.
image = image[np.newaxis]
image = torch.from_numpy(image).to(torch.float32)

data = {'image': image, 'label': image}
generated = model(data, mode='inference2')
generated = generated.cpu().numpy()[0]
generated = generated.transpose(1, 2, 0)

plt.figure()
plt.subplot(121)
plt.imshow(src_image)
plt.axis('off')
plt.subplot(122)
plt.imshow(generated)
plt.axis('off')
plt.tight_layout()
plt.show()

Input images larger than 512x512?

Hi!
I am very impressed by the capabilities of HiFaceGAN!
However, I tried to process an image larger than 512x512 (changing the values of crop_size and prd_resize in config_hifacegan.py to the size of my image), and it scaled the results back to 512x512. Is there something else I need to do in the code to process larger images?
Thanks!

To train own model using author's model

First of all, thank you for your research; I have learned a lot. Now I want to use your model, 'latest_net_G.pth', as a pretrained model to train my own model on my own dataset, but I can't find anything like 'load_model(...)'.
Is this interface provided in the source code? I'm looking forward to your reply. Thanks.

TypeError in degrade.py

Does this support python 3.6? I'm running into issues with the degrade.py step. I set suffix to full and source_dir to a folder with 10,000 or so images from the FFHQ dataset but end up getting this error. Is 512x512 an absolute requirement for the first step to work?

Traceback (most recent call last):
  File "degrade.py", line 67, in <module>
    create_mixed_dataset(source_dir, suffix)
  File "degrade.py", line 53, in create_mixed_dataset
    trans = get_by_suffix(suffix) # or use other functions
  File "degrade.py", line 47, in get_by_suffix
    raise('%s not supported' % suffix)
TypeError: exceptions must derive from BaseException

No such file or directory

!python ./metrics_package/main.py --help

/usr/local/lib/python3.6/dist-packages/torchvision/models/inception.py:77: FutureWarning: The default weight initialization of inception_v3 will be changed in future releases of torchvision. If you wish to keep the old behavior (which leads to long initialization times due to scipy/scipy#11299), please set init_weights=True.
  ' due to scipy/scipy#11299), please set init_weights=True.', FutureWarning)
Traceback (most recent call last):
  File "./metrics_package/main.py", line 169, in <module>
    run_main(folder)
  File "./metrics_package/main.py", line 78, in run_main
    file_packs = [os.path.join(FOLDER, l) for l in os.listdir(FOLDER)]
FileNotFoundError: [Errno 2] No such file or directory: '/home/lingbo.ylb/datasets/FFHQ_yuv_recon/37'

(colab)

About instancenorm2d

Dear author

I notice that nn.InstanceNorm2d is used without setting affine=True. In this case, the affine transform is not learnable.

I'm wondering whether this is good for image generation. How about setting affine=True, just like batch norm does?

BTW: in class SimplifiedLIP, affine=True is used.

class ContentAdaptiveSuppresor(BaseNetwork):
    def __init__(self, opt, sw, sh, n_2xdown,
                 norm_layer=nn.InstanceNorm2d):
        super().__init__()
        self.sw = sw
        self.sh = sh
        self.max_ratio = 16
        self.n_2xdown = n_2xdown
        # norm_layer = get_nonspade_norm_layer(opt, opt.norm_E)
        # 20200310: Several Convolution (stride 1) + LIP blocks, 4 fold
        ngf = opt.ngf
        kw = 3
        pw = (kw - 1) // 2
        self.head = nn.Sequential(
            nn.Conv2d(opt.semantic_nc, ngf, kw, stride=1, padding=pw, bias=False),
            norm_layer(ngf),
            nn.ReLU(),
        )
        cur_ratio = 1
        for i in range(n_2xdown):
            next_ratio = min(cur_ratio*2, self.max_ratio)
            model = [
                SimplifiedLIP(ngf*cur_ratio),
                nn.Conv2d(ngf*cur_ratio, ngf*next_ratio, kw, stride=1, padding=pw),
                norm_layer(ngf*next_ratio),
            ]
            cur_ratio = next_ratio
            if i < n_2xdown - 1:
                model += [nn.ReLU(inplace=True)]
            setattr(self, 'encoder_%d' % i, nn.Sequential(*model))

    def forward(self, x):
        # 20200628: Note the features are arranged from small to large
        x = [self.head(x)]
        for i in range(self.n_2xdown):
            net = getattr(self, 'encoder_%d' % i)
            x = [net(x[0])] + x
        return x

why set model mode as train() instead of eval() in test?

I noticed the comment, but why?

20200218 Critical Bug

# When model is set to eval mode, the generated image
# is not enhanced whatsoever, with almost 0 residual
# when turned to training mode, it behaves as expected.
###
#model.eval()
model.netG.eval()
#model.netG.train()

Request for pretrained discriminator checkpoints

Hello, @Lotayou. Thank you for your amazing work.
I want to fine-tune HiFaceGAN, but only pretrained generator checkpoints are available. Is there any way to get the discriminator checkpoints for fine-tuning? That would be really great.

Thank you.

blue spots on my own training results

(attached image: 00001-1)
When I train HiFaceGAN, I use the FFHQ dataset and degrade it to create the training data.
I set netG = 'hifacegan' (not the same as your 'lipspade') and ngf = 32 (I only have 11 GB of GPU memory); everything else is the same as yours.
But about 10% of my results look like the attached picture: left is the low-quality input, middle is my result, right is the original high-quality image. I want to know why the model generates those blue spots.

Question about augmentations

Thank you very much for your work!
My question is about the augmentations for Face Renovation. In the article it is said that one of the four degradations is 4x bicubic downsampling. However, in degrade.py the downsampling factor is sampled randomly between 4x and 8x:

# random downsample between 4x to 8x and get back
        ia.Resize((0.125,0.25)),

Which one is correct?

Error

anthony@anthony-OMEN-by-HP-Laptop-17-cb0xxx:~/Downloads/Face-Renovation-master$ python test.py
dataset [TestDataset] of size 1 was created
Network [LIPSPADEGenerator] was created. Total number of parameters: 120.3 million. To see the architecture, do print(network).
Traceback (most recent call last):
File "test.py", line 21, in
model = Pix2PixModel(opt)
File "/home/anthony/Downloads/Face-Renovation-master/models/pix2pix_model.py", line 26, in init
self.netG, self.netD, self.netE = self.initialize_networks(opt)
File "/home/anthony/Downloads/Face-Renovation-master/models/pix2pix_model.py", line 188, in initialize_networks
netG = util.load_network(netG, 'G', opt.which_epoch, opt)
File "/home/anthony/Downloads/Face-Renovation-master/util/util.py", line 209, in load_network
net.load_state_dict(weights)
File "/home/anthony/Downloads/ENTER/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for LIPSPADEGenerator:
size mismatch for fc.weight: copying a param with shape torch.Size([768, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 3, 3, 3]).
size mismatch for fc.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for head_0.conv_0.weight_orig: copying a param with shape torch.Size([768, 768, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 1024, 3, 3]).
[...the remaining size-mismatch entries for head_0, G_middle_0/1, ups.0-3, to_rgbs.0-3 and lip_encoder.* are omitted; every checkpoint channel count is 3/4 of the corresponding count in the current model (768 vs 1024, 384 vs 512, 192 vs 256, 96 vs 128, 48 vs 64).]
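When a load fails like this, it can help to diff the checkpoint against the freshly built generator before calling load_state_dict, so the mismatch shows up as a short summary instead of hundreds of log lines. A generic PyTorch sketch (the helper name and the commented usage paths are placeholders, not the repo's actual loading code):

    import torch

    def diff_state_dicts(ckpt_sd, model_sd):
        """Report missing, unexpected and shape-mismatched keys before load_state_dict."""
        missing = [k for k in model_sd if k not in ckpt_sd]
        unexpected = [k for k in ckpt_sd if k not in model_sd]
        mismatched = [(k, tuple(ckpt_sd[k].shape), tuple(model_sd[k].shape))
                      for k in ckpt_sd
                      if k in model_sd and ckpt_sd[k].shape != model_sd[k].shape]
        print(f'missing={len(missing)}  unexpected={len(unexpected)}  mismatched={len(mismatched)}')
        for k, ck, cur in mismatched[:5]:
            print(f'{k}: checkpoint {ck} vs current model {cur}')
        return missing, unexpected, mismatched

    # Usage sketch (paths and variable names are placeholders):
    # ckpt = torch.load('checkpoints/face_renov/latest_net_G.pth', map_location='cpu')
    # diff_state_dicts(ckpt, netG.state_dict())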

Hi, when I use test.py I run into this problem:

python test.py
dataset [TestDataset] of size 1 was created
Network [HiFaceGANGenerator] was created. Total number of parameters: 130.6 million. To see the architecture, do print(network).
Traceback (most recent call last):
File "test.py", line 22, in
model = Pix2PixModel(opt)
File "/home/anthony/Downloads/Face-Renovation-master/models/pix2pix_model.py", line 26, in init
self.netG, self.netD, self.netE = self.initialize_networks(opt)
File "/home/anthony/Downloads/Face-Renovation-master/models/pix2pix_model.py", line 188, in initialize_networks
netG = util.load_network(netG, 'G', opt.which_epoch, opt)
File "/home/anthony/Downloads/Face-Renovation-master/util/util.py", line 209, in load_network
net.load_state_dict(weights)
File "/home/anthony/Downloads/ENTER/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1044, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for HiFaceGANGenerator:
Missing key(s) in state_dict: "encoder.head.0.weight", "encoder.encoder_0.0.logit.0.weight", "encoder.encoder_0.0.logit.1.weight", "encoder.encoder_0.0.logit.1.bias", "encoder.encoder_0.1.weight", "encoder.encoder_0.1.bias", "encoder.encoder_1.0.logit.0.weight", "encoder.encoder_1.0.logit.1.weight", "encoder.encoder_1.0.logit.1.bias", "encoder.encoder_1.1.weight", "encoder.encoder_1.1.bias", "encoder.encoder_2.0.logit.0.weight", "encoder.encoder_2.0.logit.1.weight", "encoder.encoder_2.0.logit.1.bias", "encoder.encoder_2.1.weight", "encoder.encoder_2.1.bias", "encoder.encoder_3.0.logit.0.weight", "encoder.encoder_3.0.logit.1.weight", "encoder.encoder_3.0.logit.1.bias", "encoder.encoder_3.1.weight", "encoder.encoder_3.1.bias", "encoder.encoder_4.0.logit.0.weight", "encoder.encoder_4.0.logit.1.weight", "encoder.encoder_4.0.logit.1.bias", "encoder.encoder_4.1.weight", "encoder.encoder_4.1.bias".
Unexpected key(s) in state_dict: "lip_encoder.model.0.weight", "lip_encoder.model.3.logit.0.weight", "lip_encoder.model.3.logit.1.weight", "lip_encoder.model.3.logit.1.bias", "lip_encoder.model.4.weight", "lip_encoder.model.4.bias", "lip_encoder.model.7.logit.0.weight", "lip_encoder.model.7.logit.1.weight", "lip_encoder.model.7.logit.1.bias", "lip_encoder.model.8.weight", "lip_encoder.model.8.bias", "lip_encoder.model.11.logit.0.weight", "lip_encoder.model.11.logit.1.weight", "lip_encoder.model.11.logit.1.bias", "lip_encoder.model.12.weight", "lip_encoder.model.12.bias", "lip_encoder.model.15.logit.0.weight", "lip_encoder.model.15.logit.1.weight", "lip_encoder.model.15.logit.1.bias", "lip_encoder.model.16.weight", "lip_encoder.model.16.bias", "lip_encoder.model.19.logit.0.weight", "lip_encoder.model.19.logit.1.weight", "lip_encoder.model.19.logit.1.bias", "lip_encoder.model.20.weight", "lip_encoder.model.20.bias".
size mismatch for fc.weight: copying a param with shape torch.Size([768, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 3, 3, 3]).
size mismatch for fc.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for head_0.conv_0.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for head_0.conv_0.weight_orig: copying a param with shape torch.Size([768, 768, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 1024, 3, 3]).
size mismatch for head_0.conv_0.weight_u: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for head_0.conv_0.weight_v: copying a param with shape torch.Size([6912]) from checkpoint, the shape in current model is torch.Size([9216]).
size mismatch for head_0.conv_1.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for head_0.conv_1.weight_orig: copying a param with shape torch.Size([768, 768, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 1024, 3, 3]).
size mismatch for head_0.conv_1.weight_u: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for head_0.conv_1.weight_v: copying a param with shape torch.Size([6912]) from checkpoint, the shape in current model is torch.Size([9216]).
size mismatch for head_0.norm_0.param_free_norm.running_mean: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for head_0.norm_0.param_free_norm.running_var: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for head_0.norm_0.mlp_shared.0.weight: copying a param with shape torch.Size([128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 1024, 3, 3]).
size mismatch for head_0.norm_0.mlp_gamma.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for head_0.norm_0.mlp_beta.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for head_0.norm_1.param_free_norm.running_mean: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for head_0.norm_1.param_free_norm.running_var: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for head_0.norm_1.mlp_shared.0.weight: copying a param with shape torch.Size([128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 1024, 3, 3]).
size mismatch for head_0.norm_1.mlp_gamma.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for head_0.norm_1.mlp_beta.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for G_middle_0.conv_0.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_0.conv_0.weight_orig: copying a param with shape torch.Size([768, 768, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 1024, 3, 3]).
size mismatch for G_middle_0.conv_0.weight_u: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_0.conv_0.weight_v: copying a param with shape torch.Size([6912]) from checkpoint, the shape in current model is torch.Size([9216]).
size mismatch for G_middle_0.conv_1.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_0.conv_1.weight_orig: copying a param with shape torch.Size([768, 768, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 1024, 3, 3]).
size mismatch for G_middle_0.conv_1.weight_u: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_0.conv_1.weight_v: copying a param with shape torch.Size([6912]) from checkpoint, the shape in current model is torch.Size([9216]).
size mismatch for G_middle_0.norm_0.param_free_norm.running_mean: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_0.norm_0.param_free_norm.running_var: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_0.norm_0.mlp_shared.0.weight: copying a param with shape torch.Size([128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 1024, 3, 3]).
size mismatch for G_middle_0.norm_0.mlp_gamma.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for G_middle_0.norm_0.mlp_beta.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for G_middle_0.norm_1.param_free_norm.running_mean: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_0.norm_1.param_free_norm.running_var: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_0.norm_1.mlp_shared.0.weight: copying a param with shape torch.Size([128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 1024, 3, 3]).
size mismatch for G_middle_0.norm_1.mlp_gamma.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for G_middle_0.norm_1.mlp_beta.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for G_middle_1.conv_0.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_1.conv_0.weight_orig: copying a param with shape torch.Size([768, 768, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 1024, 3, 3]).
size mismatch for G_middle_1.conv_0.weight_u: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_1.conv_0.weight_v: copying a param with shape torch.Size([6912]) from checkpoint, the shape in current model is torch.Size([9216]).
size mismatch for G_middle_1.conv_1.bias: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_1.conv_1.weight_orig: copying a param with shape torch.Size([768, 768, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 1024, 3, 3]).
size mismatch for G_middle_1.conv_1.weight_u: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_1.conv_1.weight_v: copying a param with shape torch.Size([6912]) from checkpoint, the shape in current model is torch.Size([9216]).
size mismatch for G_middle_1.norm_0.param_free_norm.running_mean: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_1.norm_0.param_free_norm.running_var: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_1.norm_0.mlp_shared.0.weight: copying a param with shape torch.Size([128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 1024, 3, 3]).
size mismatch for G_middle_1.norm_0.mlp_gamma.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for G_middle_1.norm_0.mlp_beta.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for G_middle_1.norm_1.param_free_norm.running_mean: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_1.norm_1.param_free_norm.running_var: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for G_middle_1.norm_1.mlp_shared.0.weight: copying a param with shape torch.Size([128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 1024, 3, 3]).
size mismatch for G_middle_1.norm_1.mlp_gamma.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for G_middle_1.norm_1.mlp_beta.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for ups.0.conv_0.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for ups.0.conv_0.weight_orig: copying a param with shape torch.Size([384, 768, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 1024, 3, 3]).
size mismatch for ups.0.conv_0.weight_u: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for ups.0.conv_0.weight_v: copying a param with shape torch.Size([6912]) from checkpoint, the shape in current model is torch.Size([9216]).
size mismatch for ups.0.conv_1.bias: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for ups.0.conv_1.weight_orig: copying a param with shape torch.Size([384, 384, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
size mismatch for ups.0.conv_1.weight_u: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for ups.0.conv_1.weight_v: copying a param with shape torch.Size([3456]) from checkpoint, the shape in current model is torch.Size([4608]).
size mismatch for ups.0.conv_s.weight_orig: copying a param with shape torch.Size([384, 768, 1, 1]) from checkpoint, the shape in current model is torch.Size([512, 1024, 1, 1]).
size mismatch for ups.0.conv_s.weight_u: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for ups.0.conv_s.weight_v: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for ups.0.norm_0.param_free_norm.running_mean: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for ups.0.norm_0.param_free_norm.running_var: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for ups.0.norm_0.mlp_shared.0.weight: copying a param with shape torch.Size([128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 512, 3, 3]).
size mismatch for ups.0.norm_0.mlp_gamma.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for ups.0.norm_0.mlp_beta.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for ups.0.norm_1.param_free_norm.running_mean: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for ups.0.norm_1.param_free_norm.running_var: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for ups.0.norm_1.mlp_shared.0.weight: copying a param with shape torch.Size([128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 512, 3, 3]).
size mismatch for ups.0.norm_1.mlp_gamma.weight: copying a param with shape torch.Size([384, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 128, 3, 3]).
size mismatch for ups.0.norm_1.mlp_beta.weight: copying a param with shape torch.Size([384, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 128, 3, 3]).
size mismatch for ups.0.norm_s.param_free_norm.running_mean: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for ups.0.norm_s.param_free_norm.running_var: copying a param with shape torch.Size([768]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for ups.0.norm_s.mlp_shared.0.weight: copying a param with shape torch.Size([128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 512, 3, 3]).
size mismatch for ups.0.norm_s.mlp_gamma.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for ups.0.norm_s.mlp_beta.weight: copying a param with shape torch.Size([768, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 128, 3, 3]).
size mismatch for ups.1.conv_0.bias: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ups.1.conv_0.weight_orig: copying a param with shape torch.Size([192, 384, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 512, 3, 3]).
size mismatch for ups.1.conv_0.weight_u: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ups.1.conv_0.weight_v: copying a param with shape torch.Size([3456]) from checkpoint, the shape in current model is torch.Size([4608]).
size mismatch for ups.1.conv_1.bias: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ups.1.conv_1.weight_orig: copying a param with shape torch.Size([192, 192, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
size mismatch for ups.1.conv_1.weight_u: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ups.1.conv_1.weight_v: copying a param with shape torch.Size([1728]) from checkpoint, the shape in current model is torch.Size([2304]).
size mismatch for ups.1.conv_s.weight_orig: copying a param with shape torch.Size([192, 384, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 512, 1, 1]).
size mismatch for ups.1.conv_s.weight_u: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ups.1.conv_s.weight_v: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for ups.1.norm_0.param_free_norm.running_mean: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for ups.1.norm_0.param_free_norm.running_var: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for ups.1.norm_0.mlp_shared.0.weight: copying a param with shape torch.Size([128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 256, 3, 3]).
size mismatch for ups.1.norm_0.mlp_gamma.weight: copying a param with shape torch.Size([384, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 128, 3, 3]).
size mismatch for ups.1.norm_0.mlp_beta.weight: copying a param with shape torch.Size([384, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 128, 3, 3]).
size mismatch for ups.1.norm_1.param_free_norm.running_mean: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ups.1.norm_1.param_free_norm.running_var: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ups.1.norm_1.mlp_shared.0.weight: copying a param with shape torch.Size([128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 256, 3, 3]).
size mismatch for ups.1.norm_1.mlp_gamma.weight: copying a param with shape torch.Size([192, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 128, 3, 3]).
size mismatch for ups.1.norm_1.mlp_beta.weight: copying a param with shape torch.Size([192, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 128, 3, 3]).
size mismatch for ups.1.norm_s.param_free_norm.running_mean: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for ups.1.norm_s.param_free_norm.running_var: copying a param with shape torch.Size([384]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for ups.1.norm_s.mlp_shared.0.weight: copying a param with shape torch.Size([128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 256, 3, 3]).
size mismatch for ups.1.norm_s.mlp_gamma.weight: copying a param with shape torch.Size([384, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 128, 3, 3]).
size mismatch for ups.1.norm_s.mlp_beta.weight: copying a param with shape torch.Size([384, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 128, 3, 3]).
size mismatch for ups.2.conv_0.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.2.conv_0.weight_orig: copying a param with shape torch.Size([96, 192, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 256, 3, 3]).
size mismatch for ups.2.conv_0.weight_u: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.2.conv_0.weight_v: copying a param with shape torch.Size([1728]) from checkpoint, the shape in current model is torch.Size([2304]).
size mismatch for ups.2.conv_1.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.2.conv_1.weight_orig: copying a param with shape torch.Size([96, 96, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for ups.2.conv_1.weight_u: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.2.conv_1.weight_v: copying a param with shape torch.Size([864]) from checkpoint, the shape in current model is torch.Size([1152]).
size mismatch for ups.2.conv_s.weight_orig: copying a param with shape torch.Size([96, 192, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 256, 1, 1]).
size mismatch for ups.2.conv_s.weight_u: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.2.conv_s.weight_v: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ups.2.norm_0.param_free_norm.running_mean: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ups.2.norm_0.param_free_norm.running_var: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ups.2.norm_0.mlp_shared.0.weight: copying a param with shape torch.Size([128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for ups.2.norm_0.mlp_gamma.weight: copying a param with shape torch.Size([192, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 128, 3, 3]).
size mismatch for ups.2.norm_0.mlp_beta.weight: copying a param with shape torch.Size([192, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 128, 3, 3]).
size mismatch for ups.2.norm_1.param_free_norm.running_mean: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.2.norm_1.param_free_norm.running_var: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.2.norm_1.mlp_shared.0.weight: copying a param with shape torch.Size([96, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for ups.2.norm_1.mlp_shared.0.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.2.norm_1.mlp_gamma.weight: copying a param with shape torch.Size([96, 96, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for ups.2.norm_1.mlp_beta.weight: copying a param with shape torch.Size([96, 96, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for ups.2.norm_s.param_free_norm.running_mean: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ups.2.norm_s.param_free_norm.running_var: copying a param with shape torch.Size([192]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for ups.2.norm_s.mlp_shared.0.weight: copying a param with shape torch.Size([128, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for ups.2.norm_s.mlp_gamma.weight: copying a param with shape torch.Size([192, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 128, 3, 3]).
size mismatch for ups.2.norm_s.mlp_beta.weight: copying a param with shape torch.Size([192, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([256, 128, 3, 3]).
size mismatch for ups.3.conv_0.bias: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for ups.3.conv_0.weight_orig: copying a param with shape torch.Size([48, 96, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 128, 3, 3]).
size mismatch for ups.3.conv_0.weight_u: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for ups.3.conv_0.weight_v: copying a param with shape torch.Size([864]) from checkpoint, the shape in current model is torch.Size([1152]).
size mismatch for ups.3.conv_1.bias: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for ups.3.conv_1.weight_orig: copying a param with shape torch.Size([48, 48, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for ups.3.conv_1.weight_u: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for ups.3.conv_1.weight_v: copying a param with shape torch.Size([432]) from checkpoint, the shape in current model is torch.Size([576]).
size mismatch for ups.3.conv_s.weight_orig: copying a param with shape torch.Size([48, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 128, 1, 1]).
size mismatch for ups.3.conv_s.weight_u: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for ups.3.conv_s.weight_v: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.3.norm_0.param_free_norm.running_mean: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.3.norm_0.param_free_norm.running_var: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.3.norm_0.mlp_shared.0.weight: copying a param with shape torch.Size([96, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 64, 3, 3]).
size mismatch for ups.3.norm_0.mlp_shared.0.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.3.norm_0.mlp_gamma.weight: copying a param with shape torch.Size([96, 96, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for ups.3.norm_0.mlp_beta.weight: copying a param with shape torch.Size([96, 96, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for ups.3.norm_1.param_free_norm.running_mean: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for ups.3.norm_1.param_free_norm.running_var: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for ups.3.norm_1.mlp_shared.0.weight: copying a param with shape torch.Size([48, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for ups.3.norm_1.mlp_shared.0.bias: copying a param with shape torch.Size([48]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for ups.3.norm_1.mlp_gamma.weight: copying a param with shape torch.Size([48, 48, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for ups.3.norm_1.mlp_beta.weight: copying a param with shape torch.Size([48, 48, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for ups.3.norm_s.param_free_norm.running_mean: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.3.norm_s.param_free_norm.running_var: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.3.norm_s.mlp_shared.0.weight: copying a param with shape torch.Size([96, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 64, 3, 3]).
size mismatch for ups.3.norm_s.mlp_shared.0.bias: copying a param with shape torch.Size([96]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for ups.3.norm_s.mlp_gamma.weight: copying a param with shape torch.Size([96, 96, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for ups.3.norm_s.mlp_beta.weight: copying a param with shape torch.Size([96, 96, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for to_rgbs.0.weight: copying a param with shape torch.Size([3, 384, 3, 3]) from checkpoint, the shape in current model is torch.Size([3, 512, 3, 3]).
size mismatch for to_rgbs.1.weight: copying a param with shape torch.Size([3, 192, 3, 3]) from checkpoint, the shape in current model is torch.Size([3, 256, 3, 3]).
size mismatch for to_rgbs.2.weight: copying a param with shape torch.Size([3, 96, 3, 3]) from checkpoint, the shape in current model is torch.Size([3, 128, 3, 3]).
size mismatch for to_rgbs.3.weight: copying a param with shape torch.Size([3, 48, 3, 3]) from checkpoint, the shape in current model is torch.Size([3, 64, 3, 3]).
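
For anyone hitting the same error: every mismatch above follows one pattern, with the checkpoint tensors consistently narrower (48/96/192/384 channels) than the freshly built model (64/128/256/512), which points to a channel-width mismatch between the saved generator and the one being constructed. Below is a minimal, generic PyTorch sketch (not part of the original report; build_model() and the checkpoint path are placeholders, and the file is assumed to store a plain state_dict) that lists such mismatches before load_state_dict is called:

    # Diagnostic sketch: print every parameter whose shape differs between a
    # checkpoint and a freshly built model, i.e. the entries listed above,
    # without raising inside load_state_dict.
    import torch

    def report_shape_mismatches(model, ckpt_path):
        checkpoint = torch.load(ckpt_path, map_location="cpu")  # assumed to be a plain state_dict
        current = model.state_dict()
        for name, saved in checkpoint.items():
            if name in current and current[name].shape != saved.shape:
                print(f"{name}: checkpoint {tuple(saved.shape)} "
                      f"vs model {tuple(current[name].shape)}")

    # model = build_model()  # placeholder: construct the generator the same way the test script does
    # report_shape_mismatches(model, "path/to/checkpoint.pth")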

Image Super-Resolution on FFHQ 1024 x 1024 - 4x upscaling result and checkpoint

Hi! I'm looking for SOTA results on the task Image Super-Resolution on FFHQ 1024 x 1024 - 4x upscaling, and found that your work is currently listed as the SOTA result; your readme links to this leaderboard: https://paperswithcode.com/sota/image-super-resolution-on-ffhq-1024-x-1024-4x?p=hifacegan-face-renovation-via-collaborative

I'm trying to find the corresponding results in your paper, but I can't. I see close numbers in Table 1, Face Super Resolution (x4, bicubic); nevertheless, they are different. Could you please clarify this point: at what image resolution are the results in your article reported? Are the numbers on the paperswithcode page correct for 1024x1024 resolution? Could you also provide a checkpoint for this model? (As I understand it, the checkpoint archive you provided contains models for 512x512 resolution.)

Are you sure the dataset class is correct?

    "label_paths=image_paths=instance_paths = files

    label_paths = label_paths[:opt.max_dataset_size]
    image_paths = image_paths[:opt.max_dataset_size]
    instance_paths = instance_paths[:opt.max_dataset_size]"

image path == label path? :)
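
One plausible reading, offered only as an assumption and not as a statement about the repository's actual dataset class: sharing a single file list across the three path variables is harmless if the degraded (LQ) counterpart is resolved from the ground-truth (HQ) filename at load time, for example by swapping the root directory. A hypothetical sketch of that pattern:

    # Hypothetical paired-folder dataset, NOT the repository's class.
    # `hq_root` / `lq_root` are assumed parallel folders holding images
    # with identical filenames (e.g. produced by a degradation script).
    import os
    from PIL import Image
    from torch.utils.data import Dataset

    class PairedFolderDataset(Dataset):
        def __init__(self, files, hq_root, lq_root):
            self.files = files                      # one shared list of filenames
            self.hq_root, self.lq_root = hq_root, lq_root

        def __len__(self):
            return len(self.files)

        def __getitem__(self, idx):
            name = os.path.basename(self.files[idx])
            hq = Image.open(os.path.join(self.hq_root, name)).convert("RGB")
            lq = Image.open(os.path.join(self.lq_root, name)).convert("RGB")
            return {"label": lq, "image": hq}       # degraded input paired with its clean target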

niqe

Hello, could you share the size of the face images you feed into the NIQE computation, and the patch size used?
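
For context on what the patch size controls (an illustrative sketch, not the repository's niqe script; the 96x96 default is taken from the original NIQE reference implementation): NIQE fits its natural-scene statistics on local patches, so the score depends on how the image is tiled.

    # Illustration only: tile a grayscale image into the non-overlapping
    # patches a NIQE-style score is computed from (96x96 assumed).
    import numpy as np

    def extract_patches(gray, patch=96):
        """Split a 2-D grayscale array into non-overlapping patch x patch blocks."""
        h, w = gray.shape
        h, w = h - h % patch, w - w % patch          # drop incomplete border blocks
        blocks = gray[:h, :w].reshape(h // patch, patch, w // patch, patch)
        return blocks.transpose(0, 2, 1, 3).reshape(-1, patch, patch)

    patches = extract_patches(np.random.rand(512, 512))
    print(patches.shape)   # (25, 96, 96) for a 512 x 512 input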

Can you share the FaceRenov dataset?

Thank you for sharing your extraordinary work. I find that the model trained on your FaceRenov dataset performs better on real-world images, so I would like to know whether you can share your FaceRenov dataset.

[ARCHIVED]figures

I store all teaser figures here to reduce the size of the repo. No one wants to git clone a huge repo of more than 100 MB.
