
asyrp_official's Introduction

Diffusion Models already have a Semantic Latent Space (ICLR2023 notable-top-25%)

arXiv project_page

Diffusion Models already have a Semantic Latent Space
Mingi Kwon, Jaeseok Jeong, Youngjung Uh
arXiv preprint.

Abstract:
Diffusion models achieve outstanding generative performance in various domains. Despite their great success, they lack a semantic latent space, which is essential for controlling the generative process. To address the problem, we propose asymmetric reverse process (Asyrp) which discovers the semantic latent space in frozen pretrained diffusion models. Our semantic latent space, named h-space, has nice properties for accommodating semantic image manipulation: homogeneity, linearity, robustness, and consistency across timesteps. In addition, we introduce a principled design of the generative process for versatile editing and quality boosting by quantifiable measures: editing strength of an interval and quality deficiency at a timestep. Our method is applicable to various architectures (DDPM++, iDDPM, and ADM) and datasets (CelebA-HQ, AFHQ-dog, LSUN-church, LSUN-bedroom, and METFACES).

Description

This repo includes the official PyTorch implementation of Asyrp: Diffusion Models already have a Semantic Latent Space.

  • Asyrp allows using h-space, the bottleneck of the U-Net, as a semantic latent space of diffusion models.


Edited real images (Top) as Happy dog (Bottom). So cute!!

Getting Started

We recommend running our code using NVIDIA GPU + CUDA, CuDNN.

Pretrained Models for Asyrp

Asyrp works on the checkpoints of pretrained diffusion models.

Image type to edit | Size    | Pretrained model | Dataset      | Reference repo.
Human face         | 256×256 | Diffusion (Auto) | CelebA-HQ    | SDEdit
Human face         | 256×256 | Diffusion        | CelebA-HQ    | P2 weighting
Human face         | 256×256 | Diffusion        | FFHQ         | P2 weighting
Church             | 256×256 | Diffusion (Auto) | LSUN-Church  | SDEdit
Bedroom            | 256×256 | Diffusion (Auto) | LSUN-Bedroom | SDEdit
Dog face           | 256×256 | Diffusion        | AFHQ-Dog     | ILVR
Painting face      | 256×256 | Diffusion        | METFACES     | P2 weighting
ImageNet           | 256×256 | Diffusion        | ImageNet     | Guided Diffusion
  • The pretrained diffusion models for 256×256 images on CelebA-HQ, LSUN-Church, and LSUN-Bedroom are downloaded automatically by the code (code from DiffusionCLIP).

  • In contrast, you need to download the models pretrained on the other datasets in the table and put them in the ./pretrained directory (see the example after this list).

  • You can manually revise the checkpoint paths and names in the ./configs/paths_config.py file.

  • We used the CelebA-HQ model pretrained by SDEdit, but we found that the one from P2 weighting is better. We highly recommend using the P2 weighting models rather than the SDEdit one.
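
For example, here is a minimal, hypothetical sketch of placing a manually downloaded checkpoint. The file name celebahq_p2.pt is an assumption (it is the name used in the issue reports further down this page); adjust the paths to whatever you actually downloaded.

# Hypothetical example: move a manually downloaded P2-weighting checkpoint into ./pretrained
mkdir -p ./pretrained
mv ~/Downloads/celebahq_p2.pt ./pretrained/celebahq_p2.pt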

Datasets

To precompute latents and find a direction in h-space, you need about 100 or more images. You can use either sampled images from the pretrained models or real images from the pretraining dataset.

If you want to use real images, check the URLs of the corresponding datasets.

You can simply modify ./configs/paths_config.py to set the dataset path.

CUSTOM Datasets

If you want to use a custom dataset, you can use the ./configs/custom.yml file.

  • You have to match data.dataset in custom.yml with your data domain. For example, if you want to use Human Face images, data.dataset should be CelebA_HQ or FFHQ.
  • data.category should be 'CUSTOM'
  • Then, you can use the arguments below (a complete example follows):
--custom_train_dataset_dir "your/custom/dataset/dir/train"    \
--custom_test_dataset_dir "your/custom/dataset/dir/test"      \
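
For example, a complete precompute command for a custom dataset could look like the following sketch. The directory paths are placeholders, and the flag combination mirrors the precompute command shown later in this README.

python main.py  --run_train                                          \
                --config custom.yml                                  \
                --exp ./runs/custom_precompute                       \
                --edit_attr test                                     \
                --do_train 1                                         \
                --do_test 1                                          \
                --n_train_img 100                                    \
                --n_test_img 32                                      \
                --bs_train 1                                         \
                --n_inv_step 50                                      \
                --n_train_step 50                                    \
                --n_test_step 50                                     \
                --just_precompute                                    \
                --custom_train_dataset_dir "your/custom/dataset/dir/train" \
                --custom_test_dataset_dir "your/custom/dataset/dir/test"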

Get LPIPS distance

We provide precomputed LPIPS distances for CelebA_HQ, LSUN-Bedroom, LSUN-Church, AFHQ-Dog, and METFACES in the ./utils directory.

If you want to use a custom or other dataset, we recommend precomputing the LPIPS distances.

To precompute the LPIPS distances for automatically defining t_edit and t_boost, run the following command using script_get_lpips.sh (a filled-in example follows the option list below).

python main.py  --lpips                  \
                --config $config         \
                --exp ./runs/tmp         \
                --edit_attr test         \
                --n_train_img 100        \
                --n_inv_step 1000   
  • $config : celeba.yml for human faces, bedroom.yml for bedrooms, church.yml for churches, afhq.yml for dog faces, imagenet.yml for images from ImageNet, metface.yml for artistic faces from METFACES, and ffhq.yml for human faces from FFHQ.
  • exp : Experiment name.
  • edit_attr : Attribute to edit (not actually used for LPIPS computation). You can use the predefined source-target text pairs in ./utils/text_dic.py or define a new pair.
  • n_train_img : Number of images used to compute the LPIPS distances.
  • n_inv_step : Number of steps in the generative process for the inversion. You can use --n_inv_step 50 for speed.
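
For example, for CelebA-HQ with a faster inversion, the command above could be instantiated as follows (a sketch; the values are illustrative):

python main.py  --lpips                  \
                --config celeba.yml      \
                --exp ./runs/tmp         \
                --edit_attr test         \
                --n_train_img 100        \
                --n_inv_step 50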

Asyrp

To train the implicit function f, you can optionally prepare two things: 1) LPIPS distances and 2) precomputed latents of real images.

We already provide precomputed LPIPS distances for CelebA_HQ, LSUN-Bedroom, LSUN-Church, AFHQ-Dog, and METFACES in the ./utils directory.

If you want to use your own user-defined t_edit (e.g., 500) and t_boost (e.g., 200), you don't need the LPIPS distances.

In that case, you can use the arguments below:

--user_defined_t_edit 500       \
--user_defined_t_addnoise 200   \

If you want to train with sampled images, you don't need to precompute real images. In that case, you can use the argument below:

--load_random_noise

Precompute real images

To precompute the latents of real images (to save time), run the following command using script_precompute.sh.

python main.py  --run_train          \
                --config $config     \
                --exp ./runs/tmp     \
                --edit_attr test     \
                --do_train 1         \
                --do_test 1          \
                --n_train_img 100    \
                --n_test_img 32      \
                --bs_train 1         \
                --n_inv_step 50      \
                --n_train_step 50    \
                --n_test_step 50     \
                --just_precompute    

Train the implicit function

To train the implicit function, run the following command using script_train.sh. A filled-in example sketch follows the option list below.

python main.py  --run_train                    \
                --config $config               \
                --exp ./runs/example           \
                --edit_attr $guid              \
                --do_train 1                   \
                --do_test 1                    \
                --n_train_img 100              \
                --n_test_img 32                \
                --n_iter 5                     \
                --bs_train 1                   \
                --t_0 999                      \
                --n_inv_step 50                \
                --n_train_step 50              \
                --n_test_step 50               \
                --get_h_num 1                  \
                --train_delta_block            \
                --save_x0                      \
                --use_x0_tensor                \
                --lr_training 0.5              \
                --clip_loss_w 1.0              \
                --l1_loss_w 3.0                \
                --add_noise_from_xt            \
                --lpips_addnoise_th 1.2        \
                --lpips_edit_th 0.33           \
                --sh_file_name $sh_file_name   \

                (optional - if you pass "get LPIPS")
                --user_defined_t_edit 500      \
                --user_defined_t_addnoise 200  \

                (optional - if you pass "precompute")
                --load_random_noise
  • do_test : If you want to finish training quickly without checking outputs in the middle of training, you can set do_test to 0.
  • bs_train : Training batch size.
  • n_iter : Number of training iterations.
  • get_h_num : Determines the number of attributes. It should be 1 for training.
  • train_delta_block : Train the implicit function. You can use --train_delta_h instead of --train_delta_block to optimize the direction directly (we recommend --train_delta_block).
  • --save_x0, --use_x0_tensor : Use these flags if you want to save the results together with the original real images.
  • n_inv_step, n_train_step, n_test_step : Number of steps in the generative process for inversion, training, and testing, respectively. They are in [0, 999]. We usually use 40 or 1000 for n_inv_step, 40 or 50 for n_train_step, and 40, 50, or 1000 for n_test_step.
  • clip_loss_w, l1_loss_w : Weights of the CLIP loss and the L1 loss.
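
As a concrete illustration, below is a sketch of a training command that combines the user-defined interval and random-noise options described above, so that neither precomputed LPIPS distances nor precomputed real-image latents are needed. The attribute "smiling" is assumed to be one of the pairs in ./utils/text_dic.py, and the exact values are a starting point rather than an official recipe.

python main.py  --run_train                    \
                --config celeba.yml            \
                --exp ./runs/smiling_example   \
                --edit_attr smiling            \
                --do_train 1                   \
                --do_test 1                    \
                --n_train_img 100              \
                --n_test_img 32                \
                --n_iter 5                     \
                --bs_train 1                   \
                --t_0 999                      \
                --n_inv_step 50                \
                --n_train_step 50              \
                --n_test_step 50               \
                --get_h_num 1                  \
                --train_delta_block            \
                --save_x0                      \
                --use_x0_tensor                \
                --lr_training 0.5              \
                --clip_loss_w 1.0              \
                --l1_loss_w 3.0                \
                --user_defined_t_edit 500      \
                --user_defined_t_addnoise 200  \
                --load_random_noise            \
                --sh_file_name "script_train.sh"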

Inference

After training has finished, you can run inference with various settings using script_inference.sh. We provide some of these settings below (a filled-in example sketch follows the option list).

python main.py  --run_test                    \
                --config $config              \
                --exp ./runs/example          \
                --edit_attr $guid             \
                --do_train 1                  \
                --do_test 1                   \
                --n_train_img 100             \
                --n_test_img 32               \
                --n_iter 5                    \
                --bs_train 1                  \
                --t_0 999                     \
                --n_inv_step 50               \
                --n_train_step 50             \
                --n_test_step $test_step      \
                --get_h_num 1                 \
                --train_delta_block           \
                --add_noise_from_xt           \
                --lpips_addnoise_th 1.2       \
                --lpips_edit_th 0.33          \
                --sh_file_name $sh_file_name  \
                --save_x0                     \
                --use_x0_tensor               \
                --hs_coeff_delta_h 1.0        \

                (optional - checkpoint)
                --load_from_checkpoint "exp_name"  
                or
                --manual_checkpoint_name "full_path.pth"

                (optional - gradually editing)
                --delta_interpolation
                --max_delta 1.0
                --min_delta -1.0
                --num_delta 10

                (optinal - multi)
                --multiple_attr "exp1 exp2 exp3"
                --multiple_hs_coeff "1 0.5 1.5"
  • exp : Should match the exp used for training. If you want to use our pretrained implicit functions, set --exp to $guid.
  • do_train, do_test : Sample from the training dataset / test dataset, respectively.
  • n_iter : If n_iter is the same as the value used for training, the last-iteration checkpoint is used.
  • n_test_step : You can manually set the number of inference steps. 1000 gives the best quality.
  • hs_coeff_delta_h : You can manually regulate the degree of editing. It can be negative.
  • --load_from_checkpoint, --manual_checkpoint_name : load_from_checkpoint should be the name of an exp; manual_checkpoint_name should be the full path of a checkpoint.
  • --delta_interpolation : You can set the max, min, and num values. The num results will use a gradually increasing degree of editing from min to max.
  • --multiple_attr : If you use multiple attributes, write down the names of the exps (separated by spaces). You can use --multiple_hs_coeff to regulate the degree of editing for each attribute.
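
As a concrete illustration, below is a sketch of an inference command for gradual editing, assuming an implicit function for the smiling attribute was trained with the template training command above under ./runs/example. The interpolation sweeps the editing strength from -1.0 to 1.0 over 10 results; adjust the experiment name, attribute, and steps to your own run.

python main.py  --run_test                    \
                --config celeba.yml           \
                --exp ./runs/example          \
                --edit_attr smiling           \
                --do_train 1                  \
                --do_test 1                   \
                --n_train_img 100             \
                --n_test_img 32               \
                --n_iter 5                    \
                --bs_train 1                  \
                --t_0 999                     \
                --n_inv_step 50               \
                --n_train_step 50             \
                --n_test_step 1000            \
                --get_h_num 1                 \
                --train_delta_block           \
                --add_noise_from_xt           \
                --lpips_addnoise_th 1.2       \
                --lpips_edit_th 0.33          \
                --save_x0                     \
                --use_x0_tensor               \
                --hs_coeff_delta_h 1.0        \
                --delta_interpolation         \
                --max_delta 1.0               \
                --min_delta -1.0              \
                --num_delta 10                \
                --sh_file_name "script_inference.sh"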

Acknowledgement

Our code is based on DiffusionCLIP.

asyrp_official's People

Contributors

kwonminki


asyrp_official's Issues

About CUSTOM dataset

It's an excellent job. I really appreciate it.
I observed that when computing the LPIPS distance, there is a denoising step which makes use of the pre-trained diffusion models.
If my data domain differs greatly from the pre-training dataset, should I pretrain the diffusion model on my dataset first?

Custom Dataset training

Hello,

Thanks for your awesome work!

I have a question for the authors: is it possible to learn with custom data containing attribute values and images that do not exist in the pretrained model? (e.g., can I put new food images and food attribute values into the CelebA-HQ pretrained model?)

Looking forward to your reply. Thank you!

Bad performance

Thanks for your work!

I tried to reproduce the results for "happy dog" using the released pretrained models, but the performance was bad.

Here are the settings of the inference script.

sh_file_name="script_inference.sh"
gpu="0"
config="afhq.yml"
guid="dog_happy"
test_step=40    # if large, it takes long time.
dt_lambda=1.0   # hyperparameter for dt_lambda. This is the method that will appear in the next paper.

CUDA_VISIBLE_DEVICES=$gpu python main.py --run_test                         \
                        --config $config                                    \
                        --exp ./runs/${guid}                                \
                        --edit_attr $guid                                   \
                        --do_train 0                                       \
                        --do_test 1                                        \
                        --n_train_img 0                                   \
                        --n_test_img 32                                     \
                        --n_iter 5                                          \
                        --bs_train 1                                        \
                        --t_0 999                                           \
                        --n_inv_step 40                                     \
                        --n_train_step 40                                   \
                        --n_test_step $test_step                            \
                        --get_h_num 1                                       \
                        --train_delta_block                                 \
                        --sh_file_name $sh_file_name                        \
                        --save_x0                                           \
                        --use_x0_tensor                                     \
                        --hs_coeff_delta_h 1.0                              \
                        --dt_lambda $dt_lambda                              \
                        --add_noise_from_xt                                 \
                        --lpips_addnoise_th 1.2                             \
                        --lpips_edit_th 0.33                                \
                        --sh_file_name "script_inference.sh"                \
                        --manual_checkpoint_name "dog_happy_LC_dog_t999_ninv40_ngen40_0.pth" \

The pretrained model is "afhqdog_p2.pt"

Some examples of the results (attached images): test_15_4_ngen40, test_23_4_ngen40

Do you have any suggestions on this?

Transfer Guided Diffusion U-Net to your Diffusion Object and back

Hi,

first thank you for your amazing work!

However, I have some questions in part related to the other issues here.

  1. How can I do standard inference with the pre-trained models? E.g. for celebahq_p2.pt the model was supposedly trained with the code of P2 weighting. But the code in the original P2 weighting repository has a totally different structure than your code base. If you load the checkpoint you get a models.ddpm.diffusion.Diffusion object, but this is totally different from what other forks of guided diffusion like P2 weighting understand under a Diffusion object; it rather has the variables and the interface of the U-Net only, without information about the actual diffusion process anymore. If one wraps it in a models.guided_diffusion.gaussian_diffusion.GaussianDiffusion, then one can execute p_sample_loop successfully, but the result of the reverse diffusion process is random noise. How can I test the pretrained models without directly executing Asyrp, only doing normal reverse diffusion?

  2. The same problem also exists in the other direction as already mentioned in #10 - you can train a models.guided_diffusion.unet.UNetModel with the common guided diffusion code (e.g. also the P2 variant), but how are you using this U-Net now in your code?

  3. I guess the meta-question is: how do I transfer between your models.ddpm.diffusion.Diffusion object and the models.guided_diffusion.unet.UNetModel in either direction?

Best,
Sidney

Problem installing requirements

(.venv) user@DESKTOP:~/Asyrp_official$ pip install -r requirements.txt
Collecting blobfile==1.3.1
Using cached blobfile-1.3.1-py3-none-any.whl (70 kB)
ERROR: Could not find a version that satisfies the requirement clip==1.0 (from versions: 0.0.1, 0.1.0, 0.2.0)
ERROR: No matching distribution found for clip==1.0

How can I get clip==1.0?

Custom Dataset Diffusion model

Thanks for your great work!

I want to train Asyrp on a new domain, and I saw in #4 that I need to train my own diffusion model first in order to train Asyrp.

May I ask whether there are any limitations on the diffusion model? For example, if I want to use guided diffusion, does it matter whether it is trained to be class-conditional or not?

Add requirements.txt file

Hi, could you add a requirements.txt file listing the versions of the Python packages used for this paper?

do ./checkpoint/* work?

Hi! I've tried several checkpoints saved in ./checkpoint, like smiling, angry, and young, but it seems that they don't change the corresponding attributes. Are there any problems with the checkpoints you have uploaded?

Need more detailed training & inference settings

Thanks for your work!
I am trying to train Asyrp for animal face editing on my own device. For the attribute 'Happy Dog', the training setting is as below:

sh_file_name="script_train.sh"
gpu="7"

config="afhq.yml" # if you use other dataset, config/path_config.py should be matched
guid="dog_happy"



CUDA_VISIBLE_DEVICES=$gpu python main.py --run_train                        \
                        --config $config                                    \
                        --exp ./runs/$guid                                  \
                        --edit_attr $guid                                   \
                        --do_train 1                                        \
                        --do_test 1                                         \
                        --n_train_img 1000                                   \
                        --n_test_img 32                                     \
                        --n_iter 1                                          \
                        --bs_train 1                                        \
                        --t_0 999                                           \
                        --n_inv_step 40                                     \
                        --n_train_step 40                                   \
                        --n_test_step 1000                                   \
                        --get_h_num 1                                       \
                        --train_delta_block                                 \
                        --sh_file_name $sh_file_name                        \
                        --save_x0                                           \
                        --use_x0_tensor                                     \
                        --hs_coeff_delta_h 1.0                              \
                        --lr_training 0.5                                   \
                        --clip_loss_w 1.5                                   \
                        --l1_loss_w 3.0                                     \
                        --retrain 1                                         \
                        --sh_file_name "script_train.sh"                    \
                        --load_random_noise
                        --lpips_addnoise_th 1.2                           \ # if you compute lpips, use it.
                        --lpips_edit_th 0.33                              \
                        --add_noise_from_xt                               \ # if you compute lpips, use it.

However, the editing results are not so good; the changes in the attached example images are barely distinguishable.

I wonder if you could provide a more detailed training & inference setting for 'Happy Dog'?
Looking forward to your reply. Thank you!

README.md spelling error

In the dataset portion you have the following:

 --custom_train_dataset_dir "your/costom/datset/dir/train" \
 --custom_test_dataset_dir "your/costom/dataset/dir/test" \

and just wanted to say it should be "custom" (small nitpick : ))

Distorted image color

Thanks for the excellent work!
All my generated images had distorted color whether I used self-trained checkpoints or released pretrained models.
Attached example images: test_19_4_ngen50.png, test_7_4_ngen50.png, train_0_0_ngen50.png, train_0_1_ngen50.png, train_0_2_ngen50.png

I only modified some command options of the scripts:

sh_file_name="script_precompute.sh"
gpu="6"

config="custom.yml" # if you use other dataset, config/path_config.py should be matched
guid="smiling"  # guid should be in utils/text_dic.py


CUDA_VISIBLE_DEVICES=$gpu python main.py --run_train                        \
                        --config $config                                    \
                        --exp ./runs/$guid                                  \
                        --edit_attr $guid                                   \
                        --do_train 1                                        \
                        --do_test 1                                         \
                        --n_train_img 100                                   \
                        --n_test_img 32                                     \
                        --bs_train 1                                        \
                        --get_h_num 1                                       \
                        --train_delta_block                                 \
                        --t_0 999                                           \
                        --n_inv_step 50                                     \
                        --n_train_step 50                                   \
                        --n_test_step 50                                    \
                        --just_precompute                                   \
                        --custom_train_dataset_dir "/data/dataset/CelebA-HQ-1024_0-999"       \
                        --custom_test_dataset_dir "/data/dataset/CelebA-HQ-1024_0-999"         \
                        --sh_file_name "script_precompute.sh"       \
                        --model_path      "/mnt/Asyrp_official/pretrained/celebahq_p2.pt"
#!/bin/bash

sh_file_name="script_train.sh"
gpu="6"

config="custom.yml" # if you use other dataset, config/path_config.py should be matched
guid="smiling" # guid should be in utils/text_dic.py


CUDA_VISIBLE_DEVICES=$gpu python main.py --run_train                        \
                        --config $config                                    \
                        --exp ./runs/$guid                                  \
                        --edit_attr $guid                                   \
                        --do_train 1                                        \
                        --do_test 1                                         \
                        --n_train_img 100                                   \
                        --n_test_img 32                                     \
                        --n_iter 5                                          \
                        --bs_train 4                                        \
                        --t_0 999                                           \
                        --n_inv_step 1000                                     \
                        --n_train_step 50                                   \
                        --n_test_step 1000                                   \
                        --get_h_num 1                                       \
                        --train_delta_block                                 \
                        --sh_file_name $sh_file_name                        \
                        --save_x0                                           \
                        --use_x0_tensor                                     \
                        --hs_coeff_delta_h 1.0                              \
                        --lr_training 0.5                                   \
                        --clip_loss_w 1.0                                   \
                        --l1_loss_w 3.0                                     \
                        --retrain 1                                         \
                        --sh_file_name "script_train.sh"  \
                        --model_path      "/mnt/Asyrp_official/pretrained/celebahq_p2.pt" \
                        --custom_train_dataset_dir "/data/dataset/CelebA-HQ-1024_0-999"       \
                        --custom_test_dataset_dir "/data/dataset/CelebA-HQ-1024_0-999"         \
#!/bin/bash

sh_file_name="script_inference.sh"
gpu="6"
config="custom.yml"
guid="smiling"
test_step=1000    # if large, it takes long time.
dt_lambda=1.0   # hyperparameter for dt_lambda. This is the method that will appear in the next paper.

CUDA_VISIBLE_DEVICES=$gpu python main.py --run_test                         \
                        --config $config                                    \
                        --exp ./runs/${guid}                                \
                        --edit_attr $guid                                   \
                        --do_train 1                                    \
                        --do_test 1                                         \
                        --n_train_img 100                                  \
                        --n_test_img 32                                     \
                        --n_iter 5                                          \
                        --bs_train 1                                        \
                        --t_0 999                                           \
                        --n_inv_step 50                                     \
                        --n_train_step 50                                   \
                        --n_test_step $test_step                            \
                        --get_h_num 1                                       \
                        --train_delta_block                                 \
                        --sh_file_name $sh_file_name                        \
                        --save_x0                                           \
                        --use_x0_tensor                                     \
                        --hs_coeff_delta_h 1.0                              \
                        --dt_lambda $dt_lambda                              \
                        --add_noise_from_xt                                 \
                        --lpips_addnoise_th 1.2                             \
                        --lpips_edit_th 0.33                                \
                        --sh_file_name "script_inference.sh"              \
                        --model_path      "/mnt/Asyrp_official/pretrained/celebahq_p2.pt" \
                        --custom_train_dataset_dir "/data/dataset/CelebA-HQ-1024_0-999"                \
                        --custom_test_dataset_dir "/data/dataset/CelebA-HQ-1024_0-999"     \
                        --manual_checkpoint_name "smiling_LC_CelebA_HQ_t999_ninv40_ngen40_0.pth" \

Custom dataset training (P2 weighting)

Hello,

I have trained a diffusion model on a custom dataset using P2 weighting (the training code in the guided_diffusion folder). However, the generated samples are not correct when I use the functions (denoising_step, generalized_steps) you use in the main script, whereas when I use the sampling code in the guided_diffusion/gaussian_diffusion.py module, it correctly generates samples. Could you please provide details about how you trained the CelebA_P2 model, so that I can train on my own dataset using P2 weighting with the same settings and make it work with the sampling code in your repository?

Thanks

Colab demo

Hello,

Thanks for the fantastic work! Do you plan to provide any kind of demo (preferably Google Colab) so that people can smoothly test out the idea? It could make the work more popular, since a broad range of people could try it with less setup time.

There is no option for Celeba_HQ_P2 despite recommendation

Hi,

when trying to apply your script with the config https://github.com/kwonminki/Asyrp_official/blob/main/configs/celeba_p2.yml, you get the following error:

"sh script_get_lpips.sh
INFO: underlay of /usr/bin/nvidia-smi required more than 50 (274) bind mounts
INFO - main.py - 2023-12-07 17:58:17,991 - Using device: cuda
Get lpips distance...
ERROR - main.py - 2023-12-07 17:58:28,427 - Traceback (most recent call last):
File "/home/sidney/workspace/Asyrp_official/main.py", line 337, in main
runner.compute_lpips_distance()
File "/opt/conda/envs/peal/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/sidney/workspace/Asyrp_official/diffusion_latent.py", line 1198, in compute_lpips_distance
model = self.load_pretrained_model()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/sidney/workspace/Asyrp_official/diffusion_latent.py", line 99, in load_pretrained_model
raise ValueError
"

because "CelebA_HQ_P2" is none of the options.

Best,
Sidney
