SEAN: Image Synthesis with Semantic Region-Adaptive Normalization (CVPR 2020, Oral)

Home Page: https://zpdesu.github.io/SEAN/

License: Other

Language: Python 100.00%
Topics: cvpr2020, image-translation, image-generation, gan, face-editing, face-manipulation

sean's Introduction

SEAN: Image Synthesis with Semantic Region-Adaptive Normalization (CVPR 2020 Oral)

Python 3.7, PyTorch 1.2.0, PyQt5 5.13.0

Figure: Face image editing controlled via style images and segmentation masks with SEAN.

We propose semantic region-adaptive normalization (SEAN), a simple but effective building block for Generative Adversarial Networks conditioned on segmentation masks that describe the semantic regions in the desired output image. Using SEAN normalization, we can build a network architecture that can control the style of each semantic region individually, e.g., we can specify one style reference image per region. SEAN is better suited to encode, transfer, and synthesize style than the best previous method in terms of reconstruction quality, variability, and visual quality. We evaluate SEAN on multiple datasets and report better quantitative metrics (e.g. FID, PSNR) than the current state of the art. SEAN also pushes the frontier of interactive image editing. We can interactively edit images by changing segmentation masks or the style for any given region. We can also interpolate styles from two reference images per region.
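
For readers who want a concrete picture of the building block, below is a minimal PyTorch sketch of the idea behind region-adaptive normalization: one style code per semantic region is mapped to per-channel modulation parameters and broadcast onto the pixels of that region through the segmentation mask. This is our illustration only, not the repository's ACE/SEAN implementation; the class and argument names are made up.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class RegionAdaptiveNorm(nn.Module):
        # Illustrative sketch: per-region style codes modulate a parameter-free
        # normalization of the feature map; region membership is given by `seg`.
        def __init__(self, num_channels, num_regions, style_dim):
            super().__init__()
            self.norm = nn.InstanceNorm2d(num_channels, affine=False)
            self.to_gamma = nn.Linear(style_dim, num_channels)
            self.to_beta = nn.Linear(style_dim, num_channels)
            self.num_regions = num_regions

        def forward(self, x, seg, style_codes):
            # x: (B, C, H, W) features; seg: (B, H, W) integer labels with the
            # same spatial size as x; style_codes: (B, num_regions, style_dim).
            normalized = self.norm(x)
            gamma = self.to_gamma(style_codes)                        # (B, R, C)
            beta = self.to_beta(style_codes)                          # (B, R, C)
            onehot = F.one_hot(seg, self.num_regions)                 # (B, H, W, R)
            onehot = onehot.permute(0, 3, 1, 2).float()               # (B, R, H, W)
            # Broadcast each region's (gamma, beta) to the pixels of that region.
            gamma_map = torch.einsum('brc,brhw->bchw', gamma, onehot)
            beta_map = torch.einsum('brc,brhw->bchw', beta, onehot)
            return normalized * (1 + gamma_map) + beta_map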

SEAN: Image Synthesis with Semantic Region-Adaptive Normalization
Peihao Zhu, Rameen Abdal, Yipeng Qin, Peter Wonka
Computer Vision and Pattern Recognition (CVPR) 2020, Oral

[Paper] [Project Page] [Demo]

Installation

Clone this repo.

git clone https://github.com/ZPdesu/SEAN.git
cd SEAN/

This code requires PyTorch, Python 3+, and PyQt5. Please install the dependencies with

pip install -r requirements.txt

This model requires a lot of memory and time to train. To speed up training, we recommend using 4 V100 GPUs.

Dataset Preparation

This code uses the CelebA-HQ and CelebAMask-HQ datasets. The prepared dataset can be downloaded directly here. After unzipping, put the entire CelebA-HQ folder in the datasets folder, so that the directory layout looks like ./datasets/CelebA-HQ/train/ and ./datasets/CelebA-HQ/test/.
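
A quick way to confirm the layout before running anything (our own snippet, not part of the repo) is to check that the images and labels folders referenced by the commands below actually exist:

    import os

    # Count the files in each expected dataset subfolder.
    for split in ("train", "test"):
        for sub in ("images", "labels"):
            path = os.path.join("datasets", "CelebA-HQ", split, sub)
            n = len(os.listdir(path)) if os.path.isdir(path) else 0
            print(f"{path}: {n} files" if n else f"{path}: MISSING")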

Generating Images Using Pretrained Models

Once the dataset is prepared, the reconstruction results can be generated with the pretrained models.

  1. Create ./checkpoints/ in the main folder and download the tar of the pretrained models from the Google Drive Folder. Save the tar in ./checkpoints/, then run

    cd checkpoints
    tar -xvf CelebA-HQ_pretrained.tar.gz
    cd ../
    
  2. Generate the reconstruction results using the pretrained model.

    python test.py --name CelebA-HQ_pretrained --load_size 256 --crop_size 256 --dataset_mode custom --label_dir datasets/CelebA-HQ/test/labels --image_dir datasets/CelebA-HQ/test/images --label_nc 19 --no_instance --gpu_ids 0
  3. The reconstruction images are saved at ./results/CelebA-HQ_pretrained/ and the corresponding style codes are stored at ./styles_test/style_codes/.

  4. Pre-calculate the mean style codes for the UI mode. The mean style codes can be found at ./styles_test/mean_style_code/ (a sketch of the averaging idea follows this list).

    python calculate_mean_style_code.py
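
The exact on-disk format of ./styles_test/style_codes/ is defined by the scripts above; purely as an illustration of the averaging idea behind calculate_mean_style_code.py, a per-region mean over saved code vectors could look like the following (we assume .npy files here, which may not match the real layout):

    import glob
    import os
    import numpy as np

    def mean_style_code(region_dir):
        # Average all saved style-code vectors found for one semantic region.
        paths = glob.glob(os.path.join(region_dir, "*.npy"))
        codes = [np.load(p) for p in paths]
        return np.mean(np.stack(codes), axis=0) if codes else None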

Training New Models

To train a new model, you need to specify the option --dataset_mode custom, along with --label_dir [path_to_labels] and --image_dir [path_to_images]. You also need to specify options such as --label_nc for the number of label classes in the dataset, and --no_instance to indicate that the dataset has no instance maps.

python train.py --name [experiment_name] --load_size 256 --crop_size 256 --dataset_mode custom --label_dir datasets/CelebA-HQ/train/labels --image_dir datasets/CelebA-HQ/train/images --label_nc 19 --no_instance --batchSize 32 --gpu_ids 0,1,2,3

If you only have a single GPU with limited memory, please use --batchSize 2 --gpu_ids 0.
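
When adapting the command above to your own dataset, --label_nc must cover the class indices actually stored in the label maps. The following helper is ours, not part of the repo; it assumes single-channel PNG label maps whose pixel values are the class indices, and reports the smallest valid setting:

    import glob
    import numpy as np
    from PIL import Image

    def min_label_nc(label_dir):
        # Largest class index found in the label maps, plus one.
        values = set()
        for path in glob.glob(f"{label_dir}/*.png"):
            values.update(np.unique(np.array(Image.open(path))).tolist())
        return max(values) + 1 if values else 0

    print(min_label_nc("datasets/CelebA-HQ/train/labels"))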

UI Introduction

We provide a convenient UI for users to build extensions on. To run the UI mode, you need to:

  1. Run the steps in Generating Images Using Pretrained Models to save the style codes of the test images and the mean style codes, or download the style codes directly from here. (Note: if you use the downloaded style codes, you must also use the pretrained model.)

  2. Put the visualization images of the labels used for generation in ./imgs/colormaps/ and the style images in ./imgs/style_imgs_test/. Some example images are provided in these two folders. Note: the visualization image and the style image should be picked from ./datasets/CelebAMask-HQ/test/vis/ and ./datasets/CelebAMask-HQ/test/labels/, because only the style codes of the test images are saved in ./styles_test/style_codes/. If you want to use your own images, prepare the images, labels, and label visualizations in ./datasets/CelebAMask-HQ/test/ in the same format, and calculate the corresponding style codes (see the label-to-colormap sketch after this list).

  3. Run the UI mode

    python run_UI.py --name CelebA-HQ_pretrained --load_size 256 --crop_size 256 --dataset_mode custom --label_dir datasets/CelebA-HQ/test/labels --image_dir datasets/CelebA-HQ/test/images --label_nc 19 --no_instance --gpu_ids 0
  4. How to use the UI: please check the detailed usage of the UI in our video.

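As referenced in step 2, a label map can be turned into a visualization image with a simple palette lookup. The sketch below is ours; the random palette is only a stand-in, since the actual colors used in ./datasets/CelebAMask-HQ/test/vis/ are not documented here.

    import numpy as np
    from PIL import Image

    def colorize_label(label_path, out_path, n_labels=19):
        # Map each class index to an RGB color via a (stand-in) palette lookup.
        label = np.array(Image.open(label_path))                   # (H, W) class indices
        rng = np.random.RandomState(0)
        palette = rng.randint(0, 256, size=(n_labels, 3), dtype=np.uint8)
        Image.fromarray(palette[label]).save(out_path)             # (H, W, 3) color image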

Other Datasets

Will be released soon.

License

All rights reserved. Licensed under CC BY-NC-SA 4.0 (Attribution-NonCommercial-ShareAlike 4.0 International). The code is released for academic research use only.

Citation

If you use this code for your research, please cite our paper.

@InProceedings{Zhu_2020_CVPR,
author = {Zhu, Peihao and Abdal, Rameen and Qin, Yipeng and Wonka, Peter},
title = {SEAN: Image Synthesis With Semantic Region-Adaptive Normalization},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}

Acknowledgments

We thank Wamiq Reyaz Para for helpful comments. This code borrows heavily from SPADE. We thank Taesung Park for sharing his code. This work was supported by the KAUST Office of Sponsored Research (OSR) under Award No. OSR-CRG2018-3730.

sean's People

Contributors

nerdyvedi, zpdesu


sean's Issues

Wanna see code of computing metrics

Thank you for sharing your amazing work. I'm new to PyTorch; your code is clean and has taught me a lot. I found no code for computing the metrics mentioned in the paper (SSIM/RMSE/PSNR/FID), and I'd like to know how you implemented them. Would you please upload that code? Thanks!
Also, training.py runs very slowly for me on a single NVIDIA 2080 Ti with a batch size of 2; after a few hours, not even one epoch has finished. Do you have any idea why?

AttributeError: 'ACE' object has no attribute 'fc_mu105'

Traceback (most recent call last):
File "train.py", line 47, in
trainer.run_generator_one_step(data_i)
File "/d6295745ef534beab3ce2490bedcd8ab/lxy/inkpaint/SEAN/trainers/pix2pix_trainer.py", line 35, in run_generator_one_step
g_losses, generated = self.pix2pix_model(data, mode='generator')
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
return self.module(*inputs[0], **kwargs[0])
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "/d6295745ef534beab3ce2490bedcd8ab/lxy/inkpaint/SEAN/models/pix2pix_model.py", line 45, in forward
input_semantics, real_image)
File "/d6295745ef534beab3ce2490bedcd8ab/lxy/inkpaint/SEAN/models/pix2pix_model.py", line 144, in compute_generator_loss
input_semantics, real_image, compute_kld_loss=self.opt.use_vae)
File "/d6295745ef534beab3ce2490bedcd8ab/lxy/inkpaint/SEAN/models/pix2pix_model.py", line 196, in generate_fake
fake_image = self.netG(input_semantics, real_image)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "/d6295745ef534beab3ce2490bedcd8ab/lxy/inkpaint/SEAN/models/networks/generator.py", line 84, in forward
x = self.head_0(x, seg, style_codes, obj_dic=obj_dic)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "/d6295745ef534beab3ce2490bedcd8ab/lxy/inkpaint/SEAN/models/networks/architecture.py", line 75, in forward
dx = self.ace_0(x, seg, style_codes, obj_dic)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "/d6295745ef534beab3ce2490bedcd8ab/lxy/inkpaint/SEAN/models/networks/normalization.py", line 157, in forward
middle_mu = F.relu(self.getattr('fc_mu' + str(j))(style_codes[i][j]))
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 585, in getattr
type(self).name, name))
AttributeError: 'ACE' object has no attribute 'fc_mu105'

I have run into this problem; can someone help me?

how to prepare the custom datasets?

I use mask images as labels (background is 0, foreground is 255), but it does not work when --label_nc is 2 or 256. It only works when I change the mask images (background 0, foreground 1) and set --label_nc to 2. I don't know why that works.
Is it a problem with how I prepared the custom dataset?
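
One likely explanation is that the loader treats label pixel values as class indices, so a {0, 255} mask would require 256 classes. A minimal remapping sketch follows; it is ours for illustration, not the repository's tooling, and the datasets/my_dataset/ path is hypothetical. It assumes single-channel PNG masks.

import glob
import numpy as np
from PIL import Image

# Remap a {0, 255} binary mask to class indices {0, 1} so that --label_nc 2
# matches the pixel values actually stored in the label maps.
for path in glob.glob("datasets/my_dataset/train/labels/*.png"):
    mask = np.array(Image.open(path).convert("L"))
    Image.fromarray((mask > 127).astype(np.uint8)).save(path)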

question about results

Hi, why are your results in Table 2 (Cityscapes and ADE20K) different from those in the SPADE paper, even though you used the same dataset train/test splits? For instance, SPADE's results on Cityscapes are 62.3 mIoU, 81.9 accu, and 71.8 FID, but you reported 57.88 mIoU, 93.59 accu, and 50.38 FID, respectively.

run_UI.py won't work

sudo python run_UI.py --name CelebA-HQ_pretrained --lo/labels --image_dir datasets/CelebA-HQ/test/images --label_nc 19 --no_instance --gpu

----------------- Options ---------------
aspect_ratio: 1.0
batchSize: 1
cache_filelist_read: False
cache_filelist_write: False
checkpoints_dir: ./checkpoints
contain_dontcare_label: False
crop_size: 256
dataroot: ./datasets/cityscapes/
dataset_mode: custom [default: coco]
display_winsize: 256
gpu_ids: 0
how_many: inf
image_dir: datasets/CelebA-HQ/test/images [default: None]
init_type: xavier
init_variance: 0.02
instance_dir:
isTrain: False [default: None]
label_dir: datasets/CelebA-HQ/test/labels [default: None]
label_nc: 19 [default: 13]
load_from_opt_file: False
load_size: 256
max_dataset_size: 9223372036854775807
model: pix2pix
nThreads: 28
name: CelebA-HQ_pretrained [default: label2coco]
nef: 16
netG: spade
ngf: 64
no_flip: True
no_instance: True [default: False]
no_pairing_check: False
norm_D: spectralinstance
norm_E: spectralinstance
norm_G: spectralspadesyncbatch3x3
num_upsampling_layers: normal
output_nc: 3
phase: test
preprocess_mode: resize_and_crop
results_dir: ./results/
serial_batches: True
status: test
use_vae: False
which_epoch: latest
z_dim: 256
----------------- End -------------------
QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-root'
Network [SPADEGenerator] was created. Total number of parameters: 266.9 million. To see the architecture, do print(network).
/home/user/anaconda3/envs/sean/lib/python3.6/site-packages/torch/nn/functional.py:1350: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.
warnings.warn("nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.")
/home/user/anaconda3/envs/sean/lib/python3.6/site-packages/torch/nn/functional.py:1339: UserWarning: nn.functional.tanh is deprecated. Use torch.tanh instead.
warnings.warn("nn.functional.tanh is deprecated. Use torch.tanh instead.")
Traceback (most recent call last):
File "run_UI.py", line 566, in
ex = ExWindow(opt)
File "run_UI.py", line 35, in init
self.EX = Ex(opt)
File "run_UI.py", line 74, in init
self.init_screen()
File "run_UI.py", line 106, in init_screen
self.run_deep_model()
File "run_UI.py", line 127, in run_deep_model
qim = QImage(generated_img.data, generated_img.shape[1], generated_img.shape[0], QImage.Format_RGB888)
TypeError: arguments did not match any overloaded call:
QImage(): too many arguments
QImage(int, int, QImage.Format): argument 1 has unexpected type 'memoryview'
QImage(bytes, int, int, QImage.Format): argument 1 has unexpected type 'memoryview'
QImage(sip.voidptr, int, int, QImage.Format): argument 1 has unexpected type 'memoryview'
QImage(bytes, int, int, int, QImage.Format): argument 1 has unexpected type 'memoryview'
QImage(sip.voidptr, int, int, int, QImage.Format): argument 1 has unexpected type 'memoryview'
QImage(List[str]): argument 1 has unexpected type 'memoryview'
QImage(str, format: str = None): argument 1 has unexpected type 'memoryview'
QImage(QImage): argument 1 has unexpected type 'memoryview'
QImage(Any): too many arguments

I can't get the UI to work. Any ideas?

Questions for input range

Hello,

I'm reimplementing your work in TensorFlow.

I have some questions about the input range.

Based on your code, the input image is in [-1, 1] and the segmentation map is in [0, 255].

Am I right?

Furthermore, are PSNR and SSIM calculated in the RGB domain or the Y domain?

Concerns about evaluation methods in the paper

Thanks for your contribution, @ZPdesu, but I have some questions about your paper.

  1. In this comment (#7 (comment)) you say that you compute the FID between generated images from the test set and real images from the training set. Could you explain why you chose this approach? In my opinion this is not what we really want to measure; the networks might overfit to the training set, and performance should always be measured using only unseen images.
  2. Why do the pix2pixHD images for Cityscapes shown in the paper, with which you compare the SEAN method, look so bad? They look nothing like the pix2pixHD images from the original paper or what I have reproduced with pix2pixHD at smaller resolutions. Did you train pix2pixHD yourself, and if so, could you specify with what parameters?

ask for advice~

Great job!
Your work can control the style via the mask areas. I want to do some work on controlling the resolution via the mask area; could you please give me some advice?
Thanks!!!

Clarification of network training

Hello,

Thanks for providing source-code of this amazing work.

I am trying to reproduce your work line by line, and I found parts of the implementation unclear. Hopefully you can help clarify the following questions:

  1. The function and module naming in the code is confusing: the region-based adaptive method from the paper is called SEAN, while in this repo it is called ACE. I hope the author (if they have some time) could modify the code to be more readable. This would significantly help readers catch up with the detailed idea, since the majority of the code base is from SPADE.

  2. What's the intuition for letting the last ResBlk degrade into a standard SPADE block rather than using SEAN across all layers?

  3. From the paper (and the code), the loss function is exactly the same as in SPADE and pix2pix. I am wondering how the reconstruction signal pushes the generator to reproduce the input data. Does it come just from the perceptual loss and the feature matching loss?

  4. So the entire training only does reconstruction, in other words, GAN training using the style code extracted from the same training data? Only after it fully converges do we re-shuffle style codes from different data to perform style transfer. Am I understanding this correctly? So we shouldn't reshuffle the style codes during training?

Thanks very much for the help.

Best,

about the loss function

Hi, thanks for your excellent work! By the way, I have some questions about it:

  1. Why not use a reconstruction loss such as L1 or L2 for better reconstruction quality?
  2. If the model is trained specifically for face applications, would it be better to use VGG-Face for calculating the perceptual loss?

Do you need the label mask of the style image in the testing phase?

Greetings!
I want to ask whether you need the label mask of the style image in the testing phase. I notice that the label mask of the style image and the input layout may be very different, e.g. the samples in the ADE20K dataset. When encoding the style, it seems natural to use the corresponding segmentation mask to do the "region-wise average pooling" for the style image.
Thank you.

Got wrong metric results

Help, please! I applied skimage.measure.compare_ssim to your CelebA-HQ_pretrained model on the test set and got an SSIM of 0.54, whereas the paper reports 0.73. The generated images look just fine, with high reconstruction quality. Would you please tell me what has gone wrong? (Actually other metrics went wrong too: https://github.com/mseitzer/pytorch-fid gives an FID of 30, but it should be 17.66.)
Here's my code:

import numpy as np
import skimage.measure  # compare_ssim lives here in older scikit-image versions

def ssim_score(generated_images, reference_images):
    # Mean SSIM over paired (reference, generated) images.
    ssim_score_list = []
    for reference_image, generated_image in zip(reference_images, generated_images):
        ssim = skimage.measure.compare_ssim(reference_image, generated_image, multichannel=True,
                                            data_range=generated_image.max() - generated_image.min())
        ssim_score_list.append(ssim)
    return np.mean(ssim_score_list)

Region-wise operation on ADE20k dataset

Hello, thanks for the great work.

I just wonder how you implemented the region-wise pooling and the SEAN block on the ADE20K dataset.
For the CelebA dataset it might be manageable via a for-loop, since the number of styles is 19, but I'm not sure that is feasible for ADE20K, which has more than 100 styles (or classes).
So could you share some experience about the implementation, or the training/testing time on this dataset?
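
For illustration only (not the repository's code), region-wise average pooling can be vectorized over all classes at once with a one-hot mask, which avoids a Python loop even with 100+ classes as in ADE20K:

import torch
import torch.nn.functional as F

def region_wise_avg_pool(features, seg, num_classes):
    # features: (B, C, H, W); seg: (B, H, W) integer label map (torch.long).
    onehot = F.one_hot(seg, num_classes).permute(0, 3, 1, 2).float()  # (B, R, H, W)
    sums = torch.einsum('bchw,brhw->brc', features, onehot)           # feature sum per region
    counts = onehot.sum(dim=(2, 3)).clamp(min=1).unsqueeze(-1)        # pixel count per region
    return sums / counts                                              # (B, R, C) per-region means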

Problems when trying UI

Traceback (most recent call last):
File "run_UI.py", line 457, in update_entire_feature
if cb_status and i in self.label_count:
AttributeError: 'Ex' object has no attribute 'label_count'
Abort trap: 6

pretrained models

Hi, can you provide the pretrained models for the Cityscapes and ADE20K datasets? SEAN is wonderful work. Thanks!

Question about the SEAN module

Hi, thanks for your work!

I have a question about the SEAN module implementation detail. In the de-normalization process, why did you add an extra 1 to the gamma? i.e., out = normalized * (1 + gamma_final) + beta_final.

Why 1 + gamma here? According to Eq. (1) of the paper, it should be exactly gamma.

How to generate images not via run_UI.py?

Hi,
Thanks for the great work! I wonder if there is a way to generate images from style images and masks without using run_UI.py, because I cannot run the UI demo successfully. Thanks a lot!

Can't run code with multiple GPUs

Hello, my terminal hangs if I try to run the training code with more than one GPU. Do I need to write additional code to support multiple GPUs? If so, where?

No activation in skip connections

Hi, while reading the code I noticed that SPADEResnetBlock.shortcut() doesn't apply an activation function between ACE and Conv.
Since it's different from Figure 4 (B) in the paper, could you explain the reason for it? Thank you in advance

Missing Activation in Shortcut

Hello,

In SEAN paper, in Figure 4B, there is a ReLu in the shortcut. However in your code, there is no activation in the shortcut that is applied to ace_s. Which one is used to generate the results demonstrated in the paper?

Thanks!

x_s = self.ace_s(x, seg, style_codes, obj_dic)


QObject::moveToThread: Current thread (0x5de04f0) is not the object's thread (0x646e910).
Cannot move to target thread (0x5de04f0)

qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/home/muneeb/Desktop/projects/face_style/SEAN/.env/lib/python3.8/site-packages/cv2/qt/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: xcb, eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl.

Aborted (core dumped)

weird results for custom images

Thanks for providing good code.

I want to use a custom image with the pretrained model.

But when I try to reconstruct the original custom image from the custom image and its mask, it gives weird results.


It produces accurate results when using CelebAMask-HQ images, but not when using custom images.

How can I solve this?

Note: I used the BiSeNet model to generate the mask (see here).

Paired image to image translation

Thank you very much for making the code accessible!

I was pleasantly surprised that the training command works exactly as in SPADE.

Instead of semantic image synthesis, I am interested in paired image-to-image translation tasks as in pix2pixHD (domain A -> domain B with paired images of the same name). Over in the SPADE repo the same question was asked multiple times, however, with no answer yet:

NVlabs/SPADE#46
NVlabs/SPADE#112

Is it possible to do paired image-to-image translation with the SEAN model? If not, can you recommend a state-of-the-art model that is more recent than pix2pixHD? pix2pixHD is nice, but even the SPADE authors found several improvements over it in the ablation study of the SPADE paper.

Thank you.

troubles with getting images with styles using run_UI.py

I tried to run run_UI.py, but I can't get the interpolated results.
First, I don't understand how to apply a style image to an opened image.
When I select one of the styles on the right side, the image doesn't appear in the bottom part of the GUI window (as it does in the attached video).

I tried to press the "mask" button, and the style_linear_interpolation method runs, but the saved images in the style_interpolation folder are all the same; it looks like self.obj_dic doesn't affect generate_img (even though obj_dic is updated according to alpha and the styles in the loop).

Can you please show exactly how to use the UI (not the demo video, because I can't figure it out from there)?

Thanks in advance

RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED

I get an error when I run test.py

!python test.py --checkpoints_dir "" --name CelebA-HQ_pretrained --load_size 256 --crop_size 256 --dataset_mode custom --label_dir /content/SEAN/imgs/colormaps --image_dir /content/SEAN/imgs/colormaps --label_nc 19 --no_instance --gpu_ids 0

----------------- Options ---------------
             aspect_ratio: 1.0                           
                batchSize: 1                             
      cache_filelist_read: False                         
     cache_filelist_write: False                         
          checkpoints_dir:                               	[default: ./checkpoints]
   contain_dontcare_label: False                         
                crop_size: 256                           
                 dataroot: ./datasets/cityscapes/        
             dataset_mode: custom                        	[default: coco]
          display_winsize: 256                           
                  gpu_ids: 0                             
                 how_many: inf                           
                image_dir: /content/SEAN/imgs/colormaps  	[default: None]
                init_type: xavier                        
            init_variance: 0.02                          
             instance_dir:                               
                  isTrain: False                         	[default: None]
                label_dir: /content/SEAN/imgs/colormaps  	[default: None]
                 label_nc: 19                            	[default: 13]
       load_from_opt_file: False                         
                load_size: 256                           
         max_dataset_size: 9223372036854775807           
                    model: pix2pix                       
                 nThreads: 28                            
                     name: CelebA-HQ_pretrained          	[default: label2coco]
                      nef: 16                            
                     netG: spade                         
                      ngf: 64                            
                  no_flip: True                          
              no_instance: True                          	[default: False]
         no_pairing_check: False                         
                   norm_D: spectralinstance              
                   norm_E: spectralinstance              
                   norm_G: spectralspadesyncbatch3x3     
    num_upsampling_layers: normal                        
                output_nc: 3                             
                    phase: test                          
          preprocess_mode: resize_and_crop               
              results_dir: ./results/                    
           serial_batches: True                          
                   status: test                          
                  use_vae: False                         
              which_epoch: latest                        
                    z_dim: 256                           
----------------- End -------------------
dataset [CustomDataset] of size 2 was created
Network [SPADEGenerator] was created. Total number of parameters: 266.9 million. To see the architecture, do print(network).
/pytorch/aten/src/THC/THCTensorScatterGather.cu:188: void THCudaTensor_scatterFillKernel(TensorInfo<Real, IndexType>, TensorInfo<long, IndexType>, Real, int, IndexType) [with IndexType = unsigned int, Real = float, Dims = -1]: block: [32,0,0], thread: [160,0,0] Assertion `indexValue >= 0 && indexValue < tensor.sizes[dim]` failed.
(The same assertion is repeated for many more blocks and threads.)
/pytorch/aten/src/THC/THCTensorScatterGather.cu:188: void THCudaTensor_scatterFillKernel(TensorInfo<Real, IndexType>, TensorInfo<long, IndexType>, Real, int, IndexType) [with IndexType = unsigned int, Real = float, Dims = -1]: block: [10,0,0], thread: [183,0,0] Assertion `indexValue >= 0 && indexValue < tensor.sizes[dim]` failed.
/pytorch/aten/src/THC/THCTensorScatterGather.cu:188: void THCudaTensor_scatterFillKernel(TensorInfo<Real, IndexType>, TensorInfo<long, IndexType>, Real, int, IndexType) [with IndexType = unsigned int, Real = float, Dims = -1]: block: [10,0,0], thread: [184,0,0] Assertion `indexValue >= 0 && indexValue < tensor.sizes[dim]` failed.
/pytorch/aten/src/THC/THCTensorScatterGather.cu:188: void THCudaTensor_scatterFillKernel(TensorInfo<Real, IndexType>, TensorInfo<long, IndexType>, Real, int, IndexType) [with IndexType = unsigned int, Real = float, Dims = -1]: block: [10,0,0], thread: [185,0,0] Assertion `indexValue >= 0 && indexValue < tensor.sizes[dim]` failed.
/pytorch/aten/src/THC/THCTensorScatterGather.cu:188: void THCudaTensor_scatterFillKernel(TensorInfo<Real, IndexType>, TensorInfo<long, IndexType>, Real, int, IndexType) [with IndexType = unsigned int, Real = float, Dims = -1]: block: [10,0,0], thread: [186,0,0] Assertion `indexValue >= 0 && indexValue < tensor.sizes[dim]` failed.
/pytorch/aten/src/THC/THCTensorScatterGather.cu:188: void THCudaTensor_scatterFillKernel(TensorInfo<Real, IndexType>, TensorInfo<long, IndexType>, Real, int, IndexType) [with IndexType = unsigned int, Real = float, Dims = -1]: block: [10,0,0], thread: [187,0,0] Assertion `indexValue >= 0 && indexValue < tensor.sizes[dim]` failed.
/pytorch/aten/src/THC/THCTensorScatterGather.cu:188: void THCudaTensor_scatterFillKernel(TensorInfo<Real, IndexType>, TensorInfo<long, IndexType>, Real, int, IndexType) [with IndexType = unsigned int, Real = float, Dims = -1]: block: [10,0,0], thread: [188,0,0] Assertion `indexValue >= 0 && indexValue < tensor.sizes[dim]` failed.
/pytorch/aten/src/THC/THCTensorScatterGather.cu:188: void THCudaTensor_scatterFillKernel(TensorInfo<Real, IndexType>, TensorInfo<long, IndexType>, Real, int, IndexType) [with IndexType = unsigned int, Real = float, Dims = -1]: block: [10,0,0], thread: [189,0,0] Assertion `indexValue >= 0 && indexValue < tensor.sizes[dim]` failed.
/pytorch/aten/src/THC/THCTensorScatterGather.cu:188: void THCudaTensor_scatterFillKernel(TensorInfo<Real, IndexType>, TensorInfo<long, IndexType>, Real, int, IndexType) [with IndexType = unsigned int, Real = float, Dims = -1]: block: [10,0,0], thread: [190,0,0] Assertion `indexValue >= 0 && indexValue < tensor.sizes[dim]` failed.
/pytorch/aten/src/THC/THCTensorScatterGather.cu:188: void THCudaTensor_scatterFillKernel(TensorInfo<Real, IndexType>, TensorInfo<long, IndexType>, Real, int, IndexType) [with IndexType = unsigned int, Real = float, Dims = -1]: block: [10,0,0], thread: [191,0,0] Assertion `indexValue >= 0 && indexValue < tensor.sizes[dim]` failed.
Traceback (most recent call last):
  File "test.py", line 37, in <module>
    generated = model(data_i, mode='inference')
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/SEAN/models/pix2pix_model.py", line 58, in forward
    fake_image = self.save_style_codes(input_semantics, real_image, obj_dic)
  File "/content/SEAN/models/pix2pix_model.py", line 205, in save_style_codes
    fake_image = self.netG(input_semantics, real_image, obj_dic=obj_dic)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/SEAN/models/networks/generator.py", line 79, in forward
    x = self.fc(x)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 343, in forward
    return self.conv2d_forward(input, self.weight)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 340, in conv2d_forward
    self.padding, self.dilation, self.groups)
RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED
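
Not an official answer, but the device-side assertion `indexValue >= 0 && indexValue < tensor.sizes[dim]` typically fires when the label maps contain class indices that are out of range for the one-hot encoding (for example, larger than what --label_nc allows); once that assert fires, later CUDA calls can surface as misleading errors such as the cuDNN one above. The sanity-check sketch below is only an assumption-laden diagnostic, using the default dataset path from the README; adjust it to the --label_dir actually passed to test.py.

# Sanity-check sketch: report the largest class index found in the label maps
# so it can be compared against --label_nc. Paths are assumptions.
import glob
import numpy as np
from PIL import Image

label_dir = "datasets/CelebA-HQ/test/labels"  # adjust to the --label_dir actually used
max_index = 0
for path in glob.glob(f"{label_dir}/*.png"):
    labels = np.array(Image.open(path))
    max_index = max(max_index, int(labels.max()))
print("largest label index:", max_index)  # out-of-range values here would explain the scatter assertion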

The linear layers for the ADE20K dataset

Thanks for your interesting work. I am a little confused by this piece of code: does it mean that each class in the semantic label map needs its own linear layer? So would there be 150 linear layers for the ADE20K dataset? Could you explain in detail how to apply your method to ADE20K?

self.fc_mu0 = nn.Linear(style_length, style_length)
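
For what it is worth, the per-class pattern that fc_mu0 hints at can be written generically with an nn.ModuleList, so that 150 classes do not require 150 hand-written attributes. The sketch below is only an illustration of that idea under assumed names and sizes (num_classes=150, style_length=512); it is not the authors' implementation.

import torch.nn as nn

class PerRegionStyleFC(nn.Module):
    # Illustrative only: one small linear layer per semantic class, kept in an
    # nn.ModuleList instead of separate attributes such as fc_mu0, fc_mu1, ...
    def __init__(self, num_classes=150, style_length=512):
        super().__init__()
        self.fc_mu = nn.ModuleList(
            [nn.Linear(style_length, style_length) for _ in range(num_classes)]
        )

    def forward(self, style_codes):
        # style_codes: tensor of shape (num_classes, style_length), one code per class
        return [self.fc_mu[i](style_codes[i]) for i in range(len(self.fc_mu))]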

The results from validation during training and from inference mode differ from each other

Hi, thank you for your code.
I have a question about a result that makes no sense to me.

As you know, the visualizer.display_current_results function saves result images during training, presumably produced by the generator G with the latest learned weights.
However, the images synthesized in inference mode by test.py look terrible, even though the train set and test set are the same. This makes no sense to me: during training (around epochs 50–100) I consistently see good synthesized images, so the same weight file should give the same results in inference mode, right?

Is there anything I missed?

Thank you.

Having trouble with the UI: "qt.qpa.plugin" issue

qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/home/student/anaconda3/envs/sean_env/lib/python3.7/site-packages/cv2/qt/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Any suggestions?

All the other pretrained models work fine!
I am currently using an Ubuntu server from VS Code.
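
Not a definitive fix, but the path in the error message shows Qt loading its platform plugins from the directory bundled with the cv2 wheel, which often conflicts with PyQt5. One common workaround is to clear the Qt plugin-path environment variable before the UI creates its QApplication; the sketch below assumes cv2 (opencv-python) set QT_QPA_PLATFORM_PLUGIN_PATH, as the error path suggests.

# Workaround sketch: stop Qt from loading platform plugins out of the cv2 package.
# This only helps if cv2 set QT_QPA_PLATFORM_PLUGIN_PATH, as the error path suggests.
import os
os.environ.pop("QT_QPA_PLATFORM_PLUGIN_PATH", None)

from PyQt5.QtWidgets import QApplication  # import PyQt5 after clearing the variable
app = QApplication([])

Alternatively, replacing opencv-python with opencv-python-headless in the environment avoids shipping the conflicting Qt plugins in the first place.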

How to run the code on the ADE20K dataset

Thanks for your good work. I am trying to run your code on the ADE20K dataset. I set --dataset_mode custom --label_nc 150 --label_dir datasets/try_work/labels --image_dir datasets/try_work/images, but the code reports an error.
(screenshot of the reported error omitted)

Losses turned into NaN on the ADE20K dataset

Hello there, I'm trying to train the model on the ADE20K dataset, but after a few epochs of training the losses turn into NaN and D_real drops to 0.000 (the generator produces blank images). Could you tell me what the problem might be? Thanks a lot!
Here's how I trained:
python train.py --name ADE --load_size 256 --crop_size 256 --dataset_mode custom --label_dir /home/ADEChallengeData2016/annotations/training --image_dir /home/ADEChallengeData2016/images/training --label_nc 151 --no_instance --batchSize 2 --gpu_ids 7

Unable to get the correct output

Hi,
Thanks a lot for uploading such wonderful work. I have been trying to test the model, and I am not sure what I am doing wrong.
I downloaded the test dataset and the checkpoints you uploaded, and I am sure the paths in the test command are correct.
The reconstructed images I am getting are almost the same as the input images, and I am confused why.

This is the link to the result I am getting.

What is the parsing color map used in your paper?

I used BiSeNet to parse the faces in my dataset and then used your script util/CelebMask-HQ/Data_preprocessing/v_mask.py, but I get different colors. How can I generate a color parsing map that matches yours? Thanks!
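
As background, a color parsing map is typically produced by painting each integer class index with a fixed color from a palette, so mismatched colors usually mean a different palette (or class ordering) was used. The sketch below only illustrates that mechanism; the palette values and file names are placeholders, and the authoritative class-to-color mapping should be taken from the repository's own preprocessing script.

# Illustrative sketch: turn an integer label map into a color parsing map.
import numpy as np
from PIL import Image

palette = {0: (0, 0, 0), 1: (204, 0, 0), 2: (76, 153, 0)}  # hypothetical colors per class id

label = np.array(Image.open("my_label.png"))      # hypothetical parser output with integer class ids
color = np.zeros((*label.shape, 3), dtype=np.uint8)
for class_id, rgb in palette.items():
    color[label == class_id] = rgb                # paint every pixel of this class with its color
Image.fromarray(color).save("my_label_color.png")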

Size mismatch when trying to continue training

I am trying to continue training CelebA-HQ_pretrained using your provided checkpoints and got this error:
size mismatch for fc.weight: copying a param with shape torch.Size([1024, 19, 3, 3]) from checkpoint, the shape in current model is torch.Size([1024, 20, 3, 3]).
Has the model changed?
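
For context on where a 19-versus-20 channel difference can come from: in SPADE-style code (which SEAN builds on), the generator's input channel count is usually derived from --label_nc plus optional "don't care" and instance-edge channels. The sketch below is an assumption about that bookkeeping, not a quote of SEAN's code, but it shows how a missing --no_instance flag (or an extra don't-care channel) can shift the expected shape by one.

# Hedged sketch of how SPADE-style options typically translate into generator input channels.
def semantic_nc(label_nc: int, contain_dontcare_label: bool, no_instance: bool) -> int:
    dontcare = 1 if contain_dontcare_label else 0   # extra channel for unlabeled pixels
    instance = 0 if no_instance else 1              # extra channel for instance edge maps
    return label_nc + dontcare + instance

print(semantic_nc(19, False, True))   # 19 channels, matching the pretrained fc.weight
print(semantic_nc(19, False, False))  # 20 channels, matching the shape in the error above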

labels

CelebA-HQ/test/labels

How can I get these labels? What did you use as labels?

some questions about training details

Hello,
Question 1: Did you try synthesis at 512×512 resolution? Can the model handle 512-pixel synthesis? I trained a 512-size model on CelebA-HQ, but the synthesized images are not sharp and not similar to the input images; I only trained for 3 epochs.
Question 2: Did you try instance normalization (IN) instead of batch normalization (BN) in the generator? I found that the synthesized results on the test set are not very sharp and lose some details, and I suspect that using IN instead of BN might improve them.

No module named 'data.coco_dataset'

Hi, I'm trying to run test.py through Anaconda, but it raises this error. I tried to download data.coco_dataset but could not.
My system is Windows 10 with torch 1.11.0+cpu.
Best regards

(screenshot of the error omitted)

Synthesis with custom style codes

Hi! Thank you for your great work. I was wondering whether it is somehow possible to use your framework to synthesize faces with custom/arbitrary style codes (i.e., not with a style image as input, but directly with the style encodings), for instance in order to perform walks/interpolations in the style latent space. Thanks a lot :)
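
One low-tech way to experiment in that direction, without touching the training code, is to interpolate between the style codes that test.py already writes under ./styles_test/style_codes/ and feed the blended codes back in. The sketch below only shows the interpolation itself; the directory layout and file names are assumptions and need to be adapted to what the repository actually saves.

# Interpolation sketch between two saved style codes (file names below are hypothetical).
import os
import numpy as np

out_dir = "styles_test/style_codes/blend"          # hypothetical output location
os.makedirs(out_dir, exist_ok=True)

code_a = np.load("styles_test/style_codes/imageA/region_1.npy")  # hypothetical per-region code
code_b = np.load("styles_test/style_codes/imageB/region_1.npy")  # hypothetical per-region code

for t in np.linspace(0.0, 1.0, num=5):
    blended = (1.0 - t) * code_a + t * code_b      # linear walk in the style latent space
    np.save(os.path.join(out_dir, f"region_1_t{t:.2f}.npy"), blended)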
