
universome / alis


[ICCV 2021] Aligning Latent and Image Spaces to Connect the Unconnectable

Home Page: http://universome.github.io/alis


alis's Introduction

About

This repo contains the official implementation of the paper Aligning Latent and Image Spaces to Connect the Unconnectable. It is a GAN model that can generate infinite images of diverse and complex scenes.

ALIS generation example

[Project page] [Paper]


Python 3.7 · PyTorch 1.7

Installation

To install, run the following command:

conda env create --file environment.yml --prefix ./env
conda activate ./env

Note: the tensorboard requirement is crucial; without it, upfirdn2d will not compile for some magical reason. The repo should work on Linux, macOS, and Windows machines. However, on Windows you might run into difficulties installing some requirements: please see #3 to troubleshoot. Also, since the current repo is heavily based on StyleGAN2-ADA, it might be helpful to check the original installation requirements.
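
If you want to check that the custom CUDA kernels actually compile (the usual failure point), the following minimal sketch forces both plugins to build. It assumes you run it from the repo root on a machine with a visible CUDA GPU, and that the ops keep their StyleGAN2-ADA signatures:

import torch
from torch_utils.ops import upfirdn2d, bias_act

x = torch.randn(1, 3, 64, 64, device='cuda')
f = torch.ones([4, 4], device='cuda') / 16   # simple 4x4 box filter
y = upfirdn2d.upfirdn2d(x=x, f=f, up=2)      # JIT-builds upfirdn2d_plugin
z = bias_act.bias_act(x)                     # JIT-builds bias_act_plugin
print(y.shape, z.shape)                      # no warnings => kernels compiled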

Training

To train the model, navigate to the project directory and run:

python infra/launch_local.py hydra.run.dir=. +experiment_name=my_experiment_name +dataset=dataset_name num_gpus=4

where dataset_name is the name of the dataset (without the .zip extension) inside the data/ directory (you can easily override the paths in configs/main.yml). Make sure that data/dataset_name.zip exists and is a zip archive of a plain directory of images. See the StyleGAN2-ADA repo for additional data-format details. This training command will create an experiment inside the experiments/ directory and copy the project files into it. This is needed to isolate the code which produces the model.

Inference

The inference example can be found in notebooks/generate.ipynb.
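
For reference, a minimal loading sketch. It assumes the repo keeps StyleGAN2-ADA's dnnlib/legacy helpers and the G_ema pickle key; the ws_context / left_borders_idx bookkeeping needed for infinite generation is shown in the notebook:

import torch
import dnnlib, legacy  # shipped with the repo (StyleGAN2-ADA lineage)

# Checkpoint URL as used by the repo's own download code (see the issues below).
url = 'https://vision-cair.s3.amazonaws.com/alis/lhq1024-snapshot.pkl'
device = torch.device('cuda')

with dnnlib.util.open_url(url) as f:
    G = legacy.load_network_pkl(f)['G_ema'].to(device).eval()

z = torch.randn(1, G.z_dim, device=device)
ws = G.mapping(z, c=None)
# Infinite generation threads extra arguments through the synthesis network:
# img = G.synthesis(ws, ws_context=..., left_borders_idx=..., noise='const')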

Data format

We use the same data format as the original StyleGAN2-ADA repo: a zip of images. All data is assumed to be located in a single directory, specified in configs/main.yml. Put your datasets as zip archives into the data/ directory. It is recommended to preprocess the dataset with the procedure described in Algorithm 1, since it noticeably affects the results (see Table 3).
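
For example, a minimal packing sketch (the my_images path and the glob pattern are illustrative):

import zipfile
from pathlib import Path

src = Path('my_images')              # a plain directory of images
dst = Path('data/dataset_name.zip')  # must match +dataset=dataset_name
dst.parent.mkdir(exist_ok=True)

with zipfile.ZipFile(dst, 'w') as zf:
    for p in sorted(src.glob('*.png')):
        zf.write(p, arcname=p.name)  # keep the archive flat: images at the root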

Pretrained checkpoints

We provide checkpoints for the following datasets:

  • LHQ 1024x1024 with FID = 7.8. Note: this checkpoint has a patch size of 1024x512, i.e. each image is generated in just two halves.

LHQ dataset

Note: images are sorted by their likelihood, which is why images with smaller indices are much noisier. We will release a filtered version soon.

25 random images from LHQ

We collected 90k high-resolution nature landscape images and provide them for download in the following formats:

Path Size Number of files Format Description
Landscapes HQ 283G 90,000 PNG The root directory with all the files
├  LHQ 155G 90,000 PNG The complete dataset. Split into 4 zip archives.
├  LHQ1024 107G 90,000 PNG LHQ images, resized to min-side=1024 and center-cropped to 1024x1024. Split into 3 zip archives.
├  LHQ1024_jpg 12G 90,000 JPG LHQ1024 converted to JPG format with quality=95 (with Pillow)*
├  LHQ256 8.7G 90,000 PNG LHQ1024 resized to 256x256 with Lanczos interpolation
└  metadata.json 27M 1 JSON Dataset metadata (author names, licenses, descriptions, etc.)

*quality=95 in Pillow for JPG images (the default is 75) produces images that are almost indistinguishable from the PNG ones, both visually and in terms of FID.
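
A minimal conversion sketch along these lines (directory names are illustrative):

from pathlib import Path
from PIL import Image

src, dst = Path('LHQ1024'), Path('LHQ1024_jpg')
dst.mkdir(exist_ok=True)

for p in sorted(src.glob('*.png')):
    img = Image.open(p).convert('RGB')             # JPG has no alpha channel
    img.save(dst / (p.stem + '.jpg'), quality=95)  # Pillow default is 75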

The images come with Unsplash/Creative Commons/U.S. Government Works licenses which allow distribution and use for research purposes. For details, see lhq.md and Section 4 in the paper.

Downloading files:

python download_lhq.py [DATASET_NAME]

License

The project is based on the StyleGAN2-ADA repo developed by NVIDIA. I am not a lawyer, but I suppose the NVIDIA License therefore applies to this project's code. The LHQ dataset, however, is released under the Creative Commons Attribution 2.0 Generic (CC BY 2.0) License, which allows you to use it in any way you like. See lhq.md.

BibTeX

@article{ALIS,
  title={Aligning Latent and Image Spaces to Connect the Unconnectable},
  author={Skorokhodov, Ivan and Sotnikov, Grigorii and Elhoseiny, Mohamed},
  journal={arXiv preprint arXiv:2104.06954},
  year={2021}
}


alis's Issues

Comments / Clarification of Generate.ipynb

Thank you for the great project!
I'd like to ask for some clarification of how the generation process works in the generate.ipynb notebook. A bit of guidance / code comments in the third cell (starts with "num_frames") would be most appreciated. In particular, I'd like to:

  • Explicitly define the starting vector of an inference (rather than starting from a random location).
  • More carefully control the magnitude of each "shift" that produces the segments of the panorama. Where can I adjust how "far away" in latent space each step is?

I'm asking for these clarifications because I'd like to make a different sort of animation. Rather than panning across a long panoramic strip, I'd like the strip as a whole to animate across a "latent walk" (something like shown here, but in which the animation could be of arbitrary width).
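
For concreteness, here is roughly the kind of walk I have in mind, written as a generic StyleGAN-style sketch (the G.mapping call and shapes are my assumptions, not necessarily this repo's exact API):

import torch

def latent_walk(G, z_start, z_end, num_steps=60):
    # Linear interpolation in z-space; each step moves 1/(num_steps-1)
    # of the way, which is the "shift magnitude" I would like to control.
    frames = []
    for t in torch.linspace(0.0, 1.0, num_steps):
        z = (1 - t) * z_start + t * z_end
        ws = G.mapping(z, c=None)  # assumed StyleGAN-style mapping network
        frames.append(ws)
    return frames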

Again, thank you for an excellent project, and for any guidance.

Is Python 3.8.5 necessary?

Thanks for your interesting work!

When running your generate.ipynb on Linux, I ran into a messy dependency issue.
Reviewing your environment specification, I noticed that your Python version is quite high (for now):

- python=3.8.5

For example, when I install opencv via conda, I get:

Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed
Found conflicts! Looking for incompatible packages.
This can take several minutes.  Press CTRL-C to abort.

UnsatisfiableError: The following specifications were found
to be incompatible with the existing python installation in your environment:

Specifications:

  - opencv -> python[version='>=2.7,<2.8.0a0|>=3.5,<3.6.0a0|>=3.6,<3.7.0a0|>=3.7,<3.8.0a0']

Your python: defaults/linux-64::python==3.8.5=h7579374_1

If python is on the left-most side of the chain, that's the version you've asked for.
When python appears to the right, that indicates that the thing on the left is somehow
not available for the python version you are constrained to. Note that conda will not
change your python version to a different minor version unless you explicitly specify
that.

I'd like to know whether this Python version is really necessary, since I would like to downgrade it.

BTW, there's no opencv specified in your environment.yml.

Issue while launching generate.py

Hello.

Thanks for your code. When I try to launch the generate script, I get this error:

Downloading https://vision-cair.s3.amazonaws.com/alis/lhq1024-snapshot.pkl ... done
Setting up PyTorch plugin "bias_act_plugin"... Failed!
/home/daddywesker/SingularityNet/MixingImages/alis/torch_utils/ops/bias_act.py:50: UserWarning: Failed to build CUDA kernels for bias_act. Falling back to slow reference implementation. Details:

Error building extension 'bias_act_plugin': [1/3] /usr/bin/nvcc --generate-dependencies-with-compile --dependency-output bias_act.cuda.o.d -DTORCH_EXTENSION_NAME=bias_act_plugin -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include/TH -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include/THC -isystem /home/daddywesker/anaconda3/envs/torch/include/python3.7m -D_GLIBCXX_USE_CXX11_ABI=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 --compiler-options '-fPIC' --use_fast_math -std=c++14 -c /home/daddywesker/SingularityNet/MixingImages/alis/torch_utils/ops/bias_act.cu -o bias_act.cuda.o 
FAILED: bias_act.cuda.o 
/usr/bin/nvcc --generate-dependencies-with-compile --dependency-output bias_act.cuda.o.d -DTORCH_EXTENSION_NAME=bias_act_plugin -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include/TH -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include/THC -isystem /home/daddywesker/anaconda3/envs/torch/include/python3.7m -D_GLIBCXX_USE_CXX11_ABI=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 --compiler-options '-fPIC' --use_fast_math -std=c++14 -c /home/daddywesker/SingularityNet/MixingImages/alis/torch_utils/ops/bias_act.cu -o bias_act.cuda.o 
nvcc fatal   : Unknown option '-generate-dependencies-with-compile'
[2/3] c++ -MMD -MF bias_act.o.d -DTORCH_EXTENSION_NAME=bias_act_plugin -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include/TH -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include/THC -isystem /home/daddywesker/anaconda3/envs/torch/include/python3.7m -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++14 -c /home/daddywesker/SingularityNet/MixingImages/alis/torch_utils/ops/bias_act.cpp -o bias_act.o 
ninja: build stopped: subcommand failed.

  warnings.warn('Failed to build CUDA kernels for bias_act. Falling back to slow reference implementation. Details:\n\n' + str(sys.exc_info()[1]))
Setting up PyTorch plugin "upfirdn2d_plugin"... Failed!
/home/daddywesker/SingularityNet/MixingImages/alis/torch_utils/ops/upfirdn2d.py:34: UserWarning: Failed to build CUDA kernels for upfirdn2d. Falling back to slow reference implementation. Details:

Error building extension 'upfirdn2d_plugin': [1/3] /usr/bin/nvcc --generate-dependencies-with-compile --dependency-output upfirdn2d.cuda.o.d -DTORCH_EXTENSION_NAME=upfirdn2d_plugin -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include/TH -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include/THC -isystem /home/daddywesker/anaconda3/envs/torch/include/python3.7m -D_GLIBCXX_USE_CXX11_ABI=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 --compiler-options '-fPIC' --use_fast_math -std=c++14 -c /home/daddywesker/SingularityNet/MixingImages/alis/torch_utils/ops/upfirdn2d.cu -o upfirdn2d.cuda.o 
FAILED: upfirdn2d.cuda.o 
/usr/bin/nvcc --generate-dependencies-with-compile --dependency-output upfirdn2d.cuda.o.d -DTORCH_EXTENSION_NAME=upfirdn2d_plugin -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include/TH -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include/THC -isystem /home/daddywesker/anaconda3/envs/torch/include/python3.7m -D_GLIBCXX_USE_CXX11_ABI=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 --compiler-options '-fPIC' --use_fast_math -std=c++14 -c /home/daddywesker/SingularityNet/MixingImages/alis/torch_utils/ops/upfirdn2d.cu -o upfirdn2d.cuda.o 
nvcc fatal   : Unknown option '-generate-dependencies-with-compile'
[2/3] c++ -MMD -MF upfirdn2d.o.d -DTORCH_EXTENSION_NAME=upfirdn2d_plugin -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include/TH -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include/THC -isystem /home/daddywesker/anaconda3/envs/torch/include/python3.7m -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++14 -c /home/daddywesker/SingularityNet/MixingImages/alis/torch_utils/ops/upfirdn2d.cpp -o upfirdn2d.o 
ninja: build stopped: subcommand failed.

  warnings.warn('Failed to build CUDA kernels for upfirdn2d. Falling back to slow reference implementation. Details:\n\n' + str(sys.exc_info()[1]))
Traceback (most recent call last):
  File "/home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1673, in _run_ninja_build
    env=env)
  File "/home/daddywesker/anaconda3/envs/torch/lib/python3.7/subprocess.py", line 512, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/daddywesker/SingularityNet/MixingImages/alis/generate.py", line 67, in <module>
    img = G.synthesis(curr_ws, ws_context=curr_ws_context, left_borders_idx=curr_left_borders_idx, noise='const')
  File "/home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/daddywesker/SingularityNet/MixingImages/alis/training/networks.py", line 1030, in forward
    x, img = block(x, img, cur_ws, ws_context=curr_ws_context, left_borders_idx=left_borders_idx, **block_kwargs)
  File "/home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/daddywesker/SingularityNet/MixingImages/alis/training/networks.py", line 903, in forward
    **layer_kwargs)
  File "/home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/daddywesker/SingularityNet/MixingImages/alis/training/networks.py", line 531, in forward
    w_lerp_multiplier=self.cfg.patchwise.w_lerp_multiplier,
  File "/home/daddywesker/SingularityNet/MixingImages/alis/torch_utils/misc.py", line 101, in decorator
    return fn(*args, **kwargs)
  File "/home/daddywesker/SingularityNet/MixingImages/alis/training/networks.py", line 200, in patchwise_conv2d
    flip_weight=flip_weight)
  File "/home/daddywesker/SingularityNet/MixingImages/alis/torch_utils/misc.py", line 101, in decorator
    return fn(*args, **kwargs)
  File "/home/daddywesker/SingularityNet/MixingImages/alis/training/networks.py", line 222, in patchwise_op
    y = op(x, *args, **kwargs)
  File "/home/daddywesker/SingularityNet/MixingImages/alis/torch_utils/misc.py", line 101, in decorator
    return fn(*args, **kwargs)
  File "/home/daddywesker/SingularityNet/MixingImages/alis/torch_utils/ops/conv2d_resample.py", line 139, in conv2d_resample
    x = upfirdn2d.upfirdn2d(x=x, f=f, padding=[px0+pxt,px1+pxt,py0+pyt,py1+pyt], gain=up**2, flip_filter=flip_filter)
  File "/home/daddywesker/SingularityNet/MixingImages/alis/torch_utils/ops/upfirdn2d.py", line 163, in upfirdn2d
    if impl == 'cuda' and x.device.type == 'cuda' and _init():
  File "/home/daddywesker/SingularityNet/MixingImages/alis/torch_utils/ops/upfirdn2d.py", line 32, in _init
    _plugin = custom_ops.get_plugin('upfirdn2d_plugin', sources=sources, extra_cuda_cflags=['--use_fast_math'])
  File "/home/daddywesker/SingularityNet/MixingImages/alis/torch_utils/custom_ops.py", line 110, in get_plugin
    torch.utils.cpp_extension.load(name=module_name, verbose=verbose_build, sources=sources, **build_kwargs)
  File "/home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1091, in load
    keep_intermediates=keep_intermediates)
  File "/home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1302, in _jit_compile
    is_standalone=is_standalone)
  File "/home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1407, in _write_ninja_file_and_build_library
    error_prefix=f"Error building extension '{name}'")
  File "/home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1683, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error building extension 'upfirdn2d_plugin': [1/3] /usr/bin/nvcc --generate-dependencies-with-compile --dependency-output upfirdn2d.cuda.o.d -DTORCH_EXTENSION_NAME=upfirdn2d_plugin -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include/TH -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include/THC -isystem /home/daddywesker/anaconda3/envs/torch/include/python3.7m -D_GLIBCXX_USE_CXX11_ABI=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 --compiler-options '-fPIC' --use_fast_math -std=c++14 -c /home/daddywesker/SingularityNet/MixingImages/alis/torch_utils/ops/upfirdn2d.cu -o upfirdn2d.cuda.o 
FAILED: upfirdn2d.cuda.o 
/usr/bin/nvcc --generate-dependencies-with-compile --dependency-output upfirdn2d.cuda.o.d -DTORCH_EXTENSION_NAME=upfirdn2d_plugin -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include/TH -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include/THC -isystem /home/daddywesker/anaconda3/envs/torch/include/python3.7m -D_GLIBCXX_USE_CXX11_ABI=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 --compiler-options '-fPIC' --use_fast_math -std=c++14 -c /home/daddywesker/SingularityNet/MixingImages/alis/torch_utils/ops/upfirdn2d.cu -o upfirdn2d.cuda.o 
nvcc fatal   : Unknown option '-generate-dependencies-with-compile'
[2/3] c++ -MMD -MF upfirdn2d.o.d -DTORCH_EXTENSION_NAME=upfirdn2d_plugin -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include/TH -isystem /home/daddywesker/anaconda3/envs/torch/lib/python3.7/site-packages/torch/include/THC -isystem /home/daddywesker/anaconda3/envs/torch/include/python3.7m -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++14 -c /home/daddywesker/SingularityNet/MixingImages/alis/torch_utils/ops/upfirdn2d.cpp -o upfirdn2d.o 
ninja: build stopped: subcommand failed.


Process finished with exit code 1

As a clarification, generate.py is just your notebook copied into a regular Python script; that is simply easier for me.
I tried searching for this nvcc fatal : Unknown option '-generate-dependencies-with-compile' error, but so far I have no clue.

I'm using PyTorch 1.8, CUDA 10.1, and Ubuntu 20. Thanks in advance for the help.

How to resume training from provided checkpoint?

I couldn't locate an option to specify a pretrained checkpoint, so I set it via the resume_pkl field in training/training_loop.py.
After running the training job, I encounter this error:

-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "/opt/conda/envs/alis/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
    fn(i, *args)
  File "/alis/experiments/shinkansen_slide2-b696bbb/scripts/train.py", line 443, in subprocess_fn
    training_loop.training_loop(rank=rank, **args)
  File "/alis/training/training_loop.py", line 177, in training_loop
    resume_data = legacy.load_network_pkl(f)
  File "/alis/experiments/shinkansen_slide2-b696bbb/scripts/legacy.py", line 22, in load_network_pkl
    data = _LegacyUnpickler(f).load()
  File "/alis/torch_utils/persistence.py", line 190, in _reconstruct_persistent_obj
    module = _src_to_module(meta.module_src)
  File "/alis/torch_utils/persistence.py", line 226, in _src_to_module
    exec(src, module.__dict__) # pylint: disable=exec-used
  File "<string>", line 25, in <module>
ImportError: attempted relative import with no known parent package

What is the correct way to specify a pretrained checkpoint or resume from an existing one?

Dataset

Any plans to upload the dataset? Thanks.

Why instance_norm is False?

I just wonder why the instance_norm flag in the configuration file is False even though normalization is performed in Equation 6 of the paper. I think I am missing something.

Feedback from running inference on Win10

I'm on conda 4.9.2, Windows 10 64bit

In case anyone finds it useful, then:
I had to run
conda install -c anaconda cmake
to get CMake into the environment. However, for some reason pip failed to install ninja from source.

So after activating the environment, I manually ran the following steps, which had not been executed due to the ninja build failure:
conda install ninja==1.10.0
conda install tqdm==4.59.0 gitpython scikit-learn
pip install gpustat
pip install tensorboard==2.4.1
pip install -e .
pip install omegaconf click

I was then able to run the inference notebook and generate a lovely panoramic image.

To generate the video, I had to first run:
conda install -c conda-forge opencv
to get OpenCV into the environment.

The video was then generated and it looks beautiful.
Cheers!

Colab?

Is there any way to run this in Colab?

About reproducing

Hi,

Thank you for your great work! Did you clean the LHQ dataset in any way? I used the LHQ1024_jpg dataset to reproduce the results, but the FID only reaches 9. I also tried to fine-tune from the open-source checkpoint, but the initial FID was as high as 24.

Best,
JiKun
