
harskish / ganspace


Discovering Interpretable GAN Controls [NeurIPS 2020]

License: Apache License 2.0

Python 45.58% Shell 0.07% HTML 2.70% C 0.33% Cuda 1.35% Jupyter Notebook 49.97%
deep-learning gan generative-adversarial-network image-generation interactive-visualizations pytorch

ganspace's People

Contributors

harskish, sdtblck, thiojoe


ganspace's Issues

Comparison of edit directions in StyleGAN1

Hello! Thanks for the excellent work! Figure 5 of the paper shows some examples compared with InterfaceGAN, but I cannot find the corresponding settings in the provided figure_edit_zoo.ipynb, which contains the StyleGAN1 wikiart and bedroom examples but not ffhq. If I want to reproduce the result, should I retrain the GANSpace models on StyleGAN1 ffhq?

Issue with setting up StyleGAN2

I followed all the instructions at https://github.com/harskish/ganspace/blob/master/SETUP.md, but when I run python -c "import torch; import upfirdn2d_op; import fused; print('OK')", I get the following error:

ImportError: /home/csundaram/anaconda3/envs/ganspace/lib/python3.7/site-packages/upfirdn2d-0.0.0-py3.7-linux-x86_64.egg/upfirdn2d_op.cpython-37m-x86_64-linux-gnu.so: undefined symbol: cudaSetupArgument

I am not sure why this occurs. It happens every time I run interactive.py or visualize.py.

Any help is appreciated. Thanks!
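An undefined cudaSetupArgument symbol often points to a mismatch between the CUDA toolkit the extension was compiled with and the one PyTorch was built against; as a first, hedged diagnostic (not a confirmed fix), the two versions can be compared with:

nvcc --version
python -c "import torch; print(torch.version.cuda)"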

Stylegan model conversion doesn't work

Hi,

I am using a pretrained StyleGAN trained on ffhq, but the manipulation just shows weird coloring on the images (see attached). Any idea why this can happen? Could it be a problem with converting the TF model to PyTorch?

Thanks.

(attached image)

Forked repository mentioned in the README?

In the README, you mention a forked repository:

Follow the instructions here. Make sure to use the forked repository in the conversion for compatibility reasons.

However, there is no fork. Is it an obsolete sentence?

Manipulating my own npy file

Hello Sir,
I am a newbie. I am trying to manipulate (changing smile, age, gender, ...) an image encoded by StyleGAN2 (using pixel2style2pixel). I have a .npy file (shape (18, 512)). I want to manipulate it using your repository, but I don't know how to do it. Is there a way to do so?
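A minimal sketch of what this could look like, assuming the (18, 512) array is a W+ latent and that a GANSpace component file from cache/components/ is available; the file names, the 'lat_comp' key, the layer range, and the strength below are illustrative assumptions, not a confirmed recipe:

    import numpy as np

    # Hypothetical inputs: a pixel2style2pixel W+ latent and a GANSpace component dump.
    w_plus = np.load('my_face_wplus.npy')                         # shape (18, 512)
    comps = np.load('stylegan2-ffhq_style_ipca_c80_n1000000.npz')
    direction = comps['lat_comp'][0].reshape(-1)                  # one principal direction, length 512

    sigma = 2.0         # edit strength, chosen by eye
    start, end = 3, 6   # which of the 18 style layers to shift (assumed)

    edited = w_plus.copy()
    edited[start:end] += sigma * direction                        # broadcast over the selected layers

    np.save('my_face_wplus_edited.npy', edited)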

Pycuda install error: command 'gcc' failed with exit status 1

Hi, thank you for making an amazing library.

I am trying to use your library for my research. However, I have been facing errors while I was trying to install pycuda.

I configure it like this:

python configure.py --cuda-enable-gl --cuda-root=/usr/local/cuda-10.0

and when I try to compile and install with make install, I get:

warning: no files found matching 'doc/source/_static/*.css'
warning: no files found matching 'doc/source/_templates/*.html'
warning: no files found matching '*.cpp' under directory 'bpl-subset/bpl_subset/boost'
warning: no files found matching '*.html' under directory 'bpl-subset/bpl_subset/boost'
warning: no files found matching '*.inl' under directory 'bpl-subset/bpl_subset/boost'
warning: no files found matching '*.txt' under directory 'bpl-subset/bpl_subset/boost'
warning: no files found matching '*.h' under directory 'bpl-subset/bpl_subset/libs'
warning: no files found matching '*.ipp' under directory 'bpl-subset/bpl_subset/libs'
warning: no files found matching '*.pl' under directory 'bpl-subset/bpl_subset/libs'
writing manifest file 'pycuda.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
running build_ext
building '_driver' extension
gcc -pthread -B /home/orthoinsight/anaconda3/envs/ganspace/compiler_compat -Wl,--sysroot=/ -Wsign-compare -fwrapv -Wall -O3 -DNDEBUG -fPIC -DBOOST_ALL_NO_LIB=1 -DBOOST_THREAD_BUILD_DLL=1 -DBOOST_MULTI_INDEX_DISABLE_SERIALIZATION=1 -DBOOST_PYTHON_SOURCE=1 -Dboost=pycudaboost -DBOOST_THREAD_DONT_USE_CHRONO=1 -DPYGPU_PACKAGE=pycuda -DPYGPU_PYCUDA=1 -DHAVE_GL=1 -DHAVE_CURAND=1 -Isrc/cpp -Ibpl-subset/bpl_subset -I/usr/local/include -I/home/orthoinsight/anaconda3/envs/ganspace/lib/python3.7/site-packages/numpy/core/include -I/home/orthoinsight/anaconda3/envs/ganspace/include/python3.7m -c src/cpp/cuda.cpp -o build/temp.linux-x86_64-3.7/src/cpp/cuda.o
gcc: error trying to exec 'cc1plus': execvp: No such file or directory
error: command 'gcc' failed with exit status 1
Makefile:12: recipe for target 'install' failed
make: *** [install] Error 1

I have been facing that kind of error. Can you please help me with that?

Thank you

Best Regards,
Gun Ahn

TypeError: forward() got an unexpected keyword argument 'input_is_w'

Hi!
First of all, thank you very much for your contribution!


Only this command works smoothly:

Explore BigGAN-deep husky
python interactive.py --model=BigGAN-512 --class=husky --layer=generator.gen_z -n=1_000_000



The following error appears when other commands are executed:

TypeError: forward() got an unexpected keyword argument 'input_is_w'



Traceback (most recent call last):
File "interactive.py", line 645, in
setup_model()
File "interactive.py", line 144, in setup_model
inst = get_instrumented_model(model_name, class_name, layer_name, torch.device('cuda'), use_w=args.use_w)
File "C:\Users\Creator\miniconda3\envs\ganspace\lib\functools.py", line 840, in wrapper
return dispatch(args[0].__class__)(*args, **kw)
File "C:\MyWork\My_GAN_Work\ganspace-master\models\wrappers.py", line 723, in get_instrumented_model
latent_shape = model.get_latent_shape()
File "C:\MyWork\My_GAN_Work\ganspace-master\netdissect\modelconfig.py", line 107, in create_instrumented_model
latent_shape=getattr(args, 'latent_shape', None))
File "C:\MyWork\My_GAN_Work\ganspace-master\netdissect\modelconfig.py", line 137, in annotate_model_shapes
output = model(dry_run)
File "C:\Users\Creator\miniconda3\envs\ganspace\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\MyWork\My_GAN_Work\ganspace-master\netdissect\nethook.py", line 48, in forward
return self.model(*inputs, **kwargs)
File "C:\Users\Creator\miniconda3\envs\ganspace\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\MyWork\My_GAN_Work\ganspace-master\models\wrappers.py", line 191, in forward
truncation=self.truncation, truncation_latent=self.latent_avg, input_is_w=self.w_primary)
File "C:\Users\Creator\miniconda3\envs\ganspace\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'input_is_w'



This is my conda list:

Name Version Build Channel

tflow_select  2.3.0  mkl  main
absl-py  0.11.0  pyhd3eb1b0_1  main
aim  2.1.5  pypi_0  pypi
aimrecords  0.0.7  pypi_0  pypi
anytree  2.8.0  pypi_0  pypi
appdirs  1.4.4  pypi_0  pypi
astor  0.8.1  py37_0  main
base58  2.0.1  pypi_0  pypi
blas  1.0  mkl  main
blosc  1.20.1  h7bd577a_0  main
boto3  1.16.55  pyhd3eb1b0_0
botocore  1.19.55  pyhd3eb1b0_0  main
brotli  1.0.9  ha925a31_2  main
brotlipy  0.7.0  py37h2bbff1b_1003  main
bzip2  1.0.8  he774522_0  main
ca-certificates  2020.12.8  haa95532_0  main
cached-property  1.5.2  pypi_0  pypi
certifi  2020.12.5  pypi_0  pypi
cffi  1.14.4  py37hcd4344a_0  main
chardet  4.0.0  py37haa95532_1003  main
charls  2.1.0  h33f27b4_2  main
click  7.1.2  pyhd3eb1b0_0  main
cloudpickle  1.6.0  py_0  main
cmake  3.18.2  hab937c2_0  main
cryptography  3.3.1  py37hcd4344a_0  main
cudatoolkit  10.1.243  h74a9793_0  main
cycler  0.10.0  py37_0  main
cython  0.29.21  pypi_0  pypi
cytoolz  0.11.0  py37he774522_0  main
dask-core  2020.12.0  pyhd3eb1b0_0  main
decorator  4.4.2  py_0  main
deprecated  1.2.10  py_0  main
distro  1.5.0  pyhd3eb1b0_1  main
docker  4.4.1  pypi_0  pypi
docutils  0.15.2  py37_0  main
fbpca  1.0  pypi_0  pypi
ffmpeg  4.2.2  he774522_0
fire  0.4.0  pypi_0  pypi
freetype  2.10.4  hd328e21_0  main
future  0.18.2  pypi_0  pypi
gast  0.2.2  pypi_0  pypi
giflib  5.2.1  h62dcd97_0  main
git  2.23.0  h6bb4b03_0  main
gitdb  4.0.5  pypi_0  pypi
gitpython  3.1.12  pypi_0  pypi
glumpy  1.1.0  pypi_0  pypi
grpcio  1.31.0  py37he7da953_0  main
h5py  3.1.0  pypi_0  pypi
hdf5  1.10.2  hac2f561_1  main
helpdev  0.7.1  pypi_0  pypi
icc_rt  2019.0.0  h0cc432a_1  main
icu  58.2  ha925a31_3  main
idna  2.10  pyhd3eb1b0_0  main
imagecodecs  2020.5.30  py37hb1be65f_2  main
imageio  2.9.0  py_0  main
importlib-metadata  3.4.0  pypi_0  pypi
intel-openmp  2021.1.2  pypi_0  pypi
jmespath  0.10.0  py_0  main
joblib  1.0.0  pyhd3eb1b0_0  main
jpeg  9b  hb83a4c4_2  main
keras-applications  1.0.8  py_1  main
keras-preprocessing  1.1.0  py_1  main
kiwisolver  1.3.0  py37hd77b12b_0  main
kornia  0.4.1  pypi_0  pypi
lcms2  2.11  hc51a39a_0  main
libaec  1.0.4  h33f27b4_1  main
libmklml  2019.0.5  haa95532_0  main
libpng  1.6.37  h2a8f88b_0  main
libprotobuf  3.13.0.1  h200bbdf_0  main
libtiff  4.1.0  h56a325e_1  main
libuv  1.39.0  he774522_0  main
libzopfli  1.0.3  ha925a31_0  main
linear-attention-transformer  0.15.3  pypi_0  pypi
linformer  0.2.1  pypi_0  pypi
local-attention  1.2.1  pypi_0  pypi
lz4-c  1.9.2  hf4a77e7_3  main
mako  1.1.4  pypi_0  pypi
markdown  3.3.3  py37haa95532_0  main
markupsafe  2.0.0a1  pypi_0  pypi
matplotlib  3.3.3  pypi_0  pypi
matplotlib-base  3.3.2  py37hba9282a_0  main
mkl  2020.2  256  main
mkl-service  2.3.0  py37h196d8e1_0  main
mkl_fft  1.2.0  py37h45dec08_0  main
mkl_random  1.1.1  py37h47e9c7a_0  main
networkx  2.5  py_0  main
ninja  1.10.0.post2  pypi_0  pypi
nltk  3.5  py_0
numpy  1.19.2  py37hadc3359_0  main
numpy-base  1.19.2  py37ha3acd2a_0  main
olefile  0.46  py37_0  main
opencv-python  4.5.1.48  pypi_0  pypi
openjpeg  2.3.0  h5ec785f_1  main
openssl  1.1.1i  h2bbff1b_0  main
opt-einsum  3.3.0  pypi_0  pypi
packaging  20.8  pyhd3eb1b0_0  main
pandas  1.2.0  py37hf11a4ad_0  main
pillow  6.2.1  py37hdc69c19_0
pip  20.3.3  py37haa95532_0
protobuf  3.14.0  pypi_0  pypi
psutil  5.8.0  pypi_0  pypi
py  1.10.0  pypi_0  pypi
pycparser  2.20  py_2  main
pycuda  2019.1.2+cuda101  pypi_0  pypi
pyopengl  3.1.4  pypi_0  pypi
pyopengltk  0.0.3  pypi_0  pypi
pyopenssl  20.0.1  pyhd3eb1b0_1  main
pyparsing  2.4.7  pyhd3eb1b0_0  main
pyqt  5.9.2  py37h6538335_2  main
pyqt5  5.15.2  pypi_0  pypi
pyqt5-sip  12.8.1  pypi_0  pypi
pyrser  0.2.0  pypi_0  pypi
pysocks  1.7.1  py37_1  main
python  3.7.9  h60c2a47_0
python-dateutil  2.8.1  py_0  main
pytools  2021.1  pypi_0  pypi
pytorch  1.6.0  py3.7_cuda101_cudnn7_0  pytorch
pytz  2020.5  pyhd3eb1b0_0  main
pywavelets  1.1.1  py37he774522_2  main
pywin32  227  pypi_0  pypi
pyyaml  5.3.1  py37he774522_1  main
qdarkstyle  2.8.1  pypi_0  pypi
qt  5.9.7  vc14h73c81de_0  main
qtpy  1.9.0  pypi_0  pypi
regex  2020.11.13  py37h2bbff1b_0  main
requests  2.25.1  pyhd3eb1b0_0
retry  0.9.2  pypi_0  pypi
s3transfer  0.3.4  pyhd3eb1b0_0  main
scikit-build  0.11.1  py37hd77b12b_2  main
scikit-image  0.17.2  py37h1e1f486_0
scikit-learn  0.23.2  py37h47e9c7a_0
scipy  1.5.2  py37h9439919_0  main
setuptools  51.3.3  py37haa95532_4  main
sip  4.19.8  py37h6538335_0  main
six  1.15.0  py37haa95532_0  main
smmap  3.0.5  pypi_0  pypi
snappy  1.1.8  h33f27b4_0  main
sqlite  3.33.0  h2a8f88b_0  main
tensorboard  1.15.0  pypi_0  pypi
tensorflow  1.14.0  mkl_py37h7908ca0_0  main
tensorflow-base  1.14.0  mkl_py37ha978198_0  main
tensorflow-cpu-estimator  1.15.1  pypi_0  pypi
tensorflow-estimator  1.14.0  py_0  main
termcolor  1.1.0  pypi_0  pypi
threadpoolctl  2.1.0  pyh5ca1d4c_0  main
tifffile  2021.1.14  pyhd3eb1b0_1  main
tk  8.6.10  he774522_0  main
toolz  0.11.1  pyhd3eb1b0_0  main
torch  1.6.0+cu101  pypi_0  pypi
torchdiffeq  0.0.1  pypi_0  pypi
torchvision  0.1.8  pypi_0  pypi
tornado  6.1  py37h2bbff1b_0  main
tqdm  4.55.1  pyhd3eb1b0_0
triangle  20190115.3  pypi_0  pypi
typing-extensions  3.7.4.3  pypi_0  pypi
urllib3  1.25.11  py_0  main
vc  14.2  h21ff451_1  main
vs2015_runtime  14.27.29016  h5e58377_2  main
websocket-client  0.57.0  pypi_0  pypi
werkzeug  1.0.1  py_0  main
wheel  0.36.2  pyhd3eb1b0_0  main
win_inet_pton  1.1.0  py37haa95532_0  main
wincertstore  0.2  py37_0  main
wrapt  1.12.1  py37he774522_1  main
xz  5.2.5  h62dcd97_0  main
yaml  0.2.5  he774522_0  main
zipp  3.4.0  pypi_0  pypi
zlib  1.2.11  h62dcd97_4  main
zstd  1.4.8.1  pypi_0  pypi

Pycuda import error

Trying to run interactive.py I get this error:

import pycuda.driver
ModuleNotFoundError: No module named 'pycuda'

Is pycuda not installed through the conda environment (.yml) file?

ModuleNotFoundError: No module named 'model'

Traceback (most recent call last):
File "interactive.py", line 22, in
from models import get_instrumented_model
File "C:\Users\Creator\Downloads\ganspace-master\ganspace-master\models_init_.py", line 11, in
from .wrappers import *
File "C:\Users\Creator\Downloads\ganspace-master\ganspace-master\models\wrappers.py", line 23, in
from . import stylegan2
File "C:\Users\Creator\Downloads\ganspace-master\ganspace-master\models\stylegan2_init_.py", line 14, in
from model import Generator
ModuleNotFoundError: No module named 'model'

Did my installation fail on Windows?

Ganspace not compatible with StyleGan2-ADA-PyTorch models

Hey,
I tried setting up Ganspace with StyleGan2-ADA-PyTorch models, but it does not seem possible without modifying the code.
Most errors show that the module torch_utils is necessary to load the model pkl files.

    magic_number = pickle_module.load(f, **pickle_load_args)
ModuleNotFoundError: No module named 'torch_utils'

I also was not able to convert weights.
Any ideas on how to make ganspace work with the new model files? Thanks

Trouble installing CUDA for stylegan2

I am having an issue with setting up StyleGAN2 at step 5, python setup.py install:

(screenshot)

I have installed CUDA toolkit version 10.1.

(screenshot)

I thought nothing of it and skipped to step 6, python -c "import torch; import upfirdn2d_op; import fused; print('OK')":

(screenshot)

Does anyone know how to fix it, or did I do something wrong?

Missing submodule?

Hey guys,

I've been trying to play with this and I've been running into an error after installation when trying to start interactive.py. I did see the other issue with the same-ish problem; however, there are no files in stylegan-pytorch to copy elsewhere, so I'm not sure what I'm missing. The error traceback is as follows:

(ganspace) C:\Users\pixlo\Documents\ganspace-master>python interactive.py --model=BigGAN-512 --class=husky --layer=generator.gen_z -n=1_000_000
Traceback (most recent call last):
File "interactive.py", line 22, in
from models import get_instrumented_model
File "C:\Users\pixlo\Documents\ganspace-master\models_init_.py", line 11, in
from .wrappers import *
File "C:\Users\pixlo\Documents\ganspace-master\models\wrappers.py", line 23, in
from . import stylegan2
File "C:\Users\pixlo\Documents\ganspace-master\models\stylegan2_init_.py", line 14, in
from model import Generator
ModuleNotFoundError: No module named 'model'

As far as I can tell this is because there is a missing model.py that is present in the stylegan folder but not the stylegan2 folder; however, copying the one over to the other doesn't change anything. It may well be that I'm missing something; Python isn't my strong suit. Any suggestions?

Thanks,

-Morgan

Edit: I deleted and re-extracted the folder from the master and I noticed that I'm now getting the error:
(ganspace) C:\Users\pixlo\Documents\ganspace-master>git submodule update --init --recursive
fatal: not a git repository (or any of the parent directories): .git

I imagine this is the real issue, as it seems the subdirectories are tied to whatever I'm missing.

VS Version

Hi,

As noted for the Windows install, this requires 'x64 Native Tools Command Prompt for VS 2017'.

However, the only free version of VS is VS Community 2019. Can ganspace work with this version?

Save direction error

I got the following error. Could you please add an example direction (e.g. "open_mouth")? That would make it easier to edit for further exploration.

AssertionError Traceback (most recent call last)
in ()
3 direction_name = 'raise_eyebrows'
4 num_samples = 5
----> 5 assert direction_name in named_directions, f'"{direction_name}" not found, please save it first using the cell above.'
6
7 loc = named_directions[direction_name][0]

AssertionError: "raise_eyebrows" not found, please save it first using the cell above.

Question: Pre-trained principal directions for ProgGAN/StyleGAN2

Hi, I'm interested in using your method for comparisons, and I would like to traverse the latent space of ProgGAN and StyleGAN2/W-space along the principal directions. More specifically, given a latent code z, I'm looking for a way to produce a new latent code z' = z + Ux, where U is the matrix of the principal directions and x is a one-hot vector determining which direction(s) to use. My question is whether there is a way to have/extract a pre-trained matrix U for ProgGAN (CelebA-HQ) and StyleGAN2 (FFHQ).
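For concreteness, a minimal numpy sketch of the traversal described above; the dimensions, the random stand-in for U, and the chosen component and strength are illustrative assumptions:

    import numpy as np

    d, k = 512, 80                              # latent dimensionality and number of directions (assumed)
    U = np.linalg.qr(np.random.randn(d, k))[0]  # stand-in for the matrix of principal directions (orthonormal columns)

    z = np.random.randn(d)                      # original latent code

    x = np.zeros(k)
    x[3] = 2.5                                  # one-hot selection of direction 3, scaled by the edit strength

    z_prime = z + U @ x                         # z' = z + Ux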

Thank you.

RuntimeError: CUDA out of memory

I've found a fix, it's below.

First of all, thank you for this wonderful tool!

I have a GTX 750 Ti GPU, and I'm getting this error (running python interactive.py --model=StyleGAN2 --class=ffhq --layer=style --use_w -n=1_000_000 -b=10_000). I'm using Linux Mint 20. I've also tried to set up StyleGAN2. When I run the commands from "Usage", it opens the window and then closes it when I move any slider. Apparently, 2 GB of VRAM is not enough. Is there any way to make it work with ganspace? For example, by reducing the batch size? Or by using the CPU instead? If not, are there Google Colab notebooks that offer the same functionality?

P.S. I was able to run it after changing the -b option to 1_000. I assume it's the batch size.
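For reference, assuming -b is indeed the sample batch size, the command above with the reduced value would be:

python interactive.py --model=StyleGAN2 --class=ffhq --layer=style --use_w -n=1_000_000 -b=1_000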

Question about a new application

Hi there,

Thanks for sharing this amazing project.
I am new to Deep Learning, but I want to learn everything I need to finish my project.

I am trying to build a system to perform a specific task: add acne/wrinkles/scars/etc on faces.
So, once I have a good dataset of face images with acne (for example), can I accomplish this using your project?

I saw your results with wrinkles; they were awesome.
Can you give directions on how to do the same thing with acne?

Setup Issues from: Step 5 + (Windows - Anaconda)

I'm trying to follow the Setup step by step for Windows 10 using Anaconda and an Nvidia GPU.

When I get to step 5:
5. Setup submodules: git submodule update --init --recursive

I get this:
fatal: not a git repository (or any of the parent directories): .git

I skipped to step 6 and it installed the package with no issues,
So I tried to ignore it for now and proceed, because I was curious about the Windows step:
Install included dependencies (downloaded from https://www.lfd.uci.edu/~gohlke/pythonlibs/):
pip install deps/windows/*

And I get this Error in red:
ERROR: Invalid requirement: 'deps/windows/*'

It says to download from the link, but it's not clear which file to download.
There are hundreds of links, so I'm a bit confused and hope that someone can help me install it.

Thanks ahead :)

"No module named 'model'" when trying to run interactive.py

I have spent the last 5 hours trying to get this script to work, assuming that it was something wrong on my end, but after reinstalling countless times I thought I would bring it up here to see if I could get some help. In SETUP.md, the StyleGAN2 setup says to go to models/stylegan2/stylegan2-pytorch/op and run setup.py; however, that directory does not exist, so I was forced to skip that part of the setup, which I assume is causing this issue. The full error is as follows:
Traceback (most recent call last):
  File "interactive.py", line 22, in <module>
    from models import get_instrumented_model
  File "/home/noah/ganspace-master/models/__init__.py", line 11, in <module>
    from .wrappers import *
  File "/home/noah/ganspace-master/models/wrappers.py", line 22, in <module>
    from . import stylegan2
  File "/home/noah/ganspace-master/models/stylegan2/__init__.py", line 14, in <module>
    from model import Generator
ModuleNotFoundError: No module named 'model'

I am also unsure as to how I am meant to load my own .pt file, since with the normal PyTorch version of stylegan2 you are able to specify the file when you run the generate command.

CUDA 10.1 is not compatible with newer GPUs

I have used ganspace with stylegan2 back when I was on an older system using a GTX 1060 6GB, however I have recently upgraded to an RTX 3060 Ti and have run into issues with CUDA 10.1 compatibility.

I am using Windows 10

I have tried to fix this by installing CUDA Toolkit 11.1.1 and changing environment.yml to 11.1, then restarting the installation process; however, this led to some PyTorch issues, and I believe that PyTorch 1.7.1 or newer is required for CUDA 11.1 compatibility.
I also tried installing PyTorch 1.7.1 and 1.10.0, but this led to even more issues, so I thought I'd come here for some advice on how to proceed.

I am not very experienced when it comes to this area, so apologies if I get stuck at any point or don't understand something.

Error checking compiler version for cl: 'utf-8' codec can't decode byte 0xd3 in position 0: invalid continuation byte

Hi!
I still cannot run it successfully under Windows 10. Do I need to delete VS2019?
Thank you!

python interactive.py
C:\Users\Creator\miniconda3\lib\site-packages\torch\utils\cpp_extension.py:237: UserWarning: Error checking compiler version for cl: 'utf-8' codec can't decode byte 0xd3 in position 0: invalid continuation byte
warnings.warn('Error checking compiler version for {}: {}'.format(compiler, error))
Traceback (most recent call last):
File "C:\Users\Creator\miniconda3\lib\site-packages\torch\utils\cpp_extension.py", line 1400, in _run_ninja_build
check=True)
File "C:\Users\Creator\miniconda3\lib\subprocess.py", line 512, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "interactive.py", line 23, in
from models import get_instrumented_model
File "C:\MyWork\My_GAN_Work\ganspace-master\models_init_.py", line 11, in
from .wrappers import *
File "C:\MyWork\My_GAN_Work\ganspace-master\models\wrappers.py", line 23, in
from . import stylegan2
File "C:\MyWork\My_GAN_Work\ganspace-master\models\stylegan2_init_.py", line 14, in
from model import Generator
File "C:\MyWork\My_GAN_Work\ganspace-master\models\stylegan2\stylegan2-pytorch\model.py", line 11, in
from op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d
File "C:\MyWork\My_GAN_Work\ganspace-master\models\stylegan2\stylegan2-pytorch\op_init_.py", line 1, in
from .fused_act import FusedLeakyReLU, fused_leaky_relu
File "C:\MyWork\My_GAN_Work\ganspace-master\models\stylegan2\stylegan2-pytorch\op\fused_act.py", line 15, in
os.path.join(module_path, "fused_bias_act_kernel.cu"),
File "C:\Users\Creator\miniconda3\lib\site-packages\torch\utils\cpp_extension.py", line 898, in load
is_python_module)
File "C:\Users\Creator\miniconda3\lib\site-packages\torch\utils\cpp_extension.py", line 1086, in _jit_compile
with_cuda=with_cuda)
File "C:\Users\Creator\miniconda3\lib\site-packages\torch\utils\cpp_extension.py", line 1186, in _write_ninja_file_and_build_library
error_prefix="Error building extension '{}'".format(name))
File "C:\Users\Creator\miniconda3\lib\site-packages\torch\utils\cpp_extension.py", line 1412, in _run_ninja_build
message += ": {}".format(error.output.decode())
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd3 in position 1289: invalid continuation byte

AttributeError because initgl() never called

When running interactive.py, I get an AttributeError with the TorchImageView object on attributes such as 't_last' or 'tex'.

Looking into it, this is because the code calls draw() without ever first calling initgl(), where those attributes are initialized.

Could you point me to the right place to call initgl() so those attributes are initialized?
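Not knowing where the maintainer would prefer initgl() to be called, one workaround that merely sidesteps the AttributeError is to make redraw() a no-op until the GL state exists. A simplified, self-contained sketch of that guard (not the actual class in TkTorchWindow.py):

    import time

    class TorchImageView:
        # Simplified stand-in for the real class, just to illustrate the guard.

        def initgl(self):
            # Normally invoked by the Tk map event; creates the attributes redraw() needs.
            self.t_last = time.time()

        def redraw(self):
            # Guard: skip frames until initgl() has populated t_last.
            if not hasattr(self, 't_last'):
                return
            dt = time.time() - self.t_last
            self.t_last = time.time()
            print(f"frame time: {dt:.4f} s")

    view = TorchImageView()
    view.redraw()   # safe even though initgl() has not run yet
    view.initgl()
    view.redraw()   # now dt is computed normally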

Ganspace_colab.ipynb has no attributes to modify like figure_teaser.ipynb

It seems that Ganspace_colab.ipynb was made only to search for attributes, but what I want is to modify attributes like hair color; I don't want to search for attributes.
It seems like figure_teaser.ipynb has many factory attributes to modify, but Ganspace_colab.ipynb has nothing.
How can I edit hair color in Ganspace_colab.ipynb without searching?

KeyError

Hello, when I run your project I get the following error:
KeyError: 'raise_eyebrows'
What could the reason be?

Model "NoneType"

Sorry if I'm missing something really obvious. I get this error when running interactive.py

File "interactive.py", line 644, in
setup_model()
File "interactive.py", line 153, in setup_model
load_components(class_name, inst)
File "interactive.py", line 57, in load_components
dump_name = get_or_compute(config, inst)
File "C:\Users\caleb\ganspace-master\decomposition.py", line 368, in get_or_compute
return _compute(submit_config, config, model, force_recompute)
File "C:\Users\caleb\ganspace-master\decomposition.py", line 387, in compute
config.output_class.replace(' ', '
'),
AttributeError: 'NoneType' object has no attribute 'replace'

Looking into it, config.output_class is being returned as a NoneType? I tried just forcing it to be read as a string, but that didn't fix it, so there seems to be some underlying output_class error?

All the dependencies installed correctly in the environment, and everything else seems to run fine!

named_directions is empty

Hello Sir,
I am trying to run your Ganspace_colab file. I encountered the error "KeyError: 'raise_eyebrows'". I guess it is raised because the named_directions dictionary is empty. What could the reason be?

conda env update -f environment.yml --prune --> produces the following error on M1 macOS Big Sur.

conda env update -f environment.yml --prune
Collecting package metadata (repodata.json): done
Solving environment: failed

ResolvePackageNotFound:

  • cudatoolkit=10.1

And numba -s

Time Stamp
Report started (local time) : 2021-03-14 19:49:09.014002
UTC start time : 2021-03-14 10:49:09.014059
Running time (s) : 1.348543

Hardware Information
Machine : x86_64
CPU Name : westmere
CPU Count : 8
Number of accessible CPUs : ?
List of accessible CPUs cores : ?
CFS Restrictions (CPUs worth of runtime) : None

CPU Features : 64bit aes cmov cx16 cx8 fxsr mmx
pclmul popcnt sahf sse sse2 sse3
sse4.1 sse4.2 ssse3

Memory Total (MB) : 16384
Memory Available (MB) : 1617

OS Information
Platform Name : macOS-10.16-x86_64-i386-64bit
Platform Release : 20.3.0
OS Name : Darwin
OS Version : Darwin Kernel Version 20.3.0: Thu Jan 21 00:06:51 PST 2021; root:xnu-7195.81.3~1/RELEASE_ARM64_T8101
OS Specific Version : 10.16 x86_64
Libc Version : ?

Python Information
Python Compiler : Clang 10.0.0
Python Implementation : CPython
Python Version : 3.8.5
Python Locale : en_US.UTF-8

LLVM Information
LLVM Version : 10.0.1

CUDA Information
CUDA Device Initialized : False
CUDA Driver Version : ?
CUDA Detect Output:
None
CUDA Librairies Test Output:
None

ROC information
ROC Available : False
ROC Toolchains : None
HSA Agents Count : 0
HSA Agents:
None
HSA Discrete GPUs Count : 0
HSA Discrete GPUs : None

SVML Information
SVML State, config.USING_SVML : False
SVML Library Loaded : False
llvmlite Using SVML Patched LLVM : True
SVML Operational : False

Threading Layer Information
TBB Threading Layer Available : False
+--> Disabled due to Unknown import problem.
OpenMP Threading Layer Available : True
+-->Vendor: Intel
Workqueue Threading Layer Available : True
+-->Workqueue imported successfully.

Numba Environment Variable Information
None found.

Conda Information
Conda Build : 3.20.5
Conda Env : 4.9.2
Conda Platform : osx-64
Conda Python Version : 3.8.5.final.0
Conda Root Writable : True

Installed Packages
ca-certificates 2021.1.19 hecd8cb5_1
certifi 2020.12.5 py37hecd8cb5_0
libcxx 10.0.0 1
libedit 3.1.20191231 h1de35cc_1
libffi 3.3 hb1e8313_2
ncurses 6.2 h0a44026_1
openssl 1.1.1j h9ed2024_0
pip 21.0.1 py37hecd8cb5_0
python 3.7.10 h88f2d9e_0
readline 8.1 h9ed2024_0
setuptools 52.0.0 py37hecd8cb5_0
sqlite 3.33.0 hffcf06c_0
tk 8.6.10 hb0a8c7a_0
wheel 0.36.2 pyhd3eb1b0_0
xz 5.2.5 h1de35cc_0
zlib 1.2.11 h1de35cc_3

No errors reported.

Warning log
Warning (cuda): CUDA driver library cannot be found or no CUDA enabled devices are present.
Exception class: <class 'numba.cuda.cudadrv.error.CudaSupportError'>
Warning (roc): Error initialising ROC: No ROC toolchains found.
Warning (roc): No HSA Agents found, encountered exception when searching: Error at driver init:

HSA is not currently supported on this platform (darwin).

missing strided_style method ?

I didn't find any explanation for this error, since I could not find where strided_style is defined. Here's the error:

Traceback (most recent call last):
File "interactive.py", line 643, in
setup_model()
File "interactive.py", line 152, in setup_model
load_components(class_name, inst)
File "interactive.py", line 57, in load_components
dump_name = get_or_compute(config, inst)
File "/home/fouratthamri/Documents/ganspace/ganspace/decomposition.py", line 363, in get_or_compute
return _compute(submit_config, config, model, force_recompute)
File "/home/fouratthamri/Documents/ganspace/ganspace/decomposition.py", line 394, in _compute
compute(config, dump_path, model)
File "/home/fouratthamri/Documents/ganspace/ganspace/decomposition.py", line 178, in compute
model.partial_forward(model.sample_latent(1), layer_key)
File "/home/fouratthamri/Documents/ganspace/ganspace/models/wrappers.py", line 202, in partial_forward
if len(styles) == 1:
File "/home/fouratthamri/anaconda3/envs/ganspace/lib/python3.7/site-packages/torch/nn/modules/module.py", line 585, in getattr
type(self).name, name))
AttributeError: 'Generator' object has no attribute 'strided_style'


AttributeError: 'TorchImageView' object has no attribute 't_last'

Error when running interactive.py. I get two TK windows opened, and then immediately shut down with the error below.

Ubuntu 20.04, Python 3.7.9, NVidia, Cuda 10.1.243, pyCuda from source, compiled with gl enabled, version 2020.1

#python interactive.py --model=BigGAN-512 --class=husky --layer=generator.gen_z -n=1_000_000
StyleGAN2: Optimized CUDA op FusedLeakyReLU not available, using native PyTorch fallback.
StyleGAN2: Optimized CUDA op UpFirDn2d not available, using native PyTorch fallback.
Loaded components for husky from /home/ofer/ganspace/ganspace/cache/components/biggan-512-husky_generator.gen_z_ipca_c80_n1000000.npz
Seed: 316727503
GLX version: 1.4
Screen is  0
Number of FBconfigs 260
Got a matching visual: index 104 33 xid 0x21
Is Direct?:  1
Done making a first context
Exception in Tkinter callback
Traceback (most recent call last):
  File "/home/ofer/anaconda3/envs/ganspace/lib/python3.7/tkinter/__init__.py", line 1705, in __call__
    return self.func(*args)
  File "/home/ofer/anaconda3/envs/ganspace/lib/python3.7/site-packages/pyopengltk/base.py", line 30, in tkMap
    self.initgl()
  File "/home/ofer/ganspace/ganspace/TkTorchWindow.py", line 75, in initgl
    self.setup_gl(self.width, self.height)
  File "/home/ofer/ganspace/ganspace/TkTorchWindow.py", line 88, in setup_gl
    import pycuda.gl.autoinit
  File "/home/ofer/anaconda3/envs/ganspace/lib/python3.7/site-packages/pycuda-2020.1-py3.7-linux-x86_64.egg/pycuda/gl/autoinit.py", line 9, in <module>
    context = make_default_context(lambda dev: cudagl.make_context(dev))
  File "/home/ofer/anaconda3/envs/ganspace/lib/python3.7/site-packages/pycuda-2020.1-py3.7-linux-x86_64.egg/pycuda/tools.py", line 205, in make_default_context
    "on any of the %d detected devices" % ndevices)
RuntimeError: make_default_context() wasn't able to create a context on any of the 1 detected devices
Exception in Tkinter callback
Traceback (most recent call last):
  File "/home/ofer/anaconda3/envs/ganspace/lib/python3.7/tkinter/__init__.py", line 1705, in __call__
    return self.func(*args)
  File "/home/ofer/anaconda3/envs/ganspace/lib/python3.7/site-packages/pyopengltk/base.py", line 74, in tkExpose
    self._display()
  File "/home/ofer/anaconda3/envs/ganspace/lib/python3.7/site-packages/pyopengltk/base.py", line 98, in _display
    self.redraw()
  File "/home/ofer/ganspace/ganspace/TkTorchWindow.py", line 141, in redraw
    dt = t_now - self.t_last
AttributeError: 'TorchImageView' object has no attribute 't_last'
Traceback (most recent call last):
  File "interactive.py", line 651, in <module>
    app.update()
  File "/home/ofer/ganspace/ganspace/TkTorchWindow.py", line 198, in update
    self.redraw()
  File "/home/ofer/ganspace/ganspace/TkTorchWindow.py", line 141, in redraw
    dt = t_now - self.t_last
AttributeError: 'TorchImageView' object has no attribute 't_last'


Interactive exploration using Stylegan2 model config-e

Hi,
I'm trying to test the interactive.py script with different StyleGAN2 models. The one I'm using comes from the original config-e training, and I managed to convert it for PyTorch using the modified version of the convert_weight script implemented here: https://github.com/rosinality/stylegan2-pytorch/blob/master/convert_weight.py.
When I try to use the converted model I get a size-mismatch error for the tensors:
... size mismatch for to_rgbs.7.conv.modulation.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32]).
If I force channel_multiplier=1 when the model generator is loaded in wrappers.py inside the StyleGAN2 class:
self.model = stylegan2.Generator(self.resolution, 512, 8, channel_multiplier=1).to(self.device)
I get a different error:
RuntimeError: Error(s) in loading state_dict for Generator: Unexpected key(s) in state_dict: "noises.noise_0", "noises.noise_1", "noises.noise_2", "noises.noise_3", "noises.noise_4", "noises.noise_5", "noises.noise_6", "noises.noise_7", "noises.noise_8", "noises.noise_9", "noises.noise_10", "noises.noise_11", "noises.noise_12", "noises.noise_13", "noises.noise_14", "noises.noise_15", "noises.noise_16".
Is there any way to implement this use case?
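For what it's worth, a hedged sketch of loading such a converted checkpoint while dropping the unexpected noise buffers; the checkpoint path, the 'g_ema' key, and the resolution are illustrative assumptions, and whether ignoring the noise buffers is acceptable for a config-e model is not confirmed here:

    import torch
    from models import stylegan2   # ganspace's bundled generator, as used in wrappers.py

    ckpt = torch.load('stylegan2-config-e-converted.pt', map_location='cpu')
    state = ckpt.get('g_ema', ckpt)    # converted checkpoints may nest the generator weights

    # Drop the per-layer noise buffers that ganspace's Generator does not define.
    state = {k: v for k, v in state.items() if not k.startswith('noises.')}

    g = stylegan2.Generator(256, 512, 8, channel_multiplier=1)   # resolution 256 is a placeholder
    missing, unexpected = g.load_state_dict(state, strict=False)
    print('missing:', missing, 'unexpected:', unexpected)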

cuDNN error: CUDNN_STATUS_MAPPING_ERROR

I encountered the following error when trying interactive.py with a conda installation following the readme.

"...\models\stylegan2\stylegan2-pytorch\model.py", line 273, in forward
    out = F.conv2d(input, weight, padding=self.padding, groups=batch)
RuntimeError: cuDNN error: CUDNN_STATUS_MAPPING_ERROR

change hair color code

I want to use the method you mentioned in the paper to change the latent vector and modify the hair color of the person in the image, but your code is too complicated and I did not find the implementation of this part. Can you provide the code? Thank you.

Not working with conditional StyleGAN2-ADA model

Hello,

I trained a conditional StyleGAN2-ada-pytorch model with a custom dataset. Then I converted my pkl model to a pt model using https://github.com/dvschultz/stylegan2-ada-pytorch/blob/main/export_weights.py. Then I integrated my model into the Ganspace wrapper and ran it. This gives me the following error:

RuntimeError: Error(s) in loading state_dict for Generator:
	size mismatch for style.1.weight: copying a param with shape torch.Size([512, 1024]) from checkpoint, the shape in current model is torch.Size([512, 512]).
	size mismatch for convs.6.conv.weight: copying a param with shape torch.Size([1, 256, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 512, 512, 3, 3]).
	size mismatch for convs.6.activate.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
	size mismatch for convs.7.conv.weight: copying a param with shape torch.Size([1, 256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 512, 512, 3, 3]).
	size mismatch for convs.7.conv.modulation.weight: copying a param with shape torch.Size([256, 512]) from checkpoint, the shape in current model is torch.Size([512, 512]).
	size mismatch for convs.7.conv.modulation.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
	size mismatch for convs.7.activate.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
	size mismatch for convs.8.conv.weight: copying a param with shape torch.Size([1, 128, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 256, 512, 3, 3]).
	size mismatch for convs.8.conv.modulation.weight: copying a param with shape torch.Size([256, 512]) from checkpoint, the shape in current model is torch.Size([512, 512]).
	size mismatch for convs.8.conv.modulation.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
	size mismatch for convs.8.activate.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for convs.9.conv.weight: copying a param with shape torch.Size([1, 128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 256, 256, 3, 3]).
	size mismatch for convs.9.conv.modulation.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([256, 512]).
	size mismatch for convs.9.conv.modulation.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for convs.9.activate.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for convs.10.conv.weight: copying a param with shape torch.Size([1, 64, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 128, 256, 3, 3]).
	size mismatch for convs.10.conv.modulation.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([256, 512]).
	size mismatch for convs.10.conv.modulation.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for convs.10.activate.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
	size mismatch for convs.11.conv.weight: copying a param with shape torch.Size([1, 64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 128, 128, 3, 3]).
	size mismatch for convs.11.conv.modulation.weight: copying a param with shape torch.Size([64, 512]) from checkpoint, the shape in current model is torch.Size([128, 512]).
	size mismatch for convs.11.conv.modulation.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
	size mismatch for convs.11.activate.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).
	size mismatch for to_rgbs.3.conv.weight: copying a param with shape torch.Size([1, 3, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 512, 1, 1]).
	size mismatch for to_rgbs.3.conv.modulation.weight: copying a param with shape torch.Size([256, 512]) from checkpoint, the shape in current model is torch.Size([512, 512]).
	size mismatch for to_rgbs.3.conv.modulation.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([512]).
	size mismatch for to_rgbs.4.conv.weight: copying a param with shape torch.Size([1, 3, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 256, 1, 1]).
	size mismatch for to_rgbs.4.conv.modulation.weight: copying a param with shape torch.Size([128, 512]) from checkpoint, the shape in current model is torch.Size([256, 512]).
	size mismatch for to_rgbs.4.conv.modulation.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([256]).
	size mismatch for to_rgbs.5.conv.weight: copying a param with shape torch.Size([1, 3, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([1, 3, 128, 1, 1]).
	size mismatch for to_rgbs.5.conv.modulation.weight: copying a param with shape torch.Size([64, 512]) from checkpoint, the shape in current model is torch.Size([128, 512]).
	size mismatch for to_rgbs.5.conv.modulation.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128]).

I believe that the mismatches arise from the conditioning. It looks like something changes after label embeddings are added to the unconditional StyleGAN2-ada-pytorch network. Is there any solution to this problem or any suggestion for overcoming it?

Thanks in advance

ModuleNotFoundError: No module named 'model'

I am running the following command in Colab.

Explore StyleGAN2 ffhq in W space

!python interactive.py --model=StyleGAN2 --class=ffhq --layer=style --use_w -n=1_000_000 -b=10_000

But I am getting a "No module named 'model'" error.

Traceback (most recent call last):
File "interactive.py", line 22, in
from models import get_instrumented_model
File "/content/ganspace/models/init.py", line 11, in
from .wrappers import *
File "/content/ganspace/models/wrappers.py", line 23, in
from . import stylegan2
File "/content/ganspace/models/stylegan2/init.py", line 14, in
from model import Generator

ModuleNotFoundError: No module named 'model'

Any suggested fix or help?

AttributeError: 'TorchImageView' object has no attribute '_OpenGLFrame__context'

the error is:

Loaded components for car from /home/jupyter/ganspace/cache/components/stylegan2-car_style_ipca_c80_n1000000.npz
Seed: 1116518040
GLX version: 0.0
Screen is 0
Xlib: extension "GLX" missing on display ":20.0".
Number of FBconfigs 0
oh dear - visual does not match
Exception in Tkinter callback
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/tkinter/init.py", line 1705, in call
return self.func(*args)
File "/opt/conda/lib/python3.7/site-packages/pyopengltk/base.py", line 29, in tkMap
self.tkCreateContext()
File "/opt/conda/lib/python3.7/site-packages/pyopengltk/linux.py", line 103, in tkCreateContext
cfgs[best],
ValueError: NULL pointer access
Exception in Tkinter callback
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/tkinter/init.py", line 1705, in call
return self.func(*args)
File "/opt/conda/lib/python3.7/site-packages/pyopengltk/base.py", line 74, in tkExpose
self._display()
File "/opt/conda/lib/python3.7/site-packages/pyopengltk/base.py", line 97, in _display
self.tkMakeCurrent()
File "/opt/conda/lib/python3.7/site-packages/pyopengltk/linux.py", line 156, in tkMakeCurrent
GLX.glXMakeCurrent(self.__window, self._wid, self.__context)
AttributeError: 'TorchImageView' object has no attribute '_OpenGLFrame__context'
Traceback (most recent call last):
File "interactive.py", line 651, in
app.update()
File "/home/jupyter/ganspace/TkTorchWindow.py", line 195, in update
self.tkMakeCurrent()
File "/opt/conda/lib/python3.7/site-packages/pyopengltk/linux.py", line 156, in tkMakeCurrent
GLX.glXMakeCurrent(self.__window, self._wid, self.__context)
AttributeError: 'TorchImageView' object has no attribute '_OpenGLFrame__context'

Can you help?

Getting error when trying to run StyleGAN2 models

Currently trying to play around with this as a little experiment, but whenever I try to run the StyleGAN2 interactive models, I get the following runtime error:

RuntimeError: cuDNN error: CUDNN_STATUS_MAPPING_ERROR

I believe I may not have correctly configured/set up the StyleGAN2 installation from the page, as when I run the command prompt for VS2017, I can't even use Anaconda correctly, but I can get through the other steps fine.

I'm on an RTX 2070 and a Ryzen 5 3600.

Request - Trained weights

Hello, thanks for sharing your research.
I'm studying style transfer and would like to test with your trained weights for wrinkles in latent space. Could you be so kind as to share them?

Issues installing on Ubuntu 20.04 (Solved)

Setup (with extra's for Ubuntu 20.04)

  1. Install anaconda or miniconda
  2. Install git, then clone the repository: git clone https://github.com/harskish/ganspace/
  3. Create environment: conda create -n ganspace python=3.7
  4. Activate environment: conda activate ganspace
    • EXTRA: cd ganspace/
  5. Install dependencies: conda env update -f environment.yml --prune
  6. Setup submodules: git submodule update --init --recursive
  7. Run command python -c "import nltk; nltk.download('wordnet')"

Interactive viewer

The interactive viewer (interactive.py) has the following dependencies:

  • Glumpy
  • PyCUDA with OpenGL support
  • EXTRA: instructions for ubuntu 20.04 given below

Linux (ubuntu 20.04)

EXTRA: Activate environment: conda activate ganspace

  1. Install CUDA toolkit (match the version in environment.yml)
    • EXTRA: Follow instructions from here
    • EXTRA: sudo apt install ctags
    • EXTRA: export CUDA_HOME=/usr/local/cuda-10.1
  2. Download pycuda sources from: https://pypi.org/project/pycuda/#files
  3. Extract files: tar -xzf pycuda-VERSION.tar.gz
    • EXTRA: cd pycuda-VERSION/
  4. Configure: python configure.py --cuda-enable-gl --cuda-root=/path/to/cuda
    • EXTRA: used this one instead python configure.py --cuda-root=$CUDA_HOME --cuda-enable-gl
  5. Compile and install: make install
    • EXTRA: sudo apt -y install gcc-8 g++-8
    • EXTRA: sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-8 8
    • EXTRA: sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-8 8
    • EXTRA: pip install pytest
    • EXTRA: cd test/
    • EXTRA: python test_driver.py
      • EXTRA: should all be green (and thus 100%)
  6. Install Glumpy: pip install setuptools cython glumpy
    • EXTRA: before this pip install, install required package, namely: pip install Cython

EXTRA: pip dependencies not installed by the above

  • pip install colormap
  • pip install easydev
  • pip install pillow
  • pip uninstall glumpy
  • pip install glumpy

StyleGAN2 setup (optional)

StyleGAN2 contains custom CUDA kernels for improved performance.

Less performant native PyTorch fallbacks are used by default.

EXTRA: go to root of ganspace

  1. Install CUDA toolkit (match the version in environment.yml)
    • EXTRA: already done in previous step
  2. conda activate ganspace
    • EXTRA: export CUDA_HOME=/usr/local/cuda-10.1
    • EXTRA: sudo cp /usr/local/cuda-10.2/targets/x86_64-linux/include/cublas_v2.h /usr/local/cuda-10.1/targets/x86_64-linux/include/cublas_v2.h
    • EXTRA: sudo cp /usr/local/cuda-10.2/targets/x86_64-linux/include/cublas_api.h /usr/local/cuda-10.1/targets/x86_64-linux/include/cublas_api.h
  3. cd models/stylegan2/stylegan2-pytorch/op
  4. python setup.py install
  5. Test: python -c "import torch; import upfirdn2d_op; import fused; print('OK')"

missing 'model.py'

I've followed the install instructions to the letter, but I'm unable to run interactive.py with any of the examples in the README (stylegan, stylegan2 or biggan).

(NB the notes at the end about StyleGAN2 installation don't work - there is no setup.py in the directory)

the error is:

Traceback (most recent call last):
File "interactive.py", line 22, in
from models import get_instrumented_model
File "/xxx/ganspace/models/init.py", line 11, in
from .wrappers import *
File "/xxx/ganspace/models/wrappers.py", line 22, in
from . import stylegan2
File "/xxx/ganspace/models/stylegan2/init.py", line 14, in
from model import Generator
ModuleNotFoundError: No module named 'model'

can you help?

Cannot run Ganspace -- RuntimeError: CUDA error: invalid device function

I am very interested in this software, however, I am having difficulty getting interactive.py to run properly.

I am running Linux / Ubuntu 18.04 on an 8-core Intel Xeon E5462 tower with an NVIDIA GTX 1070 GPU, compute level 6.1. I installed all dependencies and built pycuda with opengl support; cuda 10.1 is available both on my system and in my anaconda environment.

On executing "python interactive.py --model=StyleGAN2 --class=ffhq --layer=style --use_w -n=1000000 -b=10000" I get the following error:

Traceback (most recent call last):
  File "interactive.py", line 644, in <module>
    setup_model()
  File "interactive.py", line 143, in setup_model
    inst = get_instrumented_model(model_name, class_name, layer_name, torch.device('cuda'), use_w=args.use_w)
  File "/mnt/drive2/pytorch/ganspace/models/wrappers.py", line 699, in get_instrumented_model
    latent_shape = model.get_latent_shape()
  File "/mnt/drive2/pytorch/ganspace/netdissect/modelconfig.py", line 107, in create_instrumented_model
    latent_shape=getattr(args, 'latent_shape', None))
  File "/mnt/drive2/pytorch/ganspace/netdissect/modelconfig.py", line 137, in annotate_model_shapes
    output = model(dry_run)
  File "/home/my-user/anaconda3/envs/ganspace/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/mnt/drive2/pytorch/ganspace/netdissect/nethook.py", line 48, in forward
    return self.model(*inputs, **kwargs)
  File "/home/my-user/anaconda3/envs/ganspace/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/mnt/drive2/pytorch/ganspace/models/wrappers.py", line 189, in forward
    truncation=self.truncation, truncation_latent=self.latent_avg, input_is_w=self.w_primary)
  File "/home/my-user/anaconda3/envs/ganspace/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/mnt/drive2/pytorch/ganspace/models/stylegan2/stylegan2-pytorch/model.py", line 495, in forward
    styles = [self.style(s) for s in styles]
  File "/mnt/drive2/pytorch/ganspace/models/stylegan2/stylegan2-pytorch/model.py", line 495, in <listcomp>
    styles = [self.style(s) for s in styles]
  File "/home/my-user/anaconda3/envs/ganspace/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/mnt/drive2/pytorch/ganspace/netdissect/nethook.py", line 181, in new_forward
    original_x = original_forward(*inputs, **kwargs)
  File "/home/my-user/anaconda3/envs/ganspace/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/home/my-user/anaconda3/envs/ganspace/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/mnt/drive2/pytorch/ganspace/models/stylegan2/stylegan2-pytorch/model.py", line 153, in forward
    out = F.linear(input, self.weight * self.scale)
RuntimeError: CUDA error: invalid device function
Segmentation fault (core dumped)

Any help would be greatly appreciated.

Convert weights conflict

Hi! When I try to convert weights with stylegan2-pytorch, I run into an environment conflict. The method from stylegan2-pytorch is based on CUDA 10.0/10.1, while the official stylegan2 does not support TensorFlow 2.x. Can you tell me the correct environment, e.g. the CUDA version and the tensorflow-gpu or tensorflow version? Thanks!
