csailvision / gandissect

Pytorch-based tools for visualizing and understanding the neurons of a GAN. https://gandissect.csail.mit.edu/

License: MIT License

Languages: HTML 6.72%, Python 79.38%, C 0.68%, C++ 0.15%, Cuda 3.35%, Shell 1.73%, Jupyter Notebook 7.99%
Topics: pytorch, gan, image-manipulation, deep-learning, interactive-visualizations, generative-adversarial-network, interpretable-ml

gandissect's Introduction

GANDissect

Project | Demo | Paper | Video

GAN Dissection is a way to inspect the internal representations of a generative adversarial network (GAN) to understand how internal units align with human-interpretable concepts. It is part of NetDissect.

This repo allows you to dissect a GAN model. It provides the dissection results as a static summary or as an interactive visualization. Try our interactive GANPaint demo to interact with GANs and draw images.

Overview

Visualizing and Understanding Generative Adversarial Networks
David Bau, Jun-Yan Zhu, Hendrik Strobelt, Bolei Zhou, Joshua B. Tenenbaum, William T. Freeman, Antonio Torralba
MIT CSAIL, MIT-IBM Watson AI Lab, CUHK, IBM Research
In arXiv, 2018.

Analysis and Applications

Interpretable Units in GANs

Analyzing different layers

Diagnosing and improving GANs

Removing objects from conference rooms

Removing windows from different natural scenes

Inserting new objects into images

Release history

v 0.9 alpha - Nov 26, 2018

Getting Started

Let's set up the environment and dissect a churchoutdoor GAN. This requires a CUDA-enabled GPU and some disk space.

Setup

To install everything needed from this repo, make sure conda is available, then run:

script/setup_env.sh      # Create a conda environment with dependencies
script/make_dirs.sh      # Create the dataset and dissect directories
script/download_data.sh  # Download support data and demo GANs
source activate netd     # Enter the conda environment
pip install -v -e .      # Link the local netdissect package into the env

Details. The code depends on Python 3, PyTorch 0.4.1, and several other packages. For conda users, script/environment.yml provides the details of the dependencies. For pip users, setup.py lists everything needed.
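If you prefer not to run the setup script, the same environment can be created by hand from those files; this is a minimal sketch that assumes the conda environment defined in script/environment.yml is named netd, as in the commands above:

conda env create -f script/environment.yml   # create the environment with pinned dependencies
source activate netd                         # enter the conda environment
pip install -v -e .                          # link the local netdissect package into the env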

Data. The download_data.sh script downloads the segmentation dataset used to dissect classifiers, the segmentation network used to dissect GANs, and several example GAN models to dissect. The downloads will go into the directories dataset/ and models/. If you do not wish to download the example networks, python -m netdissect --download will download just the data and models needed for netdissect itself.

Dissecting a GAN

GAN example: to dissect three layers of the LSUN living room progressive GAN trained by Karras:

python -m netdissect \
   --gan \
   --model "netdissect.proggan.from_pth_file('models/karras/livingroom_lsun.pth')" \
   --outdir "dissect/livingroom" \
   --layer layer1 layer4 layer7 \
   --size 1000

The result is a static HTML page at dissect/livingroom/dissect.html, and a JSON file of metrics at dissect/livingroom/dissect.json.
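If you want to inspect the JSON report programmatically, a minimal sketch is below; it only lists the top-level keys rather than assuming any particular schema for the metrics:

    import json

    # Load the metrics produced by the dissection run above.
    with open('dissect/livingroom/dissect.json') as f:
        report = json.load(f)
    print(sorted(report.keys()))   # see which metrics and metadata were recorded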

You can test your own model: the --model argument is a fully-qualified Python function or constructor for loading the GAN to test. The --layer names are fully-qualified (state_dict-style) names for layers.

By default, a scene-based segmentation is used, but a different segmenter class can be substituted by supplying an alternate class constructor to --segmenter. See netdissect/segmenter.py for the segmenter base class.
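For example, a command along the following lines would dissect your own GAN with your own segmenter; mypackage.load_generator, the main.* layer names, and mypackage.MySegmenter are hypothetical placeholders for your loader function, your model's state_dict layer names, and a segmenter subclass you supply:

python -m netdissect \
   --gan \
   --model "mypackage.load_generator('checkpoints/mygan.pth')" \
   --outdir "dissect/mygan" \
   --layers main.4 main.8 main.12 \
   --segmenter "mypackage.MySegmenter()" \
   --size 1000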

Running a GAN editing server (alpha)

Once a GAN is dissected, you can run a web server that provides an API that generates images with (optional) interventions.

python -m netdissect.server --address 0.0.0.0

The editing UI is served at http://localhost:5001/.


Advanced Level

Dissecting a classifier (NetDissect)

Classifier example: to dissect three layers of the pretrained alexnet in torchvision:

python -m netdissect \
   --model "torchvision.models.alexnet(pretrained=True)" \
   --layers features.6:conv3 features.8:conv4 features.10:conv5 \
   --imgsize 227 \
   --outdir dissect/alexnet-imagenet

There is no special web server for a classifier.

Command Line Details

Documentation for the netdissect command-line utility.

usage: python -m netdissect [-h] [--model MODEL] [--pthfile PTHFILE]
                            [--outdir OUTDIR] [--layers LAYERS [LAYERS ...]]
                            [--segments SEGMENTS] [--segmenter SEGMENTER]
                            [--download] [--imgsize IMGSIZE]
                            [--netname NETNAME] [--meta META [META ...]]
                            [--examples EXAMPLES] [--size SIZE]
                            [--batch_size BATCH_SIZE]
                            [--num_workers NUM_WORKERS]
                            [--quantile_threshold {[0-1],iqr}] [--no-labels]
                            [--maxiou] [--covariance] [--no-images]
                            [--no-report] [--no-cuda] [--gen] [--gan]
                            [--perturbation PERTURBATION] [--add_scale_offset]
                            [--quiet]

optional arguments:

  -h, --help            show this help message and exit
  --model MODEL         constructor for the model to test
  --pthfile PTHFILE     filename of the .pth file for the model
  --outdir OUTDIR       directory for dissection output
  --layers LAYERS [LAYERS ...]
                        space-separated list of layer names to dissect, in the
                        form layername[:reportedname]
  --segments SEGMENTS   directory containing segmentation dataset
  --segmenter SEGMENTER
                        constructor for a segmenter class
  --download            downloads Broden dataset if needed
  --imgsize IMGSIZE     input image size to use
  --netname NETNAME     name for the network in generated reports
  --meta META [META ...]
                        json files of metadata to add to report
  --examples EXAMPLES   number of image examples per unit
  --size SIZE           dataset subset size to use
  --batch_size BATCH_SIZE
                        batch size for a forward pass
  --num_workers NUM_WORKERS
                        number of DataLoader workers
  --quantile_threshold {[0-1],iqr}
                        quantile to use for masks
  --no-labels           disables labeling of units
  --maxiou              enables maxiou calculation
  --covariance          enables covariance calculation
  --no-images           disables generation of unit images
  --no-report           disables generation of the report summary
  --no-cuda             disables CUDA usage
  --gen                 test a generator model (e.g., a GAN)
  --gan                 synonym for --gen
  --perturbation PERTURBATION
                        the filename of perturbation attack to apply
  --add_scale_offset    offsets masks according to stride and padding
  --quiet               silences console output
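As an illustration of combining these options, the command below uses only flags documented above to run a faster classifier dissection with the IQR threshold and without example-image generation; the choice of model, layer, and output directory is arbitrary:

python -m netdissect \
   --model "torchvision.models.vgg16(pretrained=True)" \
   --layers features.28:conv5_3 \
   --imgsize 224 \
   --quantile_threshold iqr \
   --no-images \
   --size 2000 \
   --outdir dissect/vgg16-quick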

API, for classifiers

The dissection can also be run from code as a function, as follows:

  1. Load up the convolutional model you wish to dissect, and call imodel = InstrumentedModel(model) and then imodel.retain_layers([layernames,..]) to instrument the model.
  2. Load the segmentation dataset using the BrodenDataset class; use the transform_image argument to normalize images to be suitable for the model, and the size argument to truncate the dataset.
  3. Choose a directory in which to write the output, and call dissect(outdir, imodel, dataset).

A quick approximate dissection can be done by reducing the size of the BrodenDataset. Generating example images can be time-consuming, so the number of images per unit can be set via examples_per_unit.

Example:

    from netdissect import InstrumentedModel, dissect
    from netdissect import BrodenDataset
    from torchvision import transforms

    # load_my_model(), IMAGE_MEAN, and IMAGE_STDEV are placeholders for your
    # own model constructor and its expected normalization statistics.
    model = InstrumentedModel(load_my_model())
    model.eval()
    model.cuda()
    model.retain_layers(['conv1', 'conv2', 'conv3', 'conv4', 'conv5'])
    bds = BrodenDataset('dataset/broden1_227',
            transform_image=transforms.Compose([
                transforms.ToTensor(),
                transforms.Normalize(IMAGE_MEAN, IMAGE_STDEV)]),
            size=10000)
    dissect('result/dissect', model, bds,
            batch_size=100,
            examples_per_unit=10)

The Broden dataset is oriented towards semantic objects, parts, materials, colors, etc. found in natural scene photographs. If you want to analyze your model with a different semantic segmentation, you can substitute a different segmentation dataset and supply a segrunner, an argument that describes how to get segmentations and RGB images from the dataset. See ClassifierSegRunner for the details.
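For illustration, a minimal sketch of that wiring is below; MySegDataset and load_my_model are hypothetical placeholders, and the import path and constructor arguments of ClassifierSegRunner are assumptions to be checked against the netdissect source before use:

    from netdissect import InstrumentedModel, dissect
    from netdissect import ClassifierSegRunner   # import path assumed

    # Hypothetical dataset yielding images plus segmentations in whatever
    # form the chosen segrunner knows how to unpack.
    dataset = MySegDataset('dataset/my_segmentation_data')

    model = InstrumentedModel(load_my_model())
    model.eval()
    model.cuda()
    model.retain_layers(['conv4', 'conv5'])
    dissect('result/custom-dissect', model, dataset,
            segrunner=ClassifierSegRunner(dataset),   # constructor arguments assumed
            batch_size=50,
            examples_per_unit=10)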

API, for generators

Similarly:

  1. Load up the generator model you wish to dissect, call imodel = InstrumentedModel(model), and then imodel.retain_layers([layernames,..]) to instrument the model.
  2. Create a dataset of z input samples for testing. If your model draws z from a standard normal distribution, z_dataset_for_model will make one.
  3. Choose a directory in which to write the output, and call dissect(outdir, model, dataset, segrunner=GeneratorSegRunner()).

The time for the dissection is proportional to the number of samples in the dataset.

    from netdissect import InstrumentedModel, dissect
    from netdissect import z_dataset_for_model, GeneratorSegRunner

    # load_my_model() is a placeholder for your own generator constructor.
    model = InstrumentedModel(load_my_model())
    model.eval()
    model.cuda()
    model.retain_layers(['layer3', 'layer4', 'layer5'])
    size = 1000   # number of z samples to dissect
    zds = z_dataset_for_model(size, model)
    dissect('result/gandissect', model, zds,
            segrunner=GeneratorSegRunner(),
            batch_size=100,
            examples_per_unit=10)

The GeneratorSegRunner defaults to running a semantic segmentation network oriented towards semantic objects, parts, and materials found in natural scene photographs. To use a different semantic segmentation, you can supply a custom Segmenter subclass to the constructor of GeneratorSegRunner.
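A skeleton of that substitution might look like the following; the BaseSegmenter class name, its import path, and the assumption that the segmenter is the first constructor argument of GeneratorSegRunner are guesses to be verified against netdissect/segmenter.py:

    from netdissect import GeneratorSegRunner
    from netdissect.segmenter import BaseSegmenter   # base class name assumed

    class MySegmenter(BaseSegmenter):
        # Implement the abstract methods declared by the base class in
        # netdissect/segmenter.py (labeling vocabulary and batch segmentation).
        ...

    segrunner = GeneratorSegRunner(MySegmenter())   # segmenter as first argument assumed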

Citation

If you use this code for your research, please cite our paper:

@inproceedings{bau2019gandissect,
 title={GAN Dissection: Visualizing and Understanding Generative Adversarial Networks},
 author={Bau, David and Zhu, Jun-Yan and Strobelt, Hendrik and Zhou, Bolei and Tenenbaum, Joshua B. and Freeman, William T. and Torralba, Antonio},
 booktitle={Proceedings of the International Conference on Learning Representations (ICLR)},
 year={2019}
}

gandissect's People

Contributors

davidbau, junyanz


gandissect's Issues

'EasyDict' object has no attribute 'settings' when trying to run the server

Hi, I have been trying to run the server, but I keep getting this bug:
'EasyDict' object has no attribute 'settings'.
It happens when the config is passed from server.py into serverstate.py and config.settings is passed into GanTester.
If I print each key in config.keys(), I get the following list: netname, meta, default_ranking, quantile_threshold, iou_threshold, iqr_threshold, segcolors, layers; none of them is settings. Am I missing something here?
I have tried importing and running server.py in Google Colab as well as using the command-line interface, and I get the same bug in both cases.

Edit 1: I have realized why I don't have a settings attribute. In the main file, settings is built as a dictionary of all the arguments passed when running netdissect from the command line. The generate_report() function in dissection.py writes settings into dissect.json, which is the config that is passed into load_projects(), but only if settings has a value. It defaults to None, and since I am importing dissection.py into Google Colab, the value never changes and is not added to the top_record dictionary that stores the report. Therefore config.settings does not exist, which produces the bug above. I am going to edit the dissection code so that it adds a settings field to the report. If I can get that working, I will post the updated version here so that people can dissect a model and launch the server to view the dissection without using the command line.

some doubts about gan painting

Hello, what excellent work you have done, from GAN dissection to GAN painting and GAN rewriting. I have carefully read the papers "GAN Dissection: Visualizing and Understanding Generative Adversarial Networks" and "Semantic Photo Manipulation with a Generative Image Prior".
In particular, I have doubts about how to intervene on the middle layer of a GAN to draw or remove user-specified semantic concepts in the **user-marked area** of the output image. For example, in GAN Paint (https://ganpaint.io/), we can mark a region, use GAN dissection to find the units in the middle layer associated with the most closely matched concept, and then edit those units in some way, such as inserting a value k into them.

The question is how to ensure that **the position of the edit in the middle layer corresponds exactly to the position marked by the user in the output image**. GAN dissection, in my opinion, finds the agreement between a concept in the output image and a unit (feature map) in the middle layer, whereas it does not ensure that the concept appears exactly in the area drawn by the user.
The match between a concept and a unit is clear to me, but I have doubts about the match between the location of the concept and the edited area within the unit. Would you mind giving me more details on this issue?

How to train own dataset

Thanks for sharing your work. I am interested in it, and I would like to know how to train on my own dataset. Could you help me? Thanks!

cannot import prroi_pool2d?

Windows 10
cudatoolkit 9.0
python 3.6
pytorch 1.1.0
torchvision 0.3.0

C:\Users\86189\Anaconda3\envs\py36\lib\site-packages\torch\utils\cpp_extension.py:184: UserWarning: Error checking compiler version for cl: Command 'cl' returned non-zero exit status 1.
  warnings.warn('Error checking compiler version for {}: {}'.format(compiler, error))
Traceback (most recent call last):
  File "C:\Users\86189\Anaconda3\envs\py36\lib\site-packages\torch\utils\cpp_extension.py", line 949, in _build_extension_module
    check=True)
  File "C:\Users\86189\Anaconda3\envs\py36\lib\subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\86189\Anaconda3\envs\py36\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Users\86189\Anaconda3\envs\py36\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\86189\Desktop\GANDissection\netdissect\__main__.py", line 410, in <module>
    main()
  File "C:\Users\86189\Desktop\GANDissection\netdissect\__main__.py", line 220, in main
    segrunner = GeneratorSegRunner(autoimport_eval(args.segmenter))
  File "C:\Users\86189\Desktop\GANDissection\netdissect\autoeval.py", line 36, in autoimport_eval
    return eval(term, {}, AutoImportDict())
  File "<string>", line 1, in <module>
  File "C:\Users\86189\Desktop\GANDissection\netdissect\segmenter.py", line 68, in __init__
    segarch, segvocab, epoch)
  File "C:\Users\86189\Desktop\GANDissection\netdissect\segmenter.py", line 502, in load_unified_parsing_segmentation_model
    weights=os.path.join(segmodel_dir, 'decoder_epoch_%d.pth' % epoch))
  File "C:\Users\86189\Desktop\GANDissection\netdissect\upsegmodel\models.py", line 215, in build_decoder
    fpn_dim=512)
  File "C:\Users\86189\Desktop\GANDissection\netdissect\upsegmodel\models.py", line 271, in __init__
    from .prroi_pool import PrRoIPool2D
  File "C:\Users\86189\Desktop\GANDissection\netdissect\upsegmodel\prroi_pool\__init__.py", line 12, in <module>
    from .prroi_pool import *
  File "C:\Users\86189\Desktop\GANDissection\netdissect\upsegmodel\prroi_pool\prroi_pool.py", line 14, in <module>
    from .functional import prroi_pool2d
  File "C:\Users\86189\Desktop\GANDissection\netdissect\upsegmodel\prroi_pool\functional.py", line 22, in <module>
    verbose=False
  File "C:\Users\86189\Anaconda3\envs\py36\lib\site-packages\torch\utils\cpp_extension.py", line 644, in load
    is_python_module)
  File "C:\Users\86189\Anaconda3\envs\py36\lib\site-packages\torch\utils\cpp_extension.py", line 813, in _jit_compile
    with_cuda=with_cuda)
  File "C:\Users\86189\Anaconda3\envs\py36\lib\site-packages\torch\utils\cpp_extension.py", line 866, in _write_ninja_file_and_build
    _build_extension_module(name, build_directory, verbose)
  File "C:\Users\86189\Anaconda3\envs\py36\lib\site-packages\torch\utils\cpp_extension.py", line 962, in _build_extension_module
    raise RuntimeError(message)
RuntimeError: Error building extension 'prroi_pooling'

[ninja build log: both compilation steps for the extension fail. The cl step for prroi_pooling_gpu.c resolves to a Python cl.exe shim from the kombu/amqp packages of another conda env, which aborts with "ImportError: DLL load failed: The specified module could not be found"; the nvcc step for prroi_pooling_gpu_impl.cu fails the same way, and the build ends with "ninja: build stopped: subcommand failed."]

stylegan2

Dear CSAILVision team,

Thank you for sharing with us the great implementation.

Is there an easy way to apply this method to StyleGAN or StyleGAN2?

Thank you for your help.

Best Wishes,

Alex

GanImageSegmenter

Is there any code provided for GanImageSegmenter as used in ablate.py?

change the gan model and dataset

Hello, I am interested in your work. Could you tell me which .py files I should change if I want to use my own encoder-decoder GAN model and my own dataset?

CreateProcess failed: The system cannot find the file specified.;subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 3221226356.

Hi, CSAILVision team,
I have installed ninja, but I still run into a problem.
PS D:\Users\wei22\Desktop\gandissect\gandissect-master> python -m netdissect --gan --model "netdissect.proggan.from_pth_file('models/karras/livingroom_lsun.pth')" --outdir "dissect/livingroom" --layer layer1 layer4 layer7 --size 1000
D:\Users\wei22\Desktop\gandissect\gandissect-master\netdissect\upsegmodel\prroi_pool\src
Using C:\Users\wei22\AppData\Local\torch_extensions\torch_extensions\Cache\py36_cu113 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file C:\Users\wei22\AppData\Local\torch_extensions\torch_extensions\Cache\py36_cu113_prroi_pooling\build.ninja...
Building extension module prroi_pooling...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/3] nvcc -c D:\Users\wei22\Desktop\gandissect\gandissect-master\netdissect\upsegmodel\prroi_pool\src\prroi_pooling_gpu_impl.cu -o prroi_pooling_gpu_impl.cuda.o   (full compiler flags omitted)
FAILED: prroi_pooling_gpu_impl.cuda.o
CreateProcess failed: The system cannot find the file specified.
Traceback (most recent call last):
  File "D:\Users\wei22\Desktop\gandissect\venv\lib\site-packages\torch\utils\cpp_extension.py", line 1723, in _run_ninja_build
    env=env)
  File "C:\Users\wei22.conda\envs\netd\lib\subprocess.py", line 438, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 3221226356.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\wei22.conda\envs\netd\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Users\wei22.conda\envs\netd\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "D:\Users\wei22\Desktop\gandissect\gandissect-master\netdissect\__main__.py", line 409, in <module>
  File "D:\Users\wei22\Desktop\gandissect\gandissect-master\netdissect\upsegmodel\prroi_pool\prroi_pool.py", line 14, in <module>
    from .functional import prroi_pool2d
  File "D:\Users\wei22\Desktop\gandissect\gandissect-master\netdissect\upsegmodel\prroi_pool\functional.py", line 23, in <module>
    verbose=True
  File "D:\Users\wei22\Desktop\gandissect\venv\lib\site-packages\torch\utils\cpp_extension.py", line 1136, in load
    keep_intermediates=keep_intermediates)
  File "D:\Users\wei22\Desktop\gandissect\venv\lib\site-packages\torch\utils\cpp_extension.py", line 1347, in _jit_compile
    is_standalone=is_standalone)
  File "D:\Users\wei22\Desktop\gandissect\venv\lib\site-packages\torch\utils\cpp_extension.py", line 1452, in _write_ninja_file_and_build_library
    error_prefix=f"Error building extension '{name}'")
  File "D:\Users\wei22\Desktop\gandissect\venv\lib\site-packages\torch\utils\cpp_extension.py", line 1733, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error building extension '_prroi_pooling'

My environment:
Windows 11
ninja 1.10.2
torch 1.10.2+cu113

Support for pytorch 1.0.1

Is there a plan for the script to support PyTorch 1.0.1? I got the following error when executing ./travis.sh with PyTorch 1.0.1: "ImportError: torch.utils.ffi is deprecated. Please use cpp extensions instead"

Failed at 'Making images:'

I ran into a problem when running 'Dissecting a GAN'. When the program reaches the 'Making images:' stage, it shuts down without raising any error. Do you have specific requirements for the GPUs?

Lack of paint_select

package.json under the client folder points to the URL git+ssh://[email protected]/HendrikStrobelt/paint_widget_js.git, which is currently not available.
Could the team republish it?

Custom dataset

Hi

I'm trying to train on my own dataset. I created a data loader class like this:

[screenshot of the data loader class]

I call dissect on my model like this:

[screenshot of the dissect call]

But I keep getting an indexing error:

[screenshot of the indexing error]

Can anybody help me with this issue?
