
f-anogan's People

Contributors

A03ki

f-anogan's Issues

Graph values are off.

Hi!
I have tried your code on a custom dataset and got some bad values for my graphs, so I tested the MVTec Carpet class as well and got similarly bad results... Do you happen to know what might cause this?
[Attached: ROC-AUC and PR-AUC plots]
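
As a sanity check on numbers like these, the ROC-AUC and PR-AUC can be recomputed directly from the saved anomaly scores with scikit-learn. The sketch below is hypothetical: the file name score.csv and the column names label and anomaly_score are assumptions about what the detection step writes and may need adjusting.

    # Hypothetical sanity check, not repository code: recompute the summary metrics
    # straight from the saved scores. "results/score.csv", "label" and "anomaly_score"
    # are assumed names; adjust them to your output.
    import pandas as pd
    from sklearn.metrics import roc_auc_score, average_precision_score

    df = pd.read_csv("results/score.csv")
    labels = df["label"].values          # assumed convention: 0 = normal, 1 = anomalous
    scores = df["anomaly_score"].values  # assumed convention: higher = more anomalous

    print("ROC-AUC:", roc_auc_score(labels, scores))
    print("PR-AUC :", average_precision_score(labels, scores))

If the recomputed values match the plots, the problem is more likely in the data or the label convention than in the plotting code.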

python setup.py install

When I run python setup.py install, the code only runs on the CPU, not the GPU. Does it support running on the GPU?
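
Whether the GPU gets used is usually independent of how the package is installed; it depends on the installed PyTorch build and on whether the scripts move the models and tensors to a CUDA device. A minimal check, assuming a standard PyTorch setup:

    # Minimal check that the installed PyTorch build can see a GPU. If this prints
    # False, the code will run on the CPU no matter how the package was installed.
    import torch

    print("PyTorch:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("Device:", torch.cuda.get_device_name(0))

If CUDA is not available, installing a CUDA-enabled PyTorch build matching your driver is the usual fix; for a pure-Python package, python setup.py install itself does not compile anything GPU-specific.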

Train AnoGAN on pcam dataset in h5 format

Hi @A03ki,
I want to train f-AnoGAN using the PCam dataset. I have two questions:

  1. How should I edit the code/directory structure to get it to work with training files in h5 format?
  2. Will the code work with 96x96x3 images? (shape of one image when parsed to numpy array)

Or do I have to generate JPEGs myself and put them all in a folder?

Thanks
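
One possible starting point (a sketch, not part of the repository) is to wrap the h5 file in a custom torch Dataset instead of using the image-folder layout, and feed that to the existing training loop. The dataset key "x" and the uint8 HWC layout below are assumptions about the PCam files; 96x96x3 images should be fine as long as the transforms resize them to whatever image size the models are built for.

    # Minimal sketch of an HDF5-backed dataset, assuming PCam-style files with an
    # image array under the key "x" stored as uint8 in HWC order. Not repository code.
    import h5py
    from torch.utils.data import Dataset
    from torchvision import transforms

    class H5ImageDataset(Dataset):
        def __init__(self, h5_path, transform=None):
            self.h5_path = h5_path
            self.transform = transform or transforms.ToTensor()
            with h5py.File(h5_path, "r") as f:
                self.length = len(f["x"])

        def __len__(self):
            return self.length

        def __getitem__(self, idx):
            # Re-open per access so the dataset stays usable with num_workers > 0.
            with h5py.File(self.h5_path, "r") as f:
                img = f["x"][idx]         # HWC uint8, e.g. 96x96x3
            img = self.transform(img)     # -> CHW float tensor in [0, 1]
            return img, 0                 # dummy label, mirroring ImageFolder's (image, label) pairs

Exporting the arrays to JPEG/PNG files in a class subfolder would work too; it just costs disk space and an extra preprocessing step.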

I want to train the f-AnoGAN networks with my own image dataset.

Hi @A03ki,

I want to train the f-AnoGAN networks with my own image dataset.
My directory structure is below:
├── f-AnoGAN
│ ├── your_own_dataset
│ │ ├── dataset
│ │ │ ├── SD_3D_V
│ │ │ │ ├── train
│ │ │ │ ├── test
Error while running step 1, python train_wgangp.py "SD_3D_V\train":
FileNotFoundError: Couldn't find any class folder in SD_3D_V\train.
How can I solve this problem?
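
The error comes from the dataset loader, which appears to be torchvision's ImageFolder (an assumption about train_wgangp.py): ImageFolder requires at least one class subdirectory inside the path you pass, and SD_3D_V\train contains the images directly. Moving them into a subfolder, e.g. SD_3D_V/train/class0/, should resolve it. A quick way to verify the layout:

    # Quick check, assuming the training script uses torchvision's ImageFolder.
    # ImageFolder expects: SD_3D_V/train/<class_name>/image.png
    from torchvision import datasets, transforms

    dataset = datasets.ImageFolder("SD_3D_V/train", transform=transforms.ToTensor())
    print(len(dataset), dataset.classes)  # should list your class subfolders, e.g. ['class0']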

Question about images_e encoder output

Hello A03ki, thank you for your amazing code here. It is helping me a lot in my university project. I'm trying to understand it and follow your work.
If you don't mind answering my short question: what is the interpretation of the images saved in the "images_e" folder, the ones that are the output of "train_z_encoding_izif.py"? At first glance they look just like the ones straight from the GAN part in "f-AnoGAN/mvtec_ad/results/images". Could you please explain the difference?
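
A hedged reading (an assumption about the code, not confirmed here): the grids in results/images are generator samples G(z) drawn from random noise during WGAN training, while the grids in images_e are reconstructions G(E(x)) of real images produced while training the encoder, so they can look similar even though they come from different inputs. A toy sketch of that distinction with placeholder modules:

    # Toy illustration with placeholder models (not the repository's networks) of the
    # assumed difference: GAN-stage grids are G(z) from noise, encoder-stage grids
    # are reconstructions G(E(x)) of real inputs.
    import torch
    import torch.nn as nn

    latent_dim, img_shape = 100, (3, 64, 64)   # placeholder sizes
    flat = img_shape[0] * img_shape[1] * img_shape[2]

    generator = nn.Sequential(nn.Linear(latent_dim, flat), nn.Unflatten(1, img_shape))
    encoder = nn.Sequential(nn.Flatten(), nn.Linear(flat, latent_dim))

    z = torch.randn(8, latent_dim)
    samples = generator(z)                             # what the GAN stage saves (assumed)
    real_images = torch.rand(8, *img_shape)
    reconstructions = generator(encoder(real_images))  # what the encoder stage saves (assumed)
    print(samples.shape, reconstructions.shape)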

How can I implement a validation dataset?

Hello, thank you very much for your code; it is very useful to study. I wanted to ask how to implement a validation process in the code, since currently there is only training and testing.
Thank you very much in advance
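
One common approach (not something the repository is stated to provide) is to hold out part of the training set with torch.utils.data.random_split and evaluate the same losses or anomaly scores on it after each epoch, without backpropagating. A minimal sketch, with the dataset path and the 90/10 split as placeholders:

    # Minimal validation-split sketch; the dataset path and split ratio are placeholders.
    import torch
    from torch.utils.data import DataLoader, random_split
    from torchvision import datasets, transforms

    full = datasets.ImageFolder("your_own_dataset/dataset/train", transform=transforms.ToTensor())
    n_val = int(0.1 * len(full))
    train_set, val_set = random_split(full, [len(full) - n_val, n_val],
                                      generator=torch.Generator().manual_seed(0))

    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=32, shuffle=False)
    # In the training loop: periodically compute the same losses (or anomaly scores)
    # on val_loader inside torch.no_grad(), and track them for model selection.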

G_loss and D_loss are negative numbers when running train_wgangp.py using my own image dataset

Hi @A03ki,
Thank you very much for your code! When I use my own image dataset to train the WGAN, both its G_loss and D_loss are negative numbers; I wonder if this is right? I also don't really understand what the class means. For example, I have 100 normal images and 100 anomalous images, so I put the 100 normal images in "your_own_dataset/dataset/train/class0" and the 100 anomalous images in "your_own_dataset/dataset/test/class0", and then I run wgangp.py, izif.py, detection.py and save.py one by one. Is this correct?
Thank you again for your code and your answer.
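
Negative values are expected with WGAN-GP: both losses are built from raw critic outputs (differences and negations of means approximating the Wasserstein distance), not from probabilities, so neither is bounded below by zero. A sketch of the loss terms (not the repository's exact code) to make that concrete:

    # Sketch of WGAN-GP loss terms, not the repository's exact code. Critic outputs
    # are unbounded real numbers, so both losses can legitimately go negative.
    def critic_loss(critic, real, fake, gradient_penalty, lambda_gp=10.0):
        # E[D(fake)] - E[D(real)] + lambda * GP
        return critic(fake).mean() - critic(real).mean() + lambda_gp * gradient_penalty

    def generator_loss(critic, fake):
        # -E[D(fake)]; negative whenever the critic scores the fakes positively
        return -critic(fake).mean()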

Need absolute value for image subtraction

Thanks for your excellent work!
Recently I used the code to train on my custom dataset, and I tried to calculate the AUROC of localization performance.
On line 25 of fanogan/save_compared_images.py, there is code for image subtraction:
compared_images[2::3] = real_img - fake_img
But there are many white pixels in the compared images.
I found that the output images are more reasonable when using the absolute value of the difference:
compared_images[2::3] = torch.abs(real_img - fake_img)
Thanks a lot!

Huge generator and tiny discriminator?

I'm training these models with my own dataset at --latent_dim 1000 and --image_size 320, and I notice that the saved generator is about 3 GB while the discriminator is around 500 KB. Should I scale up the discriminator?
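
The gap is plausible if the generator starts with a large fully connected projection. If it follows the common WGAN-GP reference design with a first dense layer of size latent_dim x 128*(img_size//4)^2 (an assumption about this repository's model), then with latent_dim 1000 and image size 320 that single layer alone has 1000 * 128 * 80 * 80 = 819,200,000 weights, about 3 GB in float32, which would explain the checkpoint size by itself. A quick way to see where the parameters go:

    # Hypothetical helper, not repository code: compare parameter counts of the two models.
    def count_parameters(model):
        return sum(p.numel() for p in model.parameters())

    # Usage, assuming `generator` and `discriminator` are the instantiated models:
    # print(count_parameters(generator), count_parameters(discriminator))

If the goal is simply a smaller checkpoint, reducing latent_dim or that first projection has far more effect than changing the discriminator.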

wrong line in mvtec jupyter notebook

Hi @A03ki ,
Thank you for this amazing repository!
When trying out training on the mvtec-ad dataset, in the visualization part, there is a line that seems to be wrong. Instead of training_label=0, we should set training_label=3.
Thank you!
