
A highly customizable implementation of GAN for rapid prototyping using PyTorch

License: MIT License

pytorch gan machine-learning generative-adversarial-network


xGAN

xGAN is a highly customizable, zero-coding GAN implementation for rapid prototyping. You can use it to quickly train a GAN on your own dataset.


NOTE: This repo implements DCGAN and does not include recent breakthroughs in the field, e.g. Progressive GAN, but I plan to implement them very soon.

Getting Started

Follow these steps to train your own GAN:

  • Clone the repository: `git clone https://github.com/NaxAlpha/xgan.git`
  • Install PyTorch and the other requirements: `pip install -r requirements.txt`
  • Prepare your dataset in `dataset-dir` like this: `dataset-dir/data/abc.png`
  • Start training: `python train.py dataset-dir 100 1024 512 256 128 64`
  • Duh!
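The `dataset-dir/data/abc.png` layout above suggests an `ImageFolder`-style structure (all images inside a single `data` subfolder). As a sanity check before training, you could count the images with a small helper; note this helper is hypothetical and not part of the repo:

```python
# Hypothetical helper (not part of xgan): verifies that a dataset directory
# follows the "dataset-dir/data/*.png" layout described above.
from pathlib import Path

def count_training_images(root="dataset-dir"):
    """Return the number of .png images under <root>/data."""
    data_dir = Path(root) / "data"
    if not data_dir.is_dir():
        raise FileNotFoundError(f"expected an image folder at {data_dir}")
    return len(list(data_dir.glob("*.png")))
```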

Options

train.py has a diverse set of options available for you to customize. Example usage:

```
python train.py <dataset-dir> <network-layers...> [--batch_size=64] [--epochs=100] [--model_dir=None] [--log_iter=10] [--loss_buffer=500] [--n_outputs=3] [--dump_dir=None]
```

Following is the parameter documentation:

  • dataset-dir: Path to the images you want to train your GAN on
  • network-layers: Network architecture, from the latent space to the number of filters in each layer:
    • Vanilla DCGAN uses the following parameters: 100 1024 512 256 128 64
    • The first value is the size of the latent space
    • The image size is determined by the number of values, e.g. vanilla has 6 values: 2^6 => 64
    • Another example architecture, for 128x128 images, would be: 100 1024 512 256 128 64 32
  • batch_size: Size of a single batch
  • epochs: Number of complete passes over the dataset
  • model_dir: Path to the directory where the model is saved (skip if you do not want to save the model)
  • log_iter: Number of iterations after which the model/output is saved
  • loss_buffer: Number of discriminator/generator loss values displayed in the output window
  • n_outputs: Number of images per row displayed in the output window
  • dump_dir: Path where the model's outputs are saved (skip if you do not want to save outputs)
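As a sketch of how a layer list such as `100 1024 512 256 128 64` might translate into a DCGAN-style generator, the following is an illustrative reconstruction under the "image size = 2^(number of values)" rule stated above; it is not the actual code in train.py, and the exact block structure (batch norm placement, final activation) is an assumption:

```python
# Illustrative sketch, NOT the real xgan code: maps a layer list to a
# DCGAN-style generator where layers[0] is the latent size and each later
# value is a filter count. Spatial size doubles with every filter pair,
# so 6 values -> 4 * 2^4 = 64 = 2^6 pixels, matching the rule above.
import torch
import torch.nn as nn

def build_generator(layers, out_channels=3):
    blocks = [
        # Project the latent vector (N, layers[0], 1, 1) to a 4x4 feature map.
        nn.ConvTranspose2d(layers[0], layers[1], 4, stride=1, padding=0),
        nn.BatchNorm2d(layers[1]),
        nn.ReLU(inplace=True),
    ]
    # Each consecutive pair of filter counts doubles the spatial size.
    for c_in, c_out in zip(layers[1:-1], layers[2:]):
        blocks += [
            nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
        ]
    # Map the last filter bank to RGB without changing the resolution.
    blocks += [nn.Conv2d(layers[-1], out_channels, 3, stride=1, padding=1),
               nn.Tanh()]
    return nn.Sequential(*blocks)

layers = [100, 1024, 512, 256, 128, 64]
gen = build_generator(layers)
z = torch.randn(1, layers[0], 1, 1)
img = gen(z)
print(img.shape)  # torch.Size([1, 3, 64, 64]) -- a 64x64 image, i.e. 2^6
```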

TODO:

  • Sample Jupyter Notebook
  • Settings for Training on Colab
  • Implement Progressive GAN

Blog Post:

Contributors: arsalan993, naxalpha
