
This project is a fork of tohinz/multiple-objects-gan.


Implementation for reproducing the results from the paper "Generating Multiple Objects at Spatially Distinct Locations"

License: MIT License



Generating Multiple Objects at Spatially Distinct Locations

PyTorch implementation for reproducing the results from the paper Generating Multiple Objects at Spatially Distinct Locations by Tobias Hinz, Stefan Heinrich, and Stefan Wermter, accepted for publication at the International Conference on Learning Representations (ICLR) 2019.

For more information and visualizations, see also our blog post.

Our poster can be found here.

Model-Architecture

Dependencies

  • Python 2.7
  • PyTorch 0.4.1

Please add the project folder to your PYTHONPATH and install the required dependencies:

pip install -r requirements.txt
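The setup step above can be sketched as follows (the checkout location is an assumption; adjust PROJECT_DIR to wherever you cloned the repository):

```shell
# Assumption: the repository was cloned into the current directory;
# adjust PROJECT_DIR to your checkout location.
PROJECT_DIR="$PWD/multiple-objects-gan"
# Prepend the project folder to PYTHONPATH (preserving any existing value).
export PYTHONPATH="$PROJECT_DIR${PYTHONPATH:+:$PYTHONPATH}"
# Install the pinned dependencies (uncomment once the repo is present):
# pip install -r "$PROJECT_DIR/requirements.txt"
echo "$PYTHONPATH"
```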

Data

  • Multi-MNIST: adapted from here
    • contains the three data sets used in the paper: normal (three digits per image), split_digits (0-4 in top half of image, 5-9 in bottom half), and bottom_half_empty (no digits in bottom half of the image)
    • download our data, save it to data/ and extract
  • CLEVR: adapted from here
    • Main: download our data, save it to data/ and extract
    • CoGenT: download our data, save it to data/ and extract
  • MS-COCO:
    • download our preprocessed data (bounding boxes and bounding box labels), save it to data/ and extract
    • obtain the train and validation images from the 2014 split here, extract and save them in data/MS-COCO/train/ and data/MS-COCO/test/
    • for the StackGAN architecture: obtain the preprocessed char-CNN-RNN text embeddings from here and put the files in data/MS-COCO/train/ and data/MS-COCO/test/
    • for the AttnGAN architecture: obtain the preprocessed metadata and the pre-trained DAMSM model from here
      • extract the preprocessed metadata, then add the files downloaded in the first step (bounding boxes and bounding box labels) to the data/coco/coco/train/ and data/coco/coco/test/ folders
      • put the downloaded DAMSM model into code/coco/attngan/DAMSMencoders/ and extract
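After the downloads above are extracted, the scripts expect a layout along these lines. The MS-COCO paths are taken from the steps above; the multi-mnist and clevr folder names are assumptions, so check them against the extracted archives. A sketch that creates the skeleton:

```shell
# Directory skeleton the training code reads from.
# data/MS-COCO/* and data/coco/coco/* come from the MS-COCO steps above;
# the multi-mnist and clevr folder names are assumptions.
mkdir -p data/multi-mnist data/clevr
mkdir -p data/MS-COCO/train data/MS-COCO/test
mkdir -p data/coco/coco/train data/coco/coco/test
find data -type d | sort
```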

Training

  • to start training, run sh train.sh data gpu-ids, where data selects the data set and architecture (mnist/clevr/coco-stackgan-1/coco-stackgan-2/coco-attngan) and gpu-ids selects which (and how many) GPUs to train on
  • e.g. to train on the Multi-MNIST data set on one GPU: sh train.sh mnist 0
  • e.g. to train the AttnGAN architecture on the MS-COCO data set on three GPUs: sh train.sh coco-attngan 0,1,2
  • training parameters can be adapted via code/dataset/cfg/dataset_train.yml
  • make sure the DATA_DIR in the respective code/dataset/cfg/dataset_train.yml points to the correct path
  • results are stored in output/
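As a concrete illustration of the DATA_DIR check above, following the code/dataset/cfg/dataset_train.yml pattern for the MNIST case (the config content written here is a stand-in for illustration; the real file ships with the repository):

```shell
CFG="code/mnist/cfg/mnist_train.yml"
mkdir -p "$(dirname "$CFG")"
# Stand-in config line for illustration; the repo provides the real file.
printf 'DATA_DIR: ../data/multi-mnist\n' > "$CFG"
# Point DATA_DIR at the extracted data set (GNU sed in-place edit).
sed -i 's|^DATA_DIR:.*|DATA_DIR: data/multi-mnist|' "$CFG"
grep '^DATA_DIR:' "$CFG"
```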

Evaluating

  • update the eval cfg file in code/dataset/cfg/dataset_eval.yml and adapt the path of NET_G to point to the model you want to use (by default it points to the pretrained models linked below)
  • run sh sample.sh mnist/clevr/coco-stackgan-2/coco-attngan to generate images using the specified model
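The argument to sample.sh must be one of the four data set names above; a small sketch of that check (this validation logic is hypothetical, not taken from the actual script):

```shell
ds="${1:-mnist}"   # data set requested on the command line
ok=no
for v in mnist clevr coco-stackgan-2 coco-attngan; do
  if [ "$ds" = "$v" ]; then ok=yes; fi
done
echo "$ok"
# A real run would continue with: sh sample.sh "$ds"
```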

Pretrained Models

  • pretrained model for Multi-MNIST: download, save it to models/ and extract
  • pretrained model for CLEVR: download, save it to models/ and extract
  • pretrained model for MS-COCO:
    • StackGAN architecture: download, save it to models/ and extract
    • AttnGAN architecture: download, save it to models/ and extract

Examples Generated by the Pretrained Models

Multi-MNIST

Multi-Mnist Examples

CLEVR

CLEVR Examples

MS-COCO

StackGAN Architecture

COCO-StackGAN Examples

AttnGAN Architecture

COCO-AttnGAN Examples

Acknowledgement

  • Code for the experiments on Multi-MNIST and CLEVR data sets is adapted from StackGAN-Pytorch.
  • Code for the experiments on MS-COCO with the StackGAN architecture is adapted from StackGAN-Pytorch, while the code with the AttnGAN architecture is adapted from AttnGAN.

Citing

If you find our model useful in your research, please consider citing:

@inproceedings{hinz2019generating,
title     = {Generating Multiple Objects at Spatially Distinct Locations},
author    = {Tobias Hinz and Stefan Heinrich and Stefan Wermter},
booktitle = {International Conference on Learning Representations},
year      = {2019},
url       = {https://openreview.net/forum?id=H1edIiA9KQ},
}

