
AODA

By Xiaoyu Xiang, Ding Liu, Xiao Yang, Yiheng Zhu, Xiaohui Shen, Jan P. Allebach

This is the official PyTorch implementation of Adversarial Open Domain Adaptation for Sketch-to-Photo Synthesis.

aoda

Updates

  • Our paper will be presented at WACV 2022 on Jan 5, 19:30 GMT-10. Welcome to come and ask questions!
  • 2021.12.26: Edited some code comments.
  • 2021.12.25: Uploaded all code. Merry Christmas!
  • 2021.12.21: Updated the LICENSE and repo contents.
  • 2021.4.15: Created the repo.

Contents

  1. Introduction
  2. Prerequisites
  3. Get Started
  4. Contact
  5. License
  6. Citations
  7. Acknowledgments

Introduction

The repository contains the entire project (including all the util scripts) for our open domain sketch-to-photo synthesis network, AODA.

AODA aims to synthesize a realistic photo from a freehand sketch with its class label, even if sketches of that class are missing from the training data. The work was accepted to WACV 2022 and the CVPR 2021 Workshop. The latest version of the paper, with supplementary materials, can be found on arXiv.

In AODA, we propose a simple yet effective open-domain sampling and optimization strategy that "fools" the generator into treating fake sketches as real ones. To achieve this, we adopt a framework that jointly learns sketch-to-photo and photo-to-sketch generation. Our approach synthesizes realistic color and texture and preserves the geometric composition of open-domain sketches across various categories.
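The sampling idea can be illustrated with a toy sketch. This is an illustrative reconstruction, not the authors' actual training code; `sample_sketch`, `real_sketches`, and `photo_to_sketch` are hypothetical names:

```python
import random

# Toy illustration of open-domain sampling: for classes whose real
# sketches are missing from the training set, substitute a generated
# (fake) sketch and pass it off as real when optimizing the
# sketch-to-photo generator.

def sample_sketch(photo, cls, real_sketches, photo_to_sketch):
    """Return (sketch, is_open_domain) for one training photo.

    real_sketches: dict mapping class name -> list of real sketches
    photo_to_sketch: the photo-to-sketch generator (any callable here)
    """
    if real_sketches.get(cls):
        # In-domain class: sample a real sketch as usual.
        return random.choice(real_sketches[cls]), False
    # Open-domain class: "fool" the generator by treating the
    # generated sketch as a real one.
    return photo_to_sketch(photo), True


# Hypothetical usage with stand-in data:
real_sketches = {"cat": ["cat_sketch_0"], "giraffe": []}
fake = lambda photo: f"fake_sketch_of_{photo}"

sketch, open_domain = sample_sketch("giraffe_photo", "giraffe", real_sketches, fake)
```

In the real model both generators are trained jointly, so the quality of the substituted sketches improves as training progresses.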

If our work helps your research, please consider citing our paper.

framework

Prerequisites

  • Linux or macOS
  • Python 3 (Anaconda is recommended)
  • CPU or NVIDIA GPU + CUDA CuDNN

Get Started

Installation

First, clone this repository:

git clone https://github.com/Mukosame/AODA.git

Install the required packages: pip install -r requirements.txt.

Data Preparation

There are three datasets used in this paper: Scribble, SketchyCOCO, and QMUL-Sketch:

Scribble:

wget -N "http://www.robots.ox.ac.uk/~arnabg/scribble_dataset.zip"

SketchyCOCO:

Download from Google Drive.

QMUL-Sketch:

QMUL-Sketch combines three datasets: Handbag with 400 photos and sketches, ShoeV2 with 2000 photos and 6648 sketches, and ChairV2 with 400 photos and 1297 sketches. The complete dataset can be downloaded through Google Drive.
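After downloading, the code typically expects a dataroot with `trainA`/`trainB`/`testA`/`testB` subfolders. These folder names are an assumption based on the CycleGAN codebase this project extends, so verify them against the repo's data loaders. A minimal helper to lay out such a root:

```python
from pathlib import Path

# Hypothetical helper that creates a CycleGAN-style dataset layout
# (trainA/trainB/testA/testB). The split names are an assumption, not
# taken from this repo's documentation.

SPLITS = ("trainA", "trainB", "testA", "testB")

def prepare_dataroot(root):
    """Create the split folders under `root` and return their names."""
    root = Path(root)
    for split in SPLITS:
        (root / split).mkdir(parents=True, exist_ok=True)
    return sorted(p.name for p in root.iterdir() if p.is_dir())

# Example: set up the dataroot used by the training command below.
layout = prepare_dataroot("./dataset/scribble_10class_open")
```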

Training

Train an AODA model:

python train.py --dataroot ./dataset/scribble_10class_open/ \
                --name scribble_aoda \
                --model aoda_gan \
                --gan_mode vanilla \
                --no_dropout \
                --n_classes 10 \
                --direction BtoA \
                --load_size 260

After training, the models (models/latest_net_G_A.pth and models/latest_net_G_B.pth), their training states (states/latest.state), and a corresponding log file (train_scribble_aoda_xxx) are placed in ./checkpoints/scribble_aoda/.
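The output locations described above can be captured in a small helper; the path layout is taken from the note above, so treat this as a convenience sketch rather than part of the repo's API:

```python
from pathlib import Path

# Hypothetical helper mapping an experiment name to the artifact paths
# that training writes under ./checkpoints/<name>/, per the note above.

def expected_artifacts(name, checkpoints_dir="./checkpoints"):
    exp = Path(checkpoints_dir) / name
    return {
        "generator_A": exp / "models" / "latest_net_G_A.pth",
        "generator_B": exp / "models" / "latest_net_G_B.pth",
        "state": exp / "states" / "latest.state",
    }

# Example for the training command above (--name scribble_aoda):
paths = expected_artifacts("scribble_aoda")
```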

Testing

Please download the weights from [GoogleDrive] and put them into the weights/ folder.

You can switch --model_suffix to control the direction of synthesis (sketch-to-photo or photo-to-sketch). For different datasets, change --name and the corresponding --n_classes:

python test.py --model_suffix _B --dataroot ./dataset/scribble/testA --name scribble_aoda --model test --phase test --no_dropout --n_classes 10

Your test results will be saved at ./results/test_latest/.

Contact

Xiaoyu Xiang.

You can also leave your questions as issues in the repository. I will be glad to answer them!

License

This project is released under the BSD 3-Clause License.

Citations

@inproceedings{xiang2022adversarial,
  title={Adversarial Open Domain Adaptation for Sketch-to-Photo Synthesis},
  author={Xiang, Xiaoyu and Liu, Ding and Yang, Xiao and Zhu, Yiheng and Shen, Xiaohui and Allebach, Jan P},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  year={2022}
}

Acknowledgments

This project is based on the PyTorch implementation of CycleGAN.
