
AnimeGAN Pytorch

Pytorch implementation of AnimeGAN for fast photo animation

[Example: input photo and its animated result]

  • 05/05/2024: Added color_transfer module to retain the original colors of generated images. See here.
  • 23/04/2024: Added DDP training.
  • 16/04/2024: Released AnimeGANv2 (Hayao style) with training code.

Quick start

git clone https://github.com/ptran1203/pytorch-animeGAN.git
cd pytorch-animeGAN

Run Inference on your local machine

--src can be a directory or a single image file

python3 inference.py --weight hayao:v2 --src /your/path/to/image_dir --out /path/to/output_dir
  • Python code
from inference import Predictor

predictor = Predictor(
    'hayao:v2',
    # If True, generated images retain the original colors of the input image
    retain_color=True
)

url = 'https://github.com/ptran1203/pytorch-animeGAN/blob/master/example/result/real/1%20(20).jpg?raw=true'

predictor.transform_file(url, "anime.jpg")

Pretrained weight

Model name Model Dataset Weight
Hayao:v2 AnimeGANv2 Google Landmark v2 + Hayao style GeneratorV2_gldv2_Hayao.pt
Hayao AnimeGAN train_photo + Hayao style generator_hayao.pt
Shinkai AnimeGAN train_photo + Shinkai style generator_shinkai.pt

Train on custom dataset

1. Prepare dataset

1.1 To download the dataset used in the paper, run the command below:

wget -O anime-gan.zip https://github.com/ptran1203/pytorch-animeGAN/releases/download/v1.0/dataset_v1.zip
unzip anime-gan.zip

=> The extracted data will be placed in a folder named dataset in your current directory.

1.2 Create custom data from anime video

You need to have a video file located on your machine.

Step 1. Create anime images from the video

python3 script/video_to_images.py --video-path /path/to/your_video.mp4 \
                                  --save-path dataset/MyCustomData/style \
                                  --image-size 256

Step 2. Create edge-smooth version of dataset from Step 1.

python3 script/edge_smooth.py --dataset MyCustomData --image-size 256
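
Edge smoothing blurs the anime images around their outlines, which AnimeGAN uses so the discriminator learns to penalize hard, un-anime-like edges. The repo's script/edge_smooth.py does this with OpenCV; below is a simplified NumPy-only sketch of the idea — the gradient threshold and kernel size are illustrative assumptions, not the script's actual values:

```python
import numpy as np

def edge_smooth(img: np.ndarray, kernel_size: int = 5) -> np.ndarray:
    """Blur an RGB uint8 image only in the neighborhood of strong edges."""
    gray = img.mean(axis=2)
    # Crude gradient magnitude as an edge detector (stand-in for Canny)
    gy, gx = np.gradient(gray)
    edges = np.hypot(gx, gy) > 30.0  # threshold is an illustrative assumption
    # Dilate the edge mask so the blur covers a band around each edge
    pad = kernel_size // 2
    mask = np.zeros_like(edges)
    for dy in range(-pad, pad + 1):
        for dx in range(-pad, pad + 1):
            mask |= np.roll(np.roll(edges, dy, axis=0), dx, axis=1)
    # Box-blur the whole image, then keep the blur only where the mask is set
    padded = np.pad(img.astype(np.float64),
                    ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    blurred = np.zeros_like(img, dtype=np.float64)
    for dy in range(kernel_size):
        for dx in range(kernel_size):
            blurred += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= kernel_size ** 2
    out = img.astype(np.float64)
    out[mask] = blurred[mask]
    return out.astype(np.uint8)
```

The effect is that flat regions stay pixel-identical to the original while edges get softened, which is exactly the contrast the edge-promoting adversarial loss exploits.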

2. Train animeGAN

To train AnimeGAN from the command line, run train.py as follows:

python3 train.py --anime_image_dir dataset/Hayao \
                 --real_image_dir dataset/photo_train \
                 --model v2 \                 # AnimeGAN version, can be v1 or v2
                 --batch 8 \
                 --amp \                      # Turn on Automatic Mixed Precision training
                 --init_epochs 10 \
                 --exp_dir runs \
                 --save-interval 1 \
                 --gan-loss lsgan \           # one of [lsgan, hinge, bce]
                 --init-lr 1e-4 \
                 --lr-g 2e-5 \
                 --lr-d 4e-5 \
                 --wadvd 300.0 \              # Adversarial loss weight for D
                 --wadvg 300.0 \              # Adversarial loss weight for G
                 --wcon 1.5 \                 # Content loss weight
                 --wgra 3.0 \                 # Gram loss weight
                 --wcol 30.0 \                # Color loss weight
                 --use_sn                     # If set, use spectral normalization (default: off)
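
As a rough illustration of how the weight flags above interact, the generator objective is a weighted sum of its loss terms. The function below is a hypothetical sketch for intuition only — the repo's actual loss code may differ in structure and term definitions:

```python
def generator_loss(adv: float, con: float, gra: float, col: float,
                   wadvg: float = 300.0, wcon: float = 1.5,
                   wgra: float = 3.0, wcol: float = 30.0) -> float:
    """Weighted sum of adversarial, content, Gram (texture) and color terms.

    Defaults mirror the example flags above (--wadvg, --wcon, --wgra, --wcol).
    """
    return wadvg * adv + wcon * con + wgra * gra + wcol * col
```

Raising --wcol, for instance, pushes the generator to preserve the input's colors at the expense of stylization strength.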

3. Transform images

To convert a folder of images or a single image, run inference.py, for example:

--src and --out can be a directory or a file

python3 inference.py --weight path/to/Generator.pt \
                     --src dataset/test/HR_photo \
                     --out inference_images

4. Transform video

To convert a video to anime version:

Be careful when choosing --batch-size: a large batch combined with a high-resolution video can cause a CUDA out-of-memory error.

python3 inference.py --weight hayao:v2 \
                     --src test_vid_3.mp4 \
                     --out test_vid_3_anime.mp4 \
                     --batch-size 4
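
The --batch-size flag controls how many decoded frames are pushed through the generator at once, which is why GPU memory scales with both batch size and frame resolution. A minimal sketch of the chunking (illustrative only, not the repo's implementation):

```python
from typing import Iterator, List, Sequence, TypeVar

T = TypeVar("T")

def batched(frames: Sequence[T], batch_size: int) -> Iterator[List[T]]:
    """Yield consecutive chunks of frames; the last chunk may be smaller.

    Larger chunks mean fewer generator calls but more GPU memory per call.
    """
    for i in range(0, len(frames), batch_size):
        yield list(frames[i:i + batch_size])
```

Halving --batch-size roughly halves per-step activation memory, so it is the first knob to turn when a high-resolution video triggers an out-of-memory error.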

Result of AnimeGAN v2

[Examples: input photos alongside their Hayao style v2 outputs]

With color transfer module

The colors of the generated images may differ from those of the original images, depending on the anime dataset the model was trained on. To mitigate this, I added a color_transfer module that transfers the colors of the input image to the generated image.
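
One common way to implement such a transfer is to match the per-channel statistics of the generated image to those of the input (a Reinhard-style transfer). The sketch below is an assumption about the approach, not the repo's actual color_transfer code, which may work in a different color space:

```python
import numpy as np

def transfer_color(generated: np.ndarray, source: np.ndarray) -> np.ndarray:
    """Shift each RGB channel of `generated` to the mean/std of `source`.

    Simplified per-channel statistics matching; a hypothetical stand-in
    for the repo's color_transfer module.
    """
    g = generated.astype(np.float64)
    s = source.astype(np.float64)
    for c in range(3):
        g_mean, g_std = g[..., c].mean(), g[..., c].std() + 1e-8
        s_mean, s_std = s[..., c].mean(), s[..., c].std()
        # Standardize the generated channel, then rescale to the source stats
        g[..., c] = (g[..., c] - g_mean) / g_std * s_std + s_mean
    return np.clip(g, 0, 255).astype(np.uint8)
```

Because only global channel statistics move, the stylized textures are preserved while the overall palette snaps back toward the input photo.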

[Examples: input photos alongside Hayao style v2 + color transfer outputs]
More results - Hayao V2

More results - Hayao V2 Retain color
