Yet another StyleGAN 1.0 implementation with Chainer
We tried to generate facial images of a specific Precure (Japanese anime) character.
This project is finished; development continues with StyleGAN 2.0 ADA for better quality.
StyleGAN is a generative adversarial network introduced by NVIDIA researchers. Like PGGAN, its output resolution grows progressively during training. This implementation supports generation from 4x4 px (stage 1) up to 1024x1024 px (stage 9) images.
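The stage-to-resolution mapping above doubles the output size at each stage, starting from 4x4. A minimal sketch of that relation (the function name is ours, not from the repo):

```python
def stage_resolution(stage):
    """Output resolution doubles per stage: 4x4 at stage 1, 1024x1024 at stage 9."""
    return 4 * 2 ** (stage - 1)

# Example: stage_resolution(5) gives the 64x64 training stage.
```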
Most of the implementation follows the original paper, but we have added some enhancements. For example, we implemented an alternative least-squares objective introduced in LSGAN. We trained the models with facial images of Cure Beauty (Smile Pretty Cure!, 2012) and other common datasets.
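The least-squares objective from LSGAN replaces the usual sigmoid cross-entropy GAN losses with squared-error terms. A hedged NumPy sketch of those losses (helper names are ours; the repo's actual implementation would use Chainer functions, and coefficients/targets may differ):

```python
import numpy as np

def lsgan_discriminator_loss(real_scores, fake_scores):
    # Push the discriminator's scores toward 1 on real images and 0 on fakes.
    return 0.5 * np.mean((real_scores - 1.0) ** 2) + 0.5 * np.mean(fake_scores ** 2)

def lsgan_generator_loss(fake_scores):
    # Push the discriminator's scores on generated images toward 1.
    return 0.5 * np.mean((fake_scores - 1.0) ** 2)
```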
- Python >= 3.6
- Chainer >= 7.0
- Pillow >= 9.1
- Numpy < 1.24
- H5py
- Matplotlib
- Cupy
- OpenCV-Python
- Pydot (Graphviz)
- `train.py` trains StyleGAN models. Use the `-h` option for more details.
- `generate.py` generates images from a trained model. Use the `-h` option for more details.
- `mix.py` mixes styles from latent files. Use the `-h` option for more details.
- `animate.py` creates a latent-space interpolation (analogy) animation from a trained model. Use the `-h` option for more details.
- `visualize.py` draws an example of a computation graph for debugging (Pydot and Graphviz are required). It takes no command-line arguments.
- `check.py` analyzes the Chainer environment. It takes no command-line arguments.
Try yourself: `python3 generate.py -g models/mnist.hdf5 -x 4 -c 256 16 -z 256 -n 100 -d mnist-images`

Try yourself: `python3 generate.py -g models/cifar10.hdf5 -x 4 -c 512 64 -t 0.7 -n 100 -d cifar10-images`

Try yourself: `python3 generate.py -g models/anime.hdf5 -x 5 -c 512 64 -t 0.8 -n 100 -d anime-images`
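Assuming the `-t` flag in these examples sets the psi value of StyleGAN's truncation trick (our reading, not confirmed here), a minimal sketch of what that trick does to a latent vector:

```python
import numpy as np

def truncate(w, w_avg, psi=0.7):
    # Pull the intermediate latent w toward the running average latent;
    # psi < 1 trades sample diversity for image fidelity.
    return w_avg + psi * (w - w_avg)
```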