
[CVPR 2021 Oral] Official PyTorch implementation of Soft-IntroVAE from the paper "Soft-IntroVAE: Analyzing and Improving Introspective Variational Autoencoders"

License: Apache License 2.0

vae-pytorch vae pytorch cvpr2021 soft-introvae soft-intro-vae variational-autoencoder image-generation density-estimation


soft-intro-vae-pytorch


[CVPR 2021 Oral] Soft-IntroVAE: Analyzing and Improving Introspective Variational Autoencoders

Tal Daniel, Aviv Tamar

Official repository of the paper

CVPR 2021 Oral


Soft-IntroVAE: Analyzing and Improving Introspective Variational Autoencoders
Tal Daniel, Aviv Tamar

Abstract: The recently introduced introspective variational autoencoder (IntroVAE) exhibits outstanding image generations, and allows for amortized inference using an image encoder. The main idea in IntroVAE is to train a VAE adversarially, using the VAE encoder to discriminate between generated and real data samples. However, the original IntroVAE loss function relied on a particular hinge-loss formulation that is very hard to stabilize in practice, and its theoretical convergence analysis ignored important terms in the loss. In this work, we take a step towards better understanding of the IntroVAE model, its practical implementation, and its applications. We propose the Soft-IntroVAE, a modified IntroVAE that replaces the hinge-loss terms with a smooth exponential loss on generated samples. This change significantly improves training stability, and also enables theoretical analysis of the complete algorithm. Interestingly, we show that the IntroVAE converges to a distribution that minimizes a sum of KL distance from the data distribution and an entropy term. We discuss the implications of this result, and demonstrate that it induces competitive image generation and reconstruction. Finally, we describe two applications of Soft-IntroVAE to unsupervised image translation and out-of-distribution detection, and demonstrate compelling results.
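The core change described in the abstract, replacing the hinge-loss terms with a smooth exponential loss on generated samples, can be illustrated with a toy sketch. Below is a minimal NumPy comparison of a hinge-style penalty and a smooth exponential penalty applied to a fake-sample ELBO value. The margin, the sign conventions, and the `alpha` parameter here are illustrative assumptions for exposition, not the paper's exact objective; see the paper and the repository code for the actual losses.

```python
import numpy as np

def hinge_penalty(elbo_fake, margin=1.0):
    """IntroVAE-style hinge term on the fake-sample ELBO (illustrative):
    the penalty is clipped to zero past the margin, so its gradient
    vanishes there, which is one source of training instability."""
    return np.maximum(0.0, margin + elbo_fake)

def soft_exp_penalty(elbo_fake, alpha=2.0):
    """Soft-IntroVAE-style smooth exponential term (illustrative):
    (1/alpha) * exp(alpha * ELBO) is smooth and strictly increasing
    everywhere, so gradients never vanish abruptly."""
    return np.exp(alpha * elbo_fake) / alpha

# Compare the two penalties over a range of fake-sample ELBO values.
elbos = np.linspace(-3.0, 1.0, 9)
hinge = hinge_penalty(elbos)
soft = soft_exp_penalty(elbos)
```

The hinge term is exactly zero for ELBO values below the margin (no gradient signal), while the exponential term stays positive and monotone, which is the smoothness property the abstract credits for the improved training stability.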

Citation

Daniel, Tal, and Aviv Tamar. "Soft-IntroVAE: Analyzing and Improving the Introspective Variational Autoencoder." arXiv preprint arXiv:2012.13253 (2020).

@InProceedings{Daniel_2021_CVPR,
author    = {Daniel, Tal and Tamar, Aviv},
title     = {Soft-IntroVAE: Analyzing and Improving the Introspective Variational Autoencoder},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month     = {June},
year      = {2021},
pages     = {4391-4400}
}

Preprint on ArXiv: 2012.13253

Prerequisites

  • For your convenience, we provide an environment.yml file which installs the required packages in a conda environment named torch.
    • Use the terminal or an Anaconda Prompt and run the following command: conda env create -f environment.yml.
  • For Style-SoftIntroVAE, more packages are required, and we provide them in the style_soft_intro_vae directory.
Library       Version
-----------   -----------------------
Python        3.6 (Anaconda)
torch         >= 1.2 (tested on 1.7)
torchvision   >= 0.4
matplotlib    >= 2.2.2
numpy         >= 1.17
opencv        >= 3.4.2
tqdm          >= 4.36.1
scipy         >= 1.3.1

Repository Organization

File name                    Content
--------------------------   ------------------------------------------------------------------
/soft_intro_vae              implementation for image data
/soft_intro_vae_2d           implementations for 2D datasets
/soft_intro_vae_3d           implementations for 3D point-cloud data
/soft_intro_vae_bootstrap    implementation for image data using bootstrapping (a target decoder)
/style_soft_intro_vae        implementation for image data using ALAE's style-based architecture
/soft_intro_vae_tutorials    Jupyter Notebook tutorials for the various types of Soft-IntroVAE

Related Projects

  • March 2022: augmentation-enhanced-soft-intro-vae (GitHub) - uses differentiable augmentations to improve the image-generation FID score.

Credits

  • Adversarial Latent Autoencoders, Pidhorskyi et al., CVPR 2020 - Code, Paper.
  • FID is calculated natively in PyTorch using Seitzer's implementation - Code
