Re-implementation of the code in this link: https://github.com/XifengGuo/DCEC/blob/master/ConvAE.py
With a small modification, a sigmoid activation function is added on top of the model, so the output pixel values lie between 0 and 1 and we can apply the binary cross-entropy loss.
Here the cross-entropy is averaged over the number of pixels, just like the MSE loss.
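A minimal numpy sketch of this per-pixel-averaged binary cross-entropy (the function name and the tiny example arrays are illustrative, not from the original code):

```python
import numpy as np

def binary_crossentropy_per_pixel(y_true, y_pred, eps=1e-7):
    """Binary cross-entropy averaged over batch and pixels, like MSE.

    y_true, y_pred: arrays with values in [0, 1], e.g. (batch, H, W, C).
    """
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    bce = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    return bce.mean()  # mean over all elements, not sum

# A perfect reconstruction gives a loss near zero; an inverted one is large.
target = np.array([[[0.0], [1.0]]])
loss_good = binary_crossentropy_per_pixel(target, target)
loss_bad = binary_crossentropy_per_pixel(target, 1 - target)
```

Averaging (rather than summing) over pixels keeps the loss scale comparable to MSE, so learning rates tuned for MSE remain reasonable.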
Use fully convolutional layers instead of fully connected layers (i.e., the feature-map reshaping steps are removed).
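To see why no reshape is needed, here is a shape-arithmetic sketch: strided convolutions shrink the feature maps, and a final convolution whose kernel equals the remaining spatial size collapses them to 1x1, acting like a dense layer. The kernel/stride/padding values below are illustrative, not the exact ones from the original code:

```python
import numpy as np

def conv_output_shape(h, w, kernel, stride, pad):
    """Spatial output size of a convolution (standard floor formula)."""
    return ((h + 2 * pad - kernel) // stride + 1,
            (w + 2 * pad - kernel) // stride + 1)

# A 28x28 input through three stride-2 convs shrinks to small feature maps.
shape = (28, 28)
for k, s, p in [(5, 2, 2), (5, 2, 2), (3, 2, 1)]:
    shape = conv_output_shape(*shape, k, s, p)

# A conv whose kernel equals the remaining spatial size (stride 1, no
# padding) reduces the maps to 1x1 -- a fully connected layer in disguise,
# with no Flatten/Reshape needed.
bottleneck = conv_output_shape(*shape, shape[0], 1, 0)
```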
The bottleneck features live in a 2-D space, so we can visualize them directly, but the performance is not satisfying: after 20 epochs of training, we obtained the following clustering and reconstruction results.
Encode the image to a 10-D vector and use t-SNE to visualize it.
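A sketch of the t-SNE step with scikit-learn; the random blobs stand in for the real 10-D bottleneck codes (in practice they would come from something like `encoder.predict(x)`):

```python
import numpy as np
from sklearn.manifold import TSNE

# Hypothetical stand-in for the 10-D bottleneck codes: two well-separated
# Gaussian blobs of 50 points each.
rng = np.random.default_rng(0)
codes = np.vstack([rng.normal(0.0, 0.1, (50, 10)),
                   rng.normal(3.0, 0.1, (50, 10))])

# Project the 10-D codes down to 2-D for a scatter plot.
embedded = TSNE(n_components=2, perplexity=15,
                random_state=0).fit_transform(codes)
```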
Convolutional variational autoencoder: encode the image to a 10-D vector and visualize it with t-SNE.
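The two pieces a VAE adds on top of a plain autoencoder are the reparameterization trick and the KL term; a minimal numpy sketch (the function names and batch shapes are illustrative):

```python
import numpy as np

def kl_divergence(mu, log_var):
    """KL(q(z|x) || N(0, I)) for a diagonal Gaussian, per sample."""
    return -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=-1)

def sample_z(mu, log_var, rng):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

rng = np.random.default_rng(0)
mu = np.zeros((4, 10))       # encoder mean, batch of 4, 10-D latent
log_var = np.zeros((4, 10))  # encoder log-variance
z = sample_z(mu, log_var, rng)
kl = kl_divergence(mu, log_var)  # zero when q already matches the prior
```

The total loss is the per-pixel reconstruction term plus this KL term (possibly weighted).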
Building on the convolutional variational autoencoder, I compared feeding the label only to the decoder versus feeding it to both the encoder and the decoder; here are the results.
It is easy to see that the second variant is better, and this is what most conditional VAE algorithms do.
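The conditioning in both variants amounts to concatenating a one-hot label onto the relevant inputs; a sketch with flattened inputs and hypothetical sizes (784-D image, 10-D latent):

```python
import numpy as np

def one_hot(labels, num_classes=10):
    return np.eye(num_classes)[labels]

x = np.random.rand(4, 784)                   # batch of flattened images
y = one_hot(np.array([0, 1, 2, 3]))          # one-hot class labels
z = np.random.randn(4, 10)                   # latent codes from the encoder

# Variant 1: label fed only to the decoder.
decoder_in_v1 = np.concatenate([z, y], axis=1)   # shape (4, 20)

# Variant 2 (the usual conditional VAE): label fed to both networks.
encoder_in_v2 = np.concatenate([x, y], axis=1)   # shape (4, 794)
decoder_in_v2 = np.concatenate([z, y], axis=1)   # shape (4, 20)
```

Giving the encoder the label as well lets the latent code drop class information and model only within-class variation, which is why variant 2 tends to work better.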