Convolutional Variational Autoencoder, trained on MNIST
Modified from the MNIST convolution/deconvolution variational autoencoder example here. The network demonstrated here is the generative decoder portion (see the Jupyter notebook). The decoder generates an image from a pair of coordinates in the 2D latent space through a series of Deconvolution2D layers. All computation is performed entirely in your browser. Toggling the GPU on and off shouldn't reveal any significant speed difference, as this is a fairly small network. In the architecture diagram below, the intermediate outputs at each layer are also visualized.
Interactive controls: click around the latent space to choose a point (x and y each range from -1.5 to 1.5), and use the GPU toggle to switch GPU computation on or off.
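For illustration, a minimal Keras sketch of a decoder of this kind is shown below. The layer sizes are assumptions rather than the exact architecture from the notebook, and Conv2DTranspose is used here as the current Keras equivalent of the older Deconvolution2D layer.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Reshape, Conv2DTranspose

# Map a 2D latent coordinate to a 28x28 MNIST-sized image.
decoder = Sequential([
    Dense(7 * 7 * 64, activation='relu', input_shape=(2,)),  # 2D latent input
    Reshape((7, 7, 64)),
    Conv2DTranspose(64, 3, strides=2, padding='same', activation='relu'),  # 7x7 -> 14x14
    Conv2DTranspose(32, 3, strides=2, padding='same', activation='relu'),  # 14x14 -> 28x28
    Conv2DTranspose(1, 3, padding='same', activation='sigmoid'),           # 28x28x1 output image
])

# Trained weights would be loaded here; a point in the latent space,
# e.g. (x, y) = (0.5, -1.0), is then decoded into an image.
img = decoder.predict(np.array([[0.5, -1.0]]))  # shape (1, 28, 28, 1)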