Explore how neural networks learn compact representations. Train a tiny autoencoder right in your browser.
An autoencoder is a neural network that learns to reconstruct its input via a compressed latent code.
Training minimizes reconstruction error (e.g., mean squared error), encouraging compact, meaningful representations.
Autoencoders minimize a loss function between the input \( x \) and its reconstruction \( \hat{x} \), typically the mean squared error: \( \mathcal{L}(x, \hat{x}) = \| x - \hat{x} \|^2 \).
The encoder maps \( x \rightarrow z \), and the decoder maps \( z \rightarrow \hat{x} \). In variational autoencoders (VAEs), the latent code is modeled as \( z \sim \mathcal{N}(\mu, \sigma^2) \) and training maximizes the evidence lower bound (ELBO).
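For reference, the ELBO can be written as follows (assuming, for illustration, the common choice of a standard Gaussian prior \( p(z) = \mathcal{N}(0, I) \)):

\[
\mathcal{L}_{\text{ELBO}} = \mathbb{E}_{q(z \mid x)}\big[\log p(x \mid z)\big] - \mathrm{KL}\big(q(z \mid x) \,\|\, p(z)\big)
\]

The first term rewards faithful reconstruction; the KL term keeps the latent code close to the prior.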
Train on synthetic sine waves and watch loss and reconstruction in real time.
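The training loop above can be sketched in a few lines of plain Python. This is a minimal illustration, not the demo's actual implementation: it uses a *linear* encoder and decoder (names `We`, `Wd` are ours) trained by stochastic gradient descent on the mean squared error over synthetic sine waves with random phase.

```python
import math
import random

random.seed(0)
N, K = 16, 2            # input dimension (samples per wave), latent dimension
LR, EPOCHS = 0.05, 300

def sample_wave():
    """A sine wave with random phase, sampled at N points."""
    phase = random.uniform(0, 2 * math.pi)
    return [math.sin(2 * math.pi * i / N + phase) for i in range(N)]

data = [sample_wave() for _ in range(64)]

# Linear encoder (We: K x N) and decoder (Wd: N x K); a linear
# autoencoder keeps the sketch short -- real models add nonlinearities.
We = [[random.gauss(0, 0.1) for _ in range(N)] for _ in range(K)]
Wd = [[random.gauss(0, 0.1) for _ in range(K)] for _ in range(N)]

losses = []
for epoch in range(EPOCHS):
    total = 0.0
    for x in data:
        # Forward pass: x -> z -> xhat.
        z = [sum(We[k][i] * x[i] for i in range(N)) for k in range(K)]
        xhat = [sum(Wd[i][k] * z[k] for k in range(K)) for i in range(N)]
        err = [xhat[i] - x[i] for i in range(N)]
        total += sum(e * e for e in err) / N
        # Backward pass: gradients of the mean squared error.
        gdec = [2 * e / N for e in err]                    # dL/dxhat
        gz = [sum(gdec[i] * Wd[i][k] for i in range(N))    # dL/dz
              for k in range(K)]
        for i in range(N):
            for k in range(K):
                Wd[i][k] -= LR * gdec[i] * z[k]
        for k in range(K):
            for i in range(N):
                We[k][i] -= LR * gz[k] * x[i]
    losses.append(total / len(data))

print(f"mean loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Because phase-shifted sines span a two-dimensional subspace (a sine and a cosine component), a latent dimension of 2 is enough here, and the loss drops steadily as the network finds that subspace.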