Conditional VAE
A demonstration of VAE and CVAE for generative modeling
Project Description
This project demonstrates the implementation of a Variational Autoencoder (VAE) and a Conditional Variational Autoencoder (CVAE) for generative modeling tasks. The VAE uses an encoder-decoder architecture trained with a reconstruction loss and a KL-divergence regularization term to learn meaningful latent space representations. The CVAE extends this by conditioning both the encoder and the decoder on class labels, so the class of the generated samples can be controlled.
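As a rough illustration of the objective described above, the sketch below shows a minimal fully connected CVAE in PyTorch. The input size (784, i.e. flattened 28×28 images), the number of classes (10), and all layer widths are illustrative assumptions, not the project's actual architecture; dropping the label `y` from the encoder and decoder recovers the plain VAE.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAE(nn.Module):
    """Minimal conditional VAE: the one-hot class label is concatenated
    to both the encoder input and the latent code (illustrative sizes)."""

    def __init__(self, x_dim=784, y_dim=10, h_dim=400, z_dim=20):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim + y_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(
            nn.Linear(z_dim + y_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, x_dim), nn.Sigmoid(),
        )

    def encode(self, x, y):
        h = self.enc(torch.cat([x, y], dim=1))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps, with eps ~ N(0, I)
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x, y):
        mu, logvar = self.encode(x, y)
        z = self.reparameterize(mu, logvar)
        return self.dec(torch.cat([z, y], dim=1)), mu, logvar


def cvae_loss(x_hat, x, mu, logvar):
    # Reconstruction term plus the analytic KL divergence to N(0, I)
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```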
Both models were analyzed through reconstruction tasks, latent space interpolations, and class-conditioned image generation. The project highlights the flexibility and power of these models in capturing complex data distributions.
Key Features:
- Implemented a standard VAE and extended it to a CVAE.
- Trained models on image reconstruction and class-conditional generation tasks.
- Visualized latent space interpolations to understand the learned representations (see the sketch after this list).
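A hedged sketch of how the latent interpolations and class-conditional generation listed above could be produced, reusing the illustrative `CVAE` class from the previous snippet; the function names, batch shapes, and latent size are assumptions rather than the project's actual code.

```python
import torch

@torch.no_grad()
def interpolate(model, x_a, x_b, y, steps=8):
    """Linearly interpolate between the latent means of two inputs
    and decode each intermediate point (uses the CVAE sketched above)."""
    mu_a, _ = model.encode(x_a, y)
    mu_b, _ = model.encode(x_b, y)
    alphas = torch.linspace(0, 1, steps).view(-1, 1)
    z = (1 - alphas) * mu_a + alphas * mu_b          # (steps, z_dim)
    return model.dec(torch.cat([z, y.expand(steps, -1)], dim=1))

@torch.no_grad()
def sample_class(model, label, n=16, z_dim=20, num_classes=10):
    """Class-conditional generation: sample z ~ N(0, I) and decode it
    together with a fixed one-hot label."""
    z = torch.randn(n, z_dim)
    y = torch.zeros(n, num_classes)
    y[:, label] = 1.0
    return model.dec(torch.cat([z, y], dim=1))
```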
Tools and Technologies:
- Python
- PyTorch
- Jupyter Notebook
Source Code
The complete source code for this project is available on GitHub.