Variational autoencoders (VAEs) trained on DOOM 1/2 gameplay videos
Latent representations and unsupervised pretraining boost data efficiency on more challenging supervised [1] and reinforcement learning [2] tasks. The goal of this project is to provide both the Doom and machine learning communities with:
- High-quality datasets comprising Doom gameplay
- Various ready-to-run VAE experiments (a minimal model sketch follows this list)
- Suitable boilerplate for derivative projects
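To ground the VAE terminology used throughout, here is a minimal sketch of the kind of model these experiments train. It is illustrative only: the frame size, channel widths, and latent dimension are assumptions, not the repo's actual configuration.

```python
import torch
import torch.nn as nn

class ConvVAE(nn.Module):
    """Minimal convolutional VAE for 64x64 RGB gameplay frames (illustrative)."""

    def __init__(self, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),    # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(),
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(128 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(128 * 8 * 8, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 128 * 8 * 8)
        self.decoder = nn.Sequential(
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),    # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(self.fc_dec(z)), mu, logvar
```

The latent code `z` is the learned representation that downstream supervised or RL tasks can reuse.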
References:
- [1] 3FabRec: Fast Few-shot Face alignment by Reconstruction
- [2] DARLA: Improving Zero-Shot Transfer in Reinforcement Learning
- [3] Progressive Growing of GANs for Improved Quality, Stability, and Variation
- [4] beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
Gameplay videos are sourced from YouTube. Special thanks to the following creators for their contributions to the community and this dataset; these individuals are truly the lifeblood of the Doom community:
This project will seek permission from the video authors before distributing the videos directly (e.g. from an S3 bucket). Currently, youtube_dl is used to download the videos to a local cache. Note: code like this, which provides access to copyrighted content, is explicitly recognized as fair use by GitHub. If your content has made its way into the dataset and you would prefer it be omitted, please open an issue.
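For reference, a minimal sketch of caching a video with youtube_dl is shown below; the cache directory and option values are illustrative assumptions, not the repo's actual settings.

```python
import youtube_dl

CACHE_DIR = "data/videos"  # hypothetical local cache location

def download_to_cache(url):
    """Download one gameplay video into the local cache via youtube_dl."""
    opts = {
        "format": "bestvideo[ext=mp4]/best",          # prefer mp4 video
        "outtmpl": f"{CACHE_DIR}/%(id)s.%(ext)s",     # file named by video ID
    }
    with youtube_dl.YoutubeDL(opts) as ydl:
        ydl.download([url])
```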
Please open an issue or pull request if you would like to contribute.
- Progressive growing decoder à la [3]
- Implement beta loss term from [4] (see the sketch after this list)
- Implement FID(orig, recons) loss (see the sketch after this list)
- Dataset compiler
- Doom gameplay video links
- Implement entrypoints
- Implement datasets
- Resnet boilerplate
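Two of the loss items above have standard closed forms. First, a minimal sketch of the beta-weighted ELBO from [4], assuming a diagonal-Gaussian posterior (`mu`, `logvar`) as produced by the encoder sketched earlier; setting `beta=1.0` recovers the standard VAE loss:

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(recon, x, mu, logvar, beta=4.0):
    """Reconstruction loss plus beta-weighted KL term; beta > 1 encourages
    disentangled latents [4]."""
    recon_loss = F.mse_loss(recon, x, reduction="sum")
    # KL(q(z|x) || N(0, I)) in closed form for a diagonal Gaussian posterior
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + beta * kl
```

For the FID item, the Fréchet distance between Gaussians fit to features of originals and reconstructions could be computed as below. The `frechet_distance` helper is hypothetical; the feature arrays would typically come from a pretrained Inception network, which this sketch leaves out:

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_a, feats_b):
    """Fréchet distance between Gaussians fit to two feature sets of shape (N, D)."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):  # discard tiny imaginary parts from sqrtm
        covmean = covmean.real
    diff = mu_a - mu_b
    return diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean)
```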