In December 2024 I developed this project to demonstrate training an AI agent to play a very simple game using Q-Learning, a popular reinforcement learning technique.
For a full, detailed breakdown of how I designed and built this, check out my article on this project on Medium!
All code is available in the src folder:
- tetris.py - The core Tetris game engine. Capable of representing the game board, "dropping" squares into columns, and determining rewards for specific moves.
- play.py - If you want to try playing the game of Tetris yourself with your keyboard, run this!
- representation.py - Converts the core Tetris game board (`GameState`) into a flattened list of integers that can be fed into a neural network (a numeric representation of the game board).
- intellience.py - Contains the `TetrisAI` class, a higher-level class focused on creating, manipulating, and training the custom-built neural network.
- train.py - The training script. Applies Q-Learning to train the Tetris AI to play the game through reinforcement learning.
- evaluate.py - After you train and save a model via the train.py script, load the model into evaluate.py to observe it playing move by move.
- visuals.py - Simple functions I wrote to generate a graphical representation of the board at any state in the game.
- visgen.py - Uses visuals.py to generate a series of frames of the AI playing that can be stitched together with FFmpeg.
- tools.py - Simple tools used throughout this project.
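To illustrate the idea behind representation.py, here is a minimal sketch of flattening a 2D board into a 1D numeric list suitable as neural-network input. The function name and the 4x4 board of 0/1 cells are assumptions for illustration; the real `GameState` class may store the board differently.

```python
# Hypothetical sketch of the board-flattening idea behind representation.py.
# Assumes a 4x4 board where each cell is 0 (empty) or 1 (filled).

def flatten_board(board: list[list[int]]) -> list[float]:
    """Flatten a 2D board, row by row, into a 1D list of floats."""
    return [float(cell) for row in board for cell in row]

board = [
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 0, 0],
]
print(flatten_board(board))  # 16 values, one per cell, read left to right, top to bottom
```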
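For readers unfamiliar with Q-Learning, the core of what train.py applies is the Bellman update: nudge the value of a state-action pair toward the observed reward plus the discounted value of the best next action. The sketch below shows the tabular form with standard hyperparameter names (`alpha`, `gamma`); the actual project approximates Q-values with a neural network rather than a table, and these names are not taken from train.py.

```python
# Minimal tabular Q-Learning update, for illustration only.
# The project itself uses a neural network as the Q-function approximator.

def q_update(q, state, action, reward, next_state,
             n_actions=4, alpha=0.1, gamma=0.9):
    """Q(s,a) += alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q.get((next_state, a), 0.0) for a in range(n_actions))
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

q = {}  # maps (state, action) -> estimated value
q_update(q, state=0, action=1, reward=1.0, next_state=1)
print(q[(0, 1)])  # 0.1
```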
| Checkpoint | Commit | Description |
| --- | --- | --- |
| download | a02de2357791f170b9f9090347a22e72646fde73 | Trained on 85,000 experiences (though that many are likely not required to reach this level of performance; there is evidence to suggest only 4,500 would have been enough). Trained from scratch with no prior knowledge, starting from a blank new-game state. Plays the game perfectly, filling each row from left to right until all 16 are filled. Beyond the blank starting state, this model has also proven to play perfectly from random, uneven starting positions. Training log file here. |
Commit a02de2357791f170b9f9090347a22e72646fde73 was the first version to confirm that training works: with this approach, the model learned to play the game perfectly.
To export the generated frames to an MP4 video (note that `-framerate` must come before `-i` so the input images are read at 15 fps):

```
ffmpeg -framerate 15 -i %06d.png output.mp4
```

You can also export to a GIF:

```
ffmpeg -framerate 15 -i %06d.png output.gif
```
- The 4x4 grid used in the visuals was designed in PowerPoint. Deck here.