Tetris AI Mini

In December 2024 I developed this project to demonstrate training an AI agent to play a very simple game using Q-Learning, a popular reinforcement learning technique.

[Animation: side-by-side view of the AI playing]

For a full, detailed breakdown of how I designed and built this, check out my article about this project on Medium!

Explanation of Code Files

All code is available in the src folder:

  • tetris.py - The core Tetris game engine. Capable of representing the game board, "dropping" squares into columns, and determining rewards for specific moves.
  • play.py - If you want to try playing the game of Tetris yourself with your keyboard, run this!
  • representation.py - Converts the core Tetris game board (GameState) into a flattened list of integers, a numeric representation of the board that can be fed into a neural network.
  • intellience.py - Contains the TetrisAI class, a higher-level class for creating, manipulating, and training the custom-built neural network.
  • train.py - The training script. Applies Q-Learning to train the Tetris AI to play the game through reinforcement learning.
  • evaluate.py - After you train and save a model via the train.py script, load the model into evaluate.py to observe it playing move by move.
  • visuals.py - Simple functions I wrote to generate a graphical representation of the board at any state in the game.
  • visgen.py - Uses visuals.py to generate a series of frames of the AI playing, which can be stitched together with FFMPEG.
  • tools.py - Simple tools used throughout this project.
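
To make the roles of tetris.py, representation.py, and train.py concrete, here is a minimal, self-contained sketch of the same ideas: a 4x4 board, a flattened numeric representation, and the Q-learning update. All function names and reward values below are hypothetical illustrations, not the repo's actual code, and this sketch uses a Q-table where the real project trains a neural network:

```python
import random

# Hypothetical, simplified versions of the ideas in tetris.py, representation.py,
# and train.py. Names and reward values are illustrative only; the real project
# trains a neural network rather than the Q-table used here.

def flatten(board):
    """representation.py idea: 4x4 board (list of rows) -> flat list of 16 ints."""
    return [cell for row in board for cell in row]

def drop(board, col):
    """tetris.py idea: drop a square into `col`, return an illustrative reward."""
    for r in range(3, -1, -1):                 # scan from the bottom row upward
        if board[r][col] == 0:
            board[r][col] = 1
            return 10 if all(board[r]) else 1  # bonus for completing a row
    return -5                                  # column already full: penalty

def choose_action(q, state, epsilon=0.1):
    """train.py idea: epsilon-greedy choice over the 4 columns."""
    if random.random() < epsilon:
        return random.randrange(4)
    return max(range(4), key=lambda a: q.get((state, a), 0.0))

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q.get((next_state, a), 0.0) for a in range(4))
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```

Here `state` would be the flattened board made hashable (e.g. a tuple); the real TetrisAI instead feeds the flattened list into a neural network that predicts a Q-value per column.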

Model Checkpoints

Checkpoint: download
Commit: a02de2357791f170b9f9090347a22e72646fde73
Description: Trained on 85,000 experiences, though that many is likely not required to reach this level of performance (there is evidence to suggest only 4,500 would have been enough). Trained from the ground up, starting from a blank new game state with no prior knowledge. Plays the game perfectly, filling each row from left to right until all 16 cells are filled. Beyond a blank starting state, this model has also proven to play perfectly from random, uneven starting positions. Training log file here.

Notable Commits

  • a02de2357791f170b9f9090347a22e72646fde73 - first version confirmed to train successfully: with this approach, the model learned to play the game perfectly.

Use FFMPEG to stitch images into video

To export to MP4 video (-r 15 after the input sets the output frame rate; to control how fast the input images are read, place -framerate 15 before -i):

ffmpeg -i %06d.png -r 15 output.mp4

You can also export to a GIF:

ffmpeg -i %06d.png -r 15 output.gif
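
If the single-pass GIF comes out dithered or off-color, FFMPEG's standard two-pass palettegen/paletteuse recipe (a general FFMPEG technique, not something from this repo) usually produces a cleaner result:

```shell
# Pass 1: build an optimized 256-color palette from the frames
ffmpeg -i %06d.png -vf palettegen palette.png
# Pass 2: encode the GIF using that palette
ffmpeg -i %06d.png -i palette.png -filter_complex paletteuse -r 15 output.gif
```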

Other Misc. Resources

  • The 4x4 grid used in the visuals was designed in PowerPoint. Deck here.
