How to build your own AlphaZero AI using Python and Keras
Connect4
The game that our algorithm will learn to play is Connect4 (or Four In A Row). Not quite as complex as Go, but there are still 4,531,985,219,092 game positions in total.
The game rules are straightforward. Players take it in turns to enter a piece of their colour in the top of any available column. The first player to get four of their colour in a row, either vertically, horizontally or diagonally, wins. If the entire grid is filled without a four-in-a-row being created, the game is drawn.
Here's a summary of the key files that make up the codebase:
The game.py file contains the game rules for Connect4.
Each square is allocated a number from 0 to 41, as follows:
The game.py file gives the logic behind moving from one game state to another, given a chosen action. For example, given the empty board and action 38, the takeAction method returns a new game state, with the starting player's piece at the bottom of the centre column.
You can replace the game.py file with any game file that conforms to the same API, and the algorithm will, in principle, learn strategy through self play, based on the rules you have given it.
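To make the shape of that API concrete, here is a minimal sketch of a Connect4 state in the same spirit. The class and method names follow the descriptions above, but the details are illustrative rather than the exact implementation in game.py.

import numpy as np

class GameState:
    # Minimal sketch of a Connect4 state: 42 squares numbered 0-41,
    # where square index = row * 7 + column and row 0 is the top row.
    def __init__(self, board=None, playerTurn=1):
        self.board = np.zeros(42, dtype=int) if board is None else board
        self.playerTurn = playerTurn  # 1 or -1

    def allowedActions(self):
        # For each non-full column, the allowed action is its lowest empty square.
        allowed = []
        for col in range(7):
            empties = [col + 7 * row for row in range(6) if self.board[col + 7 * row] == 0]
            if empties:
                allowed.append(max(empties))  # the largest index is the lowest square
        return allowed

    def takeAction(self, action):
        # Returns a new state with the piece placed and the turn passed over,
        # e.g. action 38 on an empty board fills the bottom of the centre column.
        newBoard = self.board.copy()
        newBoard[action] = self.playerTurn
        return GameState(newBoard, -self.playerTurn)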
The run.ipynb notebook contains the code that starts the learning process. It loads the game rules and then iterates through the main loop of the algorithm, which consists of three stages:
There are two agents involved in this loop, the best_player and the current_player.
The best_player contains the best performing neural network and is used to generate the self play memories. The current_player then retrains its neural network on these memories and is then pitted against the best_player. If it wins, the neural network inside the best_player is switched for the neural network inside the current_player, and the loop starts again.
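Schematically, one pass of that loop looks like the sketch below. The helper names self_play_games and evaluate, and the agents' model attribute, are placeholders for illustration, not the exact functions in the notebook.

def training_iteration(best_player, current_player, memory):
    # 1. Self play: the best_player plays games against itself; each position,
    #    its MCTS search probabilities and the final result go into memory.
    memory.extend(self_play_games(best_player))

    # 2. Retrain: the current_player updates its neural network on the memory.
    current_player.replay(memory)

    # 3. Evaluate: the current_player is pitted against the best_player; if it
    #    wins convincingly, its network is copied into the best_player.
    if evaluate(current_player, best_player):
        best_player.model.set_weights(current_player.model.get_weights())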
The agent.py file contains the Agent class (a player in the game). Each player is initialised with its own neural network and Monte Carlo Search Tree.
The simulate method runs the Monte Carlo Tree Search process. Specifically, the agent moves to a leaf node of the tree, evaluates the node with its neural network and then backfills the value of the node up through the tree.
The act method repeats the simulation multiple times to understand which move from the current position is most favourable. It then returns the chosen action to the game, to enact the move.
The replay method retrains the neural network, using memories from previous games.
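Once the simulations are complete, the act method has to turn the visit counts at the root into a single move. A sketch of that final step is below; the function is illustrative rather than the repo's code, with a temperature-style parameter tau controlling how deterministic the choice is.

import numpy as np

def choose_action(visit_counts, tau=1.0):
    # Turn MCTS visit counts into a move: sample in proportion to N^(1/tau),
    # or pick the most-visited move outright when tau is (effectively) zero.
    visit_counts = np.asarray(visit_counts, dtype=float)
    if tau <= 1e-2:
        pi = np.zeros_like(visit_counts)
        pi[np.argmax(visit_counts)] = 1.0
    else:
        pi = visit_counts ** (1.0 / tau)
        pi /= pi.sum()
    action = int(np.random.choice(len(pi), p=pi))
    return action, pi

# e.g. choose_action([10, 40, 25, 120, 30, 15, 5], tau=1.0)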
The model.py file contains the Residual_CNN class, which defines how to build an instance of the neural network.
It uses a condensed version of the neural network architecture in the AlphaGo Zero paper, i.e. a convolutional layer, followed by many residual layers, before splitting into a value head and a policy head.
The depth and number of convolutional filters can be specified in the config file.
The Keras library is used to build the network, with a TensorFlow backend.
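As a rough guide to the shape of that network, here is a minimal sketch written against tf.keras (the repo itself uses standalone Keras). The kernel sizes, filter counts and head widths below are illustrative defaults, not the values from the config file.

from tensorflow.keras import layers, models

def residual_block(x, filters):
    # Two convolutions plus a skip connection, as in the AlphaGo Zero residual tower.
    shortcut = x
    x = layers.Conv2D(filters, (4, 4), padding='same', use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.LeakyReLU()(x)
    x = layers.Conv2D(filters, (4, 4), padding='same', use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.add([shortcut, x])
    return layers.LeakyReLU()(x)

def build_network(input_shape=(6, 7, 2), n_actions=42, n_res_blocks=5, filters=75):
    inputs = layers.Input(shape=input_shape)

    # Initial convolutional layer.
    x = layers.Conv2D(filters, (4, 4), padding='same', use_bias=False)(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.LeakyReLU()(x)

    # Residual tower (depth and filter count are set in the config file).
    for _ in range(n_res_blocks):
        x = residual_block(x, filters)

    # Value head: a scalar in [-1, 1] estimating the game outcome.
    v = layers.Conv2D(1, (1, 1), use_bias=False)(x)
    v = layers.BatchNormalization()(v)
    v = layers.LeakyReLU()(v)
    v = layers.Flatten()(v)
    v = layers.Dense(20)(v)
    v = layers.LeakyReLU()(v)
    value_head = layers.Dense(1, activation='tanh', name='value_head')(v)

    # Policy head: one logit per square (the custom loss masks illegal moves).
    p = layers.Conv2D(2, (1, 1), use_bias=False)(x)
    p = layers.BatchNormalization()(p)
    p = layers.LeakyReLU()(p)
    p = layers.Flatten()(p)
    policy_head = layers.Dense(n_actions, name='policy_head')(p)

    return models.Model(inputs=inputs, outputs=[value_head, policy_head])

Both heads share the residual tower, so a single forward pass returns a value estimate and a full set of move logits.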
To view individual convolutional filters and densely connected layers in the neural network, run the following inside the run.ipynb notebook:
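The exact snippet is not reproduced here, but it is along the lines of the call below, assuming the Residual_CNN instance exposes a viewLayers-style plotting helper (treat the name as an assumption if your version of the repo differs).

# Plot the weights of each layer of the current player's network
# (viewLayers is assumed to be the plotting helper on the Residual_CNN class).
current_player.model.viewLayers()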
The MCTS.py file contains the Node, Edge and MCTS classes, which constitute a Monte Carlo Search Tree.
The MCTS class contains the moveToLeaf and backFill methods previously mentioned, and instances of the Edge class store the statistics about each potential move.
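A sketch of those pieces is below: the statistics an Edge might hold, the PUCT-style rule moveToLeaf uses to pick an edge, and the backfill update. The constants and sign conventions here are simplifications of what MCTS.py actually does.

import math

class Edge:
    # Statistics stored for one (state, action) pair in the search tree.
    def __init__(self, prior):
        self.N = 0        # visit count
        self.W = 0.0      # total value accumulated through this edge
        self.Q = 0.0      # mean value, W / N
        self.P = prior    # prior probability from the policy head

def select_edge(edges, c_puct=1.0):
    # The PUCT rule: balance the current estimate Q against an exploration
    # bonus U that favours high-prior, rarely visited edges.
    total_visits = sum(edge.N for edge in edges)
    def puct(edge):
        U = c_puct * edge.P * math.sqrt(total_visits + 1) / (1 + edge.N)
        return edge.Q + U
    return max(edges, key=puct)

def backfill(path, value):
    # Push the leaf evaluation back up the visited edges, flipping the sign
    # at each ply because the players alternate.
    for edge in reversed(path):
        edge.N += 1
        edge.W += value
        edge.Q = edge.W / edge.N
        value = -value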
The config.py file is where you set the key parameters that influence the algorithm.
Adjusting these variables will affect the running time, neural network accuracy and overall success of the algorithm. The above parameters produce a high quality Connect4 player, but take a long time to do so. To speed the algorithm up, try the following parameters instead.
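Neither parameter set is reproduced here, but for orientation the kinds of settings involved look something like this; the names mirror those in config.py, while the values are placeholders rather than either recommended set.

EPISODES = 30          # self-play games generated per iteration
MCTS_SIMS = 50         # MCTS simulations run per move
MEMORY_SIZE = 30000    # number of positions kept for retraining
TURNS_UNTIL_TAU0 = 10  # after this many turns, moves are chosen deterministically
CPUCT = 1              # exploration constant in the tree search
BATCH_SIZE = 256       # mini-batch size used when retraining
TRAINING_LOOPS = 10    # retraining passes per iteration
HIDDEN_CNN_LAYERS = [{'filters': 75, 'kernel_size': (4, 4)}] * 6  # residual tower spec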
The funcs.py file contains the playMatches and playMatchesBetweenVersions functions, which play matches between two agents.
To play against your creation, run the following code (it's also in the run.ipynb notebook):
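The snippet is not reproduced here, but the call is along these lines. The argument order is my reading of funcs.py, and the convention that a player version of -1 means a human player is an assumption worth checking against the notebook.

from game import Game
from funcs import playMatchesBetweenVersions
import loggers as lg

env = Game()
playMatchesBetweenVersions(
    env,                # the game environment
    1,                  # run_version: the run_archive folder to load models from
    -1,                 # player1version: -1 is assumed to mean a human player
    12,                 # player2version: the saved network version to play against
    10,                 # EPISODES: number of games to play
    lg.logger_tourney,  # logger for the match
    0                   # turns_until_tau0: play deterministically from the first move
)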
When you run the algorithm, all model and memory files are saved in the run folder, in the root directory.
To restart the algorithm from this checkpoint later, transfer the run folder to the run_archive folder, attaching a run number to the folder name. Then, enter the run number, model version number and memory version number into the initialise.py file, corresponding to the location of the relevant files in the run_archive folder. Running the algorithm as usual will then start from this checkpoint.
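For example, initialise.py contains three settings along these lines; the variable names reflect my reading of the repo, and the values below are illustrative.

INITIAL_RUN_NUMBER = 1       # which run_archive folder to resume from
INITIAL_MODEL_VERSION = 49   # the saved neural network version to load
INITIAL_MEMORY_VERSION = 49  # the saved memory file to load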
An instance of the Memory class (in memory.py) stores the memories of previous games, which the algorithm uses to retrain the neural network of the current_player.
The loss.py file contains a custom loss function that masks predictions for illegal moves before passing them to the cross-entropy loss function.
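The idea is along these lines (a sketch using TensorFlow ops; the real loss.py may differ in detail): every square whose target probability is zero, which includes every illegal move, has its logit pushed to a large negative number so it contributes nothing to the softmax.

import tensorflow as tf

def softmax_cross_entropy_with_logits(y_true, y_pred):
    # y_true: MCTS move probabilities (zero for illegal moves)
    # y_pred: raw logits from the policy head
    mask = tf.equal(y_true, tf.zeros_like(y_true))
    negatives = tf.fill(tf.shape(y_true), -100.0)
    masked_logits = tf.where(mask, negatives, y_pred)
    return tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=masked_logits)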
The settings.py file defines the locations of the run and run_archive folders.
Log files are saved to the log folder inside the run folder.
To turn on logging, set the values of the logger_disabled variables to False inside the loggers.py file.
Viewing the log files will help you to understand how the algorithm works and see inside its mind. For example, here is a sample from the logger.mcts file.
Equally, from the logger.tourney file, you can see the probabilities attached to each move during the evaluation phase:
Training over a couple of days produces the following chart of loss against mini-batch iteration number:
The top line is the error in the policy head (the cross entropy of the MCTS move probabilities against the output from the neural network). The bottom line is the error in the value head (the mean squared error between the actual game value and the neural network's prediction of the value). The middle line is an average of the two.
Clearly, the neural network is getting better at predicting the value of each game state and the likely next moves. To show how this results in stronger and stronger play, I ran a league between 17 players, ranging from the 1st iteration of the neural network, up to the 49th. Each pairing played twice, with both players having a chance to play first.
Here are the final standings:
Clearly, the later versions of the neural network are superior to the earlier versions, winning most of their games. It also appears that the learning hasn't yet saturated; with further training time, the players would continue to get stronger, learning more and more intricate strategies.
As an example, one clear strategy that the neural network has favoured over time is grabbing the centre column early. Observe the difference between the first version of the algorithm and, say, the 30th version:
1st neural network version
30th neural network version
This is a good strategy, as many winning lines require the centre column; claiming it early ensures your opponent cannot take advantage of it. This has been learnt by the neural network, without any human input.
There is a game.py file for a game called Metasquares in the games folder. This involves placing X and O markers in a grid to try to form squares of different sizes. Larger squares score more points than smaller squares and the player with the most points when the grid is full wins.
If you switch the Connect4 game.py file for the Metasquares game.py file, the same algorithm will learn how to play Metasquares instead.
Hopefully you find this article useful. Let me know in the comments below if you find any typos or have questions about anything in the codebase or article, and I'll get back to you as soon as possible.