Artificial Intelligence In Go
Abstract
In the past decade, artificial intelligence has revolutionized the highly complex combinatorial game of Go. The once-thought-impossible task of a computer beating human experts at Go was accomplished by Google DeepMind's AlphaGo, the first program to defeat top human players. This study explains conceptually how AlphaGo works and demonstrates the effect of the machine's internal features and settings on its final results. After discussing the internal networks and algorithms of AlphaGo, the goal is to observe the playout count at which decision stabilization is reached. To this end, experiments were conducted across the three stages of the game: opening, midgame, and endgame. Decision stabilization is measured via win-rate stabilization. Three complete games (neither side resigns) are analyzed over 500,000 playouts and then 1,000,000 playouts. These playout capacities correspond to an original model (2,801 playouts per second) and an improved model (6,000 playouts per second). A move is selected at an uncertain point in the game for playout simulations. Experiments with the different models indicate that decision stability is achieved at around 200,000-300,000 playouts; the optimal time budget varies with the speed of the model. Overall, this information gives a better understanding of artificial intelligence in Go and allows for possible improvements in the field. It also makes it possible to determine the number of playouts needed for optimal reliability, saving sizable amounts of time and computational power.
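The stabilization criterion described above can be sketched in code. The following is a minimal illustration, not AlphaGo's implementation: it accumulates random playouts for a candidate move and reports the playout count at which the running win-rate estimate stops changing beyond a small tolerance. The simulated game is a stand-in coin flip with a hypothetical true win rate; the function names, window size, and tolerance are illustrative assumptions.

```python
import random

def playout(true_win_rate=0.55):
    """Stand-in for one random game simulated to completion from a
    candidate move; returns 1 for a win, 0 for a loss. (The true win
    rate here is a hypothetical value, not a measured one.)"""
    return 1 if random.random() < true_win_rate else 0

def playouts_until_stable(max_playouts=500_000, window=10_000, eps=0.001):
    """Run playouts, re-estimating the win rate every `window` playouts.

    Returns (n, estimate) where n is the playout count at which two
    consecutive estimates differed by less than `eps`, or the cap if
    stabilization was never reached."""
    wins = 0
    prev_estimate = None
    for n in range(1, max_playouts + 1):
        wins += playout()
        if n % window == 0:
            estimate = wins / n
            if prev_estimate is not None and abs(estimate - prev_estimate) < eps:
                return n, estimate
            prev_estimate = estimate
    return max_playouts, wins / max_playouts

if __name__ == "__main__":
    n, rate = playouts_until_stable()
    print(f"win-rate estimate stabilized near {rate:.3f} after {n:,} playouts")
```

The same loop, driven by a real game simulator instead of a coin flip, is the shape of the experiment the abstract describes: the interesting quantity is not the final win rate but the playout count at which further simulation stops changing the decision.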
DOI: https://doi.org/10.18686/esta.v9i4.344
Copyright (c) 2023 Kevin Yang
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.