AlphaGo uses Monte Carlo tree search (MCTS) as its base algorithm[1]. The algorithm has been used in Go AIs for about ten years, and it made a huge impact when it was introduced: the Go bots got stronger practically overnight.
The novel thing AlphaGo did was a similar jump in algorithmic power. It introduced two neural networks for
1) predicting good moves in the current position
2) evaluating the "value" of a given board position
Especially 2) has been hard to do in Go without playing the game out to the end.
This has a huge impact on the efficiency of the basic tree search. 1) narrows the search width by eliminating obviously bad choices, and 2) lets the evaluation happen at a shallower depth.
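To make the two roles concrete, here is a minimal, hand-rolled sketch of that search structure. It is not Go and not AlphaGo's actual code: the game is a toy (players alternately add 1 or 2 to a running total; whoever reaches exactly 10 wins), and `policy` and `value_net` are hand-written heuristics standing in for the learned networks. It only illustrates where the two networks plug into the tree search: the policy prior biases which branches get explored, and the value estimate replaces playing positions out to the end.

```python
import math

TARGET = 10  # toy game: reach exactly 10 by adding 1 or 2 per turn

def legal_moves(total):
    return [m for m in (1, 2) if total + m <= TARGET]

def policy(total):
    """Stand-in for the policy network: a prior over legal moves (uniform
    here). A trained policy net concentrates probability on promising
    moves, which narrows the effective search width."""
    moves = legal_moves(total)
    return {m: 1.0 / len(moves) for m in moves}

def value_net(total):
    """Stand-in for the value network: estimate the position for the player
    to move WITHOUT playing the game out. For this toy game an exact rule
    happens to exist: totals congruent to 1 mod 3 are lost."""
    return -1.0 if total % 3 == 1 else 1.0

class Node:
    def __init__(self, total):
        self.total = total
        self.children = {}  # move -> Node
        self.prior = {}     # move -> prior probability from `policy`
        self.N = {}         # move -> visit count
        self.W = {}         # move -> summed value, from this node's view

    def expand(self):
        self.prior = policy(self.total)
        self.N = {m: 0 for m in self.prior}
        self.W = {m: 0.0 for m in self.prior}

def select(node, c_puct=1.0):
    """PUCT-style selection: exploit moves with high mean value Q, explore
    moves the prior likes but that have few visits so far."""
    total_n = sum(node.N.values())
    def score(m):
        q = node.W[m] / node.N[m] if node.N[m] else 0.0
        u = c_puct * node.prior[m] * math.sqrt(total_n + 1) / (1 + node.N[m])
        return q + u
    return max(node.prior, key=score)

def simulate(node):
    """One playout; returns the value from the current player's view."""
    if node.total == TARGET:
        return -1.0                    # opponent just hit the target: loss
    if not node.prior:
        node.expand()
        return value_net(node.total)   # value net truncates the search here
    m = select(node)
    if m not in node.children:
        node.children[m] = Node(node.total + m)
    v = -simulate(node.children[m])    # zero-sum game: flip the sign
    node.N[m] += 1
    node.W[m] += v
    return v

def best_move(total, n_sims=400):
    root = Node(total)
    root.expand()
    for _ in range(n_sims):
        simulate(root)
    return max(root.N, key=root.N.get)  # most-visited move at the root
```

For example, `best_move(0)` settles on adding 1, the winning reply in the toy game. Swapping `policy` and `value_net` for trained networks over real board positions is, in caricature, the AlphaGo move.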
So I think it's not just processing power. It's a true algorithmic jump, made possible by recent advances in machine learning.
> Especially 2) has been hard to do in Go without playing the game out to the end.
This is what struck me as especially interesting, as a non-player watching the commentary. The commentators, a 9-dan pro and the editor of a Go publication, were having real problems figuring out what the score was, or who was ahead. When Lee resigned the game, it came as a total surprise to both of them.
Just keeping score in Go appears to be harder than in a lot of other games.
Score in Go is captured stones plus surrounded empty territory at the end of the game. Captures are well defined when they happen, but territory is not defined until the end.
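The territory half of that rule can be sketched mechanically. The following illustrative snippet (not from any Go library) counts territory on a finished board by flood-filling empty regions: a region counts for a color only if every stone it touches is that color. The big assumption is baked in: it only works once the game is over and dead stones have been removed, and deciding *that* is exactly the hard part mid-game.

```python
def territory(board):
    """Return (black_territory, white_territory) for a finished board.
    `board` is a list of equal-length strings of 'B', 'W', and '.'.
    An empty region counts for a colour only if every stone it borders
    is that colour; regions touching both colours (or none) are neutral."""
    rows, cols = len(board), len(board[0])
    seen = set()
    score = {'B': 0, 'W': 0}
    for i in range(rows):
        for j in range(cols):
            if board[i][j] != '.' or (i, j) in seen:
                continue
            # Flood-fill one empty region, noting bordering colours.
            region_size, borders = 0, set()
            stack = [(i, j)]
            seen.add((i, j))
            while stack:
                x, y = stack.pop()
                region_size += 1
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < rows and 0 <= ny < cols:
                        c = board[nx][ny]
                        if c == '.':
                            if (nx, ny) not in seen:
                                seen.add((nx, ny))
                                stack.append((nx, ny))
                        else:
                            borders.add(c)
            if len(borders) == 1:
                score[borders.pop()] += region_size
    return score['B'], score['W']
```

On a board like `[".B.", ".B.", ".B."]` both empty columns border only black, so black gets 6 points of territory; replace one edge with white stones and the shared region becomes neutral and counts for nobody, which is the evaluation problem in miniature.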
The incentive structure of the game means that moves which firmly define territory are usually weaker, so the better the players, the harder the resulting territory is to evaluate.
[1] http://senseis.xmp.net/?MonteCarlo