Naysayers forced to eat words as Google's AI masters ancient game of Go

January 28, 2016
They said it couldn't be done, but Google's AI technology has proved them wrong by mastering the ancient Chinese game of Go roughly a decade earlier than anyone expected.

Tapping deep neural networks and an advanced tree-search technique, researchers from Google DeepMind created a system called AlphaGo that takes a different approach to the game than any tried before.

In Go, two players alternately place black and white stones on the intersections of a 19-by-19 grid, each aiming to surround territory and the opponent's stones while keeping their own stones from being surrounded. With more possible positions than there are atoms in the universe, Go has long been considered a grand challenge for artificial intelligence researchers.
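To put that comparison in numbers (a back-of-the-envelope bound, not a figure from the article or the paper): every one of the board's 361 intersections can be empty, black, or white, which caps the number of board configurations at 3^361, vastly more than the roughly 10^80 atoms usually estimated for the observable universe.

```python
import math

# Back-of-the-envelope bound, not a figure from the Nature paper: each of
# the 19*19 = 361 intersections can be empty, black, or white, so board
# configurations are bounded by 3**361. Only a fraction of those are legal
# positions, but the bound already dwarfs the ~10**80 atoms estimate.
positions = 3 ** (19 * 19)
print(f"3^361 is about 10^{math.floor(math.log10(positions))}")  # -> 10^172
```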

Traditional AI efforts to master Go have relied on brute-force search of the game tree of possible moves, an approach that Go's enormous branching factor quickly overwhelms. AlphaGo instead uses Monte Carlo tree search, which estimates how promising a move is by playing out many randomized games from it. It couples that search with deep neural networks trained first to mimic expert players and then to improve continuously by playing games against itself.
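To make the Monte Carlo idea concrete, here is a deliberately tiny sketch: it scores each candidate move by the fraction of purely random playouts won after making it. Strictly speaking this is flat Monte Carlo evaluation rather than full Monte Carlo tree search (which grows a search tree and balances exploration against exploitation), and the game here is Nim, a stand-in chosen only to keep the example self-contained; none of this reflects AlphaGo's actual code.

```python
import random

MOVES = (1, 2, 3)  # legal moves in this toy Nim: remove 1, 2, or 3 stones

def playout(stones, to_move):
    """Finish the game with uniformly random moves; return the winner (0 or 1)."""
    player = to_move
    while stones > 0:
        stones -= random.choice([m for m in MOVES if m <= stones])
        if stones == 0:
            return player  # taking the last stone wins
        player = 1 - player
    return 1 - to_move  # stones was already 0: the previous player had won

def best_move(stones, to_move, n_playouts=5000):
    """Score each legal move by the fraction of random playouts won after it."""
    scores = {}
    for move in [m for m in MOVES if m <= stones]:
        wins = 0
        for _ in range(n_playouts):
            remaining = stones - move
            if remaining == 0 or playout(remaining, 1 - to_move) == to_move:
                wins += 1
        scores[move] = wins / n_playouts
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    move, scores = best_move(stones=10, to_move=0)
    print("estimated win rates:", scores)
    print("chosen move:", move)  # optimal play leaves a multiple of 4: take 2
```

AlphaGo's key advance, per the paper, was to guide this kind of search with trained neural networks rather than relying on random play alone.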

AlphaGo has won more than 99 percent of its games against the strongest other Go programs and also defeated the reigning European champion, Fan Hui, 5–0 in tournament games, according to a paper published Thursday in the journal Nature.

In March, it will face its ultimate challenge: a five-game match in Seoul against the legendary Lee Sedol, who has been the world's top-ranked Go player for the past decade.

Google acquired DeepMind Technologies in 2014 and renamed it Google DeepMind.

Facebook has also been working on AI to tackle the Go challenge and is "getting close," CEO Mark Zuckerberg announced on Wednesday.

Katherine Noyes
