Nature reports from AlphaGo's victory in Seoul.
Seoul, South Korea
Tanguy Chouard, an editor with Nature, saw Google-DeepMind’s AI system AlphaGo defeat a human professional for the first time last year at the ancient board game Go. This week, he is watching top professional Lee Sedol take on AlphaGo, in Seoul, for a $1 million prize.
It’s all over at the Four Seasons Hotel in Seoul, where this morning AlphaGo wrapped up a 4-1 victory over Lee Sedol — incidentally, earning itself and its creators an honorary '9-dan professional' degree from the Korean Baduk Association.
After winning the first three games, Google-DeepMind's computer looked impregnable. But the last two games may have revealed some weaknesses in its makeup.
Game four changed the Go world’s view of AlphaGo’s dominance, because it made clear that the computer can 'bug' — or at least play very poor moves when on the losing side. It was obvious that Lee felt under much less pressure than in game three. And he adopted a different style, one based on taking large amounts of territory early on rather than immediately going for ‘street fighting’ such as making threats to capture stones.
This style – called ‘amashi’ – seems to have paid off, because on move 78, Lee produced a play that somehow slipped under AlphaGo’s radar. David Silver, the scientist at DeepMind who has been leading the development of AlphaGo, said the program had estimated the probability of that move being played at 1 in 10,000. It’s highly debatable whether this actually was a genius move — or even "God's move", as some commentators suggested. A couple of professionals watching on the American Go Association's live stream brought it up as a possible move while Lee was still thinking about it. And the American professional Michael Redmond, the official commentator on the game for the public, suggested in post-match analysis that the play doesn’t really work.
But in any case, Lee’s play revealed some glaring weaknesses in AlphaGo. With its very next move the computer made a decisive mistake (as the DeepMind team acknowledged afterwards), yet it still estimated its winning chances at 70%. Only by move 87 did its internal assessment of the position take a dive. In the endgame, the system seemed rather confused – perhaps because, by then, it was clearly losing.
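For readers wondering where a figure such as "70% winning chances" comes from: AlphaGo's published design combines a trained 'value network' with Monte Carlo tree search, averaging position evaluations over many simulated continuations. The sketch below is purely illustrative and is not DeepMind's code — the `evaluate` heuristic is a stand-in assumption for a real value network — but it shows the averaging idea behind such an internal estimate.

```python
import random

def evaluate(position, rng):
    # Stand-in for a trained value network: a noisy heuristic
    # returning the estimated probability that Black wins.
    # In AlphaGo, this would be a deep network applied to the board state.
    noisy = position["black_advantage"] + rng.gauss(0, 0.05)
    return min(1.0, max(0.0, noisy))  # clamp to a valid probability

def estimate_win_probability(position, n_simulations=1000, seed=0):
    # Average many noisy evaluations, much as tree search averages
    # value estimates over simulated continuations of the game.
    rng = random.Random(seed)
    total = sum(evaluate(position, rng) for _ in range(n_simulations))
    return total / n_simulations

# Hypothetical mid-game position where Black holds a clear edge.
position = {"black_advantage": 0.7}
print(round(estimate_win_probability(position), 2))  # close to 0.70
```

The point of the averaging is that individual evaluations are noisy, so the aggregate estimate can remain stubbornly high — as AlphaGo's 70% did — until enough simulations register that the position has actually turned.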
So it seems that AlphaGo can bug – although it’s hard to know why and how. This obviously poses a serious problem when considering future real-world applications for AI.
Confronted about the issue in the post-game press conference, DeepMind's CEO Demis Hassabis replied that AlphaGo was still only a prototype — "I'd like to call it a beta, but it's not even an alpha" — and that finding such weaknesses was precisely the main technical reason the team was so keen to test the system at the highest level.
On to game five, and judging by the nervous looks of the DeepMind support team, today’s game was particularly tense in the control room. Again, Lee took the amashi style, and four hours into the game, neither the Go experts nor the computer scientists could call it. It was that close. Ultimately, despite an early mistake, noted by Hassabis in a tweet during the game, the computer caught up and pulled off the victory — but this was far from the comprehensive rout of the first three games.
Without taking anything away from the DeepMind team’s achievement, the exposure of cracks in AlphaGo's armour by Lee raises hopes that he and other players might, in the near future, be able to fare better against AlphaGo. It seems that Lee, with advice from his Korean baduk pro colleagues, realised what his best chance might be – which is amazing after just a few games.
The computer plays a million games a day, versus 1,000 games a year for a practising professional, but – compared to humans – it’s quite stupid. Neural networks need enormous amounts of data to be trained on, whereas we humans are very efficient at learning and generalizing from very few examples. Humans are also capable of transferring knowledge across very different domains and, of course, of formalizing their learning through explicit concepts, which can be further enriched through education or culture.
Combining all such skills in one single machine, to achieve what is known as 'general' AI, remains an outstanding challenge for the future. As Oren Etzioni, chief executive of the non-profit Allen Institute for Artificial Intelligence in Seattle, Washington, says: "We are a long, long way from general artificial intelligence."
Previous entry: AI computer clinches victory against Go champion
Related links
Related links in Nature Research
What Google’s winning Go algorithm will do next 2016-Mar-15
The Go Files: AI computer clinches victory against Go champion 2016-Mar-12
The Go Files: ‘Humanity-packed’ AI prepares to take on world champion 2016-Mar-07
Google AI algorithm masters ancient game of Go 2016-Jan-27
Go players react to computer defeat 2016-Jan-27
Cite this article
Chouard, T. The Go Files: AI computer wraps up 4-1 victory against human champion. Nature (2016). https://doi.org/10.1038/nature.2016.19575