Tanguy Chouard, an editor with Nature, saw Google-DeepMind’s AI system AlphaGo defeat a human professional for the first time last year at the ancient board game Go. This week, he is watching top professional Lee Sedol take on AlphaGo, in Seoul, for a $1 million prize.
Welcome everyone. I’m Tanguy, an editor for the journal Nature in London. This week I will be in Seoul to watch the AI matchup of the century so far: it’s computer versus human champion at the ancient Chinese board game Go. The AI algorithm AlphaGo will be taking on Lee Sedol, the most successful player of the past decade.
The formidable complexity of Go has long been considered a fundamental challenge for computer science, something that AI wouldn’t crack for another decade. Then last October, AlphaGo became the first machine to defeat a human professional at the game (without a handicap): it thrashed European champion Fan Hui 5-0. As the Nature editor in charge of the peer-review process for the scientific paper that described the system and the feat, I was at that match, held behind closed doors at the London headquarters of DeepMind, Google’s AI company, which built AlphaGo.
It was one of the most gripping moments of my career. In a room upstairs, where the game was shown on a big screen, DeepMind’s engineering team were cheering for the machine, occasionally scribbling technical notes on the whiteboards that lined the walls.
But in the quiet room downstairs, where the black and white stones were actually being plunked on the goban (game board), one couldn’t help but root for Fan Hui, the poor human getting humbled by a machine.
“Mixed feelings, I know,” whispered Demis Hassabis, the CEO of DeepMind, who was seated next to me. “I’m a player myself, so I can feel the pain for him.” I felt even worse. Fan Hui is a hero for kids playing Go in France, where my family lives. I used his books to teach my two sons and their school friends in Marseille about the game. How would I tell them the news?
After the elation of an AI success like this one, I usually feel a moment of despair. It makes me suspect that human brains could be little more than machines, and that algorithms might eventually outstrip their feats. A computer just defeated a human at the world’s most sophisticated board game? A bot just composed ‘perfect Bach’? How depressing is that?
But this time is different. AlphaGo is a computing system packed with humanity, deliberately built in the image of human brains in its structure, knowledge and experience. The deep-learning neural networks it is based on mimic the way the brain processes information. It gained cultural knowledge by being trained on millions of moves made by professional Go players: a small, biased, very human sample of the total space of conceivable moves in the game. And it gained experience by playing against itself endlessly, though even those experiments are constrained by moves randomly sampled from the same human-biased database.
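That idea, learning a policy from a human-biased sample of moves and then "self-playing" only within that sample, can be caricatured in a few lines. The sketch below is a deliberately toy illustration of the bias, not DeepMind’s actual method; the move list and the functions `fit_policy` and `self_play` are invented for this example.

```python
import random
from collections import Counter

# Toy stand-in for a database of millions of professional moves
# (corner openings, written in standard Go coordinates).
human_moves = ["D4", "Q16", "D16", "Q4", "D4", "C3", "Q16", "D4"]

def fit_policy(moves):
    """'Train' a policy by pure imitation: estimate move
    probabilities from their frequency in the human sample."""
    counts = Counter(moves)
    total = sum(counts.values())
    return {move: n / total for move, n in counts.items()}

def self_play(policy, n_moves, rng):
    """Generate a 'self-play' game by sampling from the learned
    policy: only moves seen in the human data can ever appear."""
    moves = list(policy)
    weights = [policy[m] for m in moves]
    return rng.choices(moves, weights=weights, k=n_moves)

policy = fit_policy(human_moves)
game = self_play(policy, 10, random.Random(0))

# However long it plays itself, the toy system never leaves
# the human-biased sample space.
assert set(game) <= set(human_moves)
```

A real system replaces the frequency table with a deep neural network and refines it by reinforcement learning, but the toy makes the article’s point concrete: what the machine explores is seeded by what humans showed it.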
If we draw a parallel from computing machines to flying machines, then AlphaGo is closer to a biomimetic bird than to a jet airplane. As Fan Hui told journalists, its play is uncannily human.
That deep humanity is at the root of AlphaGo’s power but might also be its fatal limitation going into this week's world match. While the system's 'intuitive' play has shocked everyone on planet Go, its style is still deeply rooted in human moves from the past. Many top professional players (those who have reached the level of 9 dan pro, or '9p') feel that it may lack creativity, which is why they think Lee Sedol will win.
But then, DeepMind’s engineers seem just as confident that AlphaGo has improved so much over the past five months that it will be able to match — and indeed surpass — human 9-dan pro level.
More on that in my next post, when I’ll bring you the atmosphere from Seoul. Ultimately, we’ll see the clash of human vs AI styles on 9 March, when Sedol takes on AlphaGo in the first of five planned games over seven days. Whoever wins — human or human mimic — may the game be divine!