Seoul, South Korea

Lee Sedol, who has been defeated by the AI system AlphaGo. Credit: Google DeepMind

Tanguy Chouard, an editor with Nature, saw Google DeepMind's AI system AlphaGo defeat a human professional for the first time last year at the ancient board game Go. This week, he is watching top professional Lee Sedol take on AlphaGo, in Seoul, for a $1 million prize.

So that’s it — this morning Google DeepMind’s AI machine AlphaGo claimed a decisive victory against Lee Sedol by winning its third game in a row, meaning that the computer has now triumphed in the best-of-five tournament in Seoul.

Read more of Tanguy's blog at The Go Files.

For me, the drama that was set up at the start of this matchup – man vs machine in humanity's most complex board game! – has faded away. The surprise now will be if Lee, who has been one of the world's top Go players for the past decade, manages to win a single game against the AI system. Everyone wants Lee to win, but the cliché going around here is that his only chance is to pull out AlphaGo's power plug.

Comprehensively outplayed

At the press conference after this deciding match in the series, the Google DeepMind team were polite and respectful — praising Lee for his “amazing genius and creative skills”, while official commentator Michael Redmond added that the South Korean professional had “played his best”.

Lee himself couldn't disagree more. Speaking through a translator, he apologized for his play in this third game. Not that he had given up before the contest: he spent Thursday night studying the first two games with a group of fellow Korean professionals, and then played more relaxed baduk (the Korean name for Go) on his rest day, Lee Hajin, Secretary General of the International Go Federation, told me.

But at the press conference, Lee Sedol said that he’d never felt as much pressure as he did in game 3. And while he had identified missed opportunities in game 2, he felt he could not win game 1 even if he replayed it today.

Independent commentators agreed that the machine had comprehensively outplayed the human. Andy Jackson, the vice president of the American Go Association (AGA), told me that Lee had lost the third game very early, by move 35. (A DeepMind scientist who was passing at the time said that this was in agreement with AlphaGo’s internal evaluation of its winning chances).

Lee is clearly trying to probe AlphaGo's weaknesses. Today he engaged the computer in a series of 'ko fights' – sharp tactical battles in which the same stone can be captured and recaptured – but failed to get the upper hand. The blogosphere had been buzzing over the possibility that AlphaGo might be 'afraid' of this kind of play. "Well, we've settled that one," said Thore Graepel, one of the system's developers.

Unknowable wisdom

Go commentators are thrilled by the way the machine plays. “I love AlphaGo! I want to study it and learn from it!” said Cho Hyeyeon, one of the strongest female Go players in Korea, who was commentating on the AGA live-feed of the match. “AlphaGo seems like it knows everything!”

Cho — and everybody else — wants to know just what AlphaGo is thinking. It seems to have a different vision of how to play the game. Alas, I’m afraid its understanding is simply unknowable – and not just because the computer has no voice to express its evaluations.

As DeepMind scientist David Silver explained to the press this January, even though AlphaGo has effectively rediscovered the most subtle concepts of Go (such as sente, moyo, aji and, we now know, ko fights), its knowledge of them is implicit. The computer doesn't explicitly parse out these concepts – they simply emerge from its statistical comparison of winning board positions. In effect, AlphaGo has a kind of digital intuition.
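To make that concrete, here is a minimal sketch of my own – not DeepMind's code, and with invented, untrained weights – of why the knowledge stays implicit: a value network of this kind maps a raw board layout to a single win-probability estimate, and nowhere in it is there a variable for sente, moyo or aji; any such concept can only exist as a pattern spread across the learned weights.

```python
# A toy value network (my own sketch, untrained random weights, not DeepMind's
# code): it maps the raw 19x19 stone layout to one number, an estimated win
# probability. No variable anywhere stands for sente, moyo or aji.
import numpy as np

rng = np.random.default_rng(0)

BOARD_POINTS = 19 * 19            # flattened board: +1 black, -1 white, 0 empty
W1 = rng.normal(scale=0.05, size=(BOARD_POINTS, 128))   # hypothetical weights
W2 = rng.normal(scale=0.05, size=(128, 1))

def win_probability(board: np.ndarray) -> float:
    """Score a position purely from the raw stone layout."""
    h = np.tanh(board.reshape(-1) @ W1)    # learned features, not named concepts
    logit = (h @ W2)[0]
    return float(1.0 / (1.0 + np.exp(-logit)))   # sigmoid: probability in (0, 1)

empty_board = np.zeros((19, 19))
print(f"estimated win probability: {win_probability(empty_board):.3f}")
```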

I actually feel that, even though we are reaching the end of this week of top-level testing, we have only seen the beginning of the system's mastery. To reveal the full potential of the machine, top professional players may soon have to take handicap stones to play it.

Computerized hustler

The algorithm seems to be holding back its power. Sometimes it plays moves that lose material because it is seeking simply to maximise its probability of reaching winning positions, rather than — as human players tend to do — maximise territorial gains. Jackson thinks that some of these odd-looking moves may have fooled Lee into underestimating the machine’s skills at the beginning of game 1 — which, I suppose, makes AlphaGo a kind of computerized hustler.
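As a toy illustration of that difference – my own invented numbers, not anything taken from AlphaGo – here is how the two criteria can point to different moves: a player maximising territory picks the biggest gain, while a player maximising win probability may prefer a small, safe reinforcement.

```python
# Toy sketch: maximising territorial gain vs maximising win probability.
# The candidate moves and their numbers are invented for illustration only.
candidate_moves = {
    # move: (expected territorial gain in points, estimated win probability)
    "large endgame capture":    (12.0, 0.86),
    "small safe reinforcement": (2.0, 0.93),
    "aggressive invasion":      (15.0, 0.71),
}

human_style   = max(candidate_moves, key=lambda m: candidate_moves[m][0])
machine_style = max(candidate_moves, key=lambda m: candidate_moves[m][1])

print("maximise territory :", human_style)    # -> aggressive invasion
print("maximise win chance:", machine_style)  # -> small safe reinforcement
```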

Towards the end of today's press conference, Lee insisted that the clinching defeat was "Lee Sedol's defeat, not the defeat of mankind." That would seem to imply that he wasn't best placed to represent humanity this week — a suggestion that's been rather rudely made by the world's current top-ranked Go player, Ke Jie, who said that "AlphaGo can't beat me".

Most commentators I've talked to would disagree. AlphaGo is likely to start "a new revolution" in the way we play Go, says Redmond. He said at the press conference that the Google DeepMind team had "created a work of art." Says Cho: "we have to admit that we're faced with the best player … of the past two thousand years."

One can only imagine what this kind of AI system will be capable of when applied to other problems. As DeepMind's co-founder Demis Hassabis has said, one could easily conceive of translating AlphaGo's qualities of pattern recognition, decision-making and long-term planning to, for example, a system that digests clinical data to make diagnoses or treatment plans.

One caution though: AlphaGo has proved itself supremely capable of reaching winning positions in the black and white game of Go — where the board game’s rules define what these positions are. But if we are to delegate decision-making in the more complex, nuanced real world to AI systems, we had better be careful to define what ‘winning positions’ we want them to reach. And that will be a political, not an engineering, issue.
