A computer has beaten a human professional for the first time at Go — an ancient board game that has long been viewed as one of the greatest challenges for artificial intelligence (AI).

The best human players of chess, draughts and backgammon have all been outplayed by computers. But a hefty handicap was needed for computers to win at Go. Now Google’s London-based AI company, DeepMind, claims that its machine has mastered the game.

DeepMind’s program AlphaGo beat Fan Hui, the European Go champion, five times out of five in tournament conditions, the firm reveals in research published in Nature on 27 January (ref. 1). It also defeated its silicon-based rivals, winning 99.8% of games against the current best programs. The program has yet to play the Go equivalent of a world champion, but a match against South Korean professional Lee Sedol, considered by many to be the world’s strongest player, is scheduled for March. “We’re pretty confident,” says DeepMind co-founder Demis Hassabis.

“This is a really big result, it’s huge,” says Rémi Coulom, a programmer in Lille, France, who designed a commercial Go program called Crazy Stone. He had thought computer mastery of the game was a decade away.

The IBM chess computer Deep Blue, which famously beat grandmaster Garry Kasparov in 1997, was explicitly programmed to win at the game. But AlphaGo was not preprogrammed to play Go: rather, it learned using a general-purpose algorithm that allowed it to interpret the game’s patterns, much as a DeepMind program earlier learned to play 49 different arcade games (ref. 2).

This means that similar techniques could be applied to other AI domains that require recognition of complex patterns, long-term planning and decision-making, says Hassabis. “A lot of the things we’re trying to do in the world come under that rubric.” Examples include using medical images to make diagnoses or treatment plans, and improving climate-change models.

Digital intuition

In China, Japan and South Korea, Go is hugely popular and is even played by celebrity professionals. But the game has long interested AI researchers because of its complexity. The rules are relatively simple: the goal is to gain the most territory by placing and capturing black and white stones on a 19 × 19 grid. But the average 150-move game contains more possible board configurations — 10^170 — than there are atoms in the Universe, so it can’t be solved by algorithms that search exhaustively for the best move.
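
To get a feel for that scale, a quick back-of-envelope comparison helps. The chess and atom-count figures below are commonly cited estimates, not numbers from the Nature paper, and the snippet is purely an illustration.

```python
# Back-of-envelope scale comparison (illustrative estimates only).
GO = 170      # log10 of Go's roughly 10^170 board configurations
CHESS = 47    # log10 of a commonly cited ~10^47 chess state space
ATOMS = 80    # log10 of a commonly cited ~10^80 atoms in the Universe

print(f"Go vs chess: ~10^{GO - CHESS} times more configurations")
print(f"Go vs atoms: ~10^{GO - ATOMS} times more configurations")

# Even checking a billion positions per second, an exhaustive scan
# would need ~10^161 seconds; the Universe is ~4 x 10^17 seconds old.
print(f"Exhaustive scan at 10^9 positions/s: ~10^{GO - 9} seconds")
```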

Go, a complex game popular in Asia, has frustrated the efforts of artificial-intelligence researchers for decades. Credit: Nature Video

Chess is less complex than Go, but it still has too many possible configurations to solve by brute force alone. Instead, programs cut down their searches by looking a few turns ahead and judging which player would have the upper hand. In Go, recognizing winning and losing positions is much harder: stones have equal values and can have subtle impacts far across the board.
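
In code, that classic chess approach is a depth-limited minimax search over a handwritten evaluation function. The sketch below is a minimal illustration, with `evaluate`, `legal_moves` and `apply_move` as hypothetical hooks rather than anything from Deep Blue; the article’s point is that for Go, nobody has managed to write a reliable `evaluate`.

```python
def negamax(state, depth, evaluate, legal_moves, apply_move):
    """Look `depth` plies ahead, then judge the position heuristically.

    Returns the best score achievable by the player to move.
    """
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)  # the handwritten "upper hand" judgment
    # A position that is good for the opponent is bad for us: negate.
    return max(-negamax(apply_move(state, m), depth - 1,
                        evaluate, legal_moves, apply_move)
               for m in moves)
```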

To interpret Go boards and to learn the best possible moves, the AlphaGo program applied deep learning in neural networks — brain-inspired programs in which connections between layers of simulated neurons are strengthened through examples and experience. It first studied 30 million positions from expert games, gleaning abstract information on the state of play from board data, much as other programs categorize images from pixels. Then it played against itself across 50 computers, improving with each iteration, a technique known as reinforcement learning.
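
In miniature, the supervised first stage nudges a move-probability model toward the moves that experts actually played. The sketch below uses an invented one-layer model and feature encoding purely to show the shape of that update; AlphaGo’s real policy network was a deep convolutional model.

```python
import numpy as np

N_POINTS = 19 * 19                     # one output per board point
rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(N_POINTS, N_POINTS))  # toy one-layer net

def policy(features):
    """Softmax move probabilities from a feature vector of the board."""
    logits = W @ features
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def supervised_step(features, expert_move, lr=0.1):
    """Cross-entropy update: raise the probability of the expert's move."""
    global W
    p = policy(features)
    grad = np.outer(p, features)       # d(-log p[expert]) / dW ...
    grad[expert_move] -= features      # ... with the expert's row corrected
    W -= lr * grad
```

The self-play stage then swaps the expert label for the eventual game outcome, reinforcing the moves of the winning side.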

The software was already competitive with the leading commercial Go programs, which select the best move by scanning a sample of simulated future games. DeepMind then combined this search approach with the ability to pick moves and interpret Go boards — giving AlphaGo a better idea of which strategies are likely to be successful. The technique is “phenomenal”, says Jonathan Schaeffer, a computer scientist at the University of Alberta in Edmonton, Canada, whose software Chinook solved draughts in 2007 (ref. 3). Rather than follow the trend of the past 30 years of trying to crack games using computing power, DeepMind has reverted to mimicking human-like knowledge, albeit by training, rather than by being programmed, he says. The feat also shows the power of deep learning, which is going from success to success, says Coulom. “Deep learning is killing every problem in AI.”
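
That combination can be caricatured as a lookahead search that explores only the handful of moves a learned policy rates highly, judging the leaves with a learned evaluator. The sketch below is a simplified, policy-pruned depth-limited search, not AlphaGo’s actual Monte Carlo tree search, and all the hooks (`policy`, `value`, `legal_moves`, `apply_move`) are hypothetical.

```python
def guided_search(state, depth, policy, value, legal_moves, apply_move,
                  width=3):
    """Search only the `width` moves the policy rates best at each node."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return value(state)            # learned judgment of the position
    priors = policy(state)             # dict: move -> prior probability
    best_moves = sorted(moves, key=lambda m: -priors.get(m, 0.0))[:width]
    return max(-guided_search(apply_move(state, m), depth - 1,
                              policy, value, legal_moves, apply_move, width)
               for m in best_moves)
```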

AlphaGo plays in a human way, says Fan. “If no one told me, maybe I would think the player was a little strange, but a very strong player, a real person.” The program seems to have developed a conservative (rather than aggressive) style, adds Toby Manning, a lifelong Go player who refereed the match.

Google’s rival firm Facebook has also been working on software that uses machine learning to play Go. Its program, called darkforest, is still behind commercial state-of-the-art Go AI systems, according to a November preprint (ref. 4).

Hassabis says that many challenges remain in DeepMind’s goal of developing a generalized AI system. In particular, its programs cannot yet usefully transfer their learning about one system — such as Go — to new tasks, a feat that humans perform seamlessly. “We’ve no idea how to do that. Not yet,” Hassabis says.

Go players will be keen to use the software to improve their game, says Manning, although Hassabis says that DeepMind has yet to decide whether it will make a commercial version.

AlphaGo hasn’t killed the joy of the game, Manning adds. Straplines boasting that Go is a game that computers can’t win will have to be changed, he says. “But just because some software has got to a strength that I can only dream of, it’s not going to stop me playing.”

South Korean children play Go, a popular game across much of Asia. Credit: JUNG YEON-JE/AFP/Getty Images