A computer program that can outplay humans in the abstract game of Go will redefine our relationship with machines.
Napoleon had it and so did Charles Darwin. Tennis champion Roger Federer has it in spades. The dictionary defines intuition as knowledge obtained without conscious reasoning. It is decision-making based on apparently instinctual responses; thinking without thinking.
Intuition is a very human skill, or so we like to think. Or, more accurately, so we liked to think. In what could prove to be a landmark moment for artificial intelligence, scientists announce this week that they have created an intuitive computer. The machine acts according to its programming, but it also chooses what to do on the basis of something — knowledge, experience or a combination of the two — that its programmers cannot predict or fully explain. And, in the limited tests carried out so far, the computer has proved that it can make these intuitive decisions much more effectively than the most skilled humans can. The machines are not just on the rise, they have nudged ahead.
Experts in ethics, computer science and artificial intelligence routinely debate whether clever machines in the future will use their powers for good or evil. This latest example of digital discovery puts neural networks to work on a problem far older than that debate: how to win at the board game Go.
Outside business-management seminars, Go is not well known in the West, but it is older, more complex and harder to master than chess. Yet it is simpler to learn and play: two players take it in turns to place black or white counters on a grid. When a counter (called a stone) is surrounded by rivals, it is removed from the board. Winning — like so much in life and war — is about controlling the most territory. The game is wildly popular across countries in east Asia, and players from Japan, China and South Korea routinely compete in televised professional tournaments.
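The capture rule described above can be sketched in a few lines of code. This is an illustrative toy, not part of any real Go engine: the board is a plain grid of characters, and all of the names are hypothetical.

```python
# A minimal sketch of Go's capture rule, assuming a board represented as a
# list of lists where '.' is an empty point and 'B'/'W' are stones.

def group_and_liberties(board, row, col):
    """Flood-fill from (row, col): return the connected group of
    same-coloured stones and the set of its empty neighbours (liberties)."""
    colour = board[row][col]
    group, liberties, frontier = {(row, col)}, set(), [(row, col)]
    while frontier:
        r, c = frontier.pop()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < len(board) and 0 <= nc < len(board[0]):
                if board[nr][nc] == '.':
                    liberties.add((nr, nc))
                elif board[nr][nc] == colour and (nr, nc) not in group:
                    group.add((nr, nc))
                    frontier.append((nr, nc))
    return group, liberties

# A white stone surrounded on all four sides has no liberties left.
board = [list(row) for row in [".B.",
                               "BWB",
                               ".B."]]
group, libs = group_and_liberties(board, 1, 1)
print(len(libs))  # 0: the surrounded white stone is captured and removed
```

A group with at least one liberty stays on the board; only when its last liberty is filled is the whole group removed.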
Computers mastered chess two decades ago, when IBM’s Deep Blue machine won against then-world-champion Garry Kasparov in 1997, but Go was thought to be safe from artificial conquest. That is partly because all of the possible moves in Go, as well as the resulting combinations of stones on the board, are much too numerous for any computer to crunch through and compare to select one manoeuvre. (The same goes for chess, but the diversity in the value of chess pieces enables some short cuts.) In Go, all stones are worth the same and their influences can be felt through vast distances across the board.
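A back-of-the-envelope calculation conveys the scale of the problem. Using rough, commonly cited averages (about 35 legal moves per turn over some 80 turns in chess, versus about 250 moves over some 150 turns in Go), the two game trees differ by hundreds of orders of magnitude:

```python
# Rough game-tree sizes from approximate branching factors and game
# lengths. The figures are commonly cited estimates, not exact counts.
import math

chess = 80 * math.log10(35)    # ~35 moves per turn, ~80 turns
go    = 150 * math.log10(250)  # ~250 moves per turn, ~150 turns

# Roughly 10**124 possible chess games versus 10**360 possible Go games.
print(round(chess), round(go))
```

Exhaustive enumeration is hopeless in both games; the point is that the short cuts which tamed chess have no counterpart in Go.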
In this week's issue of Nature, computer scientists at Google DeepMind in London unveil the successor to Deep Blue. It is a program called AlphaGo, and in October 2015 it beat the human Go champion of Europe by five games to zero. To put that into context, in Deep Blue's time, a human beginner with just a week's practice could easily defeat the best Go computer programs. A match between AlphaGo and the world's most titled player of the decade is lined up for March.
AlphaGo cannot explain how it chooses its moves, but its programmers have been more open than Deep Blue's in publishing how it is built. Previous Go programs explored moves at random, whereas the new technology relies on a suite of deep neural networks. These were trained to mimic the moves of the best human players, to reward wins and to reduce the vast range of outcomes from any board position to a single probabilistic verdict: win or lose. Working together, these machine-learning strategies massively reduce the number of possible moves the program evaluates and chooses from — in a seemingly intuitive way.
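In very loose terms, the published design pairs a "policy" network, which proposes a shortlist of promising moves, with a "value" network, which judges the resulting positions. The toy sketch below mimics that division of labour using hand-written stand-ins rather than trained networks; every name and number in it is illustrative, not taken from AlphaGo itself.

```python
# A toy illustration of how a policy network and a value network can
# shrink a game-tree search. The two "networks" here are simple
# hand-written stand-ins operating on a tiny one-dimensional "board".
import math

def policy(position, moves):
    """Stand-in policy network: a probability for each legal move.
    This toy version simply prefers moves near the centre."""
    scores = [math.exp(-abs(m - len(position) // 2)) for m in moves]
    total = sum(scores)
    return {m: s / total for m, s in zip(moves, scores)}

def value(position):
    """Stand-in value network: one number per position in [0, 1],
    read as the probability of eventually winning from there."""
    return sum(position) / len(position)

def choose_move(position, moves, top_k=2):
    """Search only the top_k moves the policy suggests, then pick the
    one whose resulting position the value network rates highest."""
    probs = policy(position, moves)
    shortlist = sorted(moves, key=probs.get, reverse=True)[:top_k]

    def after(move):
        nxt = list(position)
        nxt[move] = 1
        return nxt

    return max(shortlist, key=lambda m: value(after(m)))

position = [0, 0, 0, 0, 0]            # empty five-point toy board
legal = [i for i, p in enumerate(position) if p == 0]
print(choose_move(position, legal))   # evaluates 2 candidates, not all 5
```

The point of the sketch is the pruning: instead of evaluating every legal move to full depth, the search explores only the handful that the policy rates highly, and trusts the value estimate to stand in for the rest of the game.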
As its results show, the moves that AlphaGo selects are overwhelmingly effective. But the interplay of its neural networks means that a human can hardly check its working, or verify its decisions before they are followed through. As the use of deep-neural-network systems spreads into everyday life — they are already used to analyse and recommend financial transactions — it raises an interesting prospect for humans and their relationships with machines. The machine becomes an oracle; its pronouncements have to be believed.
When a conventional computer tells an engineer to place a rivet or a weld in a specific place on an aircraft wing, the engineer — if he or she wishes — can lift the machine’s lid and examine the assumptions and calculations inside. That is why the rest of us are happy to fly. Intuitive machines will need more than trust: they will demand faith.