When a select band of computer scientists met at Dartmouth College in Hanover, New Hampshire, in 1956 to begin work on a field they called ‘artificial intelligence’, they were optimistic, to say the least. Their founding premise for developing machine intelligence was the assumption that human intelligence could itself be well characterized. As they put it: “Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

Ask ten people to define human intelligence and you will get at least eleven answers. To a philosopher, intelligence is the absence of a lack of intelligence. To psychologists, it is what intelligence tests measure.

Yet despite this fuzziness, the nature of artificial intelligence, in popular culture at least, is sharply defined: computers and robots that can think and act like a human, and that have the potential to outthink and outmanoeuvre us in most situations. That is probably why many people are disappointed with what even the most advanced robots can achieve, certainly compared with the impressive abilities of even the youngest humans. In their minds, Mozart was composing and performing music at five years old, whereas robots can barely fold a towel. The pre-eminence of humankind, it seems, is assured.

And yet, break down the holistic expectation of intelligence into a series of distinct (if overlapping) abilities, and the machines fare somewhat better. In a research paper on page 503, scientists define intelligence as the ability to predict the future. And they have built machines that can do it pretty well. Or at least, they have built robots that can analyse their past to work out how to modify their future behaviour and so continue functioning. The work’s implications for the continuing survival of feeble humanity are described in a News & Views article on page 426.

Continuing the theme, a series of Comment articles starting on page 415 assesses the current state of the debate over how society should respond to, regulate and interact with intelligent machines. From autonomous weapons, which could be ‘clever’ enough to distinguish friend from foe and act accordingly, to diagnostic systems that can rapidly and accurately analyse and interpret health-care data, these machines may not yet be classed as fully intelligent, but they are reaching a point at which they can mimic, and potentially outperform, specific ‘intelligent’ human abilities. What should be done? In the case of drones and other armed intelligent machines, decision time is looming.

Finally, a string of Review articles makes up a Nature Insight on machine intelligence, starting on page 435. From machine-learning techniques and evolutionary computation to the design and construction of malleable robots inspired by nature, the selection offers both a primer for the uninitiated and a useful summary of the state of the art. It is all, of course, essential reading. The machines, after all, are getting smarter. We should keep up.