Synthetic thought

Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence

A. K. Peters: 2004. 576 pp. $19.95. ISBN: 1-568-81205-1

For centuries humans have speculated on how the technology of the time could be used to carry out mental tasks that mimic, if not surpass, those done by the human mind. The current incarnation of this form of hubris landed on the intellectual landscape in a legendary 1950 paper in which British computer pioneer Alan Turing laid out the agenda for the creation of a “machine that thinks” (Mind 59, 433–460). Just six years later, a summer conference at Dartmouth College in New Hampshire brought together an eclectic group of maverick mathematicians, engineers, psychologists, computer scientists (in today's terminology), neurobiologists and other assorted denizens of the academic world. They established a research programme in what one attendee, John McCarthy, dubbed “artificial intelligence” (AI), an evocative (and provocative) appellation by which the field has been known ever since.

Like all zealots setting up a new religion, the Dartmouth group made some pretty amazing claims. Two of those that most caught the public eye ended up as touchstone problems by which progress in AI could be measured. The first was to develop within ten years a computer program that could beat the world chess champion. The second was to develop in about the same length of time a computer program that could translate from one human language to another at a level indistinguishable from that of a professional translator. The rationale for these benchmark problems was that, in both cases, creating a program to carry out the task would teach us many things about the mysterious ways of the human mind.

Amusingly, the first goal has now been achieved by the program Deep Blue — but it taught us nothing about human thought processes, other than that world-class chess-playing can be done in ways completely alien to the way in which human grandmasters do it. The second goal, however, is about as far from being achieved as the number of atoms in the Universe is from infinity. But, strangely perhaps, work on machine language translation has taught us a lot about how human language processing takes place.

About 20 years after the Dartmouth meeting, writer Pamela McCorduck, wife of pioneering computer scientist Joseph Traub, was having lunch at Stanford with two AI veterans, McCarthy and Edward Feigenbaum. She suggested getting the thoughts and impressions of the workers in this field down on paper as a kind of historical and sociological account of the emergence and evolution of an entirely new field of intellectual endeavour.

In 1979, after several years of interviews and lunchtime conversations with AI researchers, including Marvin Minsky, Lotfi Zadeh, Herbert Simon and Allen Newell, McCorduck published her book, Machines Who Think. This work did not pretend to be an exhaustive account of the entire field, but rather was a kind of eclectic sampling of various schools of thought in AI and how much progress, or lack thereof, had been made towards the ultimate goal of a thinking machine.

Being an informed outsider to the field, as well as a gifted writer and storyteller, was a decided advantage to McCorduck. She had no particular axe to grind with regard to championing one approach over another, and was able to tell the human story of AI in terms that made the book fascinating reading — even if you were not especially interested in whether a machine could one day replace your brain.

In retrospect, 1979 was a turning point for AI. The technological advances of the preceding two decades, coupled with the appearance that year of Douglas Hofstadter's Pulitzer Prize-winning book Gödel, Escher, Bach, catalysed a resurgence of ‘bottom-up’ AI based on neural networks, or what came to be termed ‘connectionism’ — an approach that had long been displaced by ‘top-down’ designed programs. Moreover, in the decades since, much progress has been made in robotics, distributed intelligence, cooperative intelligence and numerous other areas, to go along with an exponential improvement in hardware, which allowed these approaches to be implemented.

McCorduck has now reissued her original 1979 book, this time with an afterword of more than 100 pages that tells the story of the past 25 years. If you are interested in how the pioneers of AI approached the problem of getting a machine to think like a human — a story told here with verve, wit, intelligence and perception — there is no better place to go than this book. It is a worthy continuation of the story begun in McCorduck's 1979 account. One can only hope that she will have the same stamina and enthusiasm to produce an expanded, expanded edition for our entertainment and enlightenment in another decade or two. No one could do it better.

Casti, J. Synthetic thought. Nature 427, 680 (2004). https://doi.org/10.1038/427680a
