Books and Arts

Nature 454, 829 (14 August 2008) | doi:10.1038/454829a; Published online 13 August 2008

In Retrospect: Gödel's proof

Andrew Hodges1


In today's computer age, the implications of the discovery in formal logic that Nagel and Newman articulated in 1958 are of even broader interest, says Andrew Hodges.


Gödel's Proof

by Ernest Nagel & James R. Newman

New York University Press: 1958. 118 pp.

Fifty years ago an unusual book appeared. Its bald and unapologetic title, Gödel's Proof, must have left the casual browser wondering who or what Gödel was. Those tempted to look inside discovered a classic of scientific exposition and faced quite a challenge. The writers, Ernest Nagel and James Newman, already distinguished figures in scientific philosophy and education, gave an uncompromising presentation of their unfamiliar subject matter: mathematical logic. Kurt Gödel himself, the great logician whose breakthrough discovery of 1931 was the subject of this book, was very much alive in 1958 Princeton, but he was no popularizer or media celebrity. For Nagel and Newman to see the potential interest to a wider public was both visionary and optimistic.

I remember discovering Gödel's Proof as a student in 1968. I cannot have been the only one to find it a unique text on the college library shelf, leading to unexpected regions beyond the standard syllabus. It was not written like a textbook; neither was it a 'Gödel made easy'. Although rooted in an earlier article in Scientific American, it used copious equations; indeed it explored the very meaning of equations, a demand on the reader that would make most publishers nervous. Nagel and Newman explained difficult ideas of logical deduction from formal axioms, distinguishing formal proof from informal reasoning. They showed Gödel's crucial insight: that the rules of logic for quoting axioms, substituting variables and formulating deductions are themselves mathematical operations. And they revealed how his technical innovation exploited this observation, using numbers to code statements about numbers.
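That coding trick can be made concrete with a toy sketch (a modern illustration of the idea, not anything that appears in the book): give each symbol of a formal language a number, then encode a whole formula as a single integer by raising successive primes to those numbers, so that unique factorization makes the encoding reversible. The symbol table below is an arbitrary example.

```python
# Toy illustration of Goedel numbering (symbol codes are arbitrary choices):
# each symbol gets a number, and a formula becomes the product p_i ** code_i
# over successive primes, so the formula can be recovered by factoring.

SYMBOLS = {'0': 1, 's': 2, '=': 3, '+': 4, '(': 5, ')': 6, 'x': 7}

def first_primes(n):
    """Return the first n primes by trial division."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def godel_number(formula):
    """Encode a formula as the product of p_i ** code(symbol_i)."""
    n = 1
    for p, ch in zip(first_primes(len(formula)), formula):
        n *= p ** SYMBOLS[ch]
    return n

def decode(n):
    """Recover the formula by stripping out successive prime factors."""
    inverse = {v: k for k, v in SYMBOLS.items()}
    out = []
    for p in first_primes(64):  # more than enough for short formulas
        if n == 1:
            break
        exponent = 0
        while n % p == 0:
            n //= p
            exponent += 1
        out.append(inverse[exponent])
    return ''.join(out)

code = godel_number('s0=0+s0')   # the statement "1 = 0 + 1"
assert decode(code) == 's0=0+s0'
```

Because the map is invertible, arithmetic facts about such integers mirror syntactic facts about formulas, which is exactly the observation that let Gödel make mathematics talk about its own proofs.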

Nagel and Newman's detailed account showed how Gödel was led to the astonishing discovery of true mathematical statements that could not possibly have a formal proof. In other words, Gödel proved the formal incompleteness of mathematics. They also recorded the shock that this discovery caused to the hitherto mainstream positivist assumptions, such as those of Bertrand Russell, whose programme for deriving mathematics from purely logical axioms Gödel explicitly contradicted.



The implications of Gödel's discovery are if anything of even broader interest now than in 1958. A vast industry has arisen founded on logical algorithms, and nowadays it is better appreciated that the business of computing is inseparable from the logical calculus built up in the early twentieth century. One could even argue that the underlying concept of the digital computer is owed to Gödel, via the British mathematician Alan Turing. Turing's 1936 concept of the universal machine is the basis of the computer, and Turing arrived at it by following Gödel's lead, seeing that instructions could operate on other instructions, rather as Gödel's numbers had coded formal statements about numbers.

In the 1950s there was a tendency for mathematicians to distance themselves from practical applications, and from computing in particular. Since the 1970s these divisions have become less rigid. Gödel's arguments are now more fully connected with the body of mathematics and its classical problems of 'how to solve it'. In 2000, the Clay Mathematics Institute in Cambridge, Massachusetts, announced seven prizes for a set of Millennium Problems. One of these, concerning computational complexity, has its root in a remark of Gödel's that might have seemed abstruse in 1958, but is now of great value to practical computing.

It therefore now seems a little odd that Nagel and Newman paid no attention to computing. They framed their closing reflections as if Turing's theory of computability were an obvious corollary. By contrast with their detailed explanation of Gödel's technical arguments, they found no difficulty in writing off, in a few sentences, the possibility of artificial intelligence (AI). This is now a huge and hotly contested area of scientific philosophy. In fact, it was already the subject of dispute in 1958. Turing himself, in 1950, argued that Gödel's proof was irrelevant to the question of achieving AI. In the 1960s, Gödel in turn made somewhat delphic remarks objecting to Turing's philosophy; he seems to have considered that his proof implied that the human mind could not be mechanized.

These arguments stimulated another famous book with Gödel in the title. Douglas Hofstadter's 1979 Gödel, Escher, Bach (Penguin), although in many ways inspired by Nagel and Newman's work, took an approach diametrically opposite to their clipped classicism. Expansive and illustrative, it also came to quite a different conclusion about AI — essentially Turing's. This disagreement remains unresolved; in fact it is heightened by another protagonist, Roger Penrose, who supports something like Gödel's position but in an entirely new way.

In a few pregnant words, Nagel and Newman referred to the brain as a machine apparently more powerful, through its capacity for informal reasoning, than computers. Penrose, since the 1980s, has asked what could possibly lend it such power, finding an answer in the ill-understood phenomenon of quantum-mechanical state reduction. His conclusions are keenly disputed: for instance the leading logician Martin Davis, himself a popularizer, has forcefully pressed Turing's original view in his book Engines of Logic (W. W. Norton, 2001).

Nagel and Newman dedicated their book to Russell, whose logical work during the opening years of the twentieth century lay behind Gödel's proof. In that same period, Planck and Einstein opened the quantum-mechanical door on reality. A hundred years have not sufficed to resolve the fundamental questions they revealed. Mathematical and physical science describe a continuous quantum universe using formal operations on discrete symbols. Neither the quantum, nor those symbols, nor the connection between them, are yet fully understood.

  1. Andrew Hodges is a Fellow of Wadham College, University of Oxford, Oxford OX1 3PN, UK. He is author of Alan Turing: the Enigma.

