It is fitting that the Nobel committee chooses to take a broad view of Alfred Nobel's wish that the prizes be awarded to those whose achievements “have conferred the greatest benefit to mankind.” Taken too much at face value, that could be a highly utilitarian prescription, favouring those who develop practical applications (or whose insights have led to them) rather than those whose work remains in the realm problematically labelled 'pure science'. More than a few awards, especially in physics, would never have been given if usefulness were the criterion.

But views on the relationship between so-called pure and applied science are sometimes still little advanced beyond C. P. Snow's comment in his 'Two Cultures' lecture in 1959: “Pure scientists have by and large been dim-witted about engineers and applied science... Their instinct... was to take it for granted that applied science was an occupation for second-rate minds.” As biologist Peter Medawar said around the same time, the distinction was often seen as that “between polite and rude learning, between the laudably useless and the vulgarly applied, the free and the intellectually compromised, the poetic and the mundane.” British Nobel laureate chemist George Porter sought cannily to erode the elitist implication by dividing research instead into 'applied' and 'not yet applied'.

Debate about curiosity-driven research can still hover on the brink of such hierarchical attitudes, however. Making a case for research not driven by short-term applied goals — by knowledge-creation rather than wealth-creation — is ever more important in a funding climate that seems increasingly to demand that proposals be pitched along economic lines, supported by arbitrary metrics of impact. The converse appeal to 'spin-offs' to justify fundamental research in, say, quantum or particle physics is no less disheartening.

But there's nothing new in a defence of sheer curiosity as a vital motivation for science. The case was put in a 1939 essay called 'The usefulness of useless knowledge' by Abraham Flexner (pictured), founding director of the Institute for Advanced Study (IAS) in Princeton where, in the 1930s, Albert Einstein worked alongside John von Neumann and Hermann Weyl. The essay has recently been republished1.

As a celebration of the value of abstract thought, Flexner's essay remains elegant and pertinent. As an analysis of the interactions between theory and application in science, it probably represents a widespread view today — but that doesn't make it any the less flawed. Flexner recounts telling George Eastman that it was James Clerk Maxwell, who had “no practical objective”, and not Marconi, who deserved credit for the invention of radio broadcasting. What he omits is that the Cavendish Laboratory in Cambridge, where Maxwell worked as the first director, was established specifically to improve the practical training of British physicists and engineers — a concern prompted partly by the failure of the first transatlantic telegraph cable, laid in 1858. Michael Faraday too allegedly cared nothing for “the question of utility” in his electrical researches; in reality he fielded plenty of queries about it, and no one would doubt the practical acumen of his mentor at the Royal Institution, Humphry Davy. And so it goes further back in time: Flexner asserts that Isaac Newton and Francis Bacon were motivated by pure curiosity, whereas Newton and his contemporaries were inventors as much as philosophers, and Bacon's entire programme of state-sponsored fact-collection outlined in Novum Organum (1620) was predicated on the “relief of man's estate”, as well as providing a vehicle of state power.

Credit: SCIENCE HISTORY IMAGES / ALAMY STOCK PHOTO

Getting this history of the applied potential of curiosity right matters for several reasons. First, it reminds us of the value of the broader, worldly view in scientific training. Yes, serendipity has spawned many useful discoveries, but generally when it strikes the “prepared mind” — as Louis Pasteur, the paragon of a scientist happy to turn his mind to practical problems, declared. Not only was William Perkin trying to make something useful (quinine) when he stumbled upon the first aniline dye, but he only thought to exploit the discovery because he was aware of how industrially valuable a good purple dye might be. Not only is it often an application that stimulates some lucky discovery leading elsewhere — carbon-fibre technology leading to carbon nanotubes, say — but practical questions can themselves motivate new pure science. Who knows if topological quantum materials will produce anything useful? It shouldn't matter whether or not they do, but an eye on applications can initiate new fundamental questions (as, in fairness, Flexner acknowledges). Conversely, scientists who do want to find applications need guidance and training if they are not to approach industry and the marketplace naively.

Second, it is all too easy for pure science to deny ethical responsibility, almost by definition. Flexner insists that chemical warfare came from innocent science. We now know there was nothing accidental about the part played by Fritz Haber, who is said to have described it as “a higher form of killing”2. The year before Flexner's essay appeared, James Kendall, professor of chemistry at Edinburgh, defended that view while himself conducting research on chemical weapons3. Happily, the introductory essay to the new book, by the IAS's current director Robert Dijkgraaf, is more nuanced on such matters.

It is unwise, not least in a discipline as practically driven as materials science, to idealize the interaction of curiosity and application. Problems are no less intellectual (or difficult) for being practical ones, and one's recondite theory will not be sullied by ideas of how to put it to use.