COMMENT

Science must move with the times

Research cannot fulfil its social contract and reach new horizons by advancing on the same footing into the future, argues Philip Ball in the last essay of a series on how the past 150 years have shaped today’s science system, to mark Nature’s anniversary.
Philip Ball is a science writer and author; his latest book is How To Grow a Human.

Illustration by Señor Salme

In 1866, three years before the first issue of Nature was published, a transatlantic telegraph cable established light-speed communication between Great Britain and North America. The triumph won William Thomson (later Lord Kelvin) a knighthood for the scientific advice he had given to the project. Yet Thomson had also advised on a disastrous earlier attempt in 1858 that barely worked from the outset and deteriorated within weeks.

It was partly in response to that costly debacle that the Cavendish Laboratory was established at the University of Cambridge, UK, in the early 1870s, to provide the nation’s future engineers with a better grounding in physics. The first director was James Clerk Maxwell, whose electromagnetic theory of the mid-1860s led to the discovery of radio waves in 1887 — which soon enabled ‘wireless’ telecommunication and rendered the telegraph obsolete.

In such ways, the distinctly Western and specifically British world into which Nature was launched regarded fundamental scientific research as the engine of socially transformative industrial innovation. Emanating from London, Norman Lockyer’s journal showcased those developments from the perspective of a British Empire that grew to encompass about one-fifth of the world’s population by the century’s end. The benefits of research laboratories and the systematic institutionalization of science, in both academia and industry, were beyond doubt for Nature’s target audience.

Eight decades later, this model motivated Vannevar Bush’s 1945 report to US president Franklin D. Roosevelt. Science — The Endless Frontier made the case for governmental support of basic science research to promote national security, public health and welfare. It led to the establishment of the US National Science Foundation, and it appealed to the optimistic and simplistic vision of science as a quest that, motivated by curiosity and guaranteed freedom of enquiry, would serve the interests of the nation and of humankind.

Science — whether it is Maxwell’s electromagnetism, the Manhattan Project that inspired Bush, or the Human Genome Project — has indeed been so socially transformative that its intellectual and technological machinery has gained seemingly irresistible momentum. Is this not how progress is made, and is that not, on balance, a good thing?

Even to ask the question invites familiar and polarized arguments. Some commentators question the wisdom of unfettered scientific development, pointing to the problems of climate change and environmental despoliation, nuclear weapons and antibiotic resistance, along with the ambivalent influence of artificial intelligence and robotics, information technologies and genetic engineering. Others point out that quality-of-life indicators — lifespan and infant mortality, say — have improved steadily (if unevenly, geographically and temporally) during the era of modern science that roughly coincides with the span of Nature’s existence.

But Manichean views and tropes of ‘dual use’ miss the point. Some of the key questions that confront science today are about whether its methods, practices and ethos, pursued with very little real change since Maxwell’s day, are fit for purpose in the light of the challenges — conceptual and practical — we now face. Can science continue to fulfil its social contract and to reach new horizons by advancing on the same footing into the future? Or does something need to shift?

Illustration of the laying of the transatlantic telegraph cable, 8 August 1866.

Three years before the launch of Nature, the laying of the transatlantic telegraph cable established light-speed communication between Great Britain and North America. Credit: Heritage Image Partnership Ltd/Alamy

Looking out

Let’s consider where we stand. The convention of the past century or so has tended to place the frontiers of knowledge at the scales of the very large and very small. Today we might be inclined to add the very complex — which typically pertains to the intermediate scales of direct human experience.

It’s now clear that challenges at the two extreme scales — fundamental particles and cosmology — are related. As the island of knowledge grows, so does the perimeter of the horizon where knowledge ends, says Marcelo Gleiser, a particle cosmologist at Dartmouth College in Hanover, New Hampshire. “The more we know, the more exposed we are to our ignorance, and the more we know to ask”, he writes1.

We have known for only a few decades that dark matter outweighs all visible matter by a factor of five, yet we are no closer to knowing what it consists of. And scarcely two decades have passed since the mysterious entity dubbed dark energy, which causes the Universe’s expansion to accelerate, was recognized to comprise more than two-thirds of the total cosmic energy density. Never before has our knowledge of the Universe seemed so deficient.

Plugging these gaps at the largest scales will depend on elucidating the physical world at the smallest. Here the prospects are currently dim enough to cause desperation and even rancour. The world’s largest particle accelerator, the Large Hadron Collider at CERN near Geneva, Switzerland, has so far failed to offer any hint of how to proceed beyond known physics. Elegant ideas look moribund in the face of an ugly lack of facts. In the meantime, theorists are being forced towards ideas, such as the multitude of universes now permitted by the inflationary model of the Big Bang, that seem to some critics to abandon the empirical basis of science itself.

Yet even as our view of the Universe becomes increasingly perplexing, it is being fleshed out as never before. In the 1860s, it was almost casually assumed that life would be common on other worlds. H. G. Wells’s 1897 novel The War of the Worlds (informed by his reading of Nature) seemed all the more chilling because of the widespread belief — which persisted for another half-century — that there was indeed life on Mars. Seasonal changes of surface colour were interpreted as vegetation growth, and striations described by astronomer Giovanni Schiaparelli were notoriously ascribed by others to artificial waterways.

But the barren, sterile Martian landscape that the Viking landers revealed in 1976 confirmed a growing sense — stoked by the Apollo Moon landings and reflected in physicist Enrico Fermi’s famous question about the apparent absence of alien visitations — that we are a lonely outpost in a bleak, lifeless cosmos. Well, no longer. Since the discovery of the first extrasolar planet orbiting a Sun-like star was reported in this journal2 in 1995, around 4,000 such planets have been sighted (and a 2019 Nobel prize awarded).

It seems that planetary systems are the norm for other stars, and Earth-like planets far from uncommon. Already we know a little about the atmospheres of some of these worlds. With the launch of NASA’s Transiting Exoplanet Survey Satellite last year, and the James Webb Space Telescope scheduled to launch in 2021, we will soon know much more. Researchers are now speaking plausibly about deducing within a lifetime whether there is life elsewhere.

Where does all this leave us? The cosmological perspective could seem to perpetuate the sense of an unfolding Copernican revolution in making humankind even more peripheral. Not just an insignificant dot in a vast Universe, we possibly inhabit an insignificant universe in a potentially infinite multiverse. It’s hard to imagine a demotion more extreme.

There is another view that is anything but Copernican. Here, habitable worlds are ubiquitous and we remain uncomfortably, almost absurdly, at the centre of things. In the inflationary multiverse, our presence is the explanation for the fundamental constants of nature. They might have different values in other universes, but the conditions necessary for our existence guarantee that we will see the ones we do.

The first six flight-ready primary mirror segments for the James Webb Space Telescope. Credit: NASA/Marshall Space Flight Center/David Higginbotham

The foundations of quantum mechanics (a topic once disreputable that now verges on fashionable) muddy the picture too. The ‘many worlds’ interpretation is more popular today than when US physicist Hugh Everett proposed it in the 1950s. It multiplies universes (in a manner distinct from the cosmological multiverse) and it multiplies each of ‘us’ beyond measure. Meanwhile, US theoretical physicist John Wheeler’s ‘participatory universe’ and new interpretations such as QBism3 insist that quantum theory requires the observer’s presence — rather than being the abstract and objective framework that science usually supplies.

These ideas remain speculative. But they challenge the Newtonian promise of an impersonal mechanics.

Looking in

In other words, it’s still unclear when or whether we can exclude ourselves from the scientific frame. This would have been no surprise to Maxwell. His conception of physical reality was predicated (no less than was Newton’s) on a religious position that awarded humanity a special place.

This, of course, is where Charles Darwin also enters the frame. His ideas on evolution by natural selection, published in On the Origin of Species (1859), were still causing shock waves when Nature was founded. Two years after the journal’s launch, he delivered the final bombshell in The Descent of Man (1871). The significance of his ideas was not as an explosive charge placed underneath the church but as the opening salvo in a century and a half of debate about what it means to be human. If there was a struggle, it was not about which book to consult but about who had the most decisive authority. Within science, first evolutionary theory, then psychoanalysis, and now genetics and neuroscience have all staked their claims.

On Nature’s centenary, you might have placed your bets with the latter disciplines. Half a century later, it is less clear that they can offer the last word. Powerful new techniques applied to rapidly growing data sets, such as genome-wide association studies4, have disclosed a clear and sometimes strong genetic component to almost every human behavioural trait we choose to study, as well as to health and disease. But a mechanistic understanding of genetic effects often remains remote. And for traits in which many — perhaps even several thousand — genes are implicated, it is not even clear that this is the right level at which to ascribe causes for what we can see and measure.

The emerging picture of development and tissue function at the level of single-cell transcription (and perhaps soon of translation) adds a new layer of complexity5. Apparently identical cells in the same tissue can show a wide range of dynamic states of gene expression. It might be that the genome tells us no more about how an organism builds and sustains itself than a dictionary does about how a story unfolds. New methods, rather than finally answering old questions, could merely expose their inadequacy, shifting the goalposts entirely — as genomics itself has done for notions of race.

Neuroscience, like genetics, has been restricted in the questions it can ask by the data it can gather. Functional magnetic resonance imaging remains a blunt tool, showing where things are happening in the brain (at rather coarse-grained resolution), but not what transpires. The idea that the human brain might be understood by exhaustive documentation and perhaps simulation of neuronal connections and firing patterns was challenged as soon as it was mooted (by the ill-fated European Human Brain Project6).

Here we arrive at one stretch of the ‘complexity’ frontier. If history is any guide, we should expect that understanding these complex systems will not emerge by drawing analogies with the latest cutting-edge technologies. Just as the brain is not (as was thought in the early nineteenth century) a battery, neither is it a computer; nor is the genome a digital list of parts. And more data, although extremely valuable as a resource, will not help us without new ideas. These are in short supply. As neurobiologist and historian Matthew Cobb at the University of Manchester, UK, writes, “no major conceptual innovation has been made in our overall understanding of how the brain works for over half a century”7.

It’s no surprise, then, that the ‘hard problem’ of consciousness is barely articulated, let alone understood. We are still at the stage where serious thinkers on the topic span the gamut of positions, from regarding it as an illusion to considering it the only valid starting point for a theory of human experience. The latter view harks back to how US psychologist William James ignored “the traditional antithesis between reality and appearance”, as Nature put it in 19158. As for claims that neuroscience has banished free will (for example, because decisions can be predicted from brain scans in advance of their conscious manifestation), saying that “your brain decides before you do” merely returns us to British philosopher Gilbert Ryle’s famous regression of mental homunculi9.

New views

Among the ways in which science has changed over the past century and a half, three loom large. First, it is no longer driven by lone figures labouring in their laboratories, but has become a team effort that spans labs, departments, disciplines, institutions and continents. Second, it often relies now on data sets so vast that human brains cannot hope to hold or parse them all. Third, it increasingly confronts issues of global reach and even existential urgency — from climate heating and the need for a carbon-neutral economy, to epidemics and water security.

Yet these changing demands are not reflected in incentives, funding mechanisms, awards or popular narratives. Systemic biases — for example, in barriers to the entry and advancement of women and people from minority groups, in the demographic coverage of medical databases, or in the prejudices that algorithms inherit from their makers — remain entrenched. Even science’s internationalism is threatened by current political trends. To regard what biologist Thomas Henry Huxley, in Nature’s first issue, called the “progress of Science” as an inexorable, triumphant forward march today seems dangerously complacent.

It is time to ask whether such problems are not imperfections of the system but consequences of it. Science might be hindered by channelling its practitioners into a single mode of thinking. There is hubris in the assumption that the traditions, conventions, training, disciplinary boundaries, methods, responsibilities and social contract that crystallized in the nineteenth century from a highly restricted demographic must still be the best way of working. To say as much is not to submit to some trendy caricature of postmodernism. Rather, it is to acknowledge that there are assumptions embedded, often invisibly, in the way we develop models, deploy metaphors, apportion priorities, recognize and reward achievement, and recruit participants that must be questioned.

The canonical scientific article, with its unified and passive voice, its closed and self-contained narrative, its seductively confident diagrams and standardized format, and its eventual metric quantification of impact, is not the only or the best vehicle for translating and disseminating today’s research: for posing and then answering questions. There’s scope for more variety in who does this, and how. Who would have guessed, for example, that what was needed to finally put climate science firmly on the public agenda was the candour and courage of a schoolgirl who is on the autistic spectrum?

The history of science tells us that some of the toughest questions will be addressed not by being answered but by being replaced with better questions. Among those haunting us today that might deserve this fate are: what is life? What is consciousness? What makes individuals who they are? Why does our Universe seem fine-tuned for our existence? How did it all begin? It will take creative and diverse thinking to improve on them — for the view over the horizon might not be the one we anticipated.

Nature 575, 29-31 (2019)

doi: 10.1038/d41586-019-03307-8

References

1. Gleiser, M. The Island of Knowledge: The Limits of Science and the Search for Meaning (Basic Books, 2014).

2. Mayor, M. & Queloz, D. Nature 378, 355–359 (1995).

3. Mermin, N. D. Nature 507, 421–423 (2014).

4. Tam, V. et al. Nature Rev. Genet. 20, 467–484 (2019).

5. Pennisi, E. Science 362, 1344–1345 (2018).

6. Abbott, A. Nature 511, 133–134 (2014).

7. Cobb, M. The Idea of the Brain (Profile, in the press).

8. Crawley, A. E. Nature 95, 200–201 (1915).

9. Ryle, G. The Concept of Mind (Hutchinson’s, 1949).
