The term 'science studies' was coined in the 1970s by 'outsiders', such as social scientists and humanities scholars, to describe what they had to say about science. Science studies has been through what my colleagues and I at the Cardiff School of Social Sciences, UK, see as two waves. In wave one, social scientists took science to be the ultimate form of knowledge and tried to work out what kind of society nurtures it best. Wave two was characterized by scepticism about science.

The recent dominance of this second wave has unfortunately led some in science studies, and in the broader humanities movement known as post-modernism, to conclude that science is just a form of faith or politics. They have become overly cynical about science.

The prospect of a society that entirely rejects the values of science and expertise is too awful to contemplate. What is needed is a third wave of science studies to counter the scepticism that threatens to swamp us all.

We must choose, or 'elect', to put the values that underpin scientific thinking back in the centre of our world; we must replace post-modernism with 'elective modernism'. To support this, social scientists must work out what is right about science, not just what is wrong — we cannot live by scepticism alone. Natural scientists, too, have a part to play: they must reflect on and recognize the limits of their practice and their understanding. Together, we must choose to live in a society that recognizes the value of experience and expertise.

This third wave will be resisted. Post-modernists have become comfortable in their cocoon of cynicism. And some natural scientists have become too fond of describing their work as godlike. Others are ready to offer simple-minded criticisms of deeply held beliefs. But the third wave is needed to put science back in its proper place.

Logic of science

The first wave of science studies largely coincided with post-war confidence in science, drawing on the success of physicists during the Second World War. During this period, philosophers attempted to define the underlying logic of the sciences, culminating in Karl Popper's notion that the criterion of scientific validity was the ability to state the conditions under which a claim could be proven false. Social scientists such as Robert Merton additionally documented the norms of the scientific community: science must be unbiased, disinterested, a free public good and subject to organized critical review. These norms seemed to be at the heart of the science that defeated fascism. Unsurprisingly, they fitted neatly with democratic ideals, and so, conveniently, democracy could be described as the best political system because it produced the best science.

The second wave was a child of the broader cultural revolution of the 1960s, as everything from sex to ideology loosened up. In science-studies circles, the authority of science went the way of the shirt and tie. It was shown that many kinds of scientific activities did not fit the philosophers' models and ignored the norms, and yet were still successful.

One of my contributions to this second wave was to demonstrate that scientists could not always check a result simply by repeating it, because what counted as a satisfactory repetition was not clear if a controversy ran deep. Take, for example, physicist Joseph Weber's claim to have detected gravitational waves in the 1960s. It was very difficult to disprove this experimentally, because Weber and his allies would not accept that those who could not repeat the results had tried hard enough. A single negative experiment or observation could not prove the theory false, so Popper's falsifiability criterion was itself flawed.

Historians showed that the 1887 Michelson–Morley experiment on the speed of light and Arthur Stanley Eddington's 1919 eclipse observations, both said to provide key empirical support for Einstein's theories, were actually open to a variety of interpretations, even though the textbooks continued to offer myth-like accounts of the experiments' decisiveness [1].

This type of analysis, in which it is shown that science cannot avoid human influence, came to be called social constructivism and remained a little-known speciality until the early 1990s. Then some scientists began a war with the social constructivists, throwing them into the spotlight. Suddenly, sociologists were being blamed for the growing troubles of science, from the collapse of public confidence in genetically modified foods to shrinking funding, symbolized by the 1993 cancellation of the Superconducting Super Collider in the United States.

It was said that sociologists were trying to undermine science. But we were not questioning the results of the great experiments, merely examining how the consensus about their interpretation was established. The conclusions of most of us were moderate: science could not deliver the absolute certainties of religion or morality, and scientists were not priests but rather skilful artisans, reaching towards universal truths but inevitably falling short. Far from being anti-science, we were trying to safeguard science against the danger of claiming more than it could deliver. If science presents itself as revealed truth it will inevitably disappoint, inviting a dangerous reaction; even the most talented craftsmen have their off-days, whereas a god must never fail.

Warriors disarm

By around 2000, the production of books, papers and conferences contributing to the science wars had pretty well stopped, whereas science studies grew in size and influence. Serious sociologists and serious scientists made friends and occasionally published joint works [2]. Society could not simply return to the way things were during wave one, as the science warriors would have preferred; the scepticism born of the second wave could not simply be forgotten.

By definition, the logic of a sceptical argument defeats any amount of evidence: no inference from observation can ever be certain; one cannot be sure that the future will be like the past; and nothing is exactly like anything else, which makes the process of experimental repetition more complicated than it seems. The work of sociologists was simply to show how this played out in the practice of the laboratory.

Nowadays, however, I wonder if the science warriors might have been right to be worried about the (unintended) consequences of what social constructivists were doing. We may have got too much of what we wished for. The founding myth of the individual scientist using evidence to stand against the power of church or state — which has a central role in Western societies — has been replaced with a model in which Machiavellian scientists engage in artful collaboration with the powerful.

The modern social analyst of science has no more to say about the failure of Trofim Lysenko's theories of biological inheritance during Stalinist times than about the failure of the Soviet Union itself: both simply lost a political battle.

One can justify anything with scepticism. Recently, a philosopher acting as an expert witness in a court case in the United States claimed that the scientific method, being so ill-defined, could support creationism. Worse, scientific and technological ideas are nowadays said to be merely a matter of lifestyle, supporting the notion that wise folk may be justified in choosing technical solutions according to their preferences, an idea horribly reminiscent of 'the common sense of the people' favoured in 1930s Germany. Some social scientists defend parents' right to reject vaccines and other 'unnatural' treatments because a lack of danger cannot be absolutely demonstrated. At the beginning of this century, President Thabo Mbeki's policies denied anti-retroviral drugs to HIV-positive pregnant mothers in South Africa. Some saw this as a justified blow against Western imperialism, given that the safety and efficacy of the treatment could not be proven beyond doubt.

A third wave of science studies would mean breaking away from now-routine and secure criticism, and instead taking the risks involved in the synthesis and generalization that build human culture. Mbeki claimed that anti-retroviral drugs had not been proven to reduce mother-to-child transmission of HIV, and pointed out that some scientists say the drugs are poisonous. He was right. The hard problem for social studies of science is to show why, although he was right in logic, he was wrong for all practical purposes. Merely showing that there is some doubt about an issue, or another side to the story (something at which we social scientists are nowadays unbeatable), does not tell you what to do in a case such as this.

Expertise defined

One way to try to crack the hard problem is to analyse and classify the nature of expertise to provide the tools for an initial weighting of opinion. The result of such an exercise is the creation of some new classes of expert (such as people whose expertise is based on experience rather than training and certificates), and the exclusion of some old classes (such as scientists speaking outside their narrow areas of specialization). My colleagues and I have summarized this approach in a kind of 'periodic table' of expertises [3].

Using this approach, it can be shown that Mbeki developed his ideas about the danger of anti-retrovirals by reading the views of a small group of maverick scientists on the Internet, and that he advised his ministers to do the same. But the view gained from the Internet is not always the view developed within the scientific community. Although in principle the logic of the mavericks' position cannot be defeated, a policy-maker should accept the position of those who share in the tacit knowledge of the expert community.

It is not only social scientists who would have to change their approach under elective modernism. If we are to choose the values that underpin scientific thinking as the foundation of society, scientists must think of themselves as moral leaders. But they must teach fallibility, not absolute truth. Whenever a scientist, acting in the name of science, cheats, cynically manipulates, or claims to speak with the voice of capitalism, of a god, or even of a doctrinaire atheist, it diminishes not only science but the whole of our society.

In a society informed by elective modernism, free criticism of ideas would be a good thing; the right way to pursue knowledge about the natural world would be through observation, theorization and experiment, not revelation, tradition, the study of books of obscure origin or the building of alliances of the powerful. Science's findings are to be preferred over religion's revealed truths, and are braver than the logic of scepticism, but they are not certain. They are a better grounding for society precisely, and only, because they are provisional. It is open debate among those with experience that is the ultimate value of the good society.

Science, then, can provide us with a set of values — not findings — for how to run our lives, and that includes our social and political lives. But it can do this only if we accept that assessing scientific findings is a far more difficult task than was once believed, and that those findings do not lead straight to political conclusions. Scientists can guide us only by admitting their weaknesses, and, concomitantly, when we outsiders judge scientists, we must do it not to the standard of truth, but to the much softer standard of expertise.