

Trouble in the trough: how uncertainties were downplayed in the UK’s science advice on Covid-19

The 2020 Covid-19 pandemic has forced science advisory institutions and processes into an unusually prominent role, and placed their decisions under intense public, political and media scrutiny. In the UK, much of the focus has been on whether the government was too late in implementing its lockdown policy, resulting in thousands of unnecessary deaths. Some experts have argued that this was the result of poor data being fed into epidemiological models in the early days of the pandemic, resulting in inaccurate estimates of the virus’s doubling rate. In this article, I argue that a fuller explanation is provided by an analysis of how the multiple uncertainties arising from poor quality data, a predictable characteristic of an emergency situation, were represented in the advice to decision makers. Epidemiological modelling showed a wide range of credible doubling rates, while the science advice based upon modelling presented a much narrower range of doubling rates. I explain this puzzle by showing how some science advisors were both knowledge producers (through epidemiological models) and knowledge users (through the development of advice), roles associated with different perceptions of scientific uncertainty. This conflation of experts’ roles gave rise to contradictions in the representation of uncertainty over the doubling rate. Role conflation presents a challenge to science advice, and highlights the need for a diversity of expertise, a structured process for selecting experts, and greater clarity regarding the methods by which expert consensus is achieved. The analysis indicates an urgent research agenda that can help strengthen the UK science advice system after Covid-19.


The 2020 Covid-19 pandemic has forced science advisory institutions and processes into an unusually prominent role, and placed their decisions under intense public, political and media scrutiny. In the UK, a key controversy has centred on whether the government was too late in implementing its lockdown policy, resulting in thousands of unnecessary deaths, and the role of advice provided by the government’s Scientific Advisory Group for Emergencies (SAGE), a group of experts drawn from government, academia and industry (SAGE, 2020a). In this article, I highlight how uncertainties in the virus doubling rate—an important input into lockdown decision-making—were downplayed in both the advice provided by SAGE and public comments by SAGE participants. Previous research has shown how knowledge producers perceive high uncertainty, whereas knowledge users perceive less uncertainty (MacKenzie, 1990), an issue particularly problematic for government science advice where differentiating between knowledge production and use may be challenging (Jasanoff, 2003). In the case of UK Covid-19 advice, the conflation of knowledge production and knowledge use has accompanied experts downplaying the uncertainty within their own models as they provided advice to decision makers. The consideration of policy options thus became anchored in a consensus that Covid-19 cases were doubling every five days, as opposed to the results of scientific modelling that showed doubling rates as low as three days were also credible. This failure to consider the full range of credible values for doubling rates projected an unwarranted sense of certainty to decision-makers and the public regarding the spread of the virus, and potentially helped to delay the implementation of a lockdown policy.

Knowledge production, knowledge use and uncertainty perception

In June 2020, as the UK started to wind down its lockdown measures in response to Covid-19, some prominent SAGE participants reflected on the timing of the policy’s introduction. Professor Neil Ferguson (Footnote 1) stated that enforcing lockdown a week earlier could have halved the death rate (Stewart and Sample, 2020), and Professor John Edmunds expressed regret that the delay “cost a lot of lives” (UK Lockdown Delay Cost a Lot of Lives-Scientist, 2020), a position supported by fellow SAGE participant and Royal Society President Sir Venki Ramakrishnan (Today, 2020). Both Ferguson and Edmunds pinpointed a paucity of data as the key reason for the delay to lockdown (Johns, 2020; UK Lockdown Delay Cost a Lot of Lives-Scientist, 2020). However, there is more to the lockdown controversy than the important, but unsurprising, challenges of data availability and validation (Marcus and Oransky, 2020; Rutter et al., 2020). The more fundamental issue for science advisory systems is how to deal with the multiple uncertainties that inevitably arise from poor data, a topic of enduring interest in the science and technology studies (STS) and science advice literature (Cassidy, 2019; Funtowicz and Ravetz, 1993; Landström et al., 2015; Pearce et al., 2018; Raman, 2015; SAPEA, 2019; Stilgoe et al., 2006; Stirling, 2010; Wesselink and Hoppe, 2010). Here, I focus on one aspect of science advice’s ‘uncertainty monster’ (van der Sluijs, 2005): the tensions between knowledge production and knowledge use.

One of the most influential studies of scientific uncertainty was provided three decades ago by Donald MacKenzie (1990) in Inventing Accuracy: A Historical Sociology of Nuclear Missile Guidance. MacKenzie outlined three broad positions which formed a ‘certainty trough’ related to the perception of uncertainty in knowledge production and use. First, those directly involved in knowledge production, such as scientific modellers, are keenly aware of the uncertainties in that knowledge. Second, users of said knowledge are likely to perceive less uncertainty, believing, as MacKenzie puts it, ‘what the brochures tell them’. Third, those people alienated from knowledge production and use, or committed to a different technology, will have the highest perception of uncertainty. When represented schematically, this relationship resembles a trough, as shown in Figure 1 (MacKenzie, 1990, p. 372).

This concept has become influential in STS, with the implication that different actors reside at different points along the trough depending on whether they are knowledge producers or users. Sheila Jasanoff proposes that expert reviewers of government science fall within the intermediate zone of reduced scepticism, while also noting the challenge of achieving both familiarity with the subject matter and distance from the scientists involved (2006, p. 34). In a study of climate modellers, Myanna Lahsen (2005) highlights how the boundary between model developer and model user can become blurred, leading to a situation where knowledge producers become ‘seduced’ by their own models. Lahsen uses this insight to argue for a differently shaped ‘trough’, with uncertainty perception actually at its lowest amongst those directly involved in knowledge production (2005, p. 918). However, I argue that in the case of UK Covid-19 advice, the conflation of knowledge production and use does not imply a differently shaped relationship. Instead, key experts occupy different positions on MacKenzie’s trough according to their priorities at any given moment rather than remaining fixed in one place. These conflated roles point both to the challenge of navigating the ‘hybridity’ that forms an inherent part of science advice (Palmer et al., 2019), and the importance of effective review procedures being built into science advisory systems (Jasanoff, 2003). In the next section, I provide a brief overview of Covid-19 advice in the UK, and raise questions regarding the membership of advisory groups.

Diversity and scrutiny in the UK science advice system

SAGE (the Scientific Advisory Group for Emergencies) has overall responsibility for coordinating and peer reviewing the scientific advice that informs decision-making (Civil Contingencies Secretariat, 2012, p. 2), and is part of a science advice system that takes in multiple sub-groups (see Table 1).

Table 1 Descriptions of SAGE and related sub-groups (Government Office for Science, 2020; HM Government, 2020b).

At the time of writing, the UK Government’s information on SAGE does not detail criteria for the selection of experts participating within the system, other than them being from healthcare and academia, and that sub-groups “consider the scientific evidence and feed in their consensus conclusions to SAGE” (HM Government, 2020a). The processes behind both expert selection and consensus formation thus remain somewhat opaque. On membership, the guidance for SAGE states that the experts appointed should be the most appropriate rather than the most accessible, but not how that is to be determined (Civil Contingencies Secretariat, 2012, p. 19). On consensus, no mechanism is discussed for resolving expert disagreement, other than to note that reaching consensus may not always be possible and that a statement should be made on “the extent and sources of uncertainty” (Civil Contingencies Secretariat, 2012, p. 47).

This lack of clarity was exacerbated by an early decision not to disclose the identities of expert advisers, in order to protect them from “lobbying and other forms of unwanted influence” (Vallance, 2020). This caused controversy over a perceived lack of transparency and accountability, leading to pressure from Members of Parliament and the eventual publication of a leaked participant list by The Guardian on April 24th (Carrell et al., 2020; Mason, 2020a). On May 4th, the names of participants within all advisory groups were published by government (Government Office for Science, 2020; Mason, 2020b). Analysis of this list shows a total of 17 experts participating in three or four advisory groups (see Table 2).

Table 2 List of experts participating in three or more groups within the UK science advice system, as of May 4th, 2020.

That some experts appear on more than one committee is perhaps unsurprising, not least because Cabinet Office guidance recommends that at least one member of each sub-group should attend SAGE in order to enable two-way communication (Civil Contingencies Secretariat, 2012, p. 15). However, the guidance also notes that SAGE “should not overly rely on specific experts” (Cabinet Office, 2012, p. 19; original emphasis). Such a principle is important for the consideration of a full range of expert views in the process of consensus-formation, as well as ensuring that the available evidence can be scrutinised by experts beyond those directly involved in its production (Freedman, 2020b; Jasanoff, 2003). That Professor Ian Boyd joined SAGE in April 2020 to provide an “independent challenge function” suggests an awareness within the science advice system of a need for greater diversity and scrutiny (Freedman, 2020a). In the next section, I highlight the importance of these issues through the representation of uncertainty within advice on the Covid-19 doubling rate.

The doubling rate: from multiple uncertainties to a single number

Two leading groups of epidemiological modellers from Imperial College London and London School of Hygiene and Tropical Medicine (LSHTM) were closely involved in the early science advice provided to the UK government for Covid-19. As shown in Table 2, LSHTM’s Edmunds is a member of four groups within the science advice system, including SAGE (Government Office for Science, 2020). While LSHTM’s work has not been as high-profile as that from Ferguson’s Imperial group, it features regularly in the published minutes of SAGE, and their scientists are prominent media contributors. While the modelling reports provided to SAGE during February and March are yet to be published, a version of the LSHTM group’s work has been peer reviewed and published in The Lancet Public Health (LPH) (Davies et al., 2020) (Footnote 2). In that article, they provide an estimate of R0 (the number of people infected by a single case, in a situation where everyone is susceptible) as 2.7, but state that it could also credibly be as high as 3.9. This is notable for two reasons. First, these values for R0 are considerably higher than those in the Imperial College report that was reportedly pivotal in the government’s move to more stringent interventions and, ultimately, lockdown policies (Landler and Castle, 2020). Imperial modelled R0 as between 2.0 and 2.6, with a central assumption of 2.4 (Ferguson et al., 2020). In other words, the highest value modelled by the Imperial group was below the central estimate of LSHTM (Footnote 3). Second, despite the wide range of possible R0 values identified in the LPH article, and the divergence from estimates provided by Imperial, public pronouncements by science advisers focused on a single number for the doubling rate of the virus. Providing a single number for the doubling rate relies not only on having a firm grasp of a figure for R0, but also the time elapsed between infectiousness in a primary case and a secondary case.
This was perhaps an impossible task in the early stages of the virus, where a paucity of data meant societies were “living in a moment of ground-zero empiricism …[with] basic facts yet to be ascertained” (Daston, 2020). Some modelling experts from outside the epidemiological community have countered such perspectives, arguing that government science advisers should have paid more attention to early case data from the UK and other European countries (Annan and Hargreaves, 2020). While such efforts to improve the quality of model simulations are clearly important, there is a broader point that requires attention: that the multiple uncertainties inherent in trying to model the virus’s spread should be reflected in the scientific advice given to decision makers. Based on the evidence published in the scientific literature and by government, this does not seem to have happened in the UK case.
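The sensitivity of the doubling time to these two uncertain inputs can be illustrated with a simple exponential-growth approximation. This is a sketch only, not the LSHTM or Imperial models: it assumes the epidemic growth rate is roughly ln(R0)/T for a fixed generation interval T, and the 6.5-day interval used below is an illustrative assumption rather than a figure from the advice itself.

```python
import math

def doubling_time(r0: float, generation_interval: float) -> float:
    """Doubling time in days under simple exponential growth,
    approximating the epidemic growth rate as ln(R0) / T."""
    growth_rate = math.log(r0) / generation_interval  # per day
    return math.log(2) / growth_rate

# Illustrative only: R0 values from the text (Imperial's central 2.4,
# LSHTM's central 2.7 and credible upper bound 3.9); the 6.5-day
# generation interval is an assumption for this sketch.
T = 6.5
for r0 in (2.4, 2.7, 3.9):
    print(f"R0 = {r0}: cases double every {doubling_time(r0, T):.1f} days")
```

Even with the generation interval held fixed, moving R0 across the credible range shifts the implied doubling time from roughly five days down to just over three, which is precisely the span that the advice collapsed into a single number.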

Fig. 1

The ‘certainty trough’ shows a non-linear relationship between proximity to knowledge and perception of uncertainty. Adapted from MacKenzie (1990, p. 372; 1998, p. 325).

Defending the number

On March 13th, Edmunds appeared in a feisty Channel 4 News interview with Tomas Pueyo, a commentator with no apparent scientific credentials but who had written the viral Medium story “Why You Must Act Now” recommending immediate implementation of lockdown in order to minimise death rates (Pearce, 2020; Pueyo, 2020). At the end of the interview, Pueyo made the case, based on the increasing number of cases in the UK, that cases were more than doubling every three days. Edmunds disputed this, saying (de Pear and Cacace, 2020, 23m17s):

“it’s true that if you look crudely at the numbers, then the cases are doubling every 2.5 days, but that’s because they are doing more contact tracing. The actual underlying rate of doubling is actually about every five days”.

The claim here is that because the UK was conducting a lot of tests, this was increasing the number of cases being detected, and so making it appear that the virus was spreading more quickly than it was. Investigating this claim is beyond the scope of this article, but what is clear is that the influence of testing rates on doubling rate calculations introduces a further, potentially important, source of uncertainty in estimating the rate at which Covid-19 cases were increasing. A responsible way of dealing with this would be to emphasise the range of potential values involved which, as described in the LPH article, could credibly have been closer to three days than five. Three days after the Channel 4 News programme, on March 16th, the minutes of the SAGE meeting echoed Edmunds’ claim, reporting that UK cases “may be doubling every 5–6 days” (SAGE, 2020c, p. 2). Later that day, the Government Chief Scientific Adviser, Sir Patrick Vallance, also stated that he expected the epidemic to double “every five days or so” (BBC News, 2020, 27m47s). A week later, on March 23rd, the consensus on five days was broken, with SAGE meeting minutes reporting a doubling time of 3–4 days (SAGE, 2020b, p. 2).
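The arithmetic behind Edmunds’ distinction between the ‘crude’ and the ‘underlying’ doubling rate can be sketched as follows. The numbers are hypothetical, chosen only to illustrate the mechanism: if observed cases grow at the combined rate of true spread and expanding detection, the apparent doubling time is shorter than the true one.

```python
import math

def apparent_doubling(true_doubling_days: float,
                      detection_doubling_days: float) -> float:
    """If true cases grow at rate r and the detected fraction grows at
    rate s, observed cases grow at r + s, so the apparent doubling
    time is ln(2) / (r + s). (Detection growth cannot, of course,
    continue indefinitely once most cases are found.)"""
    r = math.log(2) / true_doubling_days
    s = math.log(2) / detection_doubling_days
    return math.log(2) / (r + s)

# Hypothetical illustration: an underlying doubling of five days,
# combined with detection effort that also doubles every five days,
# yields an apparent doubling of 2.5 days -- the 'crude' figure
# Edmunds attributed to increased contact tracing.
print(apparent_doubling(5, 5))  # 2.5
```

The point is not that these particular numbers are right, but that disentangling the two rates introduces exactly the kind of further uncertainty which, on the available evidence, was not reflected in the advice.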

The presentation of doubling time as reasonably certain is puzzling. Both Vallance and Edmunds are esteemed scientists who were surely aware of the inherent uncertainties within the models. Returning to MacKenzie’s certainty trough, one would expect Edmunds in particular, as a knowledge producer, to be keenly aware of the uncertainties. Indeed, the LPH article on which Edmunds is a co-author demonstrates such awareness. So, what is going on? My argument is that this is a case of science misreporting that stems from the conflated roles of key experts in the UK science advice system. In the LPH article, Edmunds fulfils his primary role as a knowledge producer through epidemiological models, providing a detailed account of the uncertainties in the LSHTM group’s research. However, his participation in SAGE and other advisory groups also casts him in the role of knowledge user: translating the outputs of his team’s scientific models into advice. This shows not only that it is possible for an expert to occupy different points on the certainty trough at different times (see Fig. 2), but that this is a likely result of the close relationship between knowledge production and knowledge use within science advice.

Fig. 2

A science advisor can occupy different points in the ‘certainty trough’ depending on their role at a given time. Adapted from MacKenzie (1990, p. 372; 1998, p. 325).

The humility of Edmunds and other advisers regarding the overdue implementation of lockdown is welcome and should provide an example to the UK’s political leaders, if public trust is to be maintained. However, the point of this article is not to highlight individuals, but to open up a dialogue about the structure of the UK’s science advice system and how it came to downplay the multiple uncertainties that seemed obvious to many external observers (Cookson and Mancini, 2020). If data problems in the crucial early days of the pandemic were as acute as science advisers now claim, then the consensus that formed around a doubling rate of five days becomes even less defensible. Instead, poor data availability should have prompted uncertainties to be emphasised rather than downplayed. This in turn could have opened up a wider range of policy options, and at least put on the agenda the rapid lockdown policy which some SAGE participants subsequently wished for (Pielke, 2007).

The future of science advice

This article has analysed the representation of uncertainty regarding the virus doubling rate, a key area of controversy in UK science advice, finding contradictions between the outputs from epidemiological models and their public representation. In particular, I have shown how the doubling rate was represented as relatively certain despite the presence of three significant sources of uncertainty: the R0 number, the time elapsed between infectiousness in a primary case and a secondary case, and the influence of increased testing on doubling rate calculations. This article has been written within six months of the events described, so the analysis is necessarily provisional. However, there are three important findings: first, that the science advice system presented the virus doubling rate with unwarranted certainty; second, that role conflation in science advisers between knowledge production and knowledge use helps to explain the downplaying of uncertainties; and, third, that these issues highlight the need for diversity and clarity in the selection of experts and the means by which consensus in advice is achieved. None of this is straightforward to navigate, with trite slogans such as “follow the science” telling us nothing about the tricky business of producing and using scientific knowledge to inform decision-making (Bacevic, 2020). Rather, the unprecedented stress test of Covid-19 provides an important opportunity to learn lessons and strengthen the science advice system in preparation for future emergencies (Obermeister, 2020).

Two US-based projects, CompCoRe (Footnote 4) and EScAPE (Footnote 5), are starting to build an evidence base, comparing science advice and policy responses across multiple countries. However, more detailed national-level research is urgently required in the UK and elsewhere. This article indicates numerous potential lines of enquiry including: the relationship between medical classification and political imaginaries (Liddiard, 2020; Perego et al., 2020), cultural influences on the representation of uncertainty (Douglas, 2016; Stirling and Scoones, 2020), diversity and consensus within science advice (Leach and Scoones, 2013; Smallman, 2020a), the staging of science advice at press conferences (Hilgartner, 2000; Hollin and Pearce, 2015), the role of blogs and social media in challenges to established science (Raman and Pearce, 2020; Turner, 2013), the emerging demand for alternative science advice such as “Independent SAGE” (Smallman, 2020b; Wise, 2020) and how scientific emergencies such as Covid-19 affect public trust in experts (Dommett and Pearce, 2019).

Such a research agenda is wide-ranging and challenging. Yet the analysis presented here reminds us that science advice is literally a matter of life and death. The ultimate responsibility for decisions remains with politicians, but there is a responsibility too on experts to reflect on whether the scientific advice provided on Covid-19 served the public good. The colossal toll of death, damage and despair left behind by the pandemic demands nothing less.


  1. While Ferguson was not a member of SAGE at the time of this statement, he was a key SAGE participant between January and March 2020.

  2. The authors report that while the results within the LPH article are updated from their reports provided to government, the overall conclusions “are the same as those presented to decision makers in real time” (2020, p. e383).

  3. On March 25th, a paper to SAGE estimated a reproduction number of 2.8 under a reasonable worst-case scenario (Planning Assumptions for the UK Reasonable Worst Case Scenario (Draft), 2020).

  4.

  5.


  1. Annan JD, Hargreaves, JC (2020) Model calibration, nowcasting, and operational prediction of the COVID-19 pandemic. MedRxiv.

  2. Bacevic J (2020) There’s no such thing as just ‘following the science’–advice is political. The Guardian.

  3. BBC News (2020) Coronavirus: Boris Johnson sets out ‘drastic action’. YouTube.

  4. Callard F (2020) Very, very mild: Covid-19 symptoms and illness classification. Somatosphere.

  5. Carrell S, Pegg D, Lawrence F, Lewis P, Evans R, Conn D, Davies H, Proctor K (2020) Revealed: Dominic Cummings on secret scientific advisory group for Covid-19. The Guardian.

  6. Cassidy A (2019) Vermin, Victims and Disease: British Debates over Bovine Tuberculosis and Badgers. Cham: Palgrave Macmillan

  7. Civil Contingencies Secretariat (2012) Enhanced SAGE Guidance: a strategic framework for the Scientific Advisory Group for Emergencies (SAGE). Cabinet Office.

  8. Cookson C, Mancini DP (2020) Scientists warn UK’s coronavirus response is inadequate. Financial Times.

  9. Daston L (2020, April 10) Ground-Zero Empiricism. In the Moment.

  10. Davies NG, Kucharski AJ, Eggo RM, Gimma A, Edmunds WJ, Jombart T, O’Reilly K, Endo A, Hellewell J, Nightingale ES, Quilty BJ, Jarvis CI, Russell TW, Klepac P, Bosse NI, Funk S, Abbott S, Medley GF, Gibbs H, Liu Y (2020) Effects of non-pharmaceutical interventions on COVID-19 cases, deaths, and demand for hospital services in the UK: a modelling study. Lancet Public Health 5(7):e375–e385.


  11. de Pear B, Cacace H (2020) Coronavirus special: are we doing enough? In Channel 4 News. Channel 4.

  12. Dommett K, Pearce W (2019) What do we know about public attitudes towards experts? Reviewing survey data in the United Kingdom and European Union. Public Unders Sci 28(6):669–678.


  13. Douglas H (2016) Values in science. The Oxford handbook of philosophy of science. pp. 609–632

  14. Ferguson N, Laydon D, Nedjati Gilani G, Imai N, Ainslie K, Baguelin M, Bhatia S, Boonyasiri A, Cucunuba Perez Z, Cuomo-Dannenburg G (2020) Report 9: impact of non-pharmaceutical interventions (NPIs) to reduce COVID19 mortality and healthcare demand.

  15. Freedman L (2020a) Where the science went wrong. New Statesman.

  16. Freedman L (2020b) Scientific advice at a time of emergency. SAGE and Covid‐19. The Political Quarterly.

  17. Funtowicz SO, Ravetz JR (1993) Science for the post-normal age. Futures 25(7):739–755


  18. Government Office for Science (2020) List of participants of SAGE and related sub-groups. GOV.UK.

  19. Hilgartner S (2000) Science on Stage: Expert Advice as Public Drama. Stanford: Stanford University Press

  20. HM Government (2020a) About us. GOV.UK.

  21. HM Government (2020b) New and Emerging Respiratory Virus Threats Advisory Group. GOV.UK.

  22. Hollin GJS, Pearce W (2015) Tension between scientific certainty and meaning complicates communication of IPCC reports. Nat Clim Change 5(8):753–756.


  23. Jasanoff S (2003) Accounting for expertise. Sci Public Policy 30:157–162.


  24. Jasanoff S (2006) Transparency in public science: purposes, reasons, limits. Law Contem Prob 69(3):21–45


  25. Johns S (2020) Neil Ferguson talks modelling, lockdown and scientific advice with MPs. Imperial News.

  26. Lahsen M (2005) Seductive simulations? Uncertainty distribution around climate models. Soc Stud Sci 35(6):895–922


  27. Landler M, Castle S (2020) Behind the Virus Report That Jarred the U.S. and the U.K. to Action. The New York Times.

  28. Landström C, Hauxwell-Baldwin R, Lorenzoni I, Rogers-Hayden T (2015) The (Mis)understanding of scientific uncertainty? How experts view policy-makers, the media and publics. Sci Culture 24(3):276–298.


  29. Leach M, Scoones I (2013) The social and political lives of zoonotic disease models: narratives, science and policy. Social Science & Medicine 88:10–17.


  30. Liddiard K (2020) Surviving ableism in Covid times. IHuman.

  31. MacKenzie D (1990) Inventing accuracy: a historical sociology of nuclear missile guidance. MIT Press

  32. Marcus A, Oransky I (2020) The science of this pandemic is moving at dangerous speeds. Wired.

  33. Mason R (2020a) No 10 faces calls to lift secrecy around Covid-19 advisory group. The Guardian.

  34. Mason R (2020b) Government names dozens of scientists who sit on Sage group. The Guardian.

  35. Obermeister N (2020) Tapping into science advisers’ learning. Pal Commun 6(1):1–9.


  36. Palmer J, Owens S, Doubleday R (2019) Perfecting the ‘Elevator Pitch’? Expert advice as locally-situated boundary work. Sci Public Policy 46(2):244–253.


  37. Pearce W (2020) What does Covid-19 mean for expertise? The case of Tomas Pueyo. IHuman.

  38. Pearce W, Mahony M, Raman S (2018) Science advice for global challenges: learning from trade-offs in the IPCC. Environ Sci Policy 80:125–131.


  39. Perego E, Callard F, Stras L, Melville-Jóhannesson B, Pope R, Alwan NA (2020) Why the patient-made term ‘Long Covid’ is needed. Wellcome Open Research 5:224.


  40. Pielke Jr RA (2007) The honest broker: making sense of science in policy and politics. Cambridge University Press

  41. Planning assumptions for the UK reasonable worst case scenario (Draft). (2020)

  42. Pueyo T (2020) Coronavirus: why you must act now. Medium.

  43. Raman S (2015) Science, uncertainty and the normative question of epistemic governance in policymaking. In: Cloatre E & Pickersgill M (eds) Knowledge, technology and law: interrogating the nexus. Routledge, pp. 17–32

  44. Raman S, Pearce W (2020) Learning the lessons of Climategate: a cosmopolitan moment in the public life of climate science. Wiley Interdiscip Rev

  45. Rutter H, Wolpert M, Greenhalgh T (2020) Managing uncertainty in the covid-19 era. BMJ, 370(8258), m3349.

  46. SAGE (2020a) About us. GOV.UK.

  47. SAGE (2020b) Eighteenth SAGE meeting on Wuhan Coronavirus (Covid-19). HM Government.

  48. SAGE (2020c) Sixteenth SAGE meeting on Wuhan Coronavirus (Covid-19). HM Government.

  49. SAPEA (2019) Making sense of science for policy under conditions of complexity and uncertainty. Science Advice for Policy by European Academies.

  50. Smallman M (2020a) ‘Nothing to do with the science’: how an elite sociotechnical imaginary cements policy resistance to public perspectives on science and technology through the machinery of government. Soc Stud Sci 50(4):589–608.


  51. Smallman M (2020b) ‘Independent Sage’ group is an oxymoron. Research Professional News.

  52. Stewart H, Sample I (2020) Coronavirus: enforcing UK lockdown one week earlier ‘could have saved 20,000 lives’. The Guardian.

  53. Stilgoe J, Irwin A, Jones K (2006) The received wisdom: opening up expert advice. Demos

  54. Stirling A (2010) Keep it complex. Nature 468(7327):1029–1031.


  55. Stirling A, Scoones I (2020) COVID-19 and the futility of control 36(4):25–27. July 29


  56. Today (2020, June 8) BBC Radio 4.

  57. Turner S (2013) The blogosphere and its enemies: the case of oophorectomy. Sociol Rev 61:160–179.


  58. UK lockdown delay cost a lot of lives—Scientist. (2020, June 7). BBC News.

  59. Van Der Sluijs J (2005) Uncertainty as a monster in the science–policy interface: Four coping strategies. Water Science and Technology 52(6):87–92


  60. Vallance P (2020) Composition of the Scientific Advisory Group for Emergencies (SAGE) and subgroups informing the Government response to COVID-19.

  61. Wesselink A, Hoppe R (2010) If post-normal science is the solution, what is the problem?: the politics of activist environmental science. Sci Technol Human Val 36(3):389–412.


  62. Wise J (2020) Covid-19: push to reopen schools risks new wave of infections, says Independent SAGE. BMJ 369.



Thanks to James Annan, Angela Cassidy, Andrew Chitty, Murray Goulden, Adam Hedgecoe, Sarah Salway, Tom Stafford and James Wilsdon for useful comments and help during the writing of this paper. I also thank the members of the Comparative Covid Response (CompCoRe) project for useful conceptual ideas and inspiration.

Author information



Corresponding author

Correspondence to Warren Pearce.

Ethics declarations

Competing interests

The author declares no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit



Cite this article

Pearce, W. Trouble in the trough: how uncertainties were downplayed in the UK’s science advice on Covid-19. Humanit Soc Sci Commun 7, 122 (2020).
