Everyone makes claims about what works. Politicians claim that stop-and-search policing will reduce violent crime; friends might assert that vaccines cause autism; advertisers declare that natural food is healthy. A group of scientists describes giving deworming pills to all schoolchildren in some areas as one of the most potent anti-poverty interventions of our time. Another group counters that it does not improve children's health or performance at school.
Unfortunately, people often fail to think critically about the trustworthiness of claims, including policymakers who weigh up those made by scientists. Schools do not do enough to prepare young people to think critically [1]. As a result, many people struggle to assess evidence and might make poor choices.
To address this deficit, we present here a set of principles for assessing the trustworthiness of claims about what works, and for making informed choices (see 'Key Concepts for Informed Choices'). We hope that scientists and professionals in all fields will evaluate, use and comment on it. These resources were adapted, drawing on the expertise of two dozen researchers, from a framework developed for health care [2] (see 'Randomized trial').
Ideally, these concepts should be embedded in education for citizens of all ages. This should be done using learning resources and teaching strategies that have been evaluated and shown to be effective.
People are flooded with information. Simply giving them more is unlikely to be helpful, unless its value is understood. A 2016 survey in the United Kingdom showed that only about one-third of the public trusts evidence from medical research; about two-thirds trust the experiences of friends and family [3].
Not all evidence is created equal. Yet people often don’t appreciate which claims are more trustworthy than others; what sort of comparisons are needed to evaluate different proposals fairly; or what other information needs to be considered to inform good choices.
For example, many people don't grasp that two things can be associated without one necessarily causing the other. The media sometimes perpetuates this problem by using language suggesting that cause and effect has been established when it has not [4] — for instance, statements such as 'coffee can kill you' or 'drinking one glass of beer a day can make you live longer'. Worse, exaggerated causal claims often pepper press releases from universities and journals [5].
Studies that make fair comparisons are crucial, yet people often don’t know how to appraise the validity of research. Systematic reviews that synthesize well-designed studies that are relevant to clearly defined questions are more trustworthy than haphazard observations. This is because they are less susceptible to biases (systematic distortions) and the play of chance (random errors). Yet results from single studies are often reported in isolation, as facts. Hence the familiar flip-flopping headlines such as ‘chocolate is good for you’, followed the next week by ‘chocolate is bad for you’.
To make good choices, other types of information are needed too — for example, about costs and feasibility. Judgements must also be made about the relevance of information from research (how applicable or transferable it is), and about the balance between the likely desirable and undesirable effects of a drug, therapy or regulation.
When it comes to carbon taxes, for example, policymakers need to consider evidence about the environmental and economic effects of such taxes, judge how comparable their context is with that of the studies and weigh how onerous the administrative difficulties are. They also need to model how tax burdens will be distributed across socio-economic groups and think about whether the taxes will be accepted in their jurisdictions.
Individuals and organizations across many fields are working to enable people to make informed decisions. These efforts include synthesizing the best available evidence in systematic reviews; making that information more accessible, such as through plain-language summaries or open access; and teaching people how to use such resources. Examples of such review organizations are Cochrane (previously called the Cochrane Collaboration), which focuses on health care; the Campbell Collaboration, which looks at the effects of social policies; the Collaboration for Environmental Evidence; and the International Society for Evidence-Based Health Care. Others include the Center for Evidence-Based Management, the Africa Centre for Evidence, the International Initiative for Impact Evaluation (known as 3ie) and Britain’s What Works Centres.
Unfortunately, academics tend to work in silos and can miss opportunities to learn from others. The expertise of the authors of this article spans 14 fields: agriculture, economics, education, environmental management, international development, health care, informal learning, management, nutrition, planetary health, policing, speech and language therapy, social welfare, and veterinary medicine.
We have identified many concepts that apply across these fields (see ‘Key Concepts for Informed Choices’ and ‘Key concepts in action’). Some further concepts are more relevant in some fields than in others. For example, it is often important to consider potential placebo effects when assessing claims about medical treatments and nutrition; these are rarely relevant to interventions in the environment.
Our collaboration has already prompted many of us to develop frameworks for specific fields and to suggest improvements to the original Informed Health Choices framework [2]. There is power in identifying an issue that resonates across different domains; it provides momentum to align efforts.
The Key Concepts for Informed Choices is not a checklist. It is a starting point. Although we have organized the ideas into three groups (claims, comparisons and choices), they can be used to develop learning resources that include any combination of these, presented in any order. We hope that the concepts will prove useful to people who help others to think critically about what evidence to trust and what to do, including those who teach critical thinking and those responsible for communicating research findings.
Evidence-informed practice is now taught to professionals in many different fields, and these efforts must grow. It is also crucial that schoolchildren learn these key concepts, rather than delaying acquisition of these skills until adulthood. Young people who have been explicitly taught critical thinking make better judgements than those who have not [6]. Educating people about such concepts at a young age sets an important foundation for future learning.
An important part of the work of encouraging critical thinking is learning and sharing strategies that promote healthy scepticism while avoiding unintended adverse consequences. Such consequences include inducing nihilism (extreme scepticism); lending weight to disingenuous claims that uncertainty is a defensible argument against action (on climate change, for example); and encouraging false beliefs, such as that all research is untrustworthy because of competing interests among those who promote particular interventions.
Competing interests take various forms in different fields, but the challenges and remedies are similar: recognition of potential conflicts, transparency and independent evaluations. Achieving these depends on improved public understanding of the need for independent evaluation, and public demand for investment in it, as well as unbiased communication of findings.
Further development and specialization of the Key Concepts for Informed Choices is needed, and we welcome suggestions. For example, more consideration needs to be given to how these concepts can be applied to actions to address system-wide changes, taking into account complex, dynamic interactions and feedback loops, such as in climate-change mitigation or adaptation strategies.
We have therefore created a website (www.thatsaclaim.org) on which our key concepts can be adapted to different fields and target users, translated into other languages and linked to learning resources.