Behavioural interventions can improve choices across many domains, but we must remember that they are not universally effective.
Nudges, it seems, are everywhere. If you have ever received a reminder letter informing you of the large proportion of your peers who have, for example, paid their taxes or voted, you have been nudged. If you have seen calorie counts on restaurant menus and perhaps reconsidered that extra-large option, you have been nudged.
In essence, a nudge is an intervention that steers people towards better choices without changing the options available to them. Rather than mandating or forbidding choices, nudges encourage people to choose better for themselves.
Since the publication of Richard Thaler and Cass Sunstein’s book Nudge in 2008 (ref. 1), these interventions have entered the mainstream and shaped policies around the world. In one recent example, the government of the United Kingdom has announced that from the spring of 2020, organ donation will operate on an opt-out basis, in which consent is assumed by default and individuals who do not wish to donate must actively opt out of the system. This change to the default option has been found to increase rates of organ donation in other countries2.
Given their successes and the extent to which policy-makers have embraced nudges, it is easy to forget that this approach, like any other tool for behaviour change, has its limits. For example, researchers have tried unsuccessfully to increase university student achievement through nudge messages3, and studies have found that menu labelling may not change restaurant food choices after all4.
Part of the challenge in identifying the limits of nudging is the lack of information on unsuccessful interventions. A recent systematic review of nudge studies found that just 18% of interventions reported in a set of 116 studies were unsuccessful5. The problem is that it’s uncertain what this finding means for the state of our knowledge. Are nudges truly unlikely to fail? Or do we just not hear about it when they do? Given the known problem of publication bias, it’s possible that many more unsuccessful interventions exist but have never been reported.
As we have argued before6, this blind spot in results is a serious problem, and closing it will require a shift in publication culture. When it comes to behavioural interventions, knowledge of what doesn’t work can be just as important as knowledge of what does.
In new research published in Nature Human Behaviour, Kristal and Whillans7 provide this type of insight by reporting the results of five interventions designed to reduce rates of commuting by single-occupant vehicle among employees of a large European airport. The interventions included sending letters about carpool registration and peer testimonials, a free bus trial, information on lost opportunities for savings, and the provision of personalized travel plans that provided customized information about carpooling and public transit options. None of these attempts meaningfully changed commuting behaviour.
Although the interventions did not have the intended effects, there are still important lessons to be learned from this study. First, the findings suggest that different, and perhaps more intensive, interventions are needed to change commuting choices: given the centrality of commuting to many workers’ lives and the extent to which days are structured around the commute and its related stop-offs, light-touch messages may simply be insufficient. Second, the study highlights that even null results can have policy implications. When a research question is important and a study is well-designed, even a disappointing finding can be meaningful.
Finally, the study points to the limits of nudges and reminds us that they are not a panacea. While they can be a low-cost and simple way to shift many behaviours, it’s important to remember that not all behaviours are equally easy to shift. Interventions must match the nature of the problem and the realities of existing preferences, incentives and psychology. Through careful design and transparent reporting of interventions—both successful and unsuccessful—behavioural science can continue to advance, providing opportunities to improve outcomes across domains.
1. Thaler, R. H. & Sunstein, C. R. Nudge: Improving Decisions about Health, Wealth, and Happiness (Yale Univ. Press, 2008).
2. Shepherd, L., O’Carroll, R. E. & Ferguson, E. BMC Med. 12, 131 (2014).
3. Oreopoulos, P. & Petronijevic, U. The remarkable unresponsiveness of college students to nudging and what we can learn from it. NBER Working Paper No. 26059 https://doi.org/10.3386/w26059 (2019).
4. Cantor, J. et al. Health Aff. 34, 1893–1900 (2015).
5. Szaszi, B. et al. J. Behav. Decis. Making 31, 355–366 (2018).
6. Nat. Hum. Behav. 3, 197 (2019).
7. Kristal, A. S. & Whillans, A. V. Nat. Hum. Behav. https://doi.org/10.1038/s41562-019-0795-z (2019).
Nudges that don’t nudge. Nat Hum Behav 4, 121 (2020). https://doi.org/10.1038/s41562-020-0832-y