Daniel Gilbert maintains that people generally make bad decisions on risk issues, and suggests that communication strategies and education programmes would help (Nature 474, 275–277; 2011). This version of the deficit model pervades policy-making and branches of the social sciences.

In this model, conflicts between expert and public perceptions of risk are put down to the difficulties that laypeople have in reasoning in the face of uncertainties rather than to deficits in knowledge per se. There are three problems with this stance.

First, it relies on a selective reading of the literature. Evolutionary psychologists have a more positive view of people's capacity for statistical reasoning (see, for example, L. Cosmides and J. Tooby Cognition 58, 1–73; 1996), arguing that many putative reasoning 'errors' may be nothing of the sort.

Second, it rests on some bold extrapolations. For example, it is not clear how the biases Gilbert identifies in the classic 'trolley' experiment play out in the real world. Many such reasoning 'errors' are also mutually contradictory: people have been accused of both excessive reliance on and neglect of generic 'base-rate' information when judging the probability of an event. This casts doubt on the idea that they reflect universal or hard-wired failings in cognition.
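To make the base-rate point concrete, a purely illustrative Bayesian sketch may help (the figures are hypothetical and are not drawn from Gilbert's article or any particular study). Suppose a condition has a base rate of 1%, a test detects it 90% of the time, and it yields a false positive 9% of the time. The probability that someone who tests positive actually has the condition is

P(D \mid +) = \frac{P(+ \mid D)\,P(D)}{P(+ \mid D)\,P(D) + P(+ \mid \neg D)\,P(\neg D)} = \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.09 \times 0.99} \approx 0.09,

roughly 9%, not 90%. Ignoring the base rate P(D) is the 'neglect' charge; leaning on it at the expense of case-specific evidence is the 'excessive reliance' charge, which is why the two accusations pull in opposite directions.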

The third problem is the presentation of rational choice theory as the only way of deciding how to handle risk issues. Alternative decision logics, from the precautionary principle to deontology, are reduced to mere reasoning fallacies. Yet to be concerned with fundamental rights, moral obligations (deontology) and worst-case scenarios (precaution) is not pathological. To treat it as such, as Gilbert and others do (see, for example, C. R. Sunstein Behav. Brain Sci. 28, 531–542; 2005), seems myopic.

Given that many modern risk crises stem from science's inability to foresee the dark side of technological progress, a little humility from the rationality project wouldn't go amiss.