Most scientists take pride in their ability to think logically. They try to assess experimental evidence and theoretical arguments on objective, rational grounds, and try to judge potential risks realistically through statistics and calculation rather than intuition. We're thinking people, and demand that our thinking respects the principles of reason; this is what sets science apart.

There is, of course, more to science than reason alone. “Basic research”, as the organic chemist Homer Adkins once said, “is like shooting an arrow into the air and, where it lands, painting a target.” Accidents and chance and inexplicable inspiration matter too. Even so, it is still reason and logic to which we cling, as we have since the time of the Enlightenment. Turn the spotlight of reason onto the reasoning process itself, however, and paradoxes emerge. Pure reason just isn't as reasonable as it seems.

Countless experiments establish that people — even scientists — aren't rational in practice. Humans are generally overconfident in their judgements. We also have a perverse tendency to notice information that confirms our current beliefs, while conveniently overlooking contradictory evidence. This is human reasoning, warts and all. Knowing the principles of sound logic gives us the means to check ourselves and illuminates an ideal behaviour to which we can at least aspire. But it's a mistake to think that classical logic is always superior to the more intuitive and error-prone kind of thinking given to us by evolution. It's just different, not better.

After all, reasoning by logic — even imperfect rough logic — is slow and laborious. In contrast, intuitive judgements give decent results effortlessly and quickly. They avoid costly indecision and sometimes yield better decisions. For example, a 2006 study by psychologists found that people facing simple decisions did best with conscious calculation, but that decisions made on 'gut feelings' were superior for complex decisions, which involve the conflicting pulls and pushes of many different attributes (A. Dijksterhuis et al. Science 311, 1005–1007; 2006).

Scientists should keep this in mind when pressed to find 'quantitative' evidence to support, for example, hiring one person out of 20 or 50 or 100 excellent post-doc candidates. Interviews and immersion in the candidates' prior work are certainly required to prepare the mind, but intuition and gut feelings then have a perfectly legitimate place in the final decision — it can't be reduced to a calculation.

Take reason to its extreme, in fact, and it doesn't even make sense. Suppose a scientist who is unfailingly rational needs to design an experiment. He or she begins by trying to work out the optimal design, but deliberation itself brings a cost in time; even a crude experiment run next week would be better than the perfect experiment run after 20 years. How long should he or she work to improve the design before running the experiment as is? A fully rational person has to solve this preliminary problem before even getting to work.

But this is only the beginning of infinite trouble. The preliminary problem is itself a difficult problem, and a rational individual shouldn't want to waste resources thinking about it for too long either. Hence, the rational person first has to decide what is the optimal time to spend on solving the preliminary problem. This is yet another problem. As the economist John Conlisk pointed out, a rational thinker confronts an infinite sequence of further nested problems and can never actually get to work (J. Econ. Lit. 34, 669–700; 1996).
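To make the regress concrete, here is a minimal Python sketch (the unit cost per level and the cut-off are assumptions chosen for illustration, not anything taken from Conlisk's paper): deciding how long to deliberate at one level requires first solving the same decision one level up, so the recursion only ends if the agent imposes a non-rational, 'good enough' cut-off.

```python
# Toy illustration of the regress of meta-decisions. The unit cost per level
# and the cut-off depth are assumptions of this sketch, chosen only to show
# the structure of the argument.

def deliberation_time(level, max_depth):
    """Time a 'fully rational' agent spends before starting the experiment."""
    if level >= max_depth:
        return 0.0                                  # the non-rational escape hatch
    meta = deliberation_time(level + 1, max_depth)  # first solve the meta-problem
    return 1.0 + meta                               # then deliberate at this level

for depth in (1, 5, 50):
    print(depth, deliberation_time(0, depth))
# Without a cut-off, the recursion never bottoms out and the agent never gets to work.
```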

Hence, rationality taken to its logical conclusion ends up destroying itself — it's simply an inconsistent idea. At some point, it is better to judge roughly that the experiment is 'good enough' and do it. Make contact with reality. Then adapt. Reasoning may well be our most valuable human possession, but only when it's a little sloppy. Perfect reasoning is a mirage — it evaporates under close scrutiny.

Indeed, all of our behaviour must be in some sense irrational, and we're often better off for it. Take overconfidence. This is among the best documented of general human foibles. On average, people think they are better-than-average drivers, which obviously cannot be true. People systematically underestimate how long it will take them to do almost any task. Ask people to identify words they can spell correctly with 100% certainty and they get about 20% wrong.

This is all puzzling for evolutionary theory, as you would think overconfidence should be costly. Among other things, it leads to the underestimation of risk — “I can probably outrun that bear” — and so should have been weeded out over time. But the evolutionary process is often counter-intuitive, and research suggests that irrational overconfidence may actually be adaptive.

It's not crazy to think that overconfidence could help people by making them more ambitious, more persistent or more convincing when bluffing. Dominic Johnson and James Fowler examined the logic of this more closely in a simple model in which individuals meet in pairs and compete over resources (Nature 477, 317–320; 2011). Each individual has an inherent strength — variable in the population — and each pairwise encounter has one of two outcomes: one individual takes the resource without a fight (the taker gains and the other neither gains nor loses); or the two fight, the stronger winning (gaining the resource minus some effort) and the weaker losing (and forfeiting the effort). If individuals always judged strengths accurately, there would never be any fights, as weaker individuals would always defer.

But Johnson and Fowler included the possibility that individuals make errors about the strengths of others, and may also be systematically overconfident or underconfident about their own strength. Hence, fights may take place. In an evolutionary model in which types that gain more resources tend to spread, Johnson and Fowler found that such competition quite commonly leads to endemic misperceptions in the population. In particular, if resources are sufficiently valuable, the entire population ends up being overconfident in its own strength — such overconfidence is an asset in leading individuals into battle, which frequently pays off despite the error in judgement.
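As a rough illustration of how a model of this kind works, here is a minimal Python sketch of a pairwise-contest simulation in the same spirit. The payoff structure follows the description above (an unopposed claim costs nothing, a fight costs both sides an effort C and the stronger fighter takes the resource R), but the parameter values, the Gaussian perception noise, the possibility that neither side claims, and the simple reproduction-with-mutation scheme are assumptions of this sketch, not details taken from Johnson and Fowler's paper.

```python
import random

# Toy pairwise-contest simulation in the spirit of Johnson & Fowler
# (Nature 477, 317-320; 2011). All numerical values and the breeding
# scheme below are illustrative assumptions, not taken from the paper.

R = 2.0          # value of the contested resource
C = 1.0          # effort lost by each side in a fight
NOISE = 0.5      # std dev of the error in judging an opponent's strength
POP = 200        # population size (kept even so everyone can be paired)
GENERATIONS = 300
MUTATION = 0.05  # std dev of mutation applied to the confidence bias

def contest(s_a, bias_a, s_b, bias_b):
    """One encounter; returns (payoff_a, payoff_b)."""
    # Each side claims the resource if its (possibly biased) view of its own
    # strength exceeds its noisy estimate of the opponent's strength.
    a_claims = s_a + bias_a > s_b + random.gauss(0, NOISE)
    b_claims = s_b + bias_b > s_a + random.gauss(0, NOISE)
    if a_claims and b_claims:      # both claim: they fight, the stronger wins
        return (R - C, -C) if s_a > s_b else (-C, R - C)
    if a_claims:                   # unopposed claim: taker gains, other loses nothing
        return (R, 0.0)
    if b_claims:
        return (0.0, R)
    return (0.0, 0.0)              # neither claims

def mean_bias_after_selection():
    biases = [random.gauss(0, 0.1) for _ in range(POP)]   # bias > 0 means overconfident
    for _ in range(GENERATIONS):
        strengths = [random.random() for _ in range(POP)]
        payoffs = [0.0] * POP
        order = list(range(POP))
        random.shuffle(order)
        for i, j in zip(order[::2], order[1::2]):
            p_i, p_j = contest(strengths[i], biases[i], strengths[j], biases[j])
            payoffs[i] += p_i
            payoffs[j] += p_j
        # Individuals that gained more resources leave more offspring,
        # each inheriting its parent's bias plus a small mutation.
        floor = min(payoffs)
        weights = [p - floor + 1e-6 for p in payoffs]
        parents = random.choices(range(POP), weights=weights, k=POP)
        biases = [biases[p] + random.gauss(0, MUTATION) for p in parents]
    return sum(biases) / POP

if __name__ == "__main__":
    print("mean confidence bias after selection:", round(mean_bias_after_selection(), 3))
```

The quantity to watch is the mean confidence bias after selection: the paper's result suggests it should drift positive when the resource is valuable relative to the cost of fighting.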

So being rational is not at all the same as being fit. It's not the same as being wise, and certainly not the same as being a good scientist. If the young student Einstein had been too rational, he'd have concluded he probably had very little chance of overturning the foundations of physics as established by the likes of Newton and Galileo.