The tendency of humans to punish perceived free-loaders, even at a cost to themselves, is an evolutionary puzzle: punishers perish, and those who benefit the most are those who have never punished at all.
Humans are champions of cooperation. Reciprocity — the idea that, if I help you this time, you'll help me next time1 — is a secret of our success. But how do I avoid being the sucker when someone I've helped refuses to pay me back? Social-dilemma games, which in the laboratory mimic human social interactions, have shown that the opportunity to punish is an effective curb on 'defectors', even when punishment not only hurts the punished, but also the punisher2,3,4,5. We see that kind of behaviour outside the laboratory too: bystanders suffer personal injury intervening in altercations; environmental activists risk their lives fighting destructive acts; and so on.
On page 348 of this issue, Dreber et al.6 quantify who profits from this 'costly punishment'. Their findings are intriguing. Although costly punishment induces cooperation, its cost destroys all gains from increased cooperation, not just for the punisher, but for the whole group. At the end of the game, those who punished were the ultimate losers; the absolute winners had never punished. Explaining why costly punishment is used at all, when not even the group seems to benefit, becomes even more of a challenge.
The authors used a variant of the classic two-person 'prisoner's dilemma' game, in which players have a binary choice of cooperation or defection. If I cooperate with you, I lose one unit of money so that you gain two; if I defect, I gain one unit and you lose one. That way, if we both cooperate, each of us has a net gain of one unit. If we both defect, neither of us gains anything; so cooperation pays. But if you cooperate and I defect, I gain three units and you lose two. That makes defection tempting for most people, and cooperation generally breaks down at some point in a prisoner's dilemma game. A strategy that emerges is 'tit-for-tat'7: players begin cooperatively, and then copy their partner's last move, cooperating with cooperators and defecting with defectors — thus avoiding being the sucker.
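The payoff rules above can be sketched in a few lines of code. This is an illustrative sketch of the arithmetic as described in the text, not code from the study; the function name and move labels are our own.

```python
def payoff(my_move, other_move):
    """My net payoff for one round of the two-person game.

    Cooperating ('C') costs me one unit and gives the other
    player two; defecting ('D') gains me one unit and costs
    the other player one.
    """
    gain = 0
    # Effect of my own move on my payoff.
    if my_move == "C":
        gain -= 1   # cooperation costs me one unit
    else:
        gain += 1   # defection gains me one unit
    # Effect of the other player's move on my payoff.
    if other_move == "C":
        gain += 2   # their cooperation gives me two units
    else:
        gain -= 1   # their defection costs me one unit
    return gain

# The four outcomes described in the text:
print(payoff("C", "C"))  # mutual cooperation: +1 each
print(payoff("D", "D"))  # mutual defection: 0 each
print(payoff("D", "C"))  # I defect on a cooperator: +3
print(payoff("C", "D"))  # I cooperate with a defector: -2
```

The asymmetry in the last two lines is what makes defection tempting: a lone defector gains three units while the sucker loses two.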
In Dreber and colleagues' extension of this game6, participants could choose from three options in each round: cooperate, defect or punish. Punishment here means losing one money unit so that the other player loses four. There are thus two ways of expressing disapproval: the moderate way of defection, and the harsh way of costly punishment. Subjects made use of the harsher option in 7% of all choices. A single punishment act rarely re-established cooperation; indeed, it often led to mutual back-stabbing. But overall, cooperation increased from 21% in the prisoner's dilemma game, used by the authors as a control, to 52%, even though the tit-for-tat strategy was available in both cases.
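The extended game adds only one move to the payoff rules: punishing ('P') costs the punisher one unit and the punished player four. A hedged sketch of the round payoffs, again with our own illustrative names rather than anything from the paper:

```python
def payoff_extended(my_move, other_move):
    """My net payoff in the three-option game: 'C', 'D' or 'P'.

    Cooperating costs me 1 and gives the other player 2;
    defecting gains me 1 and costs the other player 1;
    punishing costs me 1 and costs the other player 4.
    """
    cost_of_my_move = {"C": -1, "D": +1, "P": -1}[my_move]
    effect_of_their_move = {"C": +2, "D": -1, "P": -4}[other_move]
    return cost_of_my_move + effect_of_their_move

print(payoff_extended("P", "C"))  # I punish a cooperator: -1 + 2 = +1
print(payoff_extended("C", "P"))  # I cooperate and am punished: -1 - 4 = -5
print(payoff_extended("P", "P"))  # mutual punishment: -1 - 4 = -5
```

The last line shows why punishment bouts are so costly: once both players punish, each round drains five units from each of them, which is consistent with the finding that heavy punishers ended up with the lowest profits.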
A success story, one might think. Not for the punishers: the more a player had used the punishment option, the lower that individual's final profit was. The final, aggregated pay-off of all participants (quantifying the benefit to society as a whole) was the same in the games with and without the punishment option.
If both punishers and the punished lose through punishment, someone must have profited. Indeed: cooperators who did not punish at all gained even more in the games where punishment was possible than the best-performing participants in the control. Thus, it would seem, winners don't punish; and punishers perish (Fig. 1).
Dreber et al. conclude that costly punishment is a 'maladaptive' behaviour in social-dilemma situations — one that is fundamentally counterproductive, because it pays off neither for the punisher nor for the group. Thus, although it frequently induces cooperation, it cannot have evolved to induce cooperation. Not even the cooperation-enhancing effect appears consistently in social-dilemma games. In some societies, not only free-loaders but also high contributors are punished, which dampens and sometimes even removes the cooperation-enhancing effect of punishment8.
Dreber et al. argue that punishment has evolved for another purpose, such as coercing individuals into submission, or establishing dominance hierarchies. But the fact remains that, given the choice, players of social-dilemma games have been shown to prefer an environment where punishment is possible. That preference pays off when participants, punishers as well as non-punishers, enter this environment after the initial period of high punishment is over and cooperation dominates4.
If costly punishment is detrimental to personal evolutionary fitness in a certain situation, we should have evolved the ability to suppress it in that context. Evidence that we have comes from ultimatum games, in which one player decides how to split a sum of money, and the second player can either accept the offer (in which case both players receive the proposed share) or reject it (in which case neither player wins anything). Neurological tests have shown that humans have a stronger activation of brain areas related to both emotion and cognition when unfair offers in an ultimatum game come from other humans than when the same offers, and monetary consequences, come from a computer9. Similarly, in experiments where subjects could choose between costly punishment of the free-loaders and helping cooperative players to gain, costly punishment was reduced to a third; the few remaining punishing acts were concentrated on the worst defectors10. In our view, this ability to reduce the use of costly punishment makes it unlikely that it is just an unavoidable by-product of something else, such as an inability to control anger.
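The ultimatum game's payoff structure makes clear why rejection is itself a form of costly punishment. A minimal sketch, assuming the standard one-shot rules described above (function and parameter names are our own):

```python
def ultimatum_outcome(total, offer, responder_accepts):
    """Pay-offs (proposer, responder) in a one-shot ultimatum game.

    The proposer offers `offer` units out of `total`. If the
    responder accepts, the split stands; if the responder
    rejects, neither player receives anything.
    """
    if responder_accepts:
        return total - offer, offer
    return 0, 0

# Accepting an unfair 2-of-10 offer still pays the responder:
print(ultimatum_outcome(10, 2, True))   # proposer 8, responder 2
# Rejecting it is costly punishment: the responder gives up 2
# units in order to cost the proposer 8.
print(ultimatum_outcome(10, 2, False))  # both get nothing
```

Rejection of a positive offer is never in the responder's immediate monetary interest, which is precisely why rejections of unfair offers count as costly punishment in this literature.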
To provide punishers with an overall net benefit, costly punishment must be greatly rewarded in another context. Perhaps punishers gain a special kind of reputation that is advantageous elsewhere. But so far, there has been no conclusive evidence for such a delayed pay-off, and so costly punishment remains one of the thorniest puzzles in human social dilemmas. Dreber and colleagues' results make it plain that we are still a long way from understanding the dark side of human sociality.
Scientific Reports (2012)