Abstract
Third-party punishment (TPP)1,2,3,4,5,6,7, in which unaffected observers punish selfishness, promotes cooperation by deterring defection. But why should individuals choose to bear the costs of punishing? We present a game theoretic model of TPP as a costly signal8,9,10 of trustworthiness. Our model is based on individual differences in the costs and/or benefits of being trustworthy. We argue that individuals for whom trustworthiness is payoff-maximizing will find TPP to be less net costly (for example, because mechanisms11 that incentivize some individuals to be trustworthy also create benefits for deterring selfishness via TPP). We show that because of this relationship, it can be advantageous for individuals to punish selfishness in order to signal that they are not selfish themselves. We then empirically validate our model using economic game experiments. We show that TPP is indeed a signal of trustworthiness: third-party punishers are trusted more, and actually behave in a more trustworthy way, than non-punishers. Furthermore, as predicted by our model, introducing a more informative signal—the opportunity to help directly—attenuates these signalling effects. When potential punishers have the chance to help, they are less likely to punish, and punishment is perceived as, and actually is, a weaker signal of trustworthiness. Costly helping, in contrast, is a strong and highly used signal even when TPP is also possible. Together, our model and experiments provide a formal reputational account of TPP, and demonstrate how the costs of punishing may be recouped by the long-run benefits of signalling one’s trustworthiness.
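As a minimal illustration of the signalling logic described above (a hedged sketch, not the authors' published model), the following Python snippet checks the separating-equilibrium condition under which only trustworthy types find punishment worth its cost. The function name and all parameter values are illustrative assumptions.

```python
# A minimal sketch (not the published model) of the separating-equilibrium
# logic behind TPP-as-costly-signalling: punishment is net cheaper for
# trustworthy types, so only they find signalling worthwhile.
# All parameter values below are illustrative assumptions.

def separating_equilibrium(c_trustworthy, c_exploitative, b_partner):
    """Return True if trustworthy types punish but exploitative types do not.

    c_trustworthy  -- net cost of punishing for a trustworthy Signaller
    c_exploitative -- net cost of punishing for an exploitative Signaller
    b_partner      -- benefit of being accepted as an interaction partner
                      (Choosers accept punishers, reject non-punishers)
    """
    trustworthy_punishes = b_partner > c_trustworthy
    exploitative_mimics = b_partner > c_exploitative
    return trustworthy_punishes and not exploitative_mimics

# Signalling separates the types when the partner benefit falls between
# the two punishment costs.
print(separating_equilibrium(c_trustworthy=1.0, c_exploitative=5.0,
                             b_partner=3.0))   # True  -> honest signal
print(separating_equilibrium(c_trustworthy=1.0, c_exploitative=5.0,
                             b_partner=6.0))   # False -> everyone punishes
```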
References
Fehr, E. & Fischbacher, U. Third-party punishment and social norms. Evol. Hum. Behav. 25, 63–87 (2004)
Goette, L., Huffman, D. & Meier, S. The impact of group membership on cooperation and norm enforcement: evidence using random assignment to real social groups. Am. Econ. Rev. 96, 212–216 (2006)
Jordan, J. J., McAuliffe, K. & Rand, D. G. The effects of endowment size and strategy method on third party punishment. Exp. Econ. http://dx.doi.org/10.1007/s10683-015-9466-8 (2015)
Kurzban, R., DeScioli, P. & O’Brien, E. Audience effects on moralistic punishment. Evol. Hum. Behav. 28, 75–84 (2007)
Balafoutas, L. & Nikiforakis, N. Norm enforcement in the city: a natural field experiment. Eur. Econ. Rev. 56, 1773–1785 (2012)
Mathew, S. & Boyd, R. Punishment sustains large-scale cooperation in prestate warfare. Proc. Natl Acad. Sci. USA 108, 11375–11380 (2011)
FeldmanHall, O., Sokol-Hessner, P., Van Bavel, J. J. & Phelps, E. A. Fairness violations elicit greater punishment on behalf of another than for oneself. Nature Commun. 5, 5306 (2014)
Zahavi, A. Mate selection—a selection for a handicap. J. Theor. Biol. 53, 205–214 (1975)
Gintis, H., Smith, E. A. & Bowles, S. Costly signaling and cooperation. J. Theor. Biol. 213, 103–119 (2001)
Roberts, G. Competitive altruism: from reciprocity to the handicap principle. Proc. Biol. Sci. 265, 427–431 (1998)
Rand, D. G. & Nowak, M. Human cooperation. Trends Cogn. Sci. 17, 413–425 (2013)
Guala, F. Reciprocity: weak or strong? What punishment experiments do (and do not) demonstrate. Behav. Brain Sci. 35, 1–15 (2012)
Henrich, J. et al. Costly punishment across human societies. Science 312, 1767–1770 (2006)
Nowak, M. A. & Sigmund, K. Evolution of indirect reciprocity. Nature 437, 1291–1298 (2005)
Raihani, N. J. & Bshary, R. The reputation of punishers. Trends Ecol. Evol. 30, 98–103 (2015)
Barclay, P. Reputational benefits for altruistic punishment. Evol. Hum. Behav. 27, 325–344 (2006)
Fessler, D. M. & Haley, K. J. in The Genetic and Cultural Evolution of Cooperation (ed. Hammerstein, P.) (MIT Press, 2003)
Panchanathan, K. & Boyd, R. Indirect reciprocity can stabilize cooperation without the second-order free rider problem. Nature 432, 499–502 (2004)
Baumard, N., André, J.-B. & Sperber, D. A mutualistic approach to morality: the evolution of fairness by partner choice. Behav. Brain Sci. 36, 59–78 (2013)
Boyd, R., Gintis, H. & Bowles, S. Coordinated punishment of defectors sustains cooperation and can proliferate when rare. Science 328, 617–620 (2010)
van Veelen, M. Robustness against indirect invasions. Games Econ. Behav. 74, 382–393 (2012)
Nelissen, R. M. A. The price you pay: cost-dependent reputation effects of altruistic punishment. Evol. Hum. Behav. 29, 242–248 (2008)
Peysakhovich, A., Nowak, M. A. & Rand, D. Humans display a ‘cooperative phenotype’ that is domain general and temporally stable. Nature Commun. 5, 4939 (2014)
Horita, Y. Punishers may be chosen as providers but not as recipients. Lett. Evol. Behav. Sci. 1, 6–9 (2010)
Bear, A. & Rand, D. G. Intuition, deliberation, and the evolution of cooperation. Proc. Natl Acad. Sci. USA 113, 936–941 (2016)
Peysakhovich, A. & Rand, D. G. Habits of virtue: creating norms of cooperation and defection in the laboratory. Manage. Sci. http://dx.doi.org/10.1287/mnsc.2015.2168 (2015)
Raihani, N. J. & Bshary, R. Third-party punishers are rewarded, but third-party helpers even more so. Evolution 69, 993–1003 (2015)
Acknowledgements
We gratefully acknowledge the John Templeton Foundation for financial support; A. Bear, R. Boyd, M. Crockett, J. Cone, F. Cushman, E. Fehr, M. Krasnow, R. Kurzban, J. Martin, M. Nowak, N. Raihani, L. Santos, and A. Shaw for helpful feedback; and A. Arechar, Z. Epstein, and G. Kraft-Todd for technical assistance.
Author information
Contributions
J.J.J., M.H. and D.G.R. designed and analysed the model. J.J.J., P.B. and D.G.R. designed the experiments. J.J.J. conducted the experiments and analysed the results. J.J.J., M.H., P.B. and D.G.R. wrote the paper.
Ethics declarations
Competing interests
The authors declare no competing financial interests.
Extended data figures and tables
Extended Data Figure 1 Agent-based simulations from our second microfoundation model in which gaining interaction partners reduces TPP costs.
TPP evolves over time in this modified model, in which a Signaller’s punishment costs are endogenous (decreasing in the number of times she has been accepted as a partner), rather than exogenously fixed as lower for trustworthy types. We use parameters similar to the main text agent-based simulations, where punishment is moderately informative and helping is more informative. Shown is the average over 500 simulations of Signallers’ average probability of helping and punishing (when experiencing the small signalling cost) in each generation, as well as the expected probability of experiencing the small punishing cost for trustworthy and exploitative types (based on the average number of times trustworthy and exploitative types were chosen as partners) at the end of each generation. See Supplementary Information section 1.3.2 for a detailed description of our second microfoundation model.
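The following toy sketch illustrates the endogenous-cost mechanism this caption describes: each acceptance as a partner lowers a Signaller's future punishment cost, so trustworthy types (accepted more often) come to face lower costs. The acceptance probabilities, discount parameter, and function names are our hypothetical assumptions, not values from the Supplementary Information.

```python
# Toy sketch of the endogenous-cost microfoundation: punishment cost
# shrinks with each past acceptance as a partner, so costs diverge by type.
# All numbers below are illustrative assumptions.
import random

def punishment_cost(base_cost, n_accepted, discount=0.2):
    """Punishment cost decreases in the number of past acceptances."""
    return base_cost / (1.0 + discount * n_accepted)

def simulate(rounds=1000, base_cost=4.0, seed=0):
    rng = random.Random(seed)
    acceptances = {"trustworthy": 0, "exploitative": 0}
    for _ in range(rounds):
        for agent_type in acceptances:
            # Assumption: Choosers accept trustworthy partners 80% of the
            # time and exploitative partners 20% of the time.
            p_accept = 0.8 if agent_type == "trustworthy" else 0.2
            if rng.random() < p_accept:
                acceptances[agent_type] += 1
    for agent_type, n in acceptances.items():
        print(agent_type, "cost:", round(punishment_cost(base_cost, n), 3))

simulate()  # trustworthy types end up with a much lower punishment cost
```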
Extended Data Figure 2 Full agent-based simulation results from the main text model.
Here, we present the Signaller and Chooser strategies for each scenario from our main model agent-based simulations, a summary of which is shown in Fig. 1c. In scenario 1, when only punishment is possible, punishment-signalling evolves, regardless of the informativeness of small helping costs I_SH. a, Signallers are likely to punish when the punishment cost is small and b, Choosers are likely to accept Signallers who punish, while they almost always reject those who do not. In scenario 2, when only helping is possible, helping-signalling evolves, and becomes stronger as I_SH increases. c, Signallers are increasingly likely to help when the helping cost is small and d, Choosers are increasingly likely to accept Signallers who help, while they almost always reject those who do not. In scenario 3, when both signals are available, agents evolve to use both signals with equal frequency when they are equally informative, but to favour helping as I_SH increases. e, As I_SH increases, Signallers are increasingly likely to help, both when they have only a small helping cost (light blue dots) and when they have both small costs (dark blue dots); and are decreasingly likely to pay to punish, both when they have only a small punishing cost (light red dots) and when they have both small costs (dark red dots). f, As I_SH increases, Choosers are increasingly likely to accept Signallers who help but do not punish (blue dots), and increasingly likely to reject Signallers who punish but do not help (red dots). Furthermore, regardless of I_SH, Choosers almost always reject Signallers who neither help nor punish (brown dots). However, Chooser behaviour in response to Signallers who both punish and help (purple dots) stays at chance levels across all values of I_SH (because Signallers never send both signals, and thus Choosers do not face selection pressure to respond optimally to such Signallers).
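To see why Chooser acceptance should track signal informativeness, consider the following Bayesian sketch. Treating informativeness as the likelihood ratio of facing the small signalling cost is our simplifying assumption; the exact mapping to the model's I_SH parameter may differ.

```python
# A sketch of how signal informativeness shapes Chooser inference: the
# posterior that a Signaller is trustworthy, given that she signalled.
# The likelihood-ratio interpretation of informativeness is our assumption.

def posterior_trustworthy(prior, informativeness):
    """P(trustworthy | signal), where
    informativeness = P(small cost | trustworthy) / P(small cost | exploitative).
    """
    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
    odds = (prior / (1 - prior)) * informativeness
    return odds / (1 + odds)

# As informativeness rises, signalling becomes stronger evidence of type.
for i_sh in (1.0, 2.0, 5.0, 10.0):
    p = posterior_trustworthy(prior=0.5, informativeness=i_sh)
    print(f"I_SH = {i_sh:>4}: P(trustworthy | signal) = {p:.2f}")
```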
Extended Data Figure 3 Our two-stage experimental design involving Signallers and Choosers.
First, in the signalling stage, the Signaller participates in a third-party punishment game (TPPG). Here a Helper decides whether to share with a Recipient, and then a third-party Punisher decides whether to pay to punish the Helper if the Helper was selfish (chose not to share). In our three experimental conditions, we manipulate the role(s) the Signaller plays in the TPPG. In the punishment-only condition, the Signaller plays once as the Punisher; in the punishment-plus-helping condition, the Signaller plays twice (with two different sets of other people) as the Punisher and the Helper; in the helping-only condition, the Signaller plays once as the Helper. Thus we vary which signal(s) are available. Second, in the partner choice stage, the Chooser plays a trust game with the Signaller. The Chooser decides how much to send the Signaller and any amount sent is tripled by the experimenter. The Signaller then decides how much of the tripled amount to return. Choosers use the strategy method to condition their sending on Signallers’ TPPG decisions.
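A minimal sketch of the trust-game arithmetic in the partner choice stage may help. The endowment, transfer fractions, and return fraction below are illustrative assumptions, not the study's actual stakes.

```python
# Trust-game payoffs in the partner choice stage, as described above:
# the Chooser's transfer is tripled, and the Signaller returns a share.
# Endowment size and example amounts are illustrative assumptions.

def trust_game(endowment, sent, return_fraction):
    """Compute (chooser_payoff, signaller_payoff) for one trust game."""
    assert 0 <= sent <= endowment
    tripled = 3 * sent                      # experimenter triples the transfer
    returned = return_fraction * tripled    # Signaller's repayment
    chooser_payoff = endowment - sent + returned
    signaller_payoff = tripled - returned
    return chooser_payoff, signaller_payoff

# Strategy method: the Chooser commits to a transfer for each possible
# TPPG decision by the Signaller. The fractions below are hypothetical.
strategy = {"punished": 0.8, "did_not_punish": 0.3}   # fraction of endowment
for decision, fraction in strategy.items():
    payoffs = trust_game(endowment=10, sent=fraction * 10, return_fraction=0.5)
    print(decision, "-> chooser:", payoffs[0], "signaller:", payoffs[1])
```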
Extended Data Figure 4 Third-party punishment is perceived as a stronger signal of trustworthiness than retaliation in our additional experiment (study 2).
In our additional experiment, we manipulate whether the second stage of our game is a trust game (TG) or an ultimatum game (UG). In the TG, Choosers maximize their payoffs by sending more money to trustworthy Signallers (who will return a large amount); thus, preferential sending to punishers reflects expectations of punisher trustworthiness. In this game (left bars), punishment has large reputational benefits: replicating study 1, Choosers (n = 405) send 16 percentage points more to punishers than non-punishers, P < 0.001. In the UG, Choosers (n = 421) maximize their payoffs by sending more money to retaliatory Signallers (who are willing to pay the cost required to reject low offers); thus, preferential sending to punishers reflects expectations of punisher retaliation. In this game (right bars), punishment has smaller reputational benefits: Choosers send 3 percentage points more to punishers than non-punishers, P = 0.001. This difference between conditions is significant (P < 0.001) and robust to accounting for the fact that there is less overall variance in UG offers than in TG transfers (see Supplementary Information section 6). Thus TPP is perceived as a stronger signal of trustworthiness (in the TG) than of willingness to retaliate (in the UG). These findings provide further evidence that our TG experiment results (study 1) are not driven by a perception that TPP signals retaliation (although TPP may also signal retaliation in other contexts). Shown is mean sending in each game. Error bars are ± 1 s.e.m.
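The condition contrast reduces to a difference between the two punisher premiums. The sketch below simply restates the percentage-point effects reported in this caption; it is an illustration of the comparison, not a re-analysis of the data.

```python
# The "punisher premium" is the extra amount Choosers send to punishers
# vs. non-punishers, in percentage points. Values are from the caption.
premium_trust_game = 16      # TG: sending tracks expected trustworthiness
premium_ultimatum_game = 3   # UG: sending tracks expected retaliation
interaction = premium_trust_game - premium_ultimatum_game
print(f"TG premium - UG premium = {interaction} percentage points")
# The caption reports this condition difference as significant (P < 0.001).
```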
Supplementary information
Supplementary Information
This file contains Supplementary Text and Data – see contents page for details. (PDF 1786 kb)
About this article
Cite this article
Jordan, J., Hoffman, M., Bloom, P. et al. Third-party punishment as a costly signal of trustworthiness. Nature 530, 473–476 (2016). https://doi.org/10.1038/nature16981