
# Fairness violations elicit greater punishment on behalf of another than for oneself

## Abstract

Classic psychology and economic studies argue that punishment is the standard response to violations of fairness norms. Typically, individuals are presented with the option to punish the transgressor or not. However, such a narrow choice set may fail to capture stronger alternative preferences for restoring justice. Here we show, in contrast to the majority of findings on social punishment, that other forms of justice restoration (for example, compensation to the victim) are strongly preferred to punitive measures. Furthermore, these alternative preferences for restoring justice depend on the perspective of the deciding agent. When people are the recipient of an unfair offer, they prefer to compensate themselves without seeking retribution, even when punishment is free. Yet when people observe a fairness violation targeted at another, they change their decision to the most punitive option. Together these findings indicate that humans prefer alternative forms of justice restoration to punishment alone.

## Introduction

Social norms, such as fairness concerns, provide prescribed standards for behaviour that promote social efficiency and cooperation1,2,3. How humans resolve fairness transgressions has been extensively studied in the context of simple, constrained interactions4. Traditionally, people are presented with two options—engage in punitive behaviour, or do nothing. In this context, people typically respond to fairness violations with punishment5,6. However, such a narrow range of options may fail to capture alternative, preferred strategies for restoring justice that are frequently observed in everyday life. Here, we test alternative preferences for justice restoration by broadening the decision-making space to include compensatory measures in addition to punishment. Since impartiality is a core principle of many legal systems and is believed to influence judicial decision-making, we further test whether these preferences are differentially deployed depending on the perspective of the deciding agent. That is, do unaffected third parties sanction fairness violations differently than personally affected second parties?

Demonstrations of how intensely humans endorse punishment as a means of ensuring fair and equitable outcomes2 suggest that punishment is the standard response to violations of justice. Hundreds of studies using the Ultimatum Game illustrate that people are willing to incur personal monetary costs to punish fairness violations. In the Ultimatum Game, two players must agree on how to split a sum of money. First, the proposer makes an offer of how to divide the money. The responder can then either accept the offer, in which case the money is split as proposed, or reject the offer, in which case neither player receives any money7. It is well established that responders will forgo even large monetary benefits by rejecting the offer to punish the proposer for offering an unfair split8,9. In fact, extremely unfair offers are rejected around 70% of the time10.

In the real world, however, punishment is rarely the only option for restoring justice. There is a broad range of alternative responses, reflecting the idea that both the transgressor and the victim can be differentially valued depending on one’s social preferences and conceptual sense of justice. For instance, some people may prefer to compensate the victim11, or punish the transgressor such that the penalty is proportionate to the harm committed12—preferences that may prove to have powerful roles in motivating the restoration of justice. Although the existence of alternative forms of justice restoration dates back as far as four millennia ago13, no research that we are aware of has examined these alternatives alongside the prototypical punitive options.

The question of justice restoration is important because most legal systems are largely based on the principle that social order depends on punishment. For much of modern civilization, formal systems—such as judges and juries14,15—have been structured to mete out justice. The underlying assumption is that people make judgments differently depending on whether a fairness violation is directed towards another individual or aimed at oneself. Given the distinct asymmetries between the way people perceive themselves versus their peers16, it is thought that unaffected and putatively dispassionate third parties sanction transgressors in a less egocentric and more deliberate manner than victims17. Indeed, theorists suggest that people experience psychologically close events (for example, those experienced personally) in a detailed, concrete manner, whereas socially distant objects are construed in terms of high-level, abstract characteristics and principles18,19. Psychological distance from a transgression may therefore bias how people evaluate fairness violations and influence their subsequent preferences for restoring justice. Accordingly, we theorized that individuals would endorse different routes to justice restoration depending on whether they are the direct recipient of a fairness violation compared with when they merely observe it.

To examine alternative motivations for restoring justice and test whether individuals navigate fairness violations differently for both self and another, we developed a novel economic game that broadens the available choice space to include a range of punitive and compensatory options for restoring justice that are not present in classic experimental games. To model alternative options for justice restoration frequently observed in the real world, we not only presented participants with the opportunity to accept or reject the proposed split (as in the Ultimatum Game), but also other novel options that reflect a range of other-regarding preferences.

In our task, Player A has the first move and can propose a division of a $10 pie with Player B (Player A: $10−x, Player B: x, Fig. 1a). Player B can then reapportion the money by choosing from the following five options: (1) accept: agreeing to the proposed split ($10−x, x)7; (2) punish: reducing Player A’s payout to the original amount offered to Player B (x, x)20; (3) equity: equally splitting the pie so that both players receive half of the initial endowment ($5, $5)4; (4) compensate: increasing Player B’s own payout to equal Player A’s payout, thus enlarging the pie to maximize both players’ monetary outcomes ($10−x, $10−x)21; and finally, (5) reverse: reversing the proposed split—a ‘just deserts’ motive where the perpetrator deserves punishment proportionate to the wrong committed12—so that Player A is punished and Player B is compensated (x, $10−x)22,23. See Supplementary Discussion for in-depth explanations of each option. As in many classic experimental economics games that explore trade-offs between discrete choice pairs7,24, participants were presented with only two options on any given trial, such that each option (that is, ‘compensate’, ‘equity’, ‘accept’, ‘punish’, ‘reverse’) was randomly paired with one alternative option per trial, resulting in every combination pair, for a total of 10 unique combination pairs (Fig. 1b). When making their offers, Player A was not aware which two options would be available to Player B on a given trial.
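The five reapportionment options can be summarized as payoff transformations of the proposed split. A minimal sketch in Python (the function name and structure are ours, not the authors'):

```python
# Payoffs for the five reapportionment options, given a proposer offer
# of x out of a $10 pie (Player A keeps 10 - x). Each option maps the
# proposed split to a final (Player A, Player B) payout.
def payoffs(x, pie=10):
    return {
        "accept":     (pie - x, x),        # keep the proposed split
        "punish":     (x, x),              # cut A's payout to B's offer
        "equity":     (pie / 2, pie / 2),  # split the pie evenly
        "compensate": (pie - x, pie - x),  # raise B's payout to match A's
        "reverse":    (x, pie - x),        # swap the split ('just deserts')
    }

# Example: a highly unfair $9/$1 offer (x = 1)
print(payoffs(1))
```

Note that 'compensate' and 'reverse' yield the same payout for Player B; they differ only in whether Player A is punished, which is what makes that choice pair diagnostic later in the paper.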

We find that although decades of research demonstrate that individuals consistently retaliate against those who behave unfairly, these supposedly robust punitive responses are no longer preferred once alternative, non-punitive options for dealing with fairness violations are made available. However, when tasked with making the same decision on behalf of someone else who has experienced a fairness violation, individuals modify their responses and apply the harshest form of punishment to the transgressor. Together these results challenge our current understanding of social preferences and the emphasis placed on punitive behaviour.

## Results

### Preferences for justice restoration extend beyond punishment

Figure 2a shows choice behaviour (N=112; 42 males, mean age 20.8±2.11) for moderately unfair offers and highly unfair offers in Experiment 1. We computed endorsement rates as the frequency with which an option is selected, such that each option’s endorsement rate is out of 100% (number of times an option is selected/number of times the option is presented during the experiment). That is, we calculated the number of times ‘accept’ was chosen when paired with every possible alternative option, and did the same for ‘punish’, ‘compensate’, ‘equity’ and ‘reverse’. Strikingly, across all offer types, participants chose the options ‘accept’ and ‘punish’ least often (10% and 16% endorsement rates, respectively; Supplementary Table 1)—the two options most similar to those in the traditional Ultimatum Game. Instead, participants most preferred the option ‘compensate’, choosing to increase their own payout and apply no punishment to Player A (92% endorsement rate; Supplementary Table 1). This preference remained robust even when participants received a highly unfair split (Fig. 2a).
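The endorsement-rate calculation described above can be sketched directly: times chosen divided by times presented, per option. The trial data below are hypothetical, for illustration only:

```python
# Endorsement rate per option: number of times chosen divided by
# number of times presented. Trial data are hypothetical.
from collections import Counter

def endorsement_rates(trials):
    """trials: list of (pair, choice), where pair is the two options shown."""
    presented, chosen = Counter(), Counter()
    for pair, choice in trials:
        presented.update(pair)   # each option in the pair was presented once
        chosen[choice] += 1
    return {opt: chosen[opt] / presented[opt] for opt in presented}

trials = [
    (("compensate", "punish"), "compensate"),
    (("compensate", "accept"), "compensate"),
    (("punish", "accept"), "punish"),
]
rates = endorsement_rates(trials)
```

With these toy trials, 'compensate' is presented twice and chosen twice (rate 1.0), while 'accept' is presented twice and never chosen (rate 0.0).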

Since the choice pair ‘compensate’ versus ‘reverse’ controls for Player B’s monetary benefit—that is, after receiving a highly unfair split, choosing compensate or reverse results in the exact same monetary payout to Player B ($9)—we can use this choice pair to directly test other-regarding preferences while controlling for Player B’s fiscal efficiency. Results reveal that when responding to unfair offers, participants prefer to compensate rather than reverse, even though punishment is free (Pearson’s χ2=9, 1 df, P=0.003, ϕ=0.15, Fig. 2b). In other words, despite the available option to maximize one’s payout while simultaneously applying punishment to Player A (selecting ‘reverse’), participants preferred to maximize their payoff and not apply any punishment to Player A. Although most previous research has focused on punishment3 as the primary method of restoring justice, these findings illustrate that when possible, people actually prefer compensation to punishment. In a second experiment, Player Bs were presented with varying splits of a $1 endowment from Player A, ranging from moderately unfair to highly unfair, in 10-cent increments. As in Experiment 1, participants (N=97, Experiment 2a) did not prefer the traditional Ultimatum Game options to ‘accept’ the offer or to ‘punish’ Player A for proposing an unfair split; instead, the strongest preference was to compensate (84% endorsement rate of ‘compensate’ across all offer types, Supplementary Table 2a). Again, for unfair offers, the choice pair compensate versus reverse reveals that even when punishment is free, individuals still prefer to compensate and abstain from punishing Player A (Pearson’s χ2=7.7, 1 df, P=0.005, ϕ=0.14). Together, these findings indicate that when given the option for alternative forms of justice restoration, compensation of the victim is strongly preferred to punishment of the transgressor.
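The reported statistic is a 1-degree-of-freedom Pearson chi-square on how often each option in the pair was chosen, against an even 50/50 split. A minimal pure-Python sketch (the counts below are illustrative, not the study's data):

```python
import math

# 1-df chi-square on choice counts for a single pair, testing departure
# from a 50/50 split. Counts are illustrative, not the study's data.
def chi_square_1df(n_compensate, n_reverse):
    expected = (n_compensate + n_reverse) / 2
    return sum((obs - expected) ** 2 / expected
               for obs in (n_compensate, n_reverse))

def phi_effect_size(chi2, n):
    # phi for a 1-df test: sqrt(chi-square / total observations)
    return math.sqrt(chi2 / n)

chi2 = chi_square_1df(120, 80)   # (120-100)^2/100 + (80-100)^2/100 = 8.0
phi = phi_effect_size(chi2, 200)
```

In practice one would obtain the P value from the chi-square distribution (e.g. `scipy.stats.chi2.sf(chi2, df=1)`); the sketch only computes the test statistic and effect size.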

### Second and third party preferences for justice restoration

To test whether being directly affected by a fairness violation influences decisions to restore justice, we also examined participants’ behaviour when they acted as a non-vested third party (Player C), observing interactions between Players A and B (N=261, Experiment 2b). That is, participants were asked to make decisions on behalf of another player such that payoffs would be paid to Players A and B and not to themselves. Unlike in the ‘Self’, second-party condition in which participants played the game as Player B (Experiments 1 and 2a), these ‘Other’, third-party decisions were non-costly and non-beneficial. Similar to decisions made in the Self condition, Player Cs (Other condition) show little preference to ‘accept’ the offer, or to ‘punish’ Player A for proposing an unfair split to Player B (Supplementary Table 2b).

Although individuals chose to compensate oneself and another at the same rate when the offer was relatively fair (McNemar’s χ2=1.2, 1 df, P=0.27), we found that when responding to unfair offers, Player Cs selected ‘reverse’—the option that both compensates Player B and punishes Player A—significantly more often than Player Bs did for themselves (choice pair compensate/reverse: McNemar’s χ2=13.5, 1 df, P<0.001, ϕ=0.14; Supplementary Fig. 2). In other words, although participants did not show preferences for punishing Player A when directly affected by a fairness violation (that is, as a second party), when observing a fairness violation targeted at another (that is, as a third party), participants significantly increased their retributive responding.
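McNemar's test compares paired choices by counting only the discordant pairs; the uncorrected statistic is χ² = (b − c)² / (b + c), where b and c are the two discordant counts. A sketch with hypothetical counts (not the study's data):

```python
# McNemar's chi-square (no continuity correction) on paired choices.
# b = pairs choosing 'reverse' in one condition but 'compensate' in the
# other; c = the opposite discordant pattern. Counts are hypothetical.
def mcnemar_chi2(b, c):
    if b + c == 0:
        return 0.0  # no discordant pairs: no evidence of a difference
    return (b - c) ** 2 / (b + c)

chi2 = mcnemar_chi2(40, 15)  # (40 - 15)**2 / 55
```

A ready-made implementation with exact P values is available as `statsmodels.stats.contingency_tables.mcnemar`; the sketch above only reproduces the test statistic.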

### Experiments 2–6

Participants were recruited from the United States using the online labour market Amazon Mechanical Turk (AMT)33,34,35,36. Participants played anonymously over the Internet and were not allowed to participate in more than one experimental session. Participants (Player B) were paid an initial participation fee of $0.50 and an additional bonus depending on their choices on each trial (ranging from $0.10 to $0.90). Across all experiments, participants were first presented with a standard digital consent form, which explained the general procedure, known risks (none), confidentiality, compensation and their rights. They could only partake in the study once they agreed to the consent form. To ensure task comprehension, participants had to correctly complete a quiz following the instructions; only then could they begin the task. Participants were then told to place their hands on the keyboard on the following keys: S, D, F, H, J, and a timer counted down from five before the task started. On each trial, the options ‘compensate’, ‘equity’, ‘accept’, ‘punish’ and ‘reverse’ (labelled in analyses and here, but not presented to participants; see Supplementary Fig. 4) were displayed in a different order. After completing the task, participants were explicitly probed on their strategies when the offer was relatively fair and when the offer was highly unfair, for both the Self and Other conditions. That is, participants were asked ‘in your own words please describe your strategy for a scenario when Player A kept $0.60 and offered $0.40 to you’. See Supplementary Materials for a sampling of participants’ strategies. Unlike the experiments run in the laboratory, in the experiments run through AMT we restricted offers from Player A (in reality, predetermined offers from a computer) to varying levels of unfairness, ranging from moderately unfair to highly unfair, in $0.10 increments. This was done primarily because we were interested in how people resolve fairness transgressions.

### Differences in task structure for experiments 2–6

Experiment 2 was a pairwise comparison of each choice pair (Fig. 1b). Participants (N=358) played the task either as Player B (Self condition; N=97) or as Player C (Other condition; N=261), a between-subjects design. Participants were only instructed about the condition they were in, such that the instructions either explained that participants were to make decisions for themselves and Player A (Self condition), or on behalf of two other players (Other condition). Participants in the Self condition could earn an additional payout based on their choices; participants in the Other condition earned no additional bonus but were paid for the time taken to complete the task.

As in Experiment 1, on each trial, participants were presented with only two options. For example, after being offered an unfair split, Player B observed only two options (for example, compensate versus equity, compensate versus accept, compensate versus punish, compensate versus reverse, equity versus accept, equity versus punish, and so on). Thus, for every offer type, participants saw all possible pairwise comparisons (that is, 10 pairs for each offer type, and four different offer types, resulting in 40 anonymous, one-shot games in total). Trials were randomly presented to participants.
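The trial structure—every pairwise combination of the five options, crossed with each offer type—can be enumerated directly. The offer levels below are illustrative placeholders, since the exact amounts are specified in the Methods:

```python
from itertools import combinations, product

options = ["compensate", "equity", "accept", "punish", "reverse"]
pairs = list(combinations(options, 2))       # 5 choose 2 = 10 choice pairs
offer_types = [0.10, 0.20, 0.30, 0.40]       # illustrative offer levels only
trials = list(product(offer_types, pairs))   # 4 x 10 = 40 one-shot games
print(len(pairs), len(trials))
```

Randomizing the order of `trials` per participant (e.g. with `random.shuffle`) then yields the randomized presentation described above.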

In Experiments 3–6, participants played the task as both Player B and Player C. This within-subject design allowed us to explore each individual’s choices across conditions, Self and Other. Although Experiments 3–6 were quite similar, there were small differences between the tasks, which are enumerated here. In Experiment 3, Self and Other trials were presented in discrete blocks, with the Self condition always presented first and the Other condition presented second. However, to ensure that there were no order effects and that participants were not anchoring their decisions according to the decisions made in the first block (Self condition), Experiments 4–6 presented Self and Other trials randomly interleaved across the experiment. In Experiment 3, reaction times were collected with a mouse, whereas in Experiments 4–6, reaction times were collected using the keyboard (button presses). Reaction time data were similar regardless of whether participants used a mouse or a keyboard: across all four experiments, participants were faster to decide for another than they were for themselves (see reaction time data in Supplementary Materials). In Experiment 4, each participant was presented with a random ordering of trials. In other words, no participant saw the same order of offer types. In Experiment 5, all participants were presented with the same randomized set of trials. That is, AMT presented the same order of trials (previously determined by an algorithm to randomly interleave offer types and conditions) to all participants. Experiment 6 followed the same structure as Experiment 5, with the only difference being that blank profile pictures were added to the instructions to further delineate the roles of all the players.

How to cite this article: FeldmanHall, O. et al. Fairness violations elicit greater punishment on behalf of another than for oneself. Nat. Commun. 5:5306 doi: 10.1038/ncomms6306 (2014).

## References

1. Falk, A., Fehr, E. & Fischbacher, U. Testing theories of fairness—Intentions matter. Games Econ. Behav. 62, 287–303 (2008).

2. Fehr, E. & Fischbacher, U. Why social preferences matter—The impact of non-selfish motives on competition, cooperation and incentives. Econ. J. 112, C1–C33 (2002).

3. Fehr, E., Fischbacher, U. & Gachter, S. Strong reciprocity, human cooperation, and the enforcement of social norms. Hum. Nature 13, 1–25 (2002).

4. Fehr, E. & Schmidt, K. M. A theory of fairness, competition, and cooperation. Q. J. Econ. 114, 817–868 (1999).

5. Herrmann, B., Thoni, C. & Gachter, S. Antisocial punishment across societies. Science 319, 1362–1367 (2008).

6. Fowler, J. H. Altruistic punishment and the origin of cooperation. Proc. Natl Acad. Sci. USA 102, 7047–7049 (2005).

7. Guth, W., Schmittberger, R. & Schwarze, B. An experimental analysis of ultimatum bargaining. J. Econ. Behav. Organ. 3, 367–388 (1982).

8. Cameron, L. A. Raising the stakes in the ultimatum game: experimental evidence from Indonesia. Econ. Inq. 37, 47–59 (1999).

9. Slonim, R. & Roth, A. E. Learning in high stakes ultimatum games: an experiment in the Slovak Republic. Econometrica 66, 569–596 (1998).

10. Camerer, C. Behavioral Game Theory: Experiments in Strategic Interaction. Russell Sage Foundation; Princeton University Press (2003).

11. Weitekamp, E. Reparative justice: towards a victim oriented system. Eur. J. Criminal Policy Res. 1, 70–93 (1993).

12. Carlsmith, K. M., Darley, J. M. & Robinson, P. H. Why do we punish? Deterrence and just deserts as motives for punishment. J. Pers. Soc. Psychol. 83, 284–299 (2002).

13. Gurney, O. R. & Kramer, S. N. Two Fragments of Sumerian Laws. University of Chicago Press (1965).

14. Fehr, E. & Fischbacher, U. Third-party punishment and social norms. Evol. Hum. Behav. 25, 63–87 (2004).

15. Smith, A. The Theory of Moral Sentiments. A. Millar (1759).

16. Nisbett, R. E., Legant, P. & Marecek, J. Behavior as seen by actor and as seen by observer. J. Pers. Soc. Psychol. 27, 154–164 (1973).

17. Fehr, E. & Fischbacher, U. Social norms and human cooperation. Trends Cogn. Sci. 8, 185–190 (2004).

18. Ledgerwood, A., Trope, Y. & Liberman, N. Flexibility and consistency in evaluative responding: the function of construal level. Adv. Exp. Soc. Psychol. 43, 257–295 (2010).

19. Trope, Y. & Liberman, N. Construal-level theory of psychological distance. Psychol. Rev. 117, 440–463 (2010).

20. Bolton, G. E. & Zwick, R. Anonymity versus punishment in ultimatum bargaining. Games Econ. Behav. 10, 95–121 (1995).

21. Lotz, S., Okimoto, T. G., Schlosser, T. & Fetchenhauer, D. Punitive versus compensatory reactions to injustice: emotional antecedents to third-party interventions. J. Exp. Soc. Psychol. 47, 477–480 (2011).

22. Pillutla, M. M. & Murnighan, J. K. Unfairness, anger, and spite: emotional rejections of ultimatum offers. Organ. Behav. Hum. Decis. Process. 68, 208–224 (1996).

23. Straub, P. G. & Murnighan, J. K. An experimental investigation of ultimatum games—information, fairness, expectations, and lowest acceptable offers. J. Econ. Behav. Organ. 27, 345–364 (1995).

24. Fehr, E. & Gachter, S. Cooperation and punishment in public goods experiments. Am. Econ. Rev. 90, 980–994 (2000).

25. Schimmack, U. The ironic effect of significant results on the credibility of multiple-study articles. Psychol. Methods 17, 551–566 (2012).

26. Henrich, J. et al. ‘Economic man’ in cross-cultural perspective: behavioral experiments in 15 small-scale societies. Behav. Brain Sci. 28, 795–815 (2005).

27. Rand, D. G. & Nowak, M. A. The evolution of antisocial punishment in optional public goods games. Nat. Commun. 2, 434 (2011).

28. Rand, D. G., Dreber, A., Ellingsen, T., Fudenberg, D. & Nowak, M. A. Positive interactions promote public cooperation. Science 325, 1272–1275 (2009).

29. Andreoni, J., Harbaugh, W. & Vesterlund, L. The carrot or the stick: rewards, punishments, and cooperation. Am. Econ. Rev. 93, 893–902 (2003).

30. Ledgerwood, A., Trope, Y. & Chaiken, S. Flexibility now, consistency later: psychological distance and construal shape evaluative responding. J. Pers. Soc. Psychol. 99, 32–51 (2010).

31. Rawls, J. A Theory of Justice 38–52, Harvard University Press (1994).

32. Camerer, C. & Thaler, R. H. Anomalies: ultimatums, dictators and manners. J. Econ. Perspect. 9, 209–219 (1995).

33. Mason, W. & Suri, S. Conducting behavioral research on Amazon's Mechanical Turk. Behav. Res. Methods 44, 1–23 (2012).

34. Horton, J. J., Rand, D. G. & Zeckhauser, R. J. The online laboratory: conducting experiments in a real labor market. Exp. Econ. 14, 399–425 (2011).

35. Paolacci, G., Chandler, J. & Ipeirotis, P. G. Running experiments on Amazon Mechanical Turk. Judgm. Decis. Making 5, 411–419 (2010).

36. Buhrmester, M., Kwang, T. & Gosling, S. D. Amazon's Mechanical Turk: a new source of inexpensive, yet high-quality, data? Perspect. Psychol. Sci. 6, 3–5 (2011).

## Acknowledgements

We are grateful to Dean Mobbs and Tim Dalgleish for their early help and support with this research. This research is supported by a grant from the National Institute on Aging.

## Author information


### Contributions

O.F.H. designed the experiments in consultation with E.A.P. and P.S.H. O.F.H. and P.S.H. carried out the experiments. O.F.H. ran the statistical analyses, and O.F.H., P.S.H., J.J.V.B. and E.A.P. wrote the paper.

### Corresponding author

Correspondence to Elizabeth A. Phelps.

## Ethics declarations

### Competing interests

The authors declare no competing financial interests.

## Supplementary information

### Supplementary Information

Supplementary Figures 1-7, Supplementary Tables 1-2, Supplementary Discussion and Supplementary References (PDF 1326 kb)

