
Psychological reactions to human versus robotic job replacement

Abstract

Advances in robotics and artificial intelligence are increasingly enabling organizations to replace humans with intelligent machines and algorithms1. Forecasts predict that, in the coming years, these new technologies will affect millions of workers in a wide range of occupations, replacing human workers in numerous tasks2,3, but potentially also in whole occupations1,4,5. Despite the intense debate about these developments in economics, sociology and other social sciences, research has not examined how people react to the technological replacement of human labour. We begin to address this gap by examining the psychology of technological replacement. Our investigation reveals that people tend to prefer workers to be replaced by other human workers (versus robots); however, paradoxically, this preference reverses when people consider the prospect of their own job loss. We further demonstrate that this preference reversal occurs because being replaced by machines, robots or software (versus other humans) is associated with reduced self-threat. In contrast, being replaced by robots is associated with a greater perceived threat to one’s economic future. These findings suggest that technological replacement of human labour has unique psychological consequences that should be taken into account by policy measures (for example, appropriately tailoring support programmes for the unemployed).

Main

To obtain initial empirical insights, we first examined whether people perceive robots as a threat to human labour using survey data from a representative sample of European Union citizens (n = 26,750)6. The data revealed that people tend to more strongly agree than disagree that robots steal people’s jobs (1 = totally disagree; 4 = totally agree; mean = 3.01; t(26,053) = 88.89; P < 0.001; Cohen’s d = 0.55; 95% confidence interval (CI): (0.54, 0.56)). This pattern was robust across different occupational groups (for example, students, manual workers and managers), as well as across different countries, suggesting that people generally tend to perceive robots as a threat to human jobs. In line with these results, we argue that—at least for jobs that are not dangerous, dirty or dull—people should prefer that human workers are employed rather than robots, and therefore prefer human workers to be replaced by other humans (versus robots). This reasoning is consistent with research on prosocial behaviour7 documenting that people often care about the wellbeing of other individuals. When job losses affect other people, we thus predict that individuals should prefer human workers being replaced by other human workers rather than by robots.
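As an illustration of the analysis reported above, the following minimal sketch (in Python, not the software used by the authors) computes a one-sample t-test against the scale midpoint together with Cohen's d; the response vector is a simulated placeholder rather than the Eurobarometer data, which are available from the source cited above.

    # Sketch: one-sample t-test of agreement ratings against the scale midpoint
    # (1 = totally disagree, 4 = totally agree; midpoint = 2.5), plus Cohen's d.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Placeholder responses standing in for the Eurobarometer item
    # "robots steal people's jobs"; mean roughly 3.0, as reported.
    ratings = rng.choice([1, 2, 3, 4], size=26054, p=[0.10, 0.20, 0.30, 0.40]).astype(float)

    midpoint = 2.5
    t_stat, p_value = stats.ttest_1samp(ratings, popmean=midpoint)
    cohens_d = (ratings.mean() - midpoint) / ratings.std(ddof=1)  # one-sample Cohen's d
    print(f"t({ratings.size - 1}) = {t_stat:.2f}, p = {p_value:.3g}, d = {cohens_d:.2f}")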

However, we theorize that this preference for human replacement should be significantly reduced when people contemplate the prospect of their own job loss (versus other people’s). We posit that this preference reversal occurs because technological (versus human) replacement has unique psychological consequences—and that these consequences can be understood within a social comparison framework. Specifically, we argue that, compared with a situation where someone else’s job is at risk (observer perspective), when one’s own job is at risk (employee perspective), social comparison processes become more relevant and overshadow prosocial feelings. Research has shown that social comparisons (that is, the natural tendency to compare oneself with similar or close others8) can have a substantial impact on people’s self-view (that is, how individuals evaluate their self-worth and abilities9). Such identity-relevant comparisons should be more prominent when one’s job is taken over by another human than by a robot. Being replaced by—or ‘losing’ out to—a robot should be less self-threatening than being replaced by another person. Thus, to avoid self-threat and maintain a positive self-image, people should have a short-term psychological incentive to prefer being replaced by a robot (versus another human). At the same time, we argue that robotic (versus human) replacement should make employees feel more concerned about their economic future. That is, whereas comparing one’s abilities with those of a robot may be less threatening to people’s self-worth in the short run, it might be more threatening to people’s views of their own economic situation in the long run. When thinking about their future, people should realize that the differences in abilities between robots and themselves might not be short-lived but permanent, indicating skill obsolescence10. In light of technological progress, people may even believe that these differences will increase further over time and pose a threat to their future professional prospects.

In summary, we propose that there are seemingly contradictory dispositions towards technological job replacement. Whereas observers should prefer human workers to be replaced by other human workers (versus robots), this preference should reverse when people consider the prospect of losing their own job. This effect of perspective arises because being replaced by robots (versus other humans) poses a less immediate threat to people’s self-worth. However, being replaced by robots as opposed to humans should be perceived as posing a bigger threat to one’s economic future. Below, we present the results of 11 studies. Details of these studies are provided in the Methods. Complete stimulus materials and data are openly available in the Supplementary Information and the Open Science Framework (https://osf.io/8nfc5/), respectively.

In study 1a, we experimentally tested whether people prefer human over robotic replacement. Half of the student participants read a scenario in which a firm needed to cut costs and therefore had the option to replace existing employees either by new employees or by robots. Consistent with our expectations, more participants preferred the employees to be replaced by new employees than by robots (67 versus 33%; Z = 2.16; P < 0.05; Cohen’s h = 0.34; 95% CI: (0.53, 0.81)). The other half read the same text but adopted the perspective of the employees about to lose their jobs. Thus, they read that they could be replaced either by new employees or by robots and were asked about their preference. This subtle manipulation of perspective taking, from that of an observer to an employee, shifted participants’ preferences: when participants were told that their own job was at risk, only 40% of the participants (versus 67% in the observer condition) preferred being replaced by humans rather than by robots (χ2(1) = 6.59; P < 0.05; Cramer’s V = 0.27; 95% CI: (0.06, 0.48)). A logistic regression revealed significantly higher preferences for robotic over human replacement when participants considered their own job loss (versus somebody else’s) (b = 1.12; Z = 2.53; P < 0.05; 95% CI: (0.25, 1.98)).
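The preference comparison in study 1a can be illustrated with the short sketch below; the cell counts are reconstructed from the reported percentages and condition sizes, and the variable names are ours, so this is an illustration of the tests rather than the authors' analysis script.

    # Sketch: chi-squared test of preference by perspective, Cramer's V for a
    # 2 x 2 table, and a logistic regression of preference on perspective.
    import numpy as np
    import pandas as pd
    from scipy import stats
    import statsmodels.formula.api as smf

    # 0 = observer perspective, 1 = employee perspective;
    # 0 = prefers human replacement, 1 = prefers robotic replacement.
    # Counts reconstructed from the reported 67% versus 40% preferring humans.
    df = pd.DataFrame({
        "employee_perspective": [0] * 42 + [1] * 48,
        "prefers_robot": [0] * 28 + [1] * 14 + [0] * 19 + [1] * 29,
    })

    table = pd.crosstab(df["employee_perspective"], df["prefers_robot"])
    chi2, p, dof, _ = stats.chi2_contingency(table, correction=False)
    cramers_v = np.sqrt(chi2 / len(df))  # for a 2 x 2 table, V = sqrt(chi2 / n)

    logit = smf.logit("prefers_robot ~ employee_perspective", data=df).fit(disp=False)
    print(f"chi2(1) = {chi2:.2f}, p = {p:.3f}, V = {cramers_v:.2f}")
    print(logit.params["employee_perspective"], logit.pvalues["employee_perspective"])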

We replicated this preference reversal effect with samples of highly qualified (study 1b) and regular workers (study 1c) from an online labour market. As Fig. 1 shows, these two replication studies produced similar results: 63% and 60% (for studies 1b and 1c, respectively) of the participants preferred human replacement in the observer perspective condition, whereas only 40% and 38% (respectively) preferred human replacement in the employee perspective condition (χ2(1) = 4.65; P < 0.05; V = 0.23; 95% CI: (0, 0.44) for study 1b; and χ2(1) = 6.34; P < 0.05; V = 0.23; 95% CI: (0.05, 0.40) for study 1c).

Fig. 1: Preferences for human and robotic replacement across studies 1a–c.

Percentage distribution for human and robotic replacement across observer versus employee perspective conditions in study 1a (n = 90), study 1b (n = 86) and study 1c (n = 124).

In study 2, we tested whether the different reactions to robotic (versus human) replacement from an observer (versus employee) perspective can be generalized using a different dependent measure—negative emotional reactions. In this study, we used a sample that was representative of the US population in terms of age and gender. The study design was the same as in studies 1a–c; that is, participants were assigned to either the observer or the employee perspective condition. However, instead of expressing a replacement preference, participants were asked to indicate the intensity of their negative emotional reactions (that is, sad, angry or frustrated) in the case of replacement by new employees (a score of 1) versus modern robots (a score of 6).

Echoing the results of the previous studies, robotic replacement induced more negative emotions (that is, sadness, anger and frustration) than human replacement when it was about the job of others (mean = 3.88; t(122) = 2.54; P < 0.05; d = 0.23; 95% CI: (0.06, 0.43); all t values reported in studies 2 and 3a,b refer to one-sample t-tests against the scale midpoint). However, this negative emotional reaction reversed when participants contemplated the prospect of their own job; in this case, robotic replacement induced less negative emotions than human replacement (mean = 3.12; t(127) = −2.50; P < 0.05; d = −0.22; 95% CI: (−0.40, −0.05)). Taken together, the results of studies 1a–c and 2 show that robotic (versus human) replacement leads to different psychological reactions, depending on whether people consider the job of others or their own job.

In study 3a, we tested our prediction that being replaced by robots (versus other humans) is less threatening to one’s self-identity but more threatening to one’s future economic prospects. We used a ‘white-collar’ working context in which European students were asked to imagine working as a lawyer for a reputable law firm. Participants were told that their firm was reorganizing its business processes and that they would be losing their job. Participants were asked whether they would prefer being replaced by another lawyer (a score of 1) or by software (a score of 6). Next, participants were asked, using the same scale, which replacement option induced a higher degree of self-threat and greater concerns regarding one’s own economic future.

Participants displayed a strong and significant preference for being replaced by software (mean = 4.53; t(84) = 5.45; P < 0.001; d = 0.59; 95% CI: (0.34, 0.95)). Consistent with our predictions, participants rated robotic replacement as less threatening to their self-identity than human replacement (mean = 1.98; t(84) = −10.70; P < 0.001; d = −1.16; 95% CI: (−1.80, −0.77)) but more threatening to their economic future (mean = 3.86; t(84) = 2.26; P < 0.05; d = 0.25; 95% CI: (0.03, 0.47)). A linear regression model of preference on self-threat and on future concerns revealed that participants preferred the replacement option that induced a lower degree of self-threat and fewer future concerns (both coefficients were negative). Yet, the effect of self-threat (b = −0.860; t(82) = −8.07; P < 0.001; 95% CI: (−1.04, −0.70)) was about four times greater compared with the marginally significant effect of future concerns (b = −0.214; t(82) = −2.24; P < 0.05; 90% CI: (−0.40, −0.02)). Thus, participants preferred the replacement option that induced a lower degree of self-threat; namely, replacement by software.
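The regression reported above can be sketched as follows; the data are simulated placeholders on the same 1–6 bipolar scales and the column names are assumptions, so the coefficients will not reproduce the reported values exactly.

    # Sketch: OLS regression of replacement preference on self-threat and
    # future concerns (bipolar 1-6 scales, 1 = lawyer, 6 = software).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 85
    self_threat = rng.integers(1, 7, n).astype(float)
    future_concerns = rng.integers(1, 7, n).astype(float)
    # Preference constructed so that both appraisals push it downwards,
    # with self-threat weighted more heavily (as in the reported pattern).
    preference = 7 - 0.8 * self_threat - 0.2 * future_concerns + rng.normal(0, 1, n)

    df = pd.DataFrame({"preference": preference, "self_threat": self_threat,
                       "future_concerns": future_concerns})
    model = smf.ols("preference ~ self_threat + future_concerns", data=df).fit()
    print(model.params)
    print(model.conf_int())  # 95% CIs for both slopes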

In study 3b, we tested whether the psychological effects of robotic (versus human) replacement in study 3a replicated with real workers from an industry threatened by robotic replacement (that is, manufacturing11). We asked manufacturing workers whether they would prefer being replaced by a new manufacturing worker (a score of 1) or by new technology (a score of 6), and which of these two options induced a higher degree of self-threat and higher future concerns. Participants also indicated whether they thought that their current job could be replaced by new technology at some point in the near future. A substantial proportion of the participants (98 out of 296) thought that their current job could be replaced by technology in the near future.

Again, we found a preference for being replaced by new technology—both among manufacturing workers who believed their current job could be replaced by technology in the near future (mean = 4.10; t(97) = 3.12; P < 0.01; d = 0.31; 95% CI: (0.10, 0.57)) and among those who did not (mean = 4.31; t(197) = 6.32; P < 0.001; d = 0.45; 95% CI: (0.30, 0.62)). Two ordinary least squares (OLS) regression models—estimated separately for those who thought that technological replacement was possible in the near future and for those who did not—revealed that participants had a stronger preference for the replacement option that induced a lower degree of self-threat and future concerns. However, across both models, the effect of self-threat (b_believe = −0.620; t(95) = −6.46; P < 0.001; 95% CI: (−0.82, −0.40) versus b_non-believe = −0.828; t(195) = −13.66; P < 0.001; 95% CI: (−0.94, −0.72)) was at least two times greater compared with that of future concerns (b_believe = −0.280; t(95) = −2.51; P < 0.05; 95% CI: (−0.50, −0.09) versus b_non-believe = −0.111; t(195) = −1.78; P < 0.1; 95% CI: (−0.26, 0.05); see Supplementary Materials for detailed results). In summary, workers who regarded technological replacement as likely (and those who did so to a lesser extent) would rather be replaced by new technology than by another worker, as this option induced a lower degree of self-threat.

In study 4, we further examined the effect of robotic versus human replacement on feelings of self-threat and future concerns in a 2 (replacement option: human versus robotic) × 2 (appraisal: self-threat versus future concern) mixed design, with the first factor manipulated between participants, and with repeated measures for the second factor. In the robotic replacement condition, participants were asked to imagine working for a logistics firm and read that, in the course of a reorganization process, the firm had decided to replace them with modern robots. In the human replacement condition, participants read the same text, the difference being that the firm had decided to replace them with other warehouse workers. We then measured self-threat and future economic concerns for all participants.

A 2 × 2 mixed-model analysis of variance revealed a significant interaction effect (F(1, 87) = 61.87; P < 0.001). When participants were replaced by a robot, they expressed a significantly lower degree of self-threat (mean = 3.00) than future concerns (mean = 4.09; t(43) = −4.80; P < 0.001; d = −0.79; 95% CI: (−1.20, −0.47)). However, when participants were replaced by another worker, they expressed a significantly higher degree of self-threat (mean = 4.41) than future concerns (mean = 3.29; t(44) = 6.70; P < 0.001; d = 0.89; 95% CI: (0.60, 1.27)). Most importantly, though, being replaced by a robot (versus by another warehouse worker) led to a lower degree of self-threat (t(87) = −4.48; P < 0.001; d = −0.95; 95% CI: (−1.46, −0.54)) but more concerns regarding one’s own economic future (t(87) = 3.37; P < 0.01; d = 0.71; 95% CI: (0.30, 1.20)).
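A sketch of the 2 × 2 mixed-model analysis of variance is shown below, assuming the pingouin package (which is not mentioned in the paper) and simulated data whose cell means roughly follow the reported pattern; the key output row is the interaction term.

    # Sketch: 2 (replacement: robot vs human, between participants) x
    # 2 (appraisal: self-threat vs future concerns, within participants) mixed ANOVA.
    import numpy as np
    import pandas as pd
    import pingouin as pg

    rng = np.random.default_rng(2)
    rows = []
    for condition, (m_threat, m_future) in {"robot": (3.0, 4.1), "human": (4.4, 3.3)}.items():
        for i in range(45):
            subject = f"{condition}_{i}"
            rows.append((subject, condition, "self_threat", rng.normal(m_threat, 1.0)))
            rows.append((subject, condition, "future_concerns", rng.normal(m_future, 1.0)))

    df = pd.DataFrame(rows, columns=["subject", "replacement", "appraisal", "rating"])
    aov = pg.mixed_anova(data=df, dv="rating", within="appraisal",
                         subject="subject", between="replacement")
    print(aov[["Source", "F", "p-unc"]])  # the 'Interaction' row is the crossover test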

In study 5a, we tested whether reduced self-threat—the main driver of preferences for robotic (versus human) replacement in the previous studies—is indeed driven by social comparison processes. We used the same choice design as in study 3a,b but with yet another occupational context (junior market analyst) and sample (US students). In addition to preference for robotic (versus human) replacement and self-threat, we now also measured participants’ engagement in social comparisons. Out of 90 participants, only five rated the robotic replacement scenario as not believable and were excluded from further analyses. By excluding these participants, we alleviated the possibility that some participants might experience robotic replacement as less self-threatening simply because they do not believe that a given human job can be fully replaced by technology.

Consistent with the previous results, one-sample t-tests revealed that participants preferred being replaced by software (mean = 4.65; t(84) = 7.01; P < 0.001; d = 0.76; 95% CI: (0.50, 1.12)) and perceived robotic (versus human) replacement as less threatening to their self-identity (mean = 1.92; t(84) = −12.65; P < 0.001; d = −1.37; 95% CI: (−2.02, −0.98)). A mediation model (with engagement in social comparisons as the explanatory variable, self-threat as the mediator variable and preference for human versus robotic replacement as the dependent variable) supported our proposed process chain (see Fig. 2). That is, people’s tendency to compare themselves less with software (versus other humans) (mean = 1.67; t(84) = −22.69; P < 0.001; d = −2.46; 95% CI: (−3.35, −1.95)) explained the reduced self-threat of robotic (versus human) replacement (b = 0.452; t(83) = 2.78; P < 0.01; 95% CI: (0.13, 0.78)), which, in turn, drove participants’ preferences for being replaced by software (versus other humans) (indirect effect: b = −0.455; Z = −2.67; P < 0.01; 95% CI: (−0.85, −0.16)). Thus, these results support our proposed account that social comparisons can explain why people’s self-identity is less threatened by robotic replacement.

Fig. 2: Mediation diagram for study 5a (n = 85).

Engagement in social comparisons with software versus another worker explains self-threat concerns, thereby driving preferences for human versus robotic replacement (indirect effect: b = −0.455; Z = −2.67; P < 0.01; 95% CI: (−0.85, −0.16)). Mediation was tested by calculating bias-corrected 95% CIs using bootstrapping with 10,000 resamples via the PROCESS macro25. Significance levels: **P < 0.01; ***P < 0.001.
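The bootstrapped indirect effect can be approximated with the sketch below. It is not the PROCESS macro: it uses simulated placeholder data, assumed variable names and a simple percentile (rather than bias-corrected) bootstrap, so it illustrates the logic of the test rather than reproducing the reported estimate.

    # Sketch: percentile bootstrap of the indirect effect a*b for the path
    # social comparison -> self-threat -> preference for robotic replacement.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 85
    social_comparison = rng.normal(1.7, 0.8, n)                    # X (placeholder)
    self_threat = 0.45 * social_comparison + rng.normal(0, 1, n)   # M (placeholder)
    preference = -0.9 * self_threat + rng.normal(0, 1, n)          # Y (placeholder)

    def indirect_effect(x, m, y):
        a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                        # X -> M
        b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]  # M -> Y given X
        return a * b

    idx = np.arange(n)
    boot = []
    for _ in range(10_000):
        s = rng.choice(idx, size=n, replace=True)  # same resample for X, M and Y
        boot.append(indirect_effect(social_comparison[s], self_threat[s], preference[s]))

    lo, hi = np.percentile(boot, [2.5, 97.5])
    point = indirect_effect(social_comparison, self_threat, preference)
    print(f"indirect effect = {point:.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")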

In study 5b, we tested whether the effect of robotic (versus human) replacement on self-threat observed in study 4 is due to social comparison processes (in contrast with study 5a, the replacement option in study 5b was manipulated between participants). In addition to self-threat (our focal dependent variable), we also measured the extent to which participants engaged in social comparisons. Out of 240 participants, only ten rated the replacement scenarios as not believable and were excluded from further analyses.

Consistent with the previous results, we again found that being replaced by a robot (versus by another warehouse worker) led to a lower degree of self-threat (mean = 3.98 versus 4.81; t(228) = −4.32; P < 0.001; d = −0.57; 95% CI: (−0.83, −0.30)). A mediation model supported our prediction that this reduced effect of robotic replacement on self-threat is mediated by social comparison processes (that is, participants rated being replaced by a robot (versus another warehouse worker) as less self-threatening because they were less likely to compare themselves with a robot than with another worker (indirect effect: b = 0.183; Z = 2.56; P < 0.05; 95% CI: (0.06, 0.34))). The results of this mediation model provide further support that social comparison processes underlie the effect of robotic (versus human) replacement on self-threat.

In study 5c, we tested whether social comparisons underlie the increased self-threat associated with human (versus robotic) replacement through experimental manipulation. Specifically, we manipulated the extent to which other human employees were relevant targets of social comparisons. We predicted that other human workers should become less relevant comparison targets—and should therefore trigger less self-threatening social comparisons—when they do not rely on their own (human) abilities to perform the job8, but instead rely on technological (non-human) abilities such as artificial intelligence. To test this prediction, we randomly assigned participants to one of three conditions. In all conditions, participants were asked to imagine working as professional translators and read that, as part of a reorganization process, the firm had decided to replace them. In the first condition, participants were told that they would be replaced with modern software using artificial intelligence. In the other two conditions, participants read the same text but with the difference that the firm had decided to replace them with another (human) employee (condition 2), or with another (human) employee using artificial intelligence (condition 3). In addition to self-threat, we also measured the relevance of social comparison.

The perceived relevance of social comparison varied across the conditions (F(2, 358) = 33.76; P < 0.001; η² = 0.16; 95% CI: (0.09, 0.22)). Another employee (using their own abilities) was rated as a more relevant comparison target (mean = 5.34) than software relying on artificial intelligence (mean = 4.11; t(358) = 6.74; P < 0.001; d = 0.93; 95% CI: (0.66, 1.21)) or than another employee relying on artificial intelligence (mean = 3.99; t(358) = 7.41; P < 0.001; d = 0.95; 95% CI: (0.69, 1.25)). We found no significant differences between software relying on artificial intelligence and another employee relying on artificial intelligence (t(358) = 0.66; P = 0.508; d = 0.08; 95% CI: (−0.17, 0.34)).

Mirroring these differences, participants’ levels of self-threat also varied across conditions (F(2, 358) = 17.88; P < 0.001; η² = 0.09; 95% CI: (0.04, 0.15)). Another employee (using their own abilities) induced a higher degree of self-threat (mean = 4.58) than software relying on artificial intelligence (mean = 3.68; t(358) = 4.90; P < 0.001; d = 0.64; 95% CI: (0.37, 0.93)) or than another employee relying on artificial intelligence (mean = 3.58; t(358) = 5.40; P < 0.001; d = 0.68; 95% CI: (0.40, 0.98)). We found no significant differences between software relying on artificial intelligence and another employee relying on artificial intelligence (t(358) = 0.50; P = 0.620; d = 0.07; 95% CI: (−0.19, 0.31)). Thus, the effect of human (versus robotic) replacement on self-threat was reduced when participants were replaced by another employee who—not relying on their own ability but on the abilities of a machine to perform their job—was perceived as a less relevant target of social comparisons.
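The three-condition comparison can be illustrated as follows; the group means and spreads are placeholders chosen to resemble the reported self-threat pattern, and the pairwise comparison uses a simple independent-samples t-test rather than a contrast on the pooled ANOVA error term, so the degrees of freedom differ from those reported above.

    # Sketch: one-way ANOVA across the three replacement conditions of study 5c,
    # with eta-squared and one illustrative pairwise comparison.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    groups = {
        "software_with_ai": rng.normal(3.7, 1.4, 119),
        "human":            rng.normal(4.6, 1.4, 123),
        "human_with_ai":    rng.normal(3.6, 1.4, 119),
    }

    f_stat, p_value = stats.f_oneway(*groups.values())

    # Eta-squared = SS_between / SS_total.
    all_values = np.concatenate(list(groups.values()))
    grand_mean = all_values.mean()
    ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups.values())
    eta_sq = ss_between / ((all_values - grand_mean) ** 2).sum()
    print(f"F = {f_stat:.2f}, p = {p_value:.3g}, eta^2 = {eta_sq:.2f}")

    t, p = stats.ttest_ind(groups["software_with_ai"], groups["human_with_ai"])
    print(f"software vs human-with-AI: t = {t:.2f}, p = {p:.3f}")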

In study 6, we tested whether the effects documented in the previous experiments replicated in a correlational study among people who had recently lost their jobs. Specifically, we tested whether self-reported reasons for people’s job loss (either robotic or human replacement) are empirically related to self-identity threats and perceived future economic prospects. We recruited workers from an online labour market who had lost their jobs in the previous two years. To measure attributions of job loss to robotic and human replacement, participants rated the extent to which they thought their job had become automated (that is, they had been replaced by machines, robots or software), as well as the extent to which they thought they had been replaced by another worker. In addition to job loss-related self-threat and future concerns, we also measured a series of control variables (for example, other reasons for job loss, the duration of unemployment, and former and current income).

Consistent with the results of the previous studies, we found evidence for a significant positive relationship between self-threat and attribution of the job loss to human replacement (Pearson’s r(214) = 0.26; P < 0.001; 95% CI: (0.14, 0.38)), but no evidence for a significant relationship between self-threat and attribution of the job loss to robotic replacement (r(214) = −0.05; P = 0.50; 95% CI: (−0.19, 0.09)). Conversely, we found evidence for a significant positive relationship between future economic concerns and attribution of the job loss to robotic replacement (r(214) = 0.14; P < 0.05; 95% CI: (0.03, 0.24)), but no evidence for a significant relationship between future economic concerns and attribution of the job loss to human replacement (r(214) = 0.10; P = 0.13; 95% CI: (−0.03, 0.23)). To examine this pattern of results more precisely, we estimated four OLS regression models with self-threat and future concerns as dependent variables (see Table 1). Each model controlled for an increasing number of factors. Model 1a,b controlled for other reasons for job loss. Model 2a,b controlled for other reasons for job loss and the degree of future economic concerns when regressing self-threat and vice versa, since they were positively related (r(214) = 0.27; P < 0.001; 95% CI: (0.12, 0.40)). Model 3a,b controlled for other reasons for job loss, the degree of future economic concerns and various characteristics of the lost job. Model 4a,b controlled for other reasons for job loss, the degree of future economic concerns, various characteristics of the lost job and the workers’ current situation (for example, current employment status), and other demographic variables.

Table 1 Regression models of self-threat and future economic concerns in study 6
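The correlational and regression pattern summarized in Table 1 can be sketched as follows; the column names are assumptions and the data are simulated with the reported direction of effects built in, so the numbers are illustrative only (the real study 6 data are available on the OSF page).

    # Sketch: Pearson correlation plus a control-adjusted OLS model in the
    # spirit of model 2a (self-threat regressed on both attributions and controls).
    import numpy as np
    import pandas as pd
    from scipy import stats
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(5)
    n = 216
    df = pd.DataFrame({
        "robotic_replacement": rng.uniform(1, 10, n),  # attribution ratings, 1-10
        "human_replacement":   rng.uniform(1, 10, n),
        "other_reasons":       rng.uniform(1, 10, n),
    })
    df["future_concerns"] = 0.15 * df["robotic_replacement"] + rng.normal(3, 1.5, n)
    df["self_threat"] = 0.25 * df["human_replacement"] + rng.normal(2, 1.5, n)

    r, p = stats.pearsonr(df["self_threat"], df["human_replacement"])
    print(f"r({n - 2}) = {r:.2f}, p = {p:.3f}")

    model = smf.ols("self_threat ~ robotic_replacement + human_replacement"
                    " + other_reasons + future_concerns", data=df).fit()
    print(model.params)
    print(model.conf_int())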

Across models 1a–4a, we found no significant relationship between perceived robotic replacement and self-threat (all P values > 0.10; see Table 2 for exact P values and CIs). In each model, the magnitude of this relationship was more than two times smaller compared with that of the relationship between job loss attribution to human replacement and self-threat, which was significant and positive in all four models (all P values < 0.01). In contrast, across models 1b–4b, we found a significant relationship between perceived robotic replacement and future economic concerns (all P values < 0.05). In each model where we controlled for self-threat, the magnitude of this relationship was around two times greater compared with the magnitude of the relationship between perceived human replacement and future economic concerns. Together, these results replicate and validate our experimental findings with people who lost their jobs: job loss was associated with different psychological consequences, depending on whether workers perceived their job loss to be due to robotic or human replacement.

Table 2 Exact P values and 95% CIs for the regression models in study 6

Technological progress is expected to affect millions of workers in a wide variety of occupations in the coming decades1. This transition will primarily affect specific work tasks2,3, but also—to a substantial extent—entire jobs1,4,5. Despite the crucial societal importance of this development, no previous research has examined how people react to the technological replacement of human labour. In 11 studies using different samples and contexts, our investigation reveals that, whereas the public prefers human workers to be replaced by other human workers (versus robots), the workers whose jobs are actually threatened might prefer to be replaced by robots (versus human workers). This is because robotic (versus human) replacement poses a less immediate threat to people’s self-worth. However, workers whose jobs are at risk are likely to perceive robotic (versus human) replacement as a greater threat to their future economic prospects. Given the scant literature on the psychology of workplace automation12, our research represents an important step towards a better understanding of the psychological consequences of technological (versus human) job replacement—and shows that these consequences can be understood within a social comparison framework.

The results of this research may help policymakers design support programmes for the unemployed. Such programmes can help to re-employ job seekers and reduce negative effects on their mental and physical health13. However, appropriately tailoring these interventions has proven difficult in practice14. Our findings suggest that interventions targeted at restoring feelings of competence and self-worth, which represent essential coping resources in the event of a job loss13, should be less of a priority when workers attribute their job loss to automation as opposed to human replacement. In contrast, for those workers who attribute their job loss to automation, it would be better to devote all resources to interventions targeted at upgrading skills and retraining. Such retraining interventions could not only provide workers with new skills that are difficult to automate (for example, social or emotional skills12), but could also alleviate feelings of future economic concerns by reducing perceptions of skill obsolescence. An additional study (n = 280) provides empirical support for the reasoning that the observed effect of robotic (versus human) replacement on future economic concerns is indeed driven by perceptions that there are decreasing demands for one’s skillset in the labour market (see Supplementary Results). Based on these results, we speculate that job seekers who attribute their job loss to automation should show less inertia in reskilling than other job seekers who are often too optimistic in the face of job loss15. Therefore, they should benefit particularly from interventions that address market conditions and guide them towards new (in-demand) occupations16.

Based on our findings in study 6 (that both age and attributions of job loss to robotic replacement are positively related to future economic concerns), it is conceivable that technological replacement may further reduce the already lowered job search motivation of older job seekers17. Without active labour market policies, this might further increase the likelihood of older workers leaving the labour market18.

Our findings suggest that the psychological consequences of robotic (versus human) job replacement hold across various types of samples from different cultural backgrounds (that is, from European and North American countries). Yet, we acknowledge the existence of socioeconomic and cultural differences across countries (for example, in the extent to which workers blame themselves (versus the system) for their unemployment and job search experiences19) that might affect how workers deal with these consequences, and what responses they expect from the government.

Our research focuses on how people react when modern technology replaces jobs. More research is needed to investigate how people react (and adapt) when modern technology replaces not entire jobs but specific tasks (for example, by exploring how engaging workers in the decision to automate specific tasks affects their attitude towards modern technology).

Our findings may also inspire novel predictions regarding the broader societal consequences of technological unemployment. For example, based on our results, it is conceivable that organized resistance among workers to job losses tends to be weaker when the job losses are attributed to automation than when they are attributed to human replacement (for example, outsourcing). We hope that, particularly in times when policymakers are discussing strategies and practices intended to support workers who have been displaced by technology20, our work encourages more research on the psychological consequences of technological unemployment before technological progress disrupts specific jobs and occupations.

Methods

Our research complied with all relevant ethical regulations regarding human research participants. Informed consent was obtained from every participant. In all studies, participation was voluntary, and subjects could leave at any time. All test statistics were two-sided. For all parametric tests, data were assumed to be normal, but this was not formally tested. However, all CIs were obtained via bootstrapping with 1,000 iterations, which produces intervals that do not rely on the assumption of normality for valid inference. Analyses for studies 1a–c, 2 and 3a,b were conducted with STATA 14.1, and analyses for studies 5a–c and 6 were conducted with SPSS 23. Sample size estimation was based on the availability and sample size of original studies (studies 1a,b, 3a, 4, 5a and 6), a priori power analysis using G*Power, designed to have 80% power (study 1c), and the criterion that replication studies should have (at least) 2.5 times the original sample size21 (studies 2, 3b and 5b,c). Data collection was performed blind to the conditions of the experiments. Data analysis was not blind to the conditions of the experiments. Apart from the attention-check and believability exclusions reported for the individual studies, no participants or data points were excluded from the analyses.
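For readers who want to reproduce the two design decisions mentioned above, the sketch below shows an a priori power calculation for a two-proportion comparison at 80% power (the inputs are illustrative, not the values actually entered into G*Power) and a simple percentile bootstrap CI with 1,000 iterations.

    # Sketch: sample-size calculation for 80% power and a percentile bootstrap CI.
    import numpy as np
    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    # Cohen's h for an assumed 65% versus 40% preferring human replacement.
    h = proportion_effectsize(0.65, 0.40)
    n_per_group = NormalIndPower().solve_power(effect_size=h, alpha=0.05, power=0.80)
    print(f"h = {h:.2f}, required n per condition ~ {n_per_group:.0f}")

    # Percentile bootstrap CI (1,000 iterations) for a sample mean.
    rng = np.random.default_rng(6)
    sample = rng.normal(4.5, 1.3, 85)  # placeholder ratings
    boot_means = [rng.choice(sample, size=sample.size, replace=True).mean()
                  for _ in range(1_000)]
    print(np.percentile(boot_means, [2.5, 97.5]))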

Study 1a

Participants (n = 124; 69 males; mean age = 19 years) were students from a university in the Netherlands. Thirty-four participants failed an attention check and were excluded from further analyses. Participants were randomly assigned to either the observer perspective or the employee perspective condition. In the observer perspective condition (n = 42), participants read that a company needed to cut costs in one of its business units and had two options available: replacing some of its existing employees with new employees or replacing them with robots from an external supplier. Next, participants were asked whether they would prefer that existing employees were replaced by new employees or by robots.

In the employee perspective condition (n = 48), we used the materials described above, except that we changed the following words: instead of “a company”, participants read “your company”; and instead of “existing employees will be replaced”, participants read “you will be replaced”. In both conditions, the levels of cost reduction and output quality associated with robotic or human replacement were held constant. Participants were then asked to complete the same preference question.

Study 1b,c

In study 1b, participants (n = 95; 59 males; mean age = 38 years) were highly qualified online workers who were recruited through an online research platform (Amazon Mechanical Turk (MTurk); Master Workers). Nine participants failed an attention check and were excluded from further analysis. The design and procedure of this study were similar to study 1a. Before indicating their preferences, participants in the observer perspective (n = 43) and employee perspective (n = 43) conditions were asked to elaborate on the two replacement options.

In study 1c, participants (n = 124; 70 males; mean age = 39 years; MTurk) were regular online workers from the United States and Canada who were assigned to either the observer (n = 63) or the employee (n = 61) perspective condition. In the observer (versus employee) perspective condition, participants were asked to consider that employees (versus themselves) were working for a large manufacturing company. Next, participants were told that the (versus their) company had decided to reorganize its business processes. As part of this reorganization process, some of the existing employees (versus themselves) would be replaced and thus lose their jobs. To achieve the goals of the reorganization process, the company had two options available: existing employees (versus themselves) could be replaced either by modern robots, which would perform the tasks automatically, or by other employees.

Study 2

A sample of 251 participants (125 males; mean age = 41 years), representative of the US population in terms of gender and age, was recruited by Dynata—a global market research agency. A Qualtrics quota error resulted in oversampling one male participant. We used the same study design as in studies 1a–c, but instead of preference, we measured negative emotional reactions as a dependent variable with three items: “I would feel more (1) sad; (2) frustrated; or (3) angry if existing employees were replaced (versus if I was replaced)…” (scores: 1 = by other employees (versus by another employee); 6 = by modern robots (versus by a modern robot); Cronbach’s α = 0.80).
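Scale reliabilities such as the Cronbach's α values reported throughout the Methods can be computed from scratch as in the sketch below; the three items here are simulated stand-ins for the negative-emotion items of study 2.

    # Sketch: Cronbach's alpha for a k-item scale,
    # alpha = k/(k-1) * (1 - sum(item variances) / variance of the sum score).
    import numpy as np

    rng = np.random.default_rng(7)
    n_respondents, n_items = 251, 3
    latent = rng.normal(0, 1, n_respondents)
    # Three correlated items on a 1-6 scale (illustrative only).
    items = np.clip(np.round(3.5 + latent[:, None]
                             + rng.normal(0, 0.8, (n_respondents, n_items))), 1, 6)

    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
    print(f"Cronbach's alpha = {alpha:.2f}")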

Study 3a

Participants (n = 95; 56 males; mean age = 20 years) were students from a university in the Netherlands. Ten participants failed an attention check and were excluded from further analyses. Participants were asked to imagine that they had recently graduated from law school and now worked at a reputable law firm. This information was followed by a description of their main tasks (see Supplementary Methods). Participants were told that they had performed their tasks satisfactorily (to avoid participants attributing their job loss to inadequate performance). Next, participants were told that after doing their job for 1 year their company had decided to reorganize its business processes. As part of this reorganization process, they would be replaced and thus lose their job. To achieve the goals of the reorganization process, the company had two options available. The company could either replace the employee by another lawyer who had recently graduated from law school, or by a modern software algorithm that would perform the employee’s tasks automatically. The level of expected effectiveness and output quality associated with both replacement options were held constant.

Next, participants indicated whether they would prefer being replaced by another lawyer or by software (1 = lawyer; 6 = software). We then measured self-threat (“Which option would…”: “…make you feel more devalued?”; “…make you raise more doubts about yourself?”; and “…make you question more your own abilities?”; α = 0.89) and future economic concerns (“Which option would…”: “…make you feel more worried about the future?”; “…make it more difficult for you to find another job?”; and “…make it more stressful for you to find another job?”; α = 0.72) (1 = lawyer; 6 = software).

Study 3b

Participants (n = 296; 192 males; mean age = 38 years) were workers from the manufacturing industry who were recruited by Prolific—a European-based online panel22. In addition to the pre-screening done by the agency, participants were again asked to verify their employment in the manufacturing industry before entering the actual study.

The design and procedure of this study were similar to study 3a. Participants were first asked to name their current job in the manufacturing industry. Next, participants were asked to imagine that their company had decided to reorganize its business processes. As part of this reorganization process, they would be replaced and thus lose their job. To achieve the goals of the reorganization process, the company had two options available. The company could replace them with either new manufacturing workers or a new technology that would perform their tasks automatically. Next, participants were asked whether they would prefer being replaced by a new manufacturing worker or by new technology (1 = new manufacturing worker; 6 = new technology). We measured self-threat (α = 0.87) and future concerns (α = 0.78) with the same three bipolar items used in study 3a. After completing these measures, we also administered an item to check whether participants believed that their current job could be replaced by new technology at some point in the near future (1 = yes; 2 = no).

Study 4

Participants (n = 92; 44 males; mean age = 20 years) were students from a university in the Netherlands who participated in a between-participants experiment. Three participants failed an attention check and were excluded from further analyses. Participants were randomly assigned to either the robot or human replacement condition. In the robot replacement condition (n = 44), participants were asked to imagine that they had recently graduated from high school and now worked at a large logistics firm. This was followed by a description of their main tasks (Supplementary Methods). Next, participants were told that after doing their job for 1 year their company had decided to reorganize their business processes, and therefore they would lose their job. They were told that, to achieve the goals of the reorganization process, they would be replaced by a modern robot that would perform their tasks automatically. In the human replacement condition (n = 45), participants read the same text but were told that they would be replaced by another warehouse worker. In both conditions, participants were told that they had performed their tasks satisfactorily (to avoid participants attributing their job loss to inadequate performance).

Next, we measured self-threat and future concerns. Self-threat was measured with two bipolar items: “If I were replaced by a modern robot (another warehouse worker), then…”: “…I would (not) question my abilities (at all)” and “…it would (not) raise (any) doubts about myself” with each answer scored 1 (6). We recoded these two items so that higher values indicated a higher degree of self-threat (r = 0.78). Future concerns were also measured with two items: “How worried would you be about your future?” and “How difficult do you think it would be for you to find another job?” (1 = not at all; 6 = extremely; r = 0.52).

Study 5a

Participants (n = 90; 19 males; mean age = 22 years) were business students from a university in the United States. The study design and procedure were the same as in study 3a. We varied the profession to increase the generalizability of our findings. That is, participants were asked to imagine that they had recently graduated from business school and now worked as a junior market analyst at a reputable firm. This was followed by a description of their main tasks (Supplementary Methods). Accordingly, participants then read the same text as in study 3a, the difference being that they would be replaced either by another market analyst who recently graduated from business school or by a modern software algorithm that would perform the employee’s tasks automatically.

In addition to preference (1 = another person; 6 = software) and which option induced a higher degree of self-threat (1 = another person; 6 = software; α = 0.83), we also measured participants’ engagement in social comparisons as an explanatory variable. This was measured with the following four items adapted from Darnon et al.23: “It was easier to compare myself with…”; “I compared myself more strongly with…”; “It was more natural to me to compare myself with…”; and “To me, it was more relevant to compare my abilities with…” (scores: 1 = the other person; 6 = the modern software (α = 0.77)). Finally, we asked participants whether they found the scenario in which a junior market analyst was replaced by modern software believable (1 = yes; 2 = no). Five participants rated the replacement scenario as not believable and were excluded from further analyses.

Study 5b

Participants (n = 242; 122 males; mean age = 37 years; MTurk) from the United States and Canada participated in a study similar to study 4. Two participants failed an attention check and were excluded from further analyses. In addition to self-threat (α = 0.85), we also measured participants’ engagement in social comparisons as a process variable with four items: “It was easy to compare myself with…”; “I compared myself strongly with…”; “It was natural to me to compare myself with…”; and “It was relevant to me to compare my abilities with…”: “…the modern robot (versus the other warehouse worker) replacing me” (scores: 1 = not at all; 6 = very much (α = 0.86)). We also asked participants whether they found the replacement scenarios believable (1 = yes, 2 = no). Ten participants rated the scenarios as not believable and were excluded from further analyses.

Study 5c

Participants (n = 361; 177 males; mean age = 37 years; MTurk) from the United States and Canada participated in a three-condition between-participants study. Participants were randomly assigned to either the robotic replacement, human replacement or human replacement complemented by technology condition. In the human replacement condition (n = 123), participants were asked to imagine that they had studied languages at college and recently started to work as a professional translator for a large international company, where their job was to translate technical product features from English into other languages. Next, participants were told that their company had informed them that, as part of a reorganization process, they would lose their job and would be replaced by another employee. In the robotic replacement condition (n = 119) and the human replacement complemented by technology condition (n = 119), participants read the same text but were told that they would be replaced by modern software that used artificial intelligence, or by another employee who used artificial intelligence, respectively.

Next, we measured self-threat (our dependent variable) as in study 4 (α = 0.83). As a manipulation check, we also measured the extent to which participants perceived each replacement option to be a relevant target of social comparisons: “As a translator, I felt it makes sense to compare myself with…”; “It felt reasonable to compare myself with…”; and “As a translator, I felt somewhat similar to…”: “…the employee that uses artificial intelligence (versus the software that uses artificial intelligence/versus the employee)”, as well as “I compared myself with the employee that uses artificial intelligence (versus the software that uses artificial intelligence/versus the employee) that replaced me as a professional translator” (scores: 1 = strongly disagree; 7 = strongly agree (α = 0.90)).

Study 6

Participants were recruited through MTurk. We posted a task entitled “Tell us your opinion on modern work”, which was described as “This is a study about unemployment. So, we are looking for people who have lost their jobs within the past 2 years, and who would like to give us their opinion about changes in today’s working environment”. After accepting the human intelligence task, but before taking the actual survey, participants were asked whether they had lost their job at least once within the past 2 years. Participants were informed that if they answered ‘yes’ to this screen-out question, they would proceed with the survey. If they answered ‘no’, they would be offered the possibility to participate in another survey. This screen-out question before participants entered the final survey helped ensure the validity of our results by excluding those participants who did not meet our selection criteria (specified in the description of the posted task) but who nevertheless started the survey24. Of the 275 people who accepted our task and initially started our survey, 216 participants (128 males; mean age = 35 years) answered ‘yes’ and thus represented the final sample for this study.

As independent variables, we measured the extent to which workers attributed their (most recent) job loss to robotic and human replacement. Additionally, we measured the extent to which workers perceived other reasons to be responsible for their job loss, using one item. Specifically, following the preamble “To what extent do you think the following reasons played a role in why you lost your job?”, participants rated the following three statements: “My job became automated (my job is now done by machines, robots, software, and so on)”; “I was replaced by another worker (for example, my job is now done by people working in other firms, other countries, etc.)”; and “other reasons” (1 = played no role; 10 = played a major role).

As dependent variables, we measured the degree of job loss-related self-threat and future economic concerns. Specifically, participants were told to think of the reasons for their job loss and that we were interested in how these reasons made them feel. Self-threat was measured with three items: “Due to the reasons why I lost my job…”: “…I did question my abilities”; “…it did make me feel extremely devalued”; and “…it did raise doubts about myself” (scores: 1 = strongly disagree; 6 = strongly agree; α = 0.79). Future concerns were measured with three items: “How worried were you about your future?” (1 = not at all worried; 6 = extremely worried); “How difficult do you think it would be for you to find another job?” (1 = not at all difficult; 6 = extremely difficult); and “How stressful do you think it would be for you to find another job?” (1 = not at all stressful; 6 = extremely stressful; α = 0.85).

We also measured a series of control and auxiliary variables, including participants’ duration of employment at the former company, the duration of unemployment, the size of the company, former and current income, current job status and demographic variables (for a full list, see Supplementary Methods).

Reporting Summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability

Data from all of the studies reported in this paper are publicly available at https://osf.io/8nfc5/.

Code availability

Analyses were conducted with STATA 14.1 and with SPSS 23. No custom code was used. Code that supports the findings of this study is available from the corresponding author upon request.

References

1. National Academies of Sciences, Engineering, and Medicine. Information Technology and the U.S. Workforce: Where Are We and Where Do We Go from Here? https://doi.org/10.17226/24649 (National Academies Press, 2017).

2. Brynjolfsson, E., Mitchell, T. & Rock, D. What can machines learn, and what does it mean for occupations and the economy? AEA Pap. Proc. 108, 43–47 (2018).

3. Arntz, M., Gregory, T. & Zierahn, U. The Risk of Automation for Jobs in OECD Countries: A Comparative Analysis. OECD Social, Employment and Migration Working Paper No. 189 https://doi.org/10.1787/5jlz9h56dvq7-en (OECD Publishing, 2016).

4. Brynjolfsson, E. & McAfee, A. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (WW Norton & Company, 2014).

5. Nedelkoska, L. & Quintini, G. Automation, Skills Use and Training. OECD Social, Employment and Migration Working Paper No. 202 https://doi.org/10.1787/2e2f4eea-en (OECD Publishing, 2018).

6. Special Eurobarometer 382: Public Attitudes Towards Robots https://data.europa.eu/euodp/data/dataset/S1044_77_1_EBS382 (European Commission, 2014).

7. Batson, C. D. & Powell, A. A. in Handbook of Psychology: Personality and Social Psychology Vol. 5, 463–484 (John Wiley & Sons, 2003).

8. Festinger, L. A theory of social comparison processes. Hum. Relat. 7, 117–140 (1954).

9. Tesser, A. Toward a self-evaluation maintenance model of social behavior. Adv. Exp. Soc. Psychol. 21, 181–227 (1988).

10. Arnkelsson, G. B. & Smith, W. P. The impact of stable and unstable attributes on ability assessment in social comparison. Pers. Soc. Psychol. Bull. 26, 936–947 (2000).

11. Charles, K. K., Hurst, E. & Schwartz, M. The transformation of manufacturing and the decline in US employment. in NBER Macroeconomics Annual 2018 Vol. 33 (eds Eichenbaum, M. & Parker, J. A.) 307–372 (University of Chicago Press, 2019).

12. Waytz, A. & Norton, M. I. Botsourcing and outsourcing: robot, British, Chinese, and German workers are for thinking—not feeling—jobs. Emotion 14, 434–444 (2014).

13. McKee-Ryan, F., Song, Z., Wanberg, C. R. & Kinicki, A. J. Psychological and physical well-being during unemployment: a meta-analytic study. J. Appl. Psychol. 90, 53–76 (2005).

14. Wanberg, C. R. The individual experience of unemployment. Annu. Rev. Psychol. 63, 369–396 (2012).

15. Wanberg, C., Basbug, G., Van Hooft, E. A. & Samtani, A. Navigating the black hole: explicating layers of job search context and adaptational responses. Pers. Psychol. 65, 887–926 (2012).

16. Hendra, R. et al. Encouraging Evidence on a Sector-Focused Advancement Strategy: Two-Year Impacts from the WorkAdvance Demonstration (MDRC, 2016).

17. Wanberg, C. R., Kanfer, R., Hamann, D. J. & Zhang, Z. Age and reemployment success after job loss: an integrative model and meta-analysis. Psychol. Bull. 142, 400–426 (2016).

18. Van Ours, J. C. & Vodopivec, M. How shortening the potential duration of unemployment benefits affects the duration of unemployment: evidence from a natural experiment. J. Labor Econ. 24, 351–378 (2006).

19. Sharone, O. Flawed System/Flawed Self: Job Searching and Unemployment Experiences (Univ. Chicago Press, 2013).

20. Mitchell, T. & Brynjolfsson, E. Track how technology is transforming work. Nature 544, 290–292 (2017).

21. Simonsohn, U. Small telescopes: detectability and the evaluation of replication results. Psychol. Sci. 26, 559–569 (2015).

22. Palan, S. & Schitter, C. Prolific.ac—a subject pool for online experiments. J. Behav. Exp. Finance 17, 22–27 (2018).

23. Darnon, C., Dompnier, B., Gilliéron, O. & Butera, F. The interplay of mastery and performance goals in social comparison: a multiple-goal perspective. J. Educ. Psychol. 102, 212–222 (2010).

24. Sharpe Wessling, K., Huber, J. & Netzer, O. MTurk character misrepresentation: assessment and solutions. J. Consum. Res. 44, 211–230 (2017).

25. Hayes, A. F. Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach (Guilford Publications, 2017).


Acknowledgements

The authors received no specific funding for this work.

Author information


Contributions

A.G., C.F. and S.P. designed the studies. A.G. and C.F. carried out the experiments. A.G. analysed the data. A.G., C.F. and S.P. wrote the paper.

Corresponding author

Correspondence to Armin Granulo.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information: Primary Handling Editor: Marike Schiffer.

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information

Supplementary Methods, Supplementary Results and Supplementary Tables 1–5.

Reporting Summary

Supplementary Dataset 1a

Data for study 1a.

Supplementary Dataset 1b

Data for study 1b.

Supplementary Dataset 1c

Data for study 1c.

Supplementary Dataset 2

Data for study 2.

Supplementary Dataset 3a

Data for study 3a.

Supplementary Dataset 3b

Data for study 3b.

Supplementary Dataset 4

Data for study 4.

Supplementary Dataset 5a

Data for study 5a.

Supplementary Dataset 5b

Data for study 5b.

Supplementary Dataset 5c

Data for study 5c.

Supplementary Dataset 6

Data for study 6.

Supplementary Data 1

Data for additional study examining the relationship between robotic (versus human) replacement, future economic concerns and skill obsolescence.


About this article


Cite this article

Granulo, A., Fuchs, C. & Puntoni, S. Psychological reactions to human versus robotic job replacement. Nat Hum Behav 3, 1062–1069 (2019). https://doi.org/10.1038/s41562-019-0670-y
