The positive–negative–competence (PNC) model of psychological responses to representations of robots

Robots are becoming an increasingly prominent part of society. Despite their growing importance, there exists no overarching model that synthesizes people’s psychological reactions to robots and identifies what factors shape them. To address this, we created a taxonomy of affective, cognitive and behavioural processes in response to a comprehensive stimulus sample depicting robots from 28 domains of human activity (for example, education, hospitality and industry) and examined its individual difference predictors. Across seven studies that tested 9,274 UK and US participants recruited via online panels, we used a data-driven approach combining qualitative and quantitative techniques to develop the positive–negative–competence model, which categorizes all psychological processes in response to the stimulus sample into three dimensions: positive, negative and competence-related. We also established the main individual difference predictors of these dimensions and examined the mechanisms for each predictor. Overall, this research provides an in-depth understanding of psychological functioning regarding representations of robots.

Various projections indicate that robots will soon become a constituent part of society and will need to be increasingly integrated into it [1][2][3][4][5]. This trend highlights the importance of understanding people's psychological processes (for example, feelings, thoughts and actions) towards robots. Indeed, these processes form the basis of human-robot relationships and are therefore likely to shape the dynamics of the new world permeated by robots [6][7][8][9][10]. In this respect, although various processes have been investigated 6, this research area is still in its infancy for several reasons.
First, scholars have not synthesized psychological processes towards robots into an overarching framework that clarifies how they function as a whole and allows for building theories that would explain them. Second, it is unclear whether, and how many, important psychological processes remain hidden due to the lack of systematic research on this topic. Third, previous studies have mainly focused on specific robot types (for example, social 6) rather than examining the full content space of robots across all domains of human activity (for example, education, hospitality and industry). Finally, most research has been conducted outside of psychology (for example, healthcare and robotics 6,11,12). Consequently, there has been little effort to integrate people's responses to robots with important constructs from psychology in a way that would allow the field to study this topic more systematically and establish a coherent research stream around it.
To address this, the present research has two objectives: (1) to develop an integrative and comprehensive taxonomy of psychological processes in response to robots from all domains of human activity that organizes these processes into dimensions; and (2) to establish which individual differences widely studied in psychology are the most important predictors of these dimensions and to understand the mechanisms behind their relationships.
In this context we use the term 'psychological processes' in reference to people's affective (that is, feelings towards robots), cognitive (that is, thoughts about robots) and behavioural responses (that is, actions towards them). This rule-of-thumb classification is often used to summarize and investigate psychological processes in an all-encompassing way [13][14][15], because an official taxonomy does not exist. We adopt it because it is useful as a guiding principle when (1) eliciting diverse psychological processes and (2) identifying and organizing previous literature, considering that psychological functioning involving robots is typically not studied as a uniform construct and comprises studies from numerous areas.

Article https://doi.org/10.1038/s41562-023-01705-7

Next, we briefly review previous research on psychological processes regarding robots in terms of affective, cognitive and behavioural responses (for a detailed review see Supplementary Notes). Before this review, we first clarify how we define robots because their definition is often confined to various specific types (for example, autonomous and social 6,[16][17][18][19][20]), and describing them as an overarching category can be less straightforward [19][20][21].

We adopt a general definition proposed by the Institute of Electrical and Electronics Engineers (IEEE 22), according to which robots are devices that can act in the physical world to accomplish different tasks and are made of mechanical and electronic parts. These devices can be autonomous or subordinated to humans or software agents acting on behalf of humans. Robots can also form groups (that is, robotic systems) in which they cooperate to accomplish collective goals (for example, car manufacturing).

In terms of cognitive responses, people's thoughts about robots can be organized into several themes. A key theme is the level of competence displayed by robots concerning tasks in which they are specialized 19,30,45,46. For example, robots are often seen as efficient and accurate in what they do and as more physically endurant than humans 19,47,48. Individuals can also consider robots helpful and appreciate their effectiveness in accomplishing various tasks, from household chores to carrying heavy loads [49][50][51][52]. Another important theme is anthropomorphism (that is, ascribing human characteristics to non-living entities 7). For instance, people may perceive robots as sentient beings that have feelings [53][54][55][56][57][58][59] but they may also see them as distinct from humans (for example, cold or soulless 19,[60][61][62]) and question whether robots can be trusted in their capacities as companions, coworkers and other roles they assume [63][64][65].

In terms of behavioural responses, actions towards robots can be classified as either approach (for example, engaging with them) or avoidance (for example, evading them) 10,[66][67][68]. Common approach behaviours involve communicating, cooperating, playing and requesting information 10,26,69,70. More negative approach behaviours have also been documented, including several instances of robot abuse [71][72][73]. In contrast to approach, avoidance behaviours (for example, hiding from robots) are infrequently mentioned in the literature and may typically occur in environments where robots could potentially injure humans 74,75.

Overall, the reviewed literature indicates that various psychological responses to robots have been observed. However, because this topic is not studied under a common umbrella of psychological processes but in relation to diverse topics (for example, anthropomorphism, robotic job replacement or robot acceptance 7,49,76), it is unclear how these processes are interlinked, what shapes them and whether all important processes have been discovered.

For these reasons, our research adopted a data-driven rather than a theory-driven approach [77][78][79]. Contrary to theory-driven studies, which are inherently deductive because they test hypotheses deduced from general principles (that is, theory), data-driven research is inductive because it starts with empirical observations that are not guided by hypotheses and can progressively evolve into theory [77][78][79][80][81][82].

A data-driven approach is recommended if (1) a construct is in its early stages of development and/or (2) its theoretical foundations have not been established [77][78][79][80]82,83. Based on this, a data-driven approach is optimal for our research for both reasons. First, as previously indicated, the conceptual bases of our topic are at an early stage because different affective, cognitive and behavioural responses to robots have not been studied under an all-encompassing construct (that is, psychological processes). Second, theoretical foundations have not yet been developed, because encapsulating the entirety of psychological functioning regarding robots by identifying, organizing and predicting the psychological processes triggered by robots is beyond the scope of existing models of human-technology relationships. To illustrate this, the technology acceptance model [84][85][86] and its extensions, the unified theory of acceptance and use of technology [87][88][89] and the Almere model 90, examine the factors that make people accept technology (for example, perceived usefulness, ease of use or social influence), whereas the media equation [91][92][93] examines whether people interact with media (for example, computers) similarly to how they interact with other humans.

Data-driven approaches have three main benefits. First, they allow the study of novel topics without engaging in premature theorizing that can lead to post hoc hypothesizing and false-positive findings 77,78,[94][95][96][97]. Second, because the emphasis is on inferences from data that are not constrained by previous theories and findings, these approaches can diversify knowledge of human psychology and spark unexpected insights 79,81,98. Third, they can be more beneficial to previous research on the topic than deductive approaches directly informed by this research. In behavioural sciences, failed replications are common, and researchers examining the same research questions and hypotheses, even with identical data, can often obtain different findings [99][100][101][102][103]. Therefore, if a data-driven study produces a finding consistent with previous research and theorizing, despite using a methodological approach that is solely guided by data and not constrained by their assumptions, this is a compelling case of support for the previous work. It is thus important to emphasize that using a data-driven approach does not imply conducting a research project that disregards previous literature. Quite the contrary, it is essential to comprehensively evaluate and discuss how the findings are linked to previous work to illuminate how the present research has extended this work and moved the field forward, a process labelled inductive integration 77.

Drawing on data-driven approaches, our research objectives, namely (1) establishing a taxonomy of psychological processes involving robots and (2) examining its individual difference predictors, are achieved in three phases comprising seven studies (Fig. 1; for participant information see Table 1).

Phase 1 consisted of two studies that undertook an in-depth examination of the construct of robots that was necessary to build the taxonomy. In Study 1 we developed an all-encompassing general definition of robots. In Study 2 we used this definition to identify all domains of human activity in which robots operate. Phase 2 consisted of three studies aimed at creating the taxonomy. In Study 3 we sampled a comprehensive content space of people's psychological processes involving robots across the domains identified in Phase 1 to develop items assessing each process. In Study 4 we determined the main dimensions of these processes using exploratory factor analyses (EFAs [104][105][106]). In Study 5 we further confirmed these dimensions using exploratory structural equation modelling (ESEM 107,108) and developed the psychological responses to robots (PRR) scale that can assess psychological processes towards any robot. Phase 3 consisted of two studies that focused on determining the most important individual difference predictors of the psychological responses and testing the mechanisms behind these relationships. In Study 6 we used machine learning 109,110 to identify the key predictors of the main dimensions of the PRR scale. In Study 7 we probed the mechanisms behind these predictors.

All in all, to achieve our research objectives, as stimuli we used representations (that is, images and descriptions) of robots (Supplementary Table 7) from 28 exhaustive domains of human activity in which robots operate (Table 2). This comprehensive approach allowed us to minimize the chance that our findings are driven by idiosyncrasies of a sample that is small in size and/or variety of robot types, which could compromise replicability 111,112. Despite the wide variety of our stimulus sample, it is unclear to what degree this sample is representative of the general population of robots because (1) there are no established recommendations on what variables would need to be measured to accurately define this population, (2) the type of data used to quantify general characteristics of human populations is not available for robots and (3) the field of robotics is rapidly evolving. Therefore, in the context of our research we use the term 'robot/s' in reference to our specific stimulus sample and we do not imply that our insights extend to the general population (that is, all physical robots).

Results
In this section we briefly present the results (for a detailed description see Supplementary Results).

Phase 1: mapping a comprehensive content space of robots
Phase 1 aimed to establish a comprehensive content space that encompasses a wide range of robots by identifying all domains of human activity in which robots operate, to ensure that our taxonomy developed in Phase 2 is not biased towards only a few robot types 111,112 .
The first step in this endeavour was to devise a general definition of robots in Study 1, because robot definitions are typically proposed by experts 6,21,22,113 and it is less well known whether these reflect how people more broadly perceive robots. Because any robot definition is essentially a set of characteristics that describe robots (for example, made of mechanical parts, autonomous 6,22,113), to develop a general definition we first recruited Sample 1 and asked them to generate robot characteristics. Using this approach, 277 characteristics were identified (Supplementary Table 3). We then recruited Sample 2 and asked them to group these characteristics into common categories. Using hierarchical cluster analysis [114][115][116], the following main clusters of robot characteristics were identified: (1) characteristics conveying the degree of robot-human similarity; (2) positive characteristics; (3) characteristics conveying robots' composition; (4) negative characteristics; and (5) characteristics conveying robots' ability to perform various tasks (Supplementary Table 3).
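For readers who want to see how such a grouping step works mechanically, a hierarchical cluster analysis of sorting data can be sketched in a few lines. The snippet below is a minimal illustration with simulated data, not the study's analysis code: the co-grouping matrix, its size (12 items rather than 277) and the average-linkage method are all assumptions made for the example.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical input: co_grouping[i, j] = proportion of participants who
# sorted characteristics i and j into the same category (simulated here).
rng = np.random.default_rng(0)
n = 12  # a small stand-in for the 277 characteristics
co_grouping = rng.uniform(0, 1, (n, n))
co_grouping = (co_grouping + co_grouping.T) / 2  # make it symmetric
np.fill_diagonal(co_grouping, 1.0)

# Convert similarity to dissimilarity and cluster hierarchically.
dissimilarity = 1.0 - co_grouping
condensed = squareform(dissimilarity, checks=False)  # flattened upper triangle
tree = linkage(condensed, method='average')

# Cut the dendrogram into (at most) five clusters, mirroring the
# five-cluster solution reported in the study.
labels = fcluster(tree, t=5, criterion='maxclust')
print(labels)  # one cluster label per characteristic
```

The `t=5` cut is chosen here only to mirror the reported solution; in practice the number of clusters would be decided by inspecting the dendrogram and cluster coherence.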
The general definition of robots that we subsequently developed by linking the themes of each cluster is available in Table 2. It is important to emphasize that we did not form the definition by always translating an individual cluster theme into a separate part, because the definition was more succinct and coherent if certain themes were combined in the same parts.
In Study 2 we used this robot definition to identify a comprehensive list of domains in which robots operate. Participants were presented with the definition and asked to generate all such domains they could think of. To develop an extensive inventory of domains, we analysed their responses using inductive content analysis [117][118][119][120][121]. Additionally, to ensure we did not miss any domains that participants were unable to identify, we consulted various other resources (for example, articles from the literature review of this paper and classifications detailed in Methods). The final list of domains, accompanied by the example items generated by participants, is available in Table 2.

Phase 2: creating the taxonomy of psychological processes
To develop the taxonomy, it was first necessary to identify a comprehensive range of psychological processes involving robots in Study 3. We instructed participants to write about any feelings, thoughts and behaviours they could think of concerning robots from the domains developed in Study 2 (Table 2); each participant was randomly allocated to one of five domains. Participants were not provided with specific robot examples for a given domain, because we expected that reliance on their own reflections and experiences would cover a broader spectrum of robots and therefore increase the diversity of psychological processes reported (for a similar methodological approach see ref. 82). Table 3 contains the final list of psychological processes derived from participants' responses using iterative categorization 122.

Fig. 1 caption: Phase 1. The goal of this phase was to develop a comprehensive list of domains from all areas of human activity in which robots play a role so that this list could be used in Phase 2 to create the taxonomy of psychological processes in response to robots across these domains. This was achieved by first developing a general definition of robots based on how participants perceive them (Study 1), as robots are typically defined by experts and it is less well known whether their definitions reflect how people see robots. Then, a comprehensive list of domains of human activity was generated in which robots, as defined using the previously developed definition, can be encountered (Study 2). Phase 2. The goal of this phase was to develop the taxonomy of psychological processes in response to robots from the domains identified in Phase 1. This was achieved by first generating an exhaustive list of psychological processes (that is, feelings, thoughts and behaviours) towards robots from these domains (Study 3). Then, we created an item for each process and asked participants to answer the items in relation to a robot from each domain to organize these processes into dimensions (Study 4). Finally, we confirmed the dimensions (Study 5). Phase 3. The goal of this phase was to establish the main individual difference predictors of the dimensions of psychological processes identified in Phase 2, and to determine the mechanisms that explain why these individual differences are predictive of the dimensions. This was achieved by first testing a comprehensive range of individual differences from personality psychology to establish the most predictive ones (Study 6). Then, the key predictors identified were replicated and their mechanisms examined (Study 7). Study 1: developing a definition of robots based on how participants perceive them. Study 2: identifying all domains of human functioning in which robots operate. Study 3: mapping the content space of psychological processes towards robots. Study 4: establishing dimensions of the psychological processes. Study 5: confirming the dimensions of the psychological processes. Study 6: determining the main individual difference predictors of the dimensions. Study 7: confirming the predictors and explaining why they predict the dimensions.

Table 1 note: Supplementary Tables 1 and 2 contain more comprehensive breakdowns of these variables, the criteria that were used to guide representative sampling and additional demographic characteristics. UD, undisclosed: for gender, this category comprises participants who either selected the option 'choose not to disclose' or whose data were missing; for other variables, this category comprises participants whose data were missing. For employment status, the category 'employed' comprises participants who were either self-employed or working for an employer, whereas 'unemployed' refers to participants who were not working for themselves or someone else.

Table 2 note: Our aim was to develop domains that are narrow rather than broad, which means that some overlap between them may be present. This approach was aligned with our objective to establish a comprehensive content space of all robots to decrease the probability of using a biased stimulus sample 111,112 when developing the taxonomy of psychological responses to representations of robots. For that reason, it was more optimal to lean towards having too many rather than too few domains, to reduce the chance of failing to cover the content space of all robots in detail and omitting important types of robots. Cluster 1, characteristics conveying the degree of robot-human similarity; Cluster 2, positive characteristics; Cluster 3, characteristics conveying robots' composition; Cluster 4, negative characteristics; Cluster 5, characteristics conveying robots' ability to perform various tasks (Supplementary Table 3).
In Study 4 we then created items for each of these processes (Table 3) and asked participants to answer the items about an example of a robot (Supplementary Table 7) from one of the 28 domains (Table 2) to which they were randomly allocated. To develop the taxonomy from participants' responses we used maximum-likelihood EFAs 104 with Kaiser 123 normalized promax rotation 106,124. EFAs were appropriate because the Kaiser-Meyer-Olkin measure of sampling adequacy was 0.983 and 0.984 for Samples 1 and 2, respectively, and Bartlett's test of sphericity was significant (for both samples, P < 0.001) 125.
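The two adequacy checks reported here can be computed directly from a correlation matrix. The sketch below implements the standard Kaiser-Meyer-Olkin and Bartlett formulas on simulated one-factor data; it illustrates the checks themselves and is not the study's code — the data, sample size and number of items are assumptions for the example.

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(R, n):
    """Bartlett's test: is the p x p correlation matrix R an identity matrix?"""
    p = R.shape[0]
    statistic = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return statistic, chi2.sf(statistic, df)

def kmo(R):
    """Kaiser-Meyer-Olkin measure of sampling adequacy from correlation matrix R."""
    inv = np.linalg.inv(R)
    # Anti-image (partial) correlations: -inv_ij / sqrt(inv_ii * inv_jj).
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d
    np.fill_diagonal(partial, 0.0)
    off = ~np.eye(R.shape[0], dtype=bool)
    r2 = (R[off] ** 2).sum()
    q2 = (partial[off] ** 2).sum()
    return r2 / (r2 + q2)

# Illustrative data: items driven by one common factor should yield a high
# KMO and a significant Bartlett test, licensing factor analysis.
rng = np.random.default_rng(1)
factor = rng.normal(size=(500, 1))
X = factor + 0.5 * rng.normal(size=(500, 6))  # six items, one latent factor
R = np.corrcoef(X, rowvar=False)
stat, p = bartlett_sphericity(R, n=X.shape[0])
print(round(kmo(R), 3), p < 0.001)
```

By convention a KMO above roughly 0.8 is considered good; the values of 0.983 and 0.984 reported above are close to the theoretical maximum of 1.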
To select the most appropriate factor solution we used the following procedure. We first consulted parallel analysis 126,127, very simple structure 128, Velicer's minimum average partial (MAP) test 129, optimal coordinates 130, acceleration factor 130, the Kaiser rule 131 and visual inspection of scree plots 132, which indicated that extraction of between one and 19 factors (Sample 1) and between two and 18 factors (Sample 2) could be optimal. Next, we evaluated the largest factor solutions (that is, 19 factors for Sample 1 and 18 for Sample 2) against several statistical and semantic benchmarks. If the benchmarks were not met, we decreased the number of factors by one and evaluated these new solutions. This procedure was continued until the benchmarks were met. Concerning statistical benchmarks, a factor solution was required to produce only valid factors, namely those that have at least three items with standardized loadings ≥0.5 and cross-loadings of <0.32 (refs. 105,125,133,134). Semantically, a solution was required to make sense conceptually by having factors that are coherent and easy to interpret 135,136.
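The statistical benchmark described here (at least three items per factor with primary loadings ≥0.5 and cross-loadings <0.32) reduces to a simple rule over a loading matrix. The function below is a hypothetical helper illustrating that rule on a toy two-factor loading matrix; it is not taken from the study's code.

```python
import numpy as np

def valid_factors(loadings, primary=0.50, cross=0.32, min_items=3):
    """Return, per factor, whether it meets the benchmark: at least
    `min_items` items whose primary (largest absolute) loading is >= `primary`
    while their loadings on every other factor stay below `cross`."""
    L = np.abs(np.asarray(loadings, dtype=float))
    counts = np.zeros(L.shape[1], dtype=int)
    for item in L:
        f = item.argmax()                    # the item's primary factor
        others = np.delete(item, f)          # its cross-loadings
        if item[f] >= primary and (others < cross).all():
            counts[f] += 1
    return counts >= min_items

# Toy loading matrix: two clean factors with three items each.
L = [[.71, .10], [.65, .05], [.80, .12],
     [.08, .62], [.11, .55], [.04, .70]]
print(valid_factors(L))  # → [ True  True ]
```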
For Samples 1 and 2, three-factor solutions emerged as the most optimal. These met the statistical criteria and had semantically coherent factors that denoted positive, negative and competence-related psychological processes (Table 3). Therefore, the taxonomy was labelled the positive-negative-competence (PNC) model of psychological processes regarding robots. None of the larger factor solutions met the statistical criteria.
We aimed to further validate the PNC model by confirming its dimensions and thereby developing the PRR scale that measures them. To do this, in Study 4 we selected a representative subset of PNC items (bold items in Table 3) and subjected them to ESEM 107 using the maximum-likelihood with robust standard errors (MLR) estimator 137,138 and target rotation with all cross-loadings as targets of zero 139,140. For both samples, fit indices showed good to excellent fit (that is, SRMR < 0.05, CFI > 0.90 and RMSEA < 0.06). Subsequently, in Study 5 we recruited two additional samples and asked participants to answer these items about one of the two robot examples (Supplementary Table 7) from one of the 28 domains to which participants were randomly allocated. The ESEM models for both samples had a good to excellent fit (Table 4). Moreover, items previously classified under a specific dimension (that is, positive, negative or competence) by EFAs in Study 4 (Table 3) had the highest loadings for this dimension whereas the cross-loadings were <0.32.
To ensure that the model comprising the three dimensions was the most appropriate we tested several alternative models, which were all rejected due to poor fit (Supplementary Results).
To show that our model has equivalent factor structure, loadings and intercepts regardless of participants' country, robot examples used and several key demographic characteristics, we tested configural, metric and scalar measurement invariance [144][145][146]. As shown in Table 5, measurement invariance was demonstrated in all cases given that the configural model demonstrated good to excellent fit (SRMR < 0.05, CFI > 0.90, RMSEA < 0.06; refs. 141-143) and changes in SRMR, CFI and RMSEA were, respectively, ≤0.030, 0.010 and 0.015 for the metric model and ≤0.015, 0.010 and 0.015 for the scalar model 144. Since we could not analyse measurement invariance for participants who did versus did not use robots at work in Study 5 because the number of those who did was insufficient (Table 1), we tested this in Study 6, where sample sizes were larger. In Study 6 we also computed measurement invariance for additional participant characteristics assessed in that study (educational attainment, income, being liberal versus conservative, ethnic identity and relationship status). Measurement invariance was demonstrated in all cases (Supplementary Table 10).
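The invariance decision rule described here, comparing changes in SRMR, CFI and RMSEA between nested models against fixed cut-offs, amounts to a threshold check. A minimal sketch, using illustrative fit values rather than the study's actual ones:

```python
def invariance_holds(prev, curr, d_srmr=0.030, d_cfi=0.010, d_rmsea=0.015):
    """Check changes in fit indices between nested invariance models
    (for example, configural -> metric) against the stated cut-offs."""
    return (abs(curr['srmr'] - prev['srmr']) <= d_srmr
            and abs(curr['cfi'] - prev['cfi']) <= d_cfi
            and abs(curr['rmsea'] - prev['rmsea']) <= d_rmsea)

# Hypothetical fit indices for the three nested models.
configural = {'srmr': 0.031, 'cfi': 0.962, 'rmsea': 0.048}
metric     = {'srmr': 0.035, 'cfi': 0.958, 'rmsea': 0.050}
scalar     = {'srmr': 0.038, 'cfi': 0.951, 'rmsea': 0.053}

print(invariance_holds(configural, metric))            # metric step
print(invariance_holds(metric, scalar, d_srmr=0.015))  # scalar step (stricter SRMR cut-off)
```

Note the stricter SRMR cut-off (0.015) for the scalar step, matching the criteria reported above.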
Overall, the structure of the PNC model and its validity across different subgroups of participants were confirmed.

Phase 3: examining individual difference predictors
In Study 6, to identify the main predictors of the PNC model we followed the analytic strategy described in Methods. We first computed 11 common machine learning models (for example, linear least squares, lasso 109,110) for the positive, negative and competence dimensions separately. The key predictors in each model were 79 personality measures that were found to be conceptually or theoretically relevant to the PNC dimensions. We selected these measures by examining several comprehensive psychological scale databases (for example, Database of Individual Differences Survey Tools 147). All measures and their justifications are available in Supplementary Table 11.
We then identified the most predictive models, which were the same across all PNC dimensions: conditional random forest. Subsequently we determined all individual differences that were among the top 30 predictors across these six models and that were also statistically significant in the linear least-squares model after applying the false-discovery rate 148 correction (Supplementary Tables 12-15). Several variables met these criteria and were therefore deemed the main individual difference predictors of PNC dimensions. For the positive dimension these were general risk propensity (GRP 149), anthropomorphism (IDAQ 150) and parental expectations (FMPS_PE 151); for the negative dimension these were trait negative affect (PANAS_TNA 152), psychopathy (SD3_P 153), anthropomorphism (IDAQ 150) and expressive suppression (ERQ_ES 154); and for the competence dimension these were approach temperament (ATQ_AP 155) and security-societal (PVQ5X_SS 156). According to the most interpretable model (that is, linear least squares), these most predictive individual differences were positively associated with the corresponding PNC dimensions.
In Study 7, to replicate the findings, we measured the most predictive individual differences in wave 1 and used linear regressions to show that they significantly predicted PNC dimensions in wave 2 (Table 6), consistent with Study 6. Furthermore, we examined various potential mediators of the relationship between each predictor and a PNC dimension using parallel mediation analyses 157, percentile-bootstrapped with 10,000 samples (for mediators and mediated effects see Table 7; the rationale behind each mediator and detailed mediation analyses are available in Supplementary Table 17 and Supplementary Results, respectively). To aid the interpretation of the mechanisms, below we summarize the mediated effects from Table 7 that successfully explain a portion of the relationship between the key individual differences and PNC dimensions.
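Percentile-bootstrapped mediation of the kind used here can be sketched for a single mediator. The function below estimates the indirect effect a×b (x → m, then m → y controlling for x) by resampling and returns a percentile confidence interval. The simulated data and the reduced bootstrap count (2,000 rather than 10,000, for speed) are illustrative assumptions, not the study's analysis.

```python
import numpy as np

def boot_indirect(x, m, y, n_boot=2000, seed=0):
    """Percentile-bootstrapped 95% CI for the indirect effect a*b with one
    mediator: a is the slope of m on x; b is the slope of y on m given x."""
    rng = np.random.default_rng(seed)
    n = len(x)
    est = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                      # resample with replacement
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]                     # path a: m ~ x
        Xmat = np.column_stack([np.ones(n), xb, mb])
        b = np.linalg.lstsq(Xmat, yb, rcond=None)[0][2]  # path b: y ~ x + m
        est[i] = a * b
    return np.percentile(est, [2.5, 97.5])

# Illustrative data with a genuine indirect effect (x -> m -> y).
rng = np.random.default_rng(3)
x = rng.normal(size=400)
m = 0.6 * x + rng.normal(size=400)
y = 0.5 * m + rng.normal(size=400)
lo, hi = boot_indirect(x, m, y)
print(lo > 0)  # a CI excluding zero indicates mediation
```

A parallel mediation model extends this by entering several mediators into the y-equation simultaneously, yielding one indirect effect per mediator.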
For the positive dimension, GRP 149 was a positive predictor because people scoring higher on this trait valued the risks associated with robot adoption (GRP_M3) and were curious to see how robots would change the world (GRP_M4). Moreover, IDAQ 150 was a positive predictor because people scoring higher on this trait generally felt positive towards inanimate entities with human features (IDAQ_M3), and because interaction with such entities helped them fulfil the need to experience strong emotions regularly (IDAQ_M2). FMPS_PE 151 was also a positive predictor due to its association with valuing robots because they were closer to perfection than humans (FMPS_PE_M1), and also because they could help humans fulfil their own high expectations (FMPS_PE_M2) and could help humans cope with their own high expectations of themselves (FMPS_PE_M6).
For the negative dimension, PANAS_TNA 152 was a positive predictor because people scoring high on this trait were more likely to be in a state of activated displeasure (for example, feeling scared and upset; 12-PAC_AD 158). Furthermore, SD3_P 153 was a positive predictor because people scoring high on this trait were also more likely to be in the state of activated displeasure (12-PAC_AD 158), had negative feelings towards other people's inventions (SD3_P_M2) and felt inferior towards technologies in which they were not proficient (SD3_P_M3). For ERQ_ES 154 and IDAQ 150, we did not manage to explain the mechanism behind their relationship with the negative dimension.
For the competence dimension, ATQ_AP 155 was a positive predictor because people scoring high on this trait were more likely to value exceptional skills and competencies (ATQ_AP_M5). PVQ5X_SS 156 was also a positive predictor because it was associated with people linking advanced technology (for example, robots and machines) with how powerful society is (PVQ5X_SS_M4).

Table 5 note: The symbol Δ refers to the absolute value of a change in fit indices for an invariance model relative to the previous one (that is, metric minus configural; scalar minus metric). For robot example, 'A' indicates that the robot example for the domain to which participants from Study 5 were randomly allocated belonged to one of the two stimulus sets used in the present research, while 'B' indicates that the robot example belonged to the other stimulus set (Supplementary Table 7). For gender, very few participants identified themselves as 'other' or did not disclose any information (Table 1), and they were therefore randomly classified as either 'female' or 'male' so they could be used in invariance testing. For employment status, the category 'employed' includes those participants who were self-employed or working for an employer. For use of robots at work we could not analyse measurement invariance because of the insufficient number of participants who used robots at work (Table 1). However, for Study 6, in which sample sizes were larger (Table 1), we tested measurement invariance for this variable and for additional participant characteristics assessed in that study (educational attainment, income, political orientation: liberal versus conservative, ethnic identity and relationship status). Measurement invariance was met in all cases (Supplementary Table 10).

Discussion
In this section we first discuss (1) our findings and their contributions in relation to previous research, to achieve inductive integration 77, and then (2) the main limitations (for a detailed discussion, see Supplementary Discussion).
Starting with Phase 1, we first discuss the robot definition (Table 2) and then the domains (Table 2). Regarding the definition, ours and that of the IEEE 22 both conceptualize robots as devices or entities that can perform different tasks (Part 1, Table 2), emphasize that robots can have different degrees of autonomy (Part 2, Table 2) and include robots' composition (Part 5, Table 2). However, the two definitions also have unique elements: ours includes robots' durability (Part 3, Table 2) and positive/negative attributes (Part 4, Table 2), whereas the IEEE definition includes robots' capability to form robotic systems. Overall, although our definition is somewhat more nuanced, both definitions are remarkably aligned, which indicates that experts and lay individuals perceive robots similarly.
Regarding the domains in which robots operate, we identified 28 (Table 2), which is more than professional organizations usually propose (for example, the IEEE lists 18 domains on its website, https://robots.ieee.org/learn/types-of-robots/). However, this is not surprising because our list was intentionally nuanced to enable the identification of a comprehensive sample of robots, and we hope that other scholars will adopt it in their research for this purpose. It is important to emphasize that, despite the meticulous procedure used to develop the list, it is possible that (1) we failed to identify more niche domains and (2) the number of domains might increase as technology advances.
Continuing with Phase 2, we first compare the psychological processes of the PNC model (Table 3) against those reported in previous research and then discuss the model (Tables 3 and 4) more specifically. In general, participants evoked the processes identified in the literature reviewed in the Introduction, including positive feelings such as happiness 10,33 (Item 32); negative feelings such as anxiety 27 (Item 13); performance 48 and usefulness 19 (Items 4 and 5); anthropomorphism 36,159 (Items 24 and 27); and various approach 66 (Items 22 and 40) or avoidance 26 (Items 11 and 52) behaviours (Table 3). Importantly, participants also described many infrequent or previously unidentified processes. For example, they indicated that robots contribute to human degeneration (Item 142); lead to existential questioning (Item 148); make people feel dehumanized (Item 25); help humans self-improve (Item 132); and restrict freedom (Item 114).
One of the main contributions of our research is showing that these seemingly highly diverse psychological processes fall under three dimensions: positive (P), negative (N) and competence (C) (Tables 3 and 4). In general, previous research on human-robot relationships and interactions has focused on studying and measuring specific psychological reactions to robots (for example, safety, anthropomorphism, animacy, intelligence, likeability and various social attributes 159,160) but did not attempt to identify all these reactions and investigate them under an all-encompassing construct of psychological processes. In that regard, the PNC model can be seen as an integrative framework that links and organizes an exhaustive list of psychological processes, both those that researchers have already studied separately and the less common ones generated by our participants. We believe that our model moves the field forward, not only through this integration but also by enabling researchers to systematically study psychological processes regarding robots by (1) using the PNC as a guide to inform the design of future research on these processes and (2) employing the PRR scale to measure them.
One of the most interesting insights spawned by the PNC model stems from comparing it with the stereotype content model (SCM 161,162). According to the SCM, people form impressions of other humans along two dimensions: warmth (that is, positive and negative social characteristics) and competence (that is, a person's ability to successfully accomplish tasks). Although our model is broader than the SCM because it comprises all psychological processes rather than only social and intellectual characteristics, the competence dimensions from the two models are thematically comparable, whereas the positive and negative attributes from the SCM's warmth dimension are broadly aligned with our positive and negative dimensions.

GRP and FMPS_PE were measured on a 1-5 scale (1, strongly disagree; 5, strongly agree); IDAQ was measured on a 0-10 scale (0, not at all; 10, very much); PANAS_TNA was measured on a 1-5 scale (1, strongly disagree; 5, strongly agree); SD3_P was measured on a 1-5 scale (1, disagree strongly; 5, agree strongly); ERQ_ES and ATQ_AP were measured on a 1-7 scale (1, strongly disagree; 7, strongly agree); and PVQ5X_SS was measured on a 1-6 scale (1, not like me at all; 6, very much like me).

Article
https://doi.org/10.1038/s41562-023-01705-7

Ending with Phase 3, we discuss our findings on individual difference predictors (Tables 6 and 7) in relation to the previous relevant literature. In this respect, researchers found that extraversion, openness and anthropomorphism predicted positive responses to robots 163-166; the need for cognition predicted lower negative attitudes towards robots 167; and animal reminder disgust, neuroticism and religiosity predicted experiencing robots as eerie 168. Among these, our research corroborated only the positive relationship between anthropomorphism and positive responses (Table 6).
We also went beyond previous research by discovering many relationships not easily anticipated by theory. For example, although we had a sound rationale behind each predictor (Supplementary Table 11), it would have been difficult to foresee psychopathy as the most robust predictor of the negative dimension (Table 6) 153. We also did not expect that one of the main mechanisms behind negative robot perceptions would be negative feelings towards other people's creations and the state of activated displeasure, which mediated the relationship between psychopathy and the negative PNC dimension (Table 7). Therefore, using a data-driven approach allowed us to generate unexpected insights, thus diversifying the body of knowledge on psychological reactions to robots 79,81,98.
There are several limitations to this research. First, the stimuli were not physical robots but their depictions. These stimuli hold ecological validity because people often interact with robots indirectly (for example, via social media or various websites), and many psychological processes may therefore be shaped in this manner. Nonetheless, previous research showed that direct interaction with robots impacts people's experiences 27,169,170. Therefore, based on the present findings, it is not known whether our taxonomy applies to the physical counterparts of the robots depicted by our stimuli, and investigating this is currently unachievable because many of these robots are inaccessible for in-person research due to their size, cost, limited production or potential use as weapons (for example, industrial and military robots). However, this research may be possible in the future if such robots become more accessible.
Second, participants were from Western, educated, industrialized, rich and democratic 171 countries (United Kingdom and United States). Because our research proposed and investigated a construct (that is, psychological processes regarding robots) from scratch, our priority was to establish its foundations. Combining the investigation of cultural differences with this agenda using equally meticulous methods would have exceeded the scope of a single article. Nevertheless, because measurement invariance analyses showed that the PNC model applies to individuals regardless of their income, age, education, use of robots at work, political orientation, ethnic identity and relationship status, it is plausible that the model would generalize to countries that differ from the United Kingdom or United States on these population characteristics. Conducting an in-depth examination of this question will be a crucial step as this research topic progresses.
For each mediator we first present its name, followed by its mediated effect (ab) in parentheses. Mediated effects are presented in raw units. For example, for GRP_M3 (ab = 0.057, 99% CI [0.029, 0.093], ab% = 0.393) the mediated effect ab indicates that for a one-unit increase in GRP as a predictor, the positive dimension increased by 0.057 units, which is the effect that can be accounted for by the mediator (GRP_M3). For an easier understanding of the magnitude of each mediated effect, ab% is also reported; it indicates the percentage of the total effect between a predictor and DV (that is, coefficients b in Table 6) explained by the mediator. In some cases ab% can exceed 1 (that is, 100%), which means that the effect travelling through the mediator is larger than the total effect itself. A mediated effect is significant only if its bootstrapped 99% CI does not contain 0 (ref. 157). Some mediated effects (ab and ab%) are negative; this means they are in the opposite direction to the effect between a predictor and DV, and therefore do not explain their relationship. All mediators that successfully explained a portion of the relationship between a predictor and DV (that is, mediated effects that are positive and whose bootstrapped 99% CI does not contain 0) are presented in bold typeface. (a) The two items for GRP_M7 were averaged into a composite score. (b) For IDAQ as a predictor we used the same mediators for the positive and negative dimensions, considering that we wanted to ensure that any potential differences between the mechanisms for these two dimensions are not a consequence of different mediators being used in the mediation models. (c) The five 12-PAC mediators capture state affect because they were assessed in relation to how people currently felt. (d) Although the mediated effect of IDAQ_M3 was significant, the direction of this effect was negative (ab = −0.011) and thus opposite to the positive direction of the relationship between IDAQ and the negative dimension (Table 6). Therefore, the mediator failed to explain this relationship.

Third, we recruited online participants, who are inherently more confident with technology. Although this might have influenced the findings, alternative modes of recruitment (for example, laboratory) would have yielded smaller and less representative participant samples 172-176. Furthermore, to reduce the chance of technological proficiency biasing the findings, all machine learning models controlled for a variable indicative of technological proficiency (that is, previous frequency of interaction with robots; Supplementary Tables 11 and 12).
Finally, rapid technological development might make robots with an embodiment similar to humans able to perform and simulate all human activities, thereby substantially changing how people perceive robots. However, since our comparison of the PNC model and SCM 161,162 indicates that people form impressions of robots and humans in a similar manner, it is unlikely that robots becoming more like humans will have a notable impact on the structure of our model. Even if it does, the PNC can be updated via the same methodological procedures we used.

Methods
This research complies with the ethics policy and procedures of the London School of Economics and Political Science (LSE), and has also been approved by its Research Ethics Committee (no. 20810). Informed consent was obtained from all participants and they were compensated for their participation. Table 1 summarizes key participant information. In Studies 4-6, participants were recruited to be reasonably representative of the UK/US populations for age, gender and geographical region, and in Study 1 (Sample 1) for gender only. More comprehensive breakdowns of participant information and the criteria used for representative sampling are available in Supplementary Tables 1 and 2.
To be included in analyses, participants had to pass seriousness checks 177, instructed-response items (for example, please respond with 'somewhat disagree') 178-180, understanding checks in which they identified the main research topic (that is, robots) amongst dummy topics (for example, animals or art) and completely automated public Turing tests to tell computers and humans apart (CAPTCHAs), used to safeguard against bots 181. The number of these quality checks varied per study. In Studies 4-7, which were quantitative, we employed pairwise deletion for missing data because various simulations showed that this does not bias the type of analyses we used when missing data are infrequent (≤5%), even in smaller participant samples (for example, 240), and larger samples are generally more robust to missing data 182,183. In our analyses, the percentage of participants with missing data never exceeded 1.95%.
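The difference between pairwise and listwise deletion is easy to see with a small illustration. This is a sketch in Python with hypothetical data, not the authors' analysis code; pandas computes correlations on pairwise-complete observations by default.

```python
# Illustrative sketch (not the authors' code): pairwise vs listwise deletion.
# pandas computes each pairwise correlation from all rows complete for that
# pair, rather than dropping every row that has any missing value.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "x": [1.0, 2.0, 3.0, 4.0, np.nan],
    "y": [2.0, 4.0, 6.0, np.nan, 10.0],
    "z": [1.0, np.nan, 3.0, 4.0, 5.0],
})

pairwise = df.corr()           # pairwise deletion (pandas default)
listwise = df.dropna().corr()  # listwise deletion keeps fully complete rows only

print(len(df.dropna()))        # only 2 fully complete rows survive listwise
print(pairwise.loc["x", "y"])  # pairwise still uses 3 complete (x, y) pairs
```

With infrequent missingness (as in the studies above, under 2%), the two approaches converge, but pairwise deletion retains more information per estimate.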
The analyses using machine learning models (Study 6) did not rely on distributional assumptions due to cross-validation 184, and neither did the mediation analyses (Study 7) due to the bootstrapped confidence intervals used to test mediated effects 157. All other quantitative analyses assumed a normal distribution of data. Because formal normality tests are sensitive to small deviations that do not bias findings 134, we assumed variables to be normal if they had skewness between −2 and 2 and kurtosis between −7 and 7 (refs. 185-187). All the required variables met these criteria (Supplementary Tables 18-23). Given the large sample sizes we used, even severe deviations from normality would not compromise the validity of statistical inferences 157,188,189.
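The skewness/kurtosis heuristic described above can be sketched in a few lines. This is an illustrative Python example on simulated data (the variable and cut-offs follow the text; the data are hypothetical):

```python
# Illustrative sketch (not the authors' code): screening a variable with the
# skewness/kurtosis heuristic instead of a formal normality test.
import numpy as np
from scipy.stats import kurtosis, skew

rng = np.random.default_rng(42)
x = rng.normal(loc=3.5, scale=1.0, size=2000)  # hypothetical item scores

s = skew(x)
k = kurtosis(x)  # Fisher's definition: excess kurtosis, 0 for a normal curve

# Treat the variable as approximately normal if skewness lies in [-2, 2]
# and kurtosis in [-7, 7], per the criteria in the text.
approximately_normal = (-2 <= s <= 2) and (-7 <= k <= 7)
print(approximately_normal)
```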
Next, we succinctly describe the methods of the studies in each phase (for a more comprehensive description, see Supplementary Methods). Study 7 was preregistered on 12 December 2021 via the Open Science Framework (OSF) and can be accessed using this link: https://osf.io/nejvm?view_only=79b6eeee42e24cb2a977927712bdcdd2. There were no deviations from the preregistered protocol. Data and analysis codes for all studies are also publicly available via the OSF using the following link: https://osf.io/2ntdy/?view_only=2cacc7b1cf2141cf8c343f3ee28dab1d

Phase 1: mapping a comprehensive content space of robots
Study 1. Sample size. To determine the size of Sample 1 we relied on previous work showing that, in qualitative research, samples of 30-50 participants tend to reach the point of data saturation, which means that the addition of further participants produces little new information 190-195. We recruited a considerably larger sample (266; Table 1) to ensure that the study detected all important robot characteristics, because the robot definition we wanted to develop was essential for all subsequent studies. For Sample 2 we recruited 100 participants (Table 1), which is comparable to other studies using hierarchical clustering 196,197, given the lack of guidelines on optimal sample sizes for this technique (for additional insights based on simulations, see Supplementary Methods).
Procedure. In Sample 1, participants first answered the consent form, after which they were presented with three items that elicited robot characteristics. In the following order, they were asked to: (1) state the first thing that comes to mind when they think about a robot; (2) define in their own words what a robot is; and (3) list as many characteristics associated with robots as they could think of. At the end we assessed participant information, including gender, age, employment status and use of robots at work (Table 1). In Sample 2, after answering the consent form, participants were exposed to the 277 robot characteristics produced by Sample 1 (Supplementary Table 3) and were asked to sort them into groups based on similarity. To this end, participants were provided with up to 60 empty boxes representing different groups into which they could drag the characteristics they perceived as similar. At the end, participant information was assessed as for Sample 1.
Analytic approach. We first extracted the robot characteristics generated by Sample 1 participants for the three questions described in the Study 1 procedure and then rephrased those that were stated vaguely (for example, 'appearance of thought') into a more precise formulation (for example, 'appears to think on its own'). Next, we deleted all characteristics that were identical and therefore redundant. However, we retained many items that were overlapping or similar (for example, 'performs actions' and 'performs certain actions') to ensure that the potential content space of robot characteristics was sampled in detail (for the final list of 277 characteristics, see Supplementary Table 3). The characteristics, as sorted into categories by Sample 2 participants, were subjected to hierarchical cluster analysis for categorical data 114-116: a dissimilarity matrix was computed using Gower's distance 198,199, clusters were produced using Ward's linkage method 200,201 and the optimal number of clusters was determined via the mean silhouette width approach using the partitioning around medoids algorithm 114,202,203. The five clusters that emerged were then arranged into the robot definition (Table 2).

Study 2. Sample size. To determine sample size we followed the same guidelines as for Study 1 (Sample 1), which considered the point of data saturation in qualitative research.
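The Study 1 clustering pipeline (dissimilarity matrix, hierarchical linkage, silhouette-based choice of cluster count) can be sketched as follows. This is an illustrative Python example on hypothetical toy data, not the authors' R analysis; Hamming distance stands in for Gower's distance, to which Gower's measure reduces for purely categorical indicators, and average silhouette width is computed directly on the hierarchical solution rather than via partitioning around medoids.

```python
# Illustrative sketch (not the authors' analysis): cluster characteristics from
# binary "sorted into the same group" indicator data (hypothetical toy matrix).
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist, squareform
from sklearn.metrics import silhouette_score

# Rows = characteristics; columns = binary grouping indicators.
# The toy data contain two obvious clusters: rows 0-2 and rows 3-5.
X = np.array([
    [1, 1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 0],
    [0, 0, 0, 0, 1, 1],
])

d = pdist(X, metric="hamming")   # condensed dissimilarity matrix
Z = linkage(d, method="ward")    # Ward's linkage on the dissimilarities

# Choose the cluster count with the highest mean silhouette width.
scores = {}
for k in range(2, 5):
    labels = fcluster(Z, t=k, criterion="maxclust")
    scores[k] = silhouette_score(squareform(d), labels, metric="precomputed")
best_k = max(scores, key=scores.get)
print(best_k)
```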
Procedure. After completing the consent form, participants were presented with the robot definition developed in Study 1. They were then asked to think about and list any domains that came to mind in which humans can encounter and/or interact with robots. It was explained that, by 'domains', we mean any area of human life and human activity in which people encounter, interact with, use, are helped by and/or are substituted by robots. At the end, participant information was assessed as in Study 1 (Table 1).
Analytic approach. To identify the domains we performed an inductive qualitative content analysis on participants' responses 117-121: we first created a list of all domain items identified by participants (see Supplementary Results, subsection 'Additional analysis output') and then arranged these items into common categories that correspond to the domains of robot use. The first author created the initial list of categories from the domain items. The list was revised by the remaining authors and, eventually, it was consolidated by all three authors. To ensure that no important domains had been omitted, we also consulted the classification of robots proposed by the IEEE (https://robots.ieee.org/learn/types-of-robots/), the list of industries and sectors endorsed by the International Labor Organization (https://www.ilo.org/global/industries-and-sectors/lang--en/index.htm) and the articles from our literature review.
Phase 2: creating the taxonomy of psychological processes
Study 3. Sample size. To determine sample size we followed the same guidelines as for Study 1 (Sample 1) and Study 2. Because Study 3 aimed to identify a comprehensive range of psychological processes towards robots, which was a crucial step of our research, we recruited a substantially larger sample than required (350; Table 1) to ensure that even highly infrequent processes were detected.
Procedure. Participants first completed the consent form and were then randomly allocated to five out of the 28 domains we developed (Table 2). After reading the definition of robots (Table 2), we prompted them to think about robots from the allocated domains by writing about interactions they had with such robots, or else about interactions they could imagine or were exposed to via media. To assess participants' psychological processes, we then asked them to list and describe feelings they had experienced (for affective responses), thoughts they had (for cognitive responses) and actions they engaged in (for behavioural responses) when they interacted with any robots they could think of from each domain, or to write about feelings, thoughts and actions they could conceive in case they had never interacted with these robots. At the end, participant information was assessed as in Studies 1 and 2 (Table 1).
Analytic approach. We implemented iterative categorization 122. This qualitative analysis involved first splitting participants' responses to the questions assessing their psychological processes into key points (that is, separate issues or thoughts; for example, 'I think this will be the future') and then grouping these points into themes based on similarity. Of the 334 participants included in the analyses (Table 1), only four produced meaningless responses that could not be analysed; the remaining 330 generated 10,332 valid key points (approximately 31 per participant) that were analysed.

Study 4. Sample size. Because power analyses are difficult to implement for EFAs before any parameters are known, to determine the sizes of Samples 1 and 2 we consulted various resources that estimate the optimal sample size for EFAs (for a more comprehensive description, see Supplementary Methods). Because a size of 1,500 met all the estimates, we recruited the samples required to reach this number after accounting for exclusions (Table 1).
Procedure. The procedure for both samples was identical. After answering the consent form, participants were randomly allocated to a domain (Table 2) and received a specific example of a robot from that domain (Supplementary Table 7) that included an image and a description approximately eight lines long. For the sex domain, two robot examples were created (one male and one female) and participants assigned to this domain were randomly allocated to one. Participants were then asked to answer 149 items (Table 3), presented in a randomized order, about the robot in question. At the end, participant information was assessed as in Studies 1, 2 and 3 (Table 1).
Analytic approach. For both samples we planned several steps to determine the optimal factor structure. First, the Kaiser-Meyer-Olkin measure of sampling adequacy and Bartlett's test of sphericity were required to show that our data were suitable for EFAs 125. Second, to determine the preliminary number of factors to examine in EFAs, we used parallel analysis 126,127,204, very simple structure 128, Velicer's MAP 129, optimal coordinates 130, the acceleration factor 130, the Kaiser rule 131 and visual inspection of scree plots 132. This was advisable because consulting several criteria helps establish the range within which the optimal number of factors is likely to lie 82,135,136,205,206. Next, we aimed to evaluate the largest factor solution identified in the previous step against several statistical benchmarks using maximum-likelihood EFAs 104,105 with Kaiser-normalized 123 promax rotation 106,124. Namely, to be accepted, the factor solution was required to produce only valid factors (that is, those that have at least three items with loadings ≥0.5 and cross-loadings <0.32) 105,125,133,134. If these criteria were not met, we aimed to decrease the number of factors by one and evaluate the new solution; this procedure would continue until a satisfying solution was identified. Finally, the accepted factor structure also had to have factors that are coherent and easy to interpret 135,136. Importantly, this approach to selecting the best structure is not only statistically and semantically viable but has precedent in previous taxonomic research 82,205.
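Of the factor-retention criteria listed above, parallel analysis is the most widely recommended and is easy to sketch: real eigenvalues are retained while they exceed the corresponding percentile of eigenvalues from random data of the same shape. The following is an illustrative Python sketch on simulated data with two latent factors, not the authors' analysis code.

```python
# Illustrative sketch (not the authors' code): Horn's parallel analysis for
# choosing the number of factors, using simulated two-factor data.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical responses: 6 items driven by 2 latent factors plus noise.
f1 = rng.normal(size=n)
f2 = rng.normal(size=n)
loadings = np.array([[1, 0], [1, 0], [1, 0], [0, 1], [0, 1], [0, 1]])
X = np.column_stack([f1, f2]) @ loadings.T + 0.5 * rng.normal(size=(n, 6))

# Eigenvalues of the observed correlation matrix, sorted descending.
real_eigs = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]

# Eigenvalue distribution under pure noise with the same n and item count.
sim = np.array([
    np.linalg.eigvalsh(np.corrcoef(rng.normal(size=(n, 6)), rowvar=False))[::-1]
    for _ in range(200)
])
threshold = np.percentile(sim, 95, axis=0)

# Retain factors whose real eigenvalue exceeds the random-data threshold.
n_factors = int(np.sum(real_eigs > threshold))
print(n_factors)
```

Because the simulated data contain two factors, the procedure recovers two; on real data the same comparison gives one of the several candidate counts that are then evaluated against the EFA benchmarks described above.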
Study 5. Sample size. To determine sample size we used Monte Carlo simulations 207 based on the data from Samples 1 and 2 (Study 4). Details are available in Supplementary Methods.
Procedure. The procedure for both samples was identical. After answering the consent form, participants were randomly allocated to one robot example. The randomization procedure was the same as in Study 4 except that there were two (rather than one) possible robot examples per domain (Supplementary Table 7). The sex domain had four examples: two male and two female robots. The descriptions of robots were also consistent with Study 4. Participants were then asked to answer the 37 selected items (Table 4), presented in a randomized order, about the robot in question. At the end, participant information was assessed as in Studies 1, 2, 3 and 4 (Table 1).

Phase 3: examining individual difference predictors
Study 6. Sample size. For machine learning algorithms combined with cross-validation there are no straightforward guidelines for computing power analyses. Simulations showed that, for the tenfold cross-validations we were planning to use, a sample of 2,000 leads to high generalizability (that is, a likelihood that the results will apply to other samples from the same population) without inflating the time taken to run the models 209. Therefore, we aimed to recruit a sample that would comprise approximately 2,200 participants after accounting for exclusions, in case of any additional missing data.
Procedure. After answering the consent form, participants were randomly allocated to one robot example as in Study 5 and asked to answer the PRR scale items (Table 4), presented in a randomized order. They then completed measures that assessed the 79 individual differences we tested as predictors (Supplementary Table 11), ranging from general personality traits, such as the Big Five (ref. 210) or approach temperament 155, to more specific ones, such as psychopathy 153. We also measured covariates for inclusion in the models alongside the individual differences (that is, familiarity with the robot, frequency of interaction, descriptive norms, injunctive norms, age, income and political orientation; Supplementary Table 11). Finally, participant information was assessed as in the previous studies, with the addition of education level, ethnic identity and relationship status (Supplementary Table 1).
Analytic approach. We implemented a rigorous multistep procedure to select the most predictive individual differences. Using the caret package 109,110 in R, we computed the following 11 machine learning models for each PNC dimension separately: linear least squares, ridge, lasso, elastic net, k-nearest neighbours, regression trees, conditional inference trees, random forest, conditional random forest, neural networks and neural networks with a principal component step. For each model, tenfold cross-validation 184,211-214 was implemented and all 79 individual differences plus covariates were used as predictors.
The most predictive models were selected using root-mean-square error (r.m.s.e.) 109,110,184. For each PNC dimension, the model with the lowest r.m.s.e. was identified and the remaining models were compared with it using paired-samples t-tests (a Bonferroni-corrected α of 0.00167 was used as the significance criterion). Ultimately, the model with the lowest r.m.s.e. and those not significantly different from it were identified as the most predictive models. For each of these models we first identified the 30 most important predictors using the varImp function in R 110 and then identified the individual differences that appeared in the top 30 across all models.
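The model-selection logic above (fold-wise r.m.s.e. from cross-validation, then paired t-tests against the best model) can be sketched as follows. This is an illustrative Python example with two stand-in models on simulated data, not the authors' caret/R pipeline.

```python
# Illustrative sketch (not the authors' pipeline): compare models by
# cross-validated r.m.s.e. and test fold-wise differences against the best one.
import numpy as np
from scipy.stats import ttest_rel
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import KFold, cross_val_score

X, y = make_regression(n_samples=400, n_features=10, noise=5.0, random_state=1)
cv = KFold(n_splits=10, shuffle=True, random_state=1)

models = {"ols": LinearRegression(), "ridge": Ridge(alpha=1.0)}
fold_rmse = {
    name: -cross_val_score(m, X, y, cv=cv,
                           scoring="neg_root_mean_squared_error")
    for name, m in models.items()
}

# The best model has the lowest mean r.m.s.e.; competitors whose fold-wise
# errors do not differ significantly from it would also be retained.
best = min(fold_rmse, key=lambda name: fold_rmse[name].mean())
for name in models:
    if name != best:
        t, p = ttest_rel(fold_rmse[best], fold_rmse[name])
        print(name, round(p, 3))
print(best)
```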
Based on the linear least-squares model, which is in essence a linear regression algorithm combined with cross-validation and thus outputs P values, we retained only those of the most important individual differences identified in the previous step that were also statistically significant after applying false-discovery-rate correction 148. We used this approach because in Study 7 we aimed to replicate the selected predictors using linear regressions; we therefore wanted to further minimize the likelihood that these predictors were false positives.

Study 7. Sample size. Because this study tested the key predictors identified in Study 6, sample size was estimated using power analyses 215 based on the parameters from that study (Supplementary Methods).
Procedure. The study consisted of two waves. In wave 1, participants first completed the consent form and were then presented, in a randomized order, with the measures assessing the most predictive individual differences identified in Study 6 (Table 6). Finally, participant information was assessed as in Studies 1-5. Approximately 4 days after completing wave 1, participants were invited to participate in wave 2. They first completed the consent form and were then presented with the items measuring the mediators (Table 7) in a randomized order. Subsequently, they were randomly allocated to a robot example as in Studies 5 and 6 and asked to answer the PRR scale items (Table 4), presented in a randomized order.
Analytic approach. To test whether the key individual differences predicted the relevant PNC dimensions we used linear regressions, one per predictor (Table 6). Furthermore, to identify the most important mediators we used the Process package (Model 4 (ref. 157)) to perform parallel mediation analyses (that is, with all potential mediators analysed together for the relevant predictor; Table 7), percentile-bootstrapped with 10,000 samples. In line with the Benjamini-Yekutieli correction 216,217, the significance criterion was 0.01 for the regression analyses, whereas for the mediated effects we used 99% confidence intervals, which are the equivalent of this criterion.
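The percentile-bootstrapped mediated effect described above (an effect ab is significant only if its 99% CI excludes 0) can be sketched for a single mediator as follows. This is an illustrative Python example on simulated data with a real indirect effect, not the authors' Process/R analysis, and it uses 2,000 rather than 10,000 bootstrap samples for speed.

```python
# Illustrative sketch (not the authors' analysis): percentile-bootstrapped
# mediated effect a*b for one mediator, with a 99% CI.
import numpy as np

rng = np.random.default_rng(7)
n = 1000

# Hypothetical data with a true indirect effect: X -> M -> Y.
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.4 * m + 0.2 * x + rng.normal(size=n)

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]  # path a: X -> M
    b = np.linalg.lstsq(
        np.column_stack([m, x, np.ones_like(x)]), y, rcond=None
    )[0][0]                     # path b: M -> Y, controlling for X
    return a * b

boot = np.array([
    indirect_effect(x[idx], m[idx], y[idx])
    for idx in (rng.integers(0, n, size=n) for _ in range(2000))
])
ci_low, ci_high = np.percentile(boot, [0.5, 99.5])  # percentile 99% CI
significant = ci_low > 0 or ci_high < 0             # CI must exclude 0
print(round(ci_low, 3), round(ci_high, 3), significant)
```

In a parallel mediation model, path b for each mediator would additionally control for the other mediators, but the bootstrap significance criterion is the same.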

Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Sampling strategy
As indicated above, participants were recruited via online panels commonly used in psychological and behavioural research (Prolific, Pureprofile, and Amazon Mechanical Turk). These and other online participant panels generally use some form of convenience sampling (e.g., Chandler & Shapiro, 2016; Armitage & Eerola, 2020; see also https://researcher-help.prolific.co/hc/en-gb/articles/360009223133-Is-online-crowdsourcing-a-legitimate-alternative-to-lab-based-research-), and the sampling strategy used in the present research was therefore convenience sampling. More information about the composition of our participant samples is provided in the section "Research sample" above.
In the Methods section for each study in the article, there is a section on "Sample size" that explains how the sample size was predetermined (see also Supplementary Methods). For the studies that had qualitative elements (Study 1, Sample 1; Study 2; and Study 3), we recruited more than 50 participants, given that simulations have indicated that samples of 30-50 participants (Mayring, 2019; van Rijnsoever, 2017) tend to reach the point of data saturation, which implies that adding participants beyond this number produces very little new information (Faulkner & Trotter, 2017). For Study 1 (Sample 2; see section "Sample size" for that study in the article), in which we used hierarchical cluster analysis, the sample size was based on recent simulations, according to which the most important determinant of power seems to be the number of observations per cluster, with 20 observations yielding sufficient power to detect a cluster (Dalmaijer et al., 2022). For Study 4 (see section "Sample size" for that study in the article), we consulted several resources to determine the number of participants to test for each sample because there is no consensus regarding sample size requirements for EFA (Costello & Osborne, 2005; Hogarty, Hines, Kromrey, Ferron, & Mumford, 2005; Kyriazos, 2018; MacCallum, Widaman, Zhang, & Hong, 1999; Reio Jr & Shuck, 2015). First, a few resources posit that the ratio of the number of participants to the number of items should be at least 10:1 (Everitt, 1975; Gorsuch, 1983; Reio Jr & Shuck, 2015). Second, some studies estimated that, if the ratio of the number of items to the number of factors is larger than 10:3, recruiting approximately 400 participants leads to high power, even under low communalities (MacCallum et al., 1999). Third, it has been proposed that a sample size larger than 300 is sufficient for a wide range of factor solutions (Dimitrov, 2012; Guadagnoli & Velicer, 1988). Our sample sizes for Study 4 met all
these criteria.For Study 5, we determined the number of participants to test using Monte Carlo simulations (Muthén & Muthén, 2002) based on the data from Samples 1 and 2 (Study 4).Concerning Study 6, there are no clear guidelines for the use of machine learning algorithms combined with cross-validation regarding sample size and power.In a series of simulations, Song, Tang, and Wee (2021) showed that, for 10-fold cross-validations that we were planning to use, a sample size of 2000 leads to high generalizability (i.e., likelihood that the results will apply to other samples from the same population) without inflating time taken to run the models.We therefore aimed to recruit a sample that would result in roughly 2200 participants after applying the exclusion criteria, in case of any additional missing data.Finally, we determined the sample size for Study 7 by computing a-priori power analyses (Faul, Erdfelder, Buchner, & Lang, 2009) based on the data from Study 6.
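For illustration, the three EFA sample-size heuristics described above can be expressed as simple arithmetic checks. This is a sketch only: the function name and the participant, item and factor counts below are hypothetical and do not come from Study 4.

```python
# Illustrative check of the three EFA sample-size heuristics described
# in the text. All input values below are hypothetical placeholders.

def efa_sample_size_checks(n_participants, n_items, n_factors):
    """Return which of the three heuristics a planned sample satisfies."""
    return {
        # Heuristic 1: at least 10 participants per item
        # (Everitt, 1975; Gorsuch, 1983).
        "ratio_10_to_1": n_participants >= 10 * n_items,
        # Heuristic 2: ~400 participants give high power when the
        # items-to-factors ratio exceeds 10:3 (MacCallum et al., 1999).
        "maccallum_400": (n_items / n_factors > 10 / 3)
        and n_participants >= 400,
        # Heuristic 3: N > 300 suffices for a wide range of factor
        # solutions (Guadagnoli & Velicer, 1988).
        "n_over_300": n_participants > 300,
    }

# A hypothetical plan: 450 participants, 40 items, 3 factors.
checks = efa_sample_size_checks(n_participants=450, n_items=40, n_factors=3)
print(checks)  # all three heuristics satisfied for this hypothetical plan
```

A plan that fails a rule is flagged the same way, e.g. 200 participants with 40 items violates the 10:1 ratio.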

Data collection
Qualtrics (https://www.qualtrics.com/) was used to collect the data (for the versions of Qualtrics that were used, see the "Data collection" field above). Qualtrics is online survey software widely used by universities across the world. Participants were anonymous and completed the study in their own surroundings. Participation was allowed on PCs, laptops, and tablets, but not on mobile phones. The researchers (i.e., the authors of this paper) were not blinded to the study predictions and aims. However, since the participants were anonymous and there was no contact between the researchers and participants, it is implausible that experimenter demand effects played a role in the present research. Importantly, since the present research used a data-driven approach, as described in the Introduction section of the article, the majority of studies did not have a priori predictions. Only Study 5, in which we aimed to confirm the dimensions of psychological processes established in Study 4, and Study 7, in which we aimed to corroborate the main individual difference predictors identified in Study 6, were confirmatory. This is another reason why experimenter demand effects concerning study predictions were unlikely to play a role in the present research.

Data exclusions
The exclusion criteria were pre-established (e.g., see the pre-registration for Study 7: https://osf.io/nejvm?view_only=79b6eeee42e24cb2a977927712bdcdd2). They are comprehensively described in the Methods section in the article and in Supplementary Methods. In general, participants were excluded from analyses if they did not correctly answer seriousness checks (Aust, Diedenhofen, Ullrich, & Musch, 2013), instructed-response items (Kung, Kwok, & Brown, 2018; Meade & Craig, 2012; Thomas & Clifford, 2017), and understanding checks in which they were asked to identify the main topic of the study amongst a range of dummy topics. Table 1 in the article summarizes the participants who completed the study and who were included in analyses after the exclusion criteria were applied. From the participants who completed the study, the following numbers of participants were excluded from data analyses:
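Conceptually, the exclusion procedure amounts to filtering completed responses on the three checks. The sketch below is illustrative only: the field names ("seriousness", "instructed_response", "topic_check") and pass values are hypothetical, not the variable names used in the studies.

```python
# Hypothetical sketch of applying pre-registered exclusion criteria of
# this kind. Field names and values are illustrative placeholders.

def passes_checks(response):
    """True if a response passes all three attention/comprehension checks."""
    return (
        response["seriousness"] == "serious"            # seriousness check
        and response["instructed_response"] == "agree"  # instructed-response item
        and response["topic_check"] == "robots"         # understanding check
    )

completed = [
    {"id": 1, "seriousness": "serious", "instructed_response": "agree",
     "topic_check": "robots"},
    {"id": 2, "seriousness": "not serious", "instructed_response": "agree",
     "topic_check": "robots"},
    {"id": 3, "seriousness": "serious", "instructed_response": "disagree",
     "topic_check": "music"},
]

included = [r for r in completed if passes_checks(r)]
print(len(included))  # 1
```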

Non-participation
Considering that participation in the present research took place anonymously and online, we only have knowledge of participants who completed the study (see Table 1 in the article). In some cases, online participants recruited via the panels we used (Prolific, Pureprofile, and Amazon Mechanical Turk) test the survey, answer one or a few questions, and then leave; these data are captured under incomplete data, but we do not know whether and how many of these participants are unique participants. Overall, non-participation data for the present research are not available.

Randomization
As can be seen under "Study description", the present research was not experimental. Therefore, there were no different conditions to which participants could be randomized. However, it is important to emphasize that in Studies 4-7, in which participants were allocated to robot examples from 28 possible robot domains, this allocation was random.
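As a minimal sketch (not the actual study code), the random allocation to one of the 28 robot domains could look like this; the function name is ours for illustration.

```python
# Illustrative sketch of random allocation to one of 28 robot domains,
# as used in Studies 4-7. Not the actual survey-platform implementation.
import random

N_DOMAINS = 28

def allocate_domain(rng=random):
    """Independently assign a participant one of the 28 robot domains."""
    return rng.randint(1, N_DOMAINS)  # randint is inclusive of both ends

# Simulate allocations for 1,000 hypothetical participants.
assignments = [allocate_domain() for _ in range(1000)]
assert all(1 <= d <= N_DOMAINS for d in assignments)
```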


Table | Overview of the present research. 'Phase' outlines the goals of each research phase, how these goals were achieved and the link between successive phases. 'Study' and 'Description' indicate the number of each study and its goal, while 'Analysis' specifies the statistical analyses that were used in each study.

Table 2 | Definition of robots developed from the clusters of their characteristics generated by participants (Study 1), and robot domains grounded in this definition (Study 2), with example participant items analysed using inductive content analysis to develop the domains

Definition part | Definition | Robot clusters that informed the definition^a

18 | Hospitality and food service (i.e., hotels, conventions, restaurants, bars, and other lodging, space, food and/or drink providers) and related customer service and support
28 | Art (this domain was added based on ref. 18 and was not generated from participants' responses) | −

Table 3 | Summary of key findings (Studies 3 and 4): psychological processes, items corresponding to each process and the output of EFAs performed on the items across two participant samples
Privacy | This robot violates privacy (e.g., is too intrusive or invasive).

Table 3 (continued) | Summary of key findings (Studies 3 and 4): psychological processes, items corresponding to each process and the output of EFAs performed on the items across two participant samples
Values under each factor correspond to standardized factor loadings; only loadings with absolute values ≥0.320 are reported for clarity. The psychological processes and corresponding items are ordered according to item loadings on the three factors, while item no. corresponds to the number the items were assigned when they were created. Items in bold were those selected for the PRR scale tested in Study 5 (Table 4). Coefficients for factors P, N and C at the bottom of the table denote correlations between factors. All items were scored on a seven-point Likert scale (1, strongly disagree; 7, strongly agree).
P, N and C refer to the dimensions (that is, factors) that comprise positive, negative and competence-related psychological processes, respectively, regarding robots.

Table 6 | Main individual difference predictors of the positive, negative and competence dimensions (Study 7)
All models had 1,069 residual degrees of freedom. In all models we used two-sided t-tests to assess the significance of the coefficients, with the significance criterion being P < 0.010 based on the Benjamini-Yekutieli correction (refs. 216, 217) for multiple comparisons. The table contains raw P values that are statistically significant if they meet this benchmark; therefore, all nine predictors reached statistical significance. f² denotes Cohen's f² effect size.
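The Benjamini-Yekutieli step-up procedure behind the P < 0.010 criterion can be sketched as follows. This is an illustrative implementation with hypothetical p-values, not the analysis code used in Study 7.

```python
# Illustrative Benjamini-Yekutieli (BY) step-up procedure for controlling
# the false discovery rate under arbitrary dependence. The p-values below
# are hypothetical, not results from the article.

def benjamini_yekutieli(pvals, alpha=0.05):
    """Return booleans: True where the corresponding hypothesis is rejected."""
    m = len(pvals)
    # BY harmonic correction factor c(m) = sum_{i=1..m} 1/i
    c_m = sum(1.0 / i for i in range(1, m + 1))
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k with p_(k) <= k * alpha / (m * c(m)).
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= rank * alpha / (m * c_m):
            k_max = rank
    # Reject all hypotheses whose sorted rank is <= k_max.
    reject = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= k_max:
            reject[idx] = True
    return reject

pvals = [0.001, 0.004, 0.03, 0.2]
print(benjamini_yekutieli(pvals, alpha=0.05))  # [True, True, False, False]
```

In practice the same procedure is available as `multipletests(..., method='fdr_by')` in statsmodels.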