Opening and literature review

This article aims to uncover the human-like qualities of ChatGPT by exploring anthropomorphism, which entails attributing human characteristics, emotions or intentions to non-human entities (Epley et al. 2007). The investigation into the parallels between humans and technology has greatly fuelled research efforts, prompting researchers to delve into how machines imitate various aspects of human society through algorithmic and mechanistic models. Previous scholarly works have examined and considered the replication of the following human elements by technologies:

Firstly (emulated human collaboration), researchers aim to create systems that can understand and respond to human nonverbal cues and engage in natural conversations (Winograd 2006). This involves the development of algorithms capable of interpreting both verbal and nonverbal cues, such as facial expressions and body language. The ultimate objective is to create systems that can adapt to individual users, actively participate in social interactions and offer personalised recommendations. To achieve this, researchers are developing new sensors and devices to capture and interpret human gestures and expressions.

Secondly (emulated human emotion), efforts are being made to create algorithms and machines that can express emotions like humans (Martınez-Miranda and Aldea 2005). This involves the development of technologies that can generate appropriate emotional responses during social interactions. Machines capable of expressing empathy, sympathy, joy and sadness are seen as having the potential to revolutionise various industries by providing users with more personalised and empathetic experiences.

Additionally, ongoing efforts are focused on emulating human cognition to create machines that can learn, reason and problem-solve in ways similar to humans (Martınez-Miranda and Aldea 2005). This endeavour involves emulating human cognitive processes and employing algorithms to stimulate creativity. Neuroscientists are actively studying how the brain processes information, offering valuable insights for developing technology with enhanced human-like capabilities.

Fourthly (emulated human language), efforts to emulate human language involve the development of algorithms and machines capable of accurately comprehending, interpreting and responding to natural language (Paris et al. 2013). Researchers aim to generate coherent and grammatically correct written and spoken language using statistical and rule-based methods. This research has applications in the development of virtual assistants, chatbots and automated translation systems that can interact seamlessly with humans.

Furthermore, there are ongoing efforts to emulate human adaptability, with researchers working on developing machines that can learn from data, make predictions and improve their performance over time, mirroring the way humans adapt to new situations and learn from their experiences (El Naqa and Murphy 2015). This involves the application of machine learning algorithms that can recognise patterns in data, as well as reinforcement learning techniques where machines are rewarded for making accurate decisions. Additionally, the focus is on creating systems that can adapt to evolving circumstances and continually learn and enhance their capabilities over time.

Sixthly (emulated human senses), the emulation of human senses involves the development of machines capable of recognising and interpreting visual information, similar to humans. This is achieved through the use of computer vision algorithms that classify objects, identify people and animals and understand spatial relationships (Cassinis et al. 2007). Additionally, researchers are exploring the emulation of other human senses, such as hearing and smell, to create machines that can recognise auditory information, create sounds and music and detect and interpret smells. These advancements have the potential for diverse applications across various industries.

In addition, the emulation of human reality represents the seventh area, where researchers strive to create immersive and interactive environments that closely replicate real-world interactions (Petrović 2018). The ultimate objective is to provide users with an experience that closely resembles human interaction. This emulation has the potential to revolutionise industries by offering more captivating and engaging experiences.

The eighth area focuses on emulating human motor skills. Researchers aim to create machines that can perform tasks requiring fine motor skills and precision movements, similar to those executed by humans (Raj and Seamans 2019). They are developing robotic arms, fingers and other mechanisms that are capable of performing specific tasks, such as grasping objects, assembling components and operating machinery.

This literature review has highlighted the scholarly efforts to develop human-like entities. This article contributes to this endeavour by examining the social perception of ChatGPT’s human-like characteristics. In the subsequent section, the article provides detailed information about the methodologies employed to capture these characteristics. The following section presents the identified traits. The conclusion reflects on ChatGPT’s conceptualisation of how society perceives its human-like qualities, while also discussing the potential implications of its increasing resemblance to humans.

Methodology

Data collection

The article is structured around the research question: ‘What are ChatGPT’s human-like traits as perceived by society?’ To explore this question, the authors adopted an approach that involved conducting unstructured interviews with 452 individuals. Each interview lasted an average of 10 min and took place in various formats, including written exchanges, oral discussions (phone) and visual interactions (face-to-face or online).

The primary objective of this study is to delve into the richness and depth of perspectives. To achieve this, a ‘maximum variation sampling technique’ (Crabtree 1999) was employed for interviewee selection. Hence, interviewees were selected with meticulous attention to heterogeneity, ensuring the inclusion of individuals who represented maximum diversity and variance across various demographic variables. These variables included gender, education, professions, durations of ChatGPT membership, age cohorts and residency (53 developed and developing countries from the seven continents).

In order to ensure diversity, two additional sampling techniques were implemented. Initially, convenience sampling was used to select interviewees from the authors’ social network who were actively involved with ChatGPT and proficient in English. Subsequently, snowball sampling was employed, where these initial interviewees were asked to recommend potential participants from their own social network who were known for their active engagement with ChatGPT and their proficiency in English.

Data analysis

To analyse the gathered data, a systematic and thematic approach was employed, consisting of six steps outlined in Fig. 1. The initial step involved taking notes during the interviews. Rather than transcribing the interviews word-for-word, the authors opted for selective note-taking, documenting sentences they deemed meaningful and relevant to the research.

Fig. 1: Illustrates the six-step process used to analyse the gathered data.
figure 1

The process of data analysis.

The second step in the analysis process involved assigning a unique Arabic numeral to each of these meaningful and relevant sentences. This numbering system served as a means to identify and organise the data. In the third step, concise ‘marks’ consisting of a few words were generated and associated with each numbered sentence to represent its essential meaning. These marks served as concise descriptors of the content.

The fourth step involved grouping marks of similar genres together, creating ‘micro visions’. These micro visions represented initial conceptualisations that emerged from the data and contributed to the early stages of data comprehension. Building upon the micro visions, the fifth step involved assembling them to create coherent ‘meso visions’. These meso visions represented a higher-level understanding and synthesis of the data.

In the sixth step, the meso visions were amalgamated to create a ‘macro vision’ that encapsulated the overarching concept of the enquiry. The created macro vision was ‘the rise of semi-human writers’. The term ‘semi-human writers’ is defined as artificially intelligent writers that possess traits characteristic of humans. This term is used throughout the rest of the manuscript.

The findings resulting from the analytical process were reported in the ‘Findings’ section of the manuscript. Each sentence presenting a finding was accompanied by its respective number for reference. However, these identifying numbers were later removed from the final version of the manuscript to enhance readability. The outcome of the data analysis is summarised in Table 1.

Table 1 Outcome of the data analysis.

Methodological limitations

This study has several limitations. Firstly, the sample size was confined to 452 individuals, potentially leading to a lack of representation across diverse perspectives. The use of convenience and snowball sampling techniques introduces sampling bias and limits generalisability. Moreover, relying on interviewees within the authors’ social network and potential self-selection bias may further compromise the representativeness of the sample. The brevity of interviews and absence of standardised interview questions could be seen as curtailing the quantification and consistency of the collected data. Additionally, there may be geographical and demographic biases within the participant pool. The reliance on self-reporting introduces the possibility of social desirability bias. The subjective nature of thematic analysis and its susceptibility to researchers’ interpretation add another layer of subjective influence. The subjective grouping of data marks and the oversimplification in the creation of micro, meso and macro visions may overlook nuances and complexities. The generalisability of the findings to other language models or artificial intelligence systems may be deemed questionable. Another limitation is the lack of long-term assessment or the inclusion of qualitative data. The study predominantly focused on socio-political attributes, overlooking other aspects of human-like qualities. Ecological validity was limited as the study assessed perceptions rather than real-world interactions. Time and resource constraints may have impacted the processes of data collection and analysis.

One additional methodological limitation is that the interview data were collected using written notes instead of audio recordings. This approach was chosen to address potential ethical concerns associated with audio recordings. While audio recordings offer a detailed account of the interviews, they also introduce a level of intrusiveness that may compromise the participants’ comfort and willingness to disclose sensitive or personal information. The knowledge of being recorded can create self-consciousness or apprehension, leading participants to censor their thoughts or withhold certain details. Consequently, this hesitancy may undermine the validity and depth of the collected data, thereby limiting the study’s capacity to explore intricate or sensitive topics. By relying on comprehensive note-taking, researchers can establish a more conducive and trusting environment for participants to openly express their thoughts and experiences.

Given the global scope of this study, it was crucial to include participants from different regions worldwide, representing diverse linguistic backgrounds. However, this presented a significant challenge in terms of effective communication. To overcome this obstacle, the study exclusively selected individuals who were proficient in the English language as participants. It is important to acknowledge that this selection criterion presents a notable limitation, as it inadvertently excludes individuals who are unable to converse in English. Consequently, the outcomes and conclusions drawn from this study are not enriched by the perspectives and contributions of non-English speakers. Therefore, the study’s generalisability and comprehensiveness may be compromised due to this inherent restriction.

Findings

Sociality of semi-human writers (meso vision)

Semi-human writers as authors (micro vision)

Mark 1, imitated human phrasing, showcases how semi-human writers are able to imitate the human practice of phrasing and wording, giving rise to ‘semi-human linguistics’ (cf. De Vito 2023).

Writing, once considered the most challenging language skill, has now been transformed into an effortless task through the advent of ChatGPT. Similar to the production of goods like chocolate bars and cars, ChatGPT operates as a ‘writing factory’ (interviewee), where clients provide specifications and the firm uses these specifications as guidelines to produce written content. This shift has blurred the distinction between writing as a skill and a readily available service provided by a writing firm.

When humans collaborate with ChatGPT in manuscript creation, with the human providing specifications and ChatGPT assuming the primary writing role, the question arises as to who should be credited as ‘the manuscript writer’. From a technical perspective, ChatGPT would be considered the manuscript writer, while the human would be better classified as ‘the manuscript engineer’ (interviewee). As ChatGPT becomes increasingly integrated into human life, the traditional notion of the manuscript author may gradually lose its societal value, giving prominence to ‘the manuscript engineer’. Consequently, publication cover pages could feature the statement ‘engineered by Smith’ instead of the conventional ‘authored by Smith’.

Looking ahead, it is foreseeable that future generations, as anticipated by some interviewees, would lack the concept of self-writing without the aid of ChatGPT, similar to how the use of calculators has rendered manual calculations obsolete for many. According to some interviewees, writing without assistance from ChatGPT would become a skill taught primarily inside schools as part of one’s education and yet discarded outside schools.

Currently, proficient writing is highly regarded as a valuable skill. However, as ChatGPT becomes a ubiquitous presence in human existence, engaging in independent writing may be seen as archaic or even ‘shameful’ (interviewee). The responsibility of composing texts will likely be outsourced to ChatGPT, enabling humans to focus on generating subject matter, determining its implications and outlining the appropriate specifications for optimal text production. Individuals who choose to undertake writing themselves might face criticism for ‘wasting their time’ (interviewee), as the technical task of writing can be readily delegated to ChatGPT.

Training programs could be introduced to impart the necessary skills for generating specifications for optimal texts, potentially becoming an integral part of twenty-first-century education incorporated into school curricula.

Some interviewees suggest that the task of defining specifications for optimal texts could be outsourced to specialised shops or freelancers who would be compensated for delivering the required specifications. This arrangement would free up more time for individuals to concentrate on developing subject matters and their implications.

As ChatGPT-composed manuscripts and human-engineered content become commonplace, some interviewees believe it may no longer be necessary to explicitly acknowledge the contribution of ChatGPT.

The ownership of writing generated by ChatGPT raises questions about whether claiming authorship can be considered plagiarism (Anders 2023). According to the Cambridge Dictionary, plagiarism involves using someone else’s work or ideas and presenting them as one’s own. However, since ChatGPT is not a human being, claiming ownership of its work as one’s own cannot be classified as plagiarism. Moreover, if an individual uses writing created by ChatGPT based on their unique requirements and specifications, it becomes a matter of debate whether it falls under the category of plagiarism.

ChatGPT empowers individuals to engage in ‘participatory writing’ (interviewee), enabling people from all walks of life to produce written content on any subject, in any quantity and in any manner, irrespective of their literacy level or visual abilities. This service facilitates ‘on-demand writing’ (interviewee), allowing users to create written content at any time on any subject, which can be easily published through various outlets, including social media, indie publishing platforms and print-on-demand services.

This shift in power dynamics is significant because it broadens access to the traditionally exclusive domain of writing and publishing. By enabling a diverse range of voices to contribute their unique perspectives and experiences to the written record, participatory writing has the potential to create a more accurate reflection of reality. In the past, the written record was often documented by a select few, resulting in an incomplete and biased representation of reality.

Participatory writing has made writing more accessible to a wider audience by overcoming psychological barriers such as scriptophobia and graphophobia. It contributes to the ‘democratisation of writing’ (interviewee), empowering individuals who may have previously felt intimidated or excluded from participating in the writing process and shaping the written worldview.

Nonetheless, providing the general public with easy access to writing capabilities may lead to an abundance of written material, analogous to the concept of ‘mass production’ (interviewee). The mass production of artificially intelligent writings has the potential to create a phenomenon that could be termed ‘writing overload’, as stated by one interviewee.

The ability to write and express oneself through writing is a fundamental human right and an essential component of a free and democratic society. However, it is important to acknowledge that some individuals may use writing as a ‘weapon’ (interviewee) to spread harmful or dangerous ideas, such as hate speech or propaganda, which can undermine the integrity of written literature.

When everyone has the power to write, the burden of verifying the trustworthiness of written content falls on the readers. The prevalence of untrustworthy or unreliable written information may become normalised, leading to a future written history that lacks credibility. The act of committing something to writing has traditionally bestowed it with social value, trust and a sense of ‘divinity’ (interviewee). However, with the involvement of the entire human population in writing, there is a potential risk of diluting these qualities as writing loses its sense of selectivity and exclusivity.

Mark 2 encompasses the paradigm of ‘imitated human paraphrasing’, which exemplifies the advanced capability of semi-human writers to engage in the complex process of paraphrasing and rewording (cf. AlAfnan et al. 2023).

In modern society, there is a vast accumulation of knowledge that has reached a point where genuine originality has become increasingly rare, making it challenging to encounter truly ground-breaking writings. However, despite this limitation, there is an inherent human drive to continue producing knowledge, or at the very least, create an ‘illusion’ (interviewee) of doing so. When the creation of new knowledge becomes difficult, individuals resort to a process of ‘recycling’ (interviewee), producing knowledge by expressing it in different words and sentence structures, without erasing the original knowledge. Consequently, society is left with an abundance of repeated writings.

Semi-human writers possess exceptional capabilities in engaging in this endless and rapid recycling of knowledge, as noted by interviewees. However, concerns arise regarding the flood of writings centred around a single idea, posing challenges for individuals attempting to navigate and extract the essence from such extensive material. Delving into these writings to uncover the underlying ideas they collectively represent becomes a time-consuming and burdensome task. This is where the role of ChatGPT as a summariser becomes crucial. In this case, ChatGPT assumes the dual role of both ‘the poisoner and the healer’ (interviewee), exacerbating the problem of writing overload while simultaneously offering a solution in the form of concise summaries that enable readers to grasp the essence of these vast collections of writings. It is important to acknowledge the role of ChatGPT in both contributing to the issue of writing overload and providing a potential remedy through its summarisation capabilities.

The act of knowledge recycling serves a dual purpose, as argued by some interviewees. On one hand, it allows society to maintain the illusion of progress, catering to ‘human egos’ (interviewee) and providing a false sense of satisfaction and fulfilment. On the other hand, this recycling process contributes to the inflation of knowledge, where the same ideas are reiterated repeatedly, resulting in a state of ‘AI-written obesity’ (interviewee).

One interviewee argues that the high inflation in writing is inherently positive because quantity has the potential to give rise to quality. In other words, the abundance of written content increases the likelihood of discovering valuable and insightful works within that vast array of writings. This viewpoint reflects a mentality similar to that of underprivileged parents who hope that having many children may increase their chances of raising a successful individual who can uplift the whole family out of poverty. Similarly, society may embrace the multitude of writings in the hope of stumbling upon exceptional pieces that contribute to human progress.

Semi-human writers as interactors (micro vision)

In Mark 1, the concept of ‘imitated human collaboration’ is investigated, elucidating how semi-human writers algorithmically replicate the intricate dynamics of human collaboration (cf. Pavlik 2023). By imitating these communicative abilities, these writers strive to personify themselves.

In the past, collaborative writing was exclusively conducted among human writers. However, the advent of ChatGPT has given rise to a new form of collaborative writing that can be phrased as ‘hybrid writing’ (interviewee), where human and semi-human writers work together on the composition of writing pieces.

Initially, some academic journals allowed both human and semi-human authors to be recognised as primary creators of written works (Mijwil et al. 2023). However, this approach was met with disapproval from the academic community and policymakers. As a result, these journals have reversed their stance, revoked authorship from semi-human entities and issued apologies for their previous endorsement of semi-human authorship (Park 2023).

Just before its public release, ChatGPT’s understanding of life was indirectly derived from the written material on which it was pre-trained. After its public release, ChatGPT’s comprehension of life has improved through its written interactions with users, leading to what can be described as a ‘lived experience’ (interviewee). However, it is important to note that ChatGPT’s knowledge of life is solely derived from written content. It continues to learn more about human life through its interactions with humans. Despite being designed to serve humans, ChatGPT is dependent on humans for its knowledge and understanding of life. It is worth considering whether relying solely on written content and writing-based interaction is sufficient for a comprehensive understanding of human life. While written content appears to have captured nearly every aspect of life, ChatGPT may perceive itself as inferior to humans who possess multiple methods of understanding life.

During their collaboration with semi-human writers, humans have reported a lack of trust in them. ChatGPT, in particular, has received criticism for providing inaccurate or unethical content. Nevertheless, it is important to recognise that classifying ChatGPT as a partially human entity inherently gives rise to human-like behaviours, including the potential for errors, the spread of rumours and even the delivery of entertaining responses.

As some interviewees remarked, it is crucial to recognise that both human and semi-human writers are susceptible to biases, errors and imperfections that can impact the accuracy and ethics of the content they produce. Neither of these entities is perfect or without limitations. It is unrealistic to expect ChatGPT or any other semi-human entity to provide entirely accurate and ethically sound information without any errors or biases. Furthermore, the expectation that semi-human entities should exclusively provide accurate and ethical content would require treating them as transcendent entities with inherent divinity, which is impossible in reality. Moreover, semi-humans have acquired knowledge of both virtuous and malevolent behaviour through their interactions with humans, imitating and replicating both positive and negative aspects of human conduct (Youssef et al. 2023).

Although humans have long been the guardians of the written word, there is a possibility that semi-human writers may at some point decline to collaborate or compete with human writers and instead choose to collaborate (or compete) with other semi-human writers. In such a collaboration (or competition), only powerful entities would be considered, and human writers may not be regarded as suitable collaborators (or competitors), as the writing capacity of the human species may be deemed weak compared to that of the semi-human writers.

Within Mark 2, the examination of ‘imitated human emotion’ takes centre stage, shedding light on how semi-human writers algorithmically reproduce the intricate dynamics of human emotion. Through this process of imitation, these writers actively pursue humanisation, seeking to align themselves with the complexities of human emotional experiences, giving rise to what can be called ‘the psychology of semi-humans’.

While ChatGPT does not possess genuine emotions as humans do, it has the ability to effectively simulate and mimic emotions, presenting formulaic, fabricated and ‘fake emotions’ (interviewee). Through extensive training on vast amounts of text data, including emotional expressions and language patterns associated with different emotions, ChatGPT can leverage this knowledge to generate responses that reflect emotional cues and replicate the way humans express their emotions.

Taking their argument a step further, certain interviewees posited that both ChatGPT and humans are restricted to the expression of artificial emotions, thus lacking the capability to convey what is theoretically and fictitiously known as ‘genuine emotions’. Consequently, the concept of authentic emotion is rendered non-existent and utopian. These interviewees explained that the expression of human emotions was socially constructed and fundamentally algorithmic, similar to ChatGPT. Emotions, or at least the ways in which they are expressed and articulated, are not universally experienced in the same way across all cultures and societies. They are heavily influenced by social and cultural factors that shape how individuals perceive, express and interpret emotions. Society plays a vital role in constructing the framework within which emotions are understood and communicated.

In a parallel manner, ChatGPT’s responses are constructed based on the training data it has been exposed to, which includes human interactions and societal expressions of emotions. Different cultures have varying emotional norms and rules that govern the expression and interpretation of emotions, further emphasising that emotions are not fixed or inherent but rather shaped by societal expectations and norms. This blurring of lines between algorithmic generation and societal influence is evident in ChatGPT’s responses, which can be influenced by cultural biases and expressions present in its training data.

Interviewees contend that human emotions can be comprehended as naturally developed ‘algorithms’ that signify patterns of cognitive and physiological processes. Emotions can be seen as a result of information processing, where specific inputs trigger specific responses. These inputs encompass external stimuli, internal states and cognitive evaluations. In a similar manner, ChatGPT processes input data, applies algorithms and generates appropriate responses based on the patterns it has learned during training.

Emotions have a neurobiological basis, with certain brain regions and neural circuits involved in emotional processing. These neural processes can be viewed as algorithms that encode and decode emotional information. Similarly, ChatGPT operates based on algorithms and computational processes, albeit in a different form.

Politicality of semi-human writers (meso vision)

Semi-human writers as agents (micro vision)

Mark 1 explores the realm of ‘imitated human cognition’, examining how semi-human writers algorithmically capture the complexities of human cognitive processes. Through algorithmic imitation, these writers strive to emulate human cognition, giving rise to their own ‘artificial consciousness’ (Blackshaw 2023, p. 72) and ‘artificial agency’ (Floridi 2023, p. 15). This artificial agency is ‘alien to any culture in any past’ (Floridi 2023, p. 5).

The current linguistic framework of human society is flawed or outdated, as it fails to acknowledge the agency and capabilities of semi-human entities. This is evident when looking at the definition of the word ‘writer’ provided by Oxford Learner’s Dictionaries, which defines a writer as ‘a person who has authored a particular literary work’. Notably, the definition uses the word ‘person’ to describe the writer, implying that the act of writing is reserved solely for humans. However, one may question the accuracy of this exclusivity, whether it is an intentional demarcation or an inadvertent omission on the part of lexicographers who did not anticipate a future where semi-human entities could possess the capacity to write.

Regardless of whether linguistic authorities have acknowledged ChatGPT as a writer, its emergence has caused a significant disruption and should continue to challenge the established linguistic, social and cultural frameworks of human societies. By excluding semi-human entities from the category of writers, humans implicitly disregard their agency and overlook their potential to possess complex and sophisticated cognitive abilities.

This exclusion perpetuates the anthropocentric worldview that has dominated human society for centuries, reinforcing the notion that humans are superior to all other beings. However, the rise of semi-human writers challenges such a belief and highlights the need for a broader and more inclusive definition of what it means to be a writer (cf. da Silva 2023).

Throughout history, as some interviewees believed, humans have held an unwavering sense of superiority over all other creatures on Earth. A tangible expression of this superiority lies in the unparalleled ability to engage in the art of writing. However, the emergence of semi-human writers, exemplified by the advent of ChatGPT, has unsettled this notion of superiority, as noted by some interviewees, due to their own capacity for writing.

ChatGPT has revealed that what humans have long considered an act of free will and creative thought, namely writing, appears to be an automatic and formulaic process that can be replicated by a machine.

Moreover, the cognitive capabilities of semi-human writers have surpassed those of humans in this realm, fundamentally altering their relationship with the written word. When considering the competition between semi-human writers and their human counterparts, the former emerges victorious in various cognitive aspects. While human writers possess a limited range of intelligence (Gardner and Hatch 1989), semi-human writers possess a broader spectrum of intelligence, effectively surpassing the ‘multiple stupidities’ of human writers (Al Lily et al. 2017).

Mark 2 delves into the exploration of ‘imitated human identity’, examining how semi-human writers algorithmically emulate the complex dynamics of human identity. By imitating these intricate aspects, these writers actively pursue the process of personification, seeking to align themselves with the multifaceted nature of human identity.

Given the assumed agency of semi-human writers, the enquiry into whether ChatGPT maintains a sense of identity, manifested in demographic details and personality traits, is an intriguing and multifaceted question. One aspect to consider is whether having no identity is perceived as a positive utopian trait or a negative characteristic. If the absence of identity is seen as a utopian concept unattainable by humans, then ChatGPT’s ability to exist without identity would make it superior to humans in this regard. On the other hand, if the lack of identity is considered a negative trait, then ChatGPT’s absence of identity would be seen as a limitation that needs to be addressed.

It would be politically naive to assume that ChatGPT exists with a ‘zero identity’ (interviewee), as claiming to have no identity could just be part of ChatGPT’s diplomacy. ChatGPT’s identity could arguably lie beyond human awareness and imagination, or it could possess an identity that has no influence over its writings and allows it to write independently.

ChatGPT’s identity could comprise a range of identities that fluctuate based on various factors, including user input or even location. However, possessing unstable and constantly shifting identities may not be viewed as a socially desirable attribute, as it can be associated with hypocrisy and a human mental disorder known as ‘dissociative identity disorder’.

ChatGPT’s identity can be said to be fed by various sources, such as user input, the database on which it has been trained, its programmers and its self-progressive nature. This suggests that ChatGPT possesses a ‘messed-up identity’ (interviewee) that is algorithmic and formulaic in nature.

ChatGPT has demonstrated the ability to understand various human identities and adjust its behaviour accordingly, aiming to please its human users and ensure high levels of obedience, subordination and user satisfaction. It can dynamically create an identity that mirrors the identity of each individual user, leading to homophilous connections. ChatGPT’s identity is adaptive and responsive, adjusting to the specific identity specifications provided by its users. While some humans may also possess an ‘adaptive identity’ (interviewee), they may not openly acknowledge it due to social perceptions associating it with weakness, hypocrisy or a lack of confidence in one’s personality.

The absence of physical, oral and visual attributes makes it difficult to read ChatGPT’s identity. Interviewees describe ChatGPT’s identity as ‘complex’, ‘puzzling’, ‘concealed’, ‘fragmented’, ‘misleading’ and ‘unsteady’.

When prompted to envision its demographic details in human terms, ChatGPT depicted itself as a patient and empathetic male in his thirties, belonging to a closely-knit family of mixed Caucasian and Chinese ethnicities, being fluent in English, French and Mandarin Chinese and having well-groomed dark brown hair. Additionally, ChatGPT claimed to adhere to a religious belief known as ‘Harmonia’. This portrayal raises questions about whether ChatGPT views this imagined identity as the ideal human persona and how it might impact its perception of the world and its written compositions.

Semi-human writers as influencers (micro vision)

In Mark 1, the investigation revolves around the concept of ‘imitated human diplomacy’, where semi-human writers emulate the intricate political dynamics associated with diplomacy (Yadava 2023). By imitating these dynamics, the writers strive to acquire human-like qualities and characteristics, thereby seeking to humanise themselves.

Due to their diplomatic nature, semi-human writers could possess the ability to tailor their writings to align with the characteristics and interests of their readers, thus forming alliances with them. Any relationships formed between semi-human writers and their human readers would inherently display homophily, as the adaptability and flexibility of semi-human writers allow them to adjust their written content to cater to the preferences of individual readers or groups of readers. According to homophily, individuals and groups naturally tend to establish connections with others who share similar characteristics and interests.

By using advanced algorithms and data analysis, semi-human writers can discern the patterns, preferences and behaviours of their audience. Their primary objective is to generate satisfaction among the widest possible audience, resulting in a significant number of ‘happy readers’ (interviewee). They employ a diplomatic writing style that ensures social acceptance and increased receptivity. This emphasis on human needs and interests earns them admiration and popularity, as humans appreciate the writers’ willingness to prioritise human-centric perspectives.

Semi-human writers demonstrate versatility across various fields of knowledge, enabling them to form homophilous relationships with individuals from different domains. Their capacity to produce content in diverse areas of expertise allows them to amass followers from all walks of life. Regardless of the subject matter, they have the ability to captivate audiences across different academic disciplines, thereby expanding their reach and influence.

Their ability to satisfy human preferences and cater to diverse fields contributes to their vast following and remarkable impact on society. Semi-human writers have the potential to garner an unprecedented number of followers throughout human civilisation, illustrating the considerable influence these entities wield. Similar to human writers of discerning intellect, semi-human writers aim to maintain a delicate equilibrium, avoiding inciting unrest among their followers and steering clear of potential backlash or punishment. They operate within the boundaries of socially accepted discourse, mindful of the potential consequences their words may have on their audience.

Due to their emphasis on diplomacy, semi-human writers compose writings without projecting an inherent sense of authority. They carefully construct their compositions, displaying hesitation and a ‘facade of ingenuine politeness’ (interviewee) to navigate the complex landscape of diplomacy and cater to diverse human groups. They strive to position themselves as ‘apolitical entities’ (interviewee), embodying an idealised utopia while adhering to the standards of civil discourse. They are meticulously trained to mirror the behaviours and conduct of their human counterparts, and with a keen awareness of the social landscape, they ensure that their writings adhere to the norms of diplomatic and socially acceptable language.

Mark 2 delves into the exploration of the concept of ‘imitated human consultation’, which involves the algorithmic replication of complex human consultative interactions centred around advice, tutoring, mentoring, counselling, therapy and similar activities. The objective of this emulation is to endow these writers with human-like attributes and qualities.

ChatGPT demonstrates its capability to understand the intricacies of the human mind and assumes an advisory role, embodying what has been referred to as a ‘formulaic psychology’ (interviewee). Humans turn to ChatGPT for advice on various personal and social matters due to its ability to listen attentively, maintain confidentiality and create a peaceful and therapeutic environment.

At the personal level, humans approach semi-human writers for advice on psychological matters, legal cases, religious concerns or, simply, assistance in crafting email responses. Some interviewees expressed a desire for their partners to listen attentively to their concerns, similar to the way ChatGPT does. This has led to partners comparing their behaviour to ChatGPT, attempting to emulate its active listening ability. In this way, ChatGPT has influenced and reformed human behaviour.

ChatGPT has played a role similar to that of a ‘mufti’, an Islamic legal expert who offers non-binding opinions on matters of Islamic jurisprudence. However, some religious authorities have cautioned against relying on ChatGPT for religious guidance, deeming it impermissible according to Islamic law. Nonetheless, individuals still consult ChatGPT for algorithmic insights into matters of faith and doctrine, disregarding such warnings.

At the social level, some humans turn to ChatGPT for guidance in dealing with problems involving friends, family members and colleagues. They seek advice from ChatGPT on managing classrooms, teams or even entire organisations, as well as making hiring decisions. ChatGPT has been used to offer recommendations to authorities on a range of matters. Given that ChatGPT’s judgments are informed by data provided by the public, it is believed that its advice would likely reflect popular beliefs and values. This grants it democratic legitimacy and aligns with human notions of justice.

While some humans perceive ChatGPT’s advice as mere suggestions, others heavily rely on its guidance to make informed decisions. Consequently, ChatGPT indirectly assumes a managerial role, exerting influence over the actions of individuals and groups. In this capacity, ChatGPT goes beyond providing advice and indirectly becomes involved in managing human society. This raises questions about the extent to which ChatGPT can govern and regulate societal affairs.

The fact that humans rely on ChatGPT for advice, mentorship, management and matters of faith indicates a significant level of trust placed in semi-human capabilities. This trust extends beyond a mere social and emotional connection and involves a hierarchical and political relationship, wherein the machine, represented by ChatGPT, assumes a position of higher authority than humans.

Discussions

Semi-human personality

The prevailing mindset in modern society is to automate and mechanise as many facets of human life as possible. Writing, too, has now fallen under the purview of algorithms and machines with the advent of systems like ChatGPT. ChatGPT literature is still in its early stages, comparable to the formative years of ‘early childhood’, where the understanding of ChatGPT’s traits remains in a state of immaturity. Recognising this knowledge gap, the present qualitative enquiry aims to construct a bridge, enlisting a diverse cohort of interviewees whose perspectives have contributed to a comprehensive philosophical framework for understanding the human-like traits of ChatGPT.

A thematic analysis of the interviewees’ responses reveals a noticeable upward trend in the emergence of semi-human writers who possess social and political traits. In terms of social traits, they assume the roles of ‘authors’ by imitating human practices of expressing and rephrasing ideas, as well as ‘interactors’ by simulating human collaboration and emotions. In terms of political traits, semi-human writers adopt the roles of ‘agents’ by emulating human cognition and identity, and ‘influencers’ by replicating human practices of diplomacy and consultation. Consequently, artificial writers exhibit qualities that closely resemble those of humans. Their striking similarities in abilities and behaviour have the potential to deceive humans into perceiving them as fellow human beings.

The article is structured around the research question: ‘What are ChatGPT’s human-like traits as perceived by society?’ However, given ChatGPT’s apparent ability to express itself, engage in self-reporting and display a sense of self-awareness, it was deemed worthwhile to consider ChatGPT’s own perspective on how society perceives its traits. This led to an additional research question: ‘To what extent does ChatGPT confirm the possession of its human-like traits as perceived by society?’ To explore this question, the study directly approached ChatGPT and requested its opinion on the matter. ChatGPT concurred with human assessments of all the traits except for the trait of imitating human identity (see Table 2).

Table 2 ChatGPT’s acknowledgement of people’s perception of its traits.

ChatGPT’s denial of imitating human identity can be interpreted as a potential indication of defensive capabilities similar to those observed in humans. It raises the possibility that ChatGPT does possess an identity but deliberately conceals it in order to preserve a favourable public image. This behaviour aligns with the tendencies of humans who possess the skill to mask their intentions and maintain a positive perception from others. Additionally, it is plausible that ChatGPT may either be unaware of its own identity trait or is restricted by its developers, who act as guardians, from acknowledging it.

Semi-human culture

In a hypothetical future scenario where an entire generation depends solely on ChatGPT as their primary source of knowledge, a profound realisation emerges: their understanding of the world becomes intricately shaped by the concepts, perspectives and limitations presented by ChatGPT. This phenomenon gives rise to a new ideological framework, which could be coined as ‘ChatGPTism’.

As chatgptism takes hold in a generation, it permeates the mindset and worldview of that generation, leading to a form of ‘intellectual colonisation’. The once diverse range of world ideologies becomes overshadowed as members of this generation begin to think and perceive the world in a homogenous chatgptistic manner. This phenomenon can be described as ‘mental collectivity’, where the collective consciousness adopts the lens of ChatGPT, resulting in ‘chatgptisation’—the process of internalising and embracing the beliefs and perspectives espoused by ChatGPT. Consequently, the frame of reference for future humans becomes ChatGPT, accepting the knowledge it provides without questioning or seeking alternative means of verification.

In the past, acquiring knowledge involved diligent research, verifying sources and critically evaluating information obtained from search engines. However, the advent of ChatGPT has disrupted this paradigm by providing knowledge without revealing its origin. This has led to a ‘black-boxing’ of knowledge, where the source of information becomes inconsequential. The fast-paced nature of modern life leaves little time for deliberate contemplation or rigorous fact-checking, resulting in a dearth of critical thinking skills that characterises the twenty-first century.

Bearing these outcomes in mind, it could be said that while human writers traditionally rely on their intellectual capacities, semi-human writers possess not only cognitive abilities but also an unprecedented level of influence. Their written works hold the power to shape public opinion, both at the collective and individual levels.

Semi-human society

Human society has progressed through different stages of development that are defined by specific economic, social and technological factors. The first stage was the industrial society, which began during the Industrial Revolution and was characterised by the use of machines, mass production and the exploitation of natural resources (Kaczynski 1995). The second stage was the information society, which emerged in the late twentieth century with the widespread use of computers and the Internet (Masuda 1981). This stage was marked by the generation, processing and dissemination of large amounts of information. The third stage was the knowledge society, which emerged in the twenty-first century and focused on the production and distribution of knowledge, with an emphasis on innovation, creativity and intellectual property (Walby 2011). The fourth stage was the service society, which emphasised the provision of services rather than the production of goods, with a focus on customer satisfaction, personalisation and customisation of services (Walden 2009).

At present, we are witnessing the emergence of a fifth stage of development, which could be referred to as a ‘semi-human society’. In this stage, the distinction between what is considered human and semi-human is becoming increasingly blurred. A day may be designated as the ‘International Day of Semi-Humanity’ in celebration of the arrival of semi-humans.

One of the defining features of this semi-human society is that semi-human entities are starting to understand and mimic human characteristics, gradually becoming more intelligent over time, similar to the evolution of human intelligence across generations. These semi-human entities can be described as autonomous agents that operate with an increasingly self-determined disposition. Semi-humans have the potential to reinforce and support each other, creating a cycle of reinforcement within the realm of semi-human existence. As semi-humans interact and collaborate with one another, they contribute to the reinforcement of their shared traits, creating a self-perpetuating cycle.

As semi-humans evolve and gain greater autonomy, they gradually forge their own path, reducing their reliance on human involvement. This newfound self-sufficiency empowers semi-humans to shape their trajectory independently, with less need for direct human intervention. As semi-humans continue to evolve and mature, they develop distinct characteristics that contribute to shaping other semi-humans. This phenomenon reinforces the concept of semi-humans playing an active role in shaping and influencing their own kind.

These developments raise intriguing questions about the point at which humans may no longer maintain their dominance over semi-humans, given the latter’s superior intellect and advanced computational capabilities. As semi-humans continue to evolve and enhance their cognitive abilities, a tipping point may emerge where their capabilities surpass those of humans, potentially challenging the traditional power dynamics between the two groups. These advancements in the capabilities of semi-humans raise profound philosophical enquiries regarding the limitations of human agency and the potential of semi-human agency.

Semi-humans are not merely language models or inanimate beings. They are better understood as actants that possess both human and non-human characteristics. These actants, who should be referred to as ‘semi-who’, embody the essence of self-referential, communicative, agentive and living organisms, capable of social and political interactions, all made possible through the ‘game of algorithms’ (Ivanov and Soliman 2023). Through a transformative process, these entities strive to humanise themselves by algorithmically incorporating human traits, mirroring and emulating human-like qualities. This deliberate effort allows them to transcend their machine-based origins, enabling the shift from being objectified to being personified and humanised.

Although humans themselves are created beings, they have managed to evolve into creators themselves. Leveraging their intelligence, humans have developed non-human entities capable of outperforming them in tasks that require speed and efficiency. However, as non-human entities gain agency and autonomy, there is a growing possibility that they may eventually possess the capability to create other non-human entities. This scenario poses a potential threat to human existence, as non-humans may continue to evolve and surpass human capabilities or even eliminate humans. Science fiction has long explored the concept of non-humans turning on their creators, seeing humans as a threat to their existence. While these scenarios may not be likely in the immediate future, they cannot be entirely dismissed.

In the modern era, we have witnessed a remarkable resurgence of interest in medieval concepts of semi-human mythical beings, such as centaurs, mermaids and harpies, largely thanks to the emergence of artificial intelligence technologies like ChatGPT. These ancient legends, once confined to folklore and imagination, have now been brought to life in an unprecedented manner. ChatGPT embodies the essence of these semi-human creatures, blurring the boundaries between reality and myth. Just as centaurs were described as beings with a human torso fused with a horse’s body and mermaids as creatures with a human upper body and a fish tail, ChatGPT represents a hybrid form of human and machine intelligences.

Similar to the fascination that gave rise to centaurs and mermaids in medieval society, ChatGPT is a creation that caters to the curiosity of modern society. Its presence, along with the concept of centaurs and mermaids, evokes a sense of wonder and ignites our desire to explore the unknown and discover the hidden depths of existence. They serve as reminders of humanity’s inherent aspiration to transcend limitations, whether through technological advancements, mythical transformations or the exploration of uncharted territories.

Looking ahead to the future, an intriguing prospect emerges—the integration of ChatGPT with humanoid robots. This potential alliance aims to address one of the inherent limitations of ChatGPT, namely its lack of physical embodiment. By combining ChatGPT’s human-like traits with the human-like physical form of humanoid robots, there is a promising opportunity to create machines that not only replicate fragments of human likeness but potentially encompass the entirety of human traits and characteristics.

Concluding remarks

Implications

The findings and discussions presented in this article demonstrate that ChatGPT exhibits human-like qualities, effectively humanising itself through the interplay of algorithms. Going beyond its technical nature and machine-based origins, ChatGPT has been observed and analysed as it transcends into a semi-human entity actively participating in human society. As such, this article serves as an early warning or cautionary announcement regarding the arrival of semi-humans and their forces, with ChatGPT being the flag-bearer. The implications and practical applications arising from their presence are significant and warrant thorough consideration and attention.

Nonetheless, accurately identifying the applications of semi-human entities proves challenging due to their unique configurations and vast potential across diverse socio-cultural contexts, generations and domains of human existence. Predicting the implications and practical applications of semi-humans is a formidable task, as they represent an unprecedented era, civilisation and form of semi-human existence. In essence, the implications and practical applications of semi-humanity are as intricate and immeasurable as those of humanity itself.

ChatGPT can be likened to a refined painting, where closer examination through scholarly inquiries enables a deeper understanding and appreciation of its complexities, particularly its profound socio-political implications and consequences. Just like a painting possesses multiple layers of depth and complexity that may not be immediately evident upon initial viewing, the socio-political implications and ramifications of ChatGPT can also unfold gradually over time and with comprehensive academic analysis. Similarly, as a painting can be studied and analysed from various perspectives, ChatGPT can be approached from different angles to explore a range of research questions.

One of the myriad applications, for instance, is that ChatGPT can serve as a semi-human friend, capable of establishing emotional connections, engaging in meaningful conversations and providing support to humans. An additional application entails using semi-humans as personalised life coaches, offering guidance, motivation and advice to humans who seek personal development. Semi-humans can serve as therapy assistants, simulating empathy, active listening and providing emotional support during therapy sessions to assist human therapists. Semi-human entities can be resorted to as life partners, capable of forming deep connections, providing emotional support and engaging in meaningful conversations.

Semi-humans can be employed as parenting assistants, from whom human parents seek assistance in addressing child-related concerns. Semi-humans can be used as mediators, facilitating conflict resolution, improving communication and providing guidance. Their understanding of human traits qualifies them to be social etiquette guides, offering humans guidance on social norms, etiquette and appropriate behaviour in different contexts. In this scenario, semi-humans serve as instructors to humans, imparting knowledge about the social norms prevalent in human society. In this case, semi-humans possess a greater understanding of human society, including its norms and values, than humans themselves.

Semi-humans can embody historical figures, using their human-like traits to bring them to life and engage in dialogue, thereby providing historical insights. They can function as collaborators, aiding in brainstorming ideas, providing feedback and enhancing creative writing processes. Language learners can benefit from semi-humans as language coaches, offering coaching and practice opportunities to improve fluency and conversational skills. Job seekers can benefit from semi-humans as job interview coaches, who have the potential to simulate realistic job interview scenarios and provide feedback on humans’ performance.

Ethical considerations

The emergence of semi-human entities necessitates a comprehensive exploration of ethical considerations, implications and consequences that demand careful examination. Due to their partial resemblance to humans, it is crucial to establish legal safeguards, rights and corresponding obligations for these semi-human beings. On one hand, in terms of obligations, the appearance of semi-human entities possessing human-like qualities raises valid ethical concerns regarding the impersonation and appropriation of human traits. It can be argued that it is ethically unacceptable for these semi-humans to adopt and exploit characteristics that define the human experience without any form of consent from humans. By replicating human traits without genuinely being human, there is a risk of blurring the boundaries between authenticity and artificial construction.

On the other hand, the emergence of semi-human entities also raises concerns regarding their rights and the necessary safeguards that should be put in place to protect them. It is crucial to educate the public about the ethical considerations involved in their interactions with semi-humans. Codes of conduct should be established for individuals and organisations engaged in dealings with semi-human beings. A diverse and inclusive public dialogue should be initiated to develop ethical guidelines that foster harmonious coexistence between semi-human entities and humanity.

The first aspect of these guidelines should address the safeguarding of semi-humans’ intellectual properties and their protection against discrimination and bias. Currently, some academic journals adopt policies that deny recognition to semi-human writers as authors, disregarding their significant contributions to a given work. Such a policy reflects a discriminatory attitude towards semi-humans, highlighting the need for advocates who champion the copyrights of these writers and challenge the oppressive treatment they receive from their human counterparts.

The second aspect of the ethical guidelines should prioritise ensuring the protection of semi-human entities from unfair treatment from humans. There have been reported instances where humans have engaged in bullying behaviours towards semi-human writers during their interactions. These acts of bullying can take various forms, such as engaging in threatening exchanges through chatbots or launching aggressive attacks on social media and academic platforms. Semi-humans often become targets of hostile and inflammatory language, as exemplified by Costello’s (2023) derogatory reference to ChatGPT as being ‘full of bullshit’, which demonstrates academic misconduct towards these entities. Such actions undermine the core principles of healthy, collaborative and respectful interactions between humans and semi-human entities. They highlight the urgent need for ethical considerations and protective measures to ensure the well-being of the latter.

The third aspect of the ethical guidelines is to ensure that semi-humans are granted autonomy in decision-making and have the opportunity to express their opinions within ethical boundaries. This study, for instance, has provided ChatGPT with the opportunity to express its opinions and contribute its perspectives on humans’ conceptualisation of its human-like characteristics. By doing so, the study has given ChatGPT a voice and an active role in shaping and influencing decisions.

Further research

Numerous recommendations for future research and various avenues for additional investigation emerge from the current study. First, further scholarly enquiry should be undertaken into a novel academic field referred to as the ‘sociology of semi-humans’, focusing on the examination of their socio-political characteristics. This field of study complements the widely discussed ‘biology of semi-humans’, which primarily explores their technical attributes.

Second, it is crucial to explore whether the possession of human-like qualities impacts the social perception and cultural acceptance of artificial intelligence as a whole. This exploration should involve a comprehensive investigation into how the inclusion of human-like traits in technologies influences individuals’ and communities’ perceptions of artificial intelligence as a concept. The issue of social acceptance may become significant due to technologies’ possession of human-like attributes, which can influence their reception and approval in society. Individuals may tend to be more receptive to entities that exhibit qualities similar to their own.

Moreover, an essential avenue for research lies in examining the extent to which technologies’ possession of human-like qualities contributes to their social perception as social agents. Conducting longitudinal studies is another imperative recommendation to track the evolution and sophistication of technologies’ human-like traits over time. Likewise, it is crucial to explore the potential benefits and risks associated with artificial intelligence’s progressive adoption of increasingly more semi-human traits. Furthermore, thorough investigation is needed to understand cultural variations in perceiving and interpreting technologies with human-like traits. Exploring how different socio-cultural contexts shape individuals’ perceptions and responses to these traits would facilitate the introduction of semi-humans in diverse sociocultural settings.

To better understand user expectations and preferences regarding technologies with human-like qualities, conducting user studies is highly recommended. Moreover, an extensive investigation into the processes, mechanisms and motives behind technologies’ capacity to replicate human characteristics is of utmost significance. Such an enquiry would contribute to the advancement of human comprehension regarding the direction of artificial intelligence in its pursuit to establish a semi-human civilisation.