Introduction

Artificial Intelligence (AI) is a term that is widely used, yet often ill-defined (Brennen, 2018). A simple definition given by McCarthy (2007) is that AI is the process of making intelligent computer programs and machines that achieve goals in the world. However, this definition is broad, and developing an accurate and detailed understanding of what AI is and does is not so straightforward. At the same time, the capabilities of AI have been purported to be doubling every six months (Pichai, 2023), and thought leaders such as Google’s CEO, Sundar Pichai, have publicly warned that global society as a whole must ‘brace for impact’, as AI has the potential to cause far-reaching and deep structural changes to our society and our lives (Elias, 2023). These impacts are not only beneficial; AI has been described as a ‘double-edged sword’, with risks including privacy concerns, ethical dilemmas, potential misuse or mistakes, and the threat of malfunctions leading to severe consequences that could even entail loss of life (Cheatham et al., 2019). Consequently, the representation of AI-related issues to the public via news media is an area of concern.

Recent innovations in the field of Large Language Models (LLMs), including the release of ChatGPT, have also spurred public interest in applications of rapidly developing AI capabilities. One of the reasons that ChatGPT has become prominent in the discussion of AI and society is its ability to help solve real-world problems and contribute to the completion of tasks. From drafting emails to writing code and crafting written content, the model’s use cases are wide-ranging. At the same time, the limitations and risks of ChatGPT have been discussed, including the lack of a sense of ‘truth’ leading to the production of false answers (OpenAI, 2023). These converging factors have added fuel to the public discourse, which has been furthered by calls from the UK Prime Minister, Rishi Sunak, for ‘guardrails’ in AI development (Mason, 2023). Given the speed at which AI is developing and its increasing position in the public spotlight, there is a need to understand how online and in-print news media are contributing to public discourse on these technologies. With this in mind, our research question asks how ChatGPT and AI are discursively represented in UK news media headlines. In analysing our results, we draw on both agenda-setting theory and framing theory. This is based on the premise of McCombs and Shaw’s (1972) work, which holds that, in the process of agenda setting, news editors, staff, and broadcasters play a pivotal role in the construction of social and political realities, and that there is a causal relationship between media issue coverage and public opinion which evidences agenda setting (Neuman et al., 2014). Framing is also highly relevant to our study, given that issue-specific frames which define topics represented in the media can lead to additional societal effects when individuals engage with the news (Lecheler and de Vreese, 2018). The research question is answered by collecting empirical data from online news databases and undertaking inductive thematic analysis of newspaper headlines featuring ChatGPT, AI, and other permutations of relevant terms over the first five months of 2023. The overall aim of the investigation is to contribute an initial, rapid assessment of how AI and ChatGPT are being presented to the public readership, and the implications of this representation in light of agenda-setting and framing theory. Our secondary aim is to identify any patterns in the volume and dispersion of AI- and ChatGPT-related headlines over time. The study begins by summarising the topic of AI, LLMs, and ChatGPT. Following this, a brief outline is given of the relevance of media headlines and their function in relation to newspaper texts. In the next section, the methodology and data collection procedures are described before the results of the investigation are presented in reference to media studies theories. We conclude by discussing and summarising key findings, drawing inferences, and presenting areas for future research.

The emergence of a public discourse on ChatGPT and LLMs

In late 2022, the public deployment of OpenAI’s free ChatGPT service further stimulated the debate around the role AI is beginning to play in society, and its associated risks. This was followed by a similar LLM application from Google called Bard. ChatGPT is a chatbot which can mimic human language capabilities and was developed based on the transformer, a deep-learning model architecture primarily used in the field of Natural Language Processing. While ChatGPT seems to have captured the public’s imagination more than many of its predecessor technologies did, it was not the first tool of this type. An early landmark among transformer-based language models was BERT, introduced in 2018 by researchers at Google (Devlin et al., 2019). BERT, which stands for ‘Bidirectional Encoder Representations from Transformers’, was designed to improve the ability of language models to understand the meaning and context of words in a sentence (Devlin et al., 2019). BERT achieved this using a masked language model that randomly replaces some of the words in a sentence with a special mask token, forcing the model to predict the missing words based on the context of the sentence. This allowed BERT to better capture the relationships between words in a sentence, improving its ability to comprehend natural language (Devlin et al., 2019).
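
As a concrete illustration of the masked-word prediction described above, the following minimal sketch queries a pre-trained BERT model through the open-source Hugging Face transformers library; the example sentence is our own, and the snippet assumes the library is installed rather than reflecting anything in the original BERT release.

```python
# Minimal sketch of BERT-style masked language modelling, using the
# Hugging Face `transformers` library (assumed installed).
from transformers import pipeline

# Wrap a pre-trained BERT model in a fill-mask pipeline.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

# BERT predicts the hidden word from the surrounding context.
for prediction in unmasker("The newspaper published a [MASK] about artificial intelligence."):
    print(prediction["token_str"], round(prediction["score"], 3))
```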

Following the success of BERT, researchers have continued to develop larger and more powerful language models based on transformer architectures. One of the most notable of these was GPT-3 (Brown et al., 2020), released by OpenAI in 2020. GPT-3, which stands for ‘Generative Pre-trained Transformer 3’, is an LLM trained on hundreds of billions of words of textual data. This allowed GPT-3 to learn the patterns and structures of natural language, enabling it to generate highly human-like texts (Brown et al., 2020). Since the release of GPT-3, researchers have continued to develop even larger and more powerful language models; at the time of writing, the free version of ChatGPT is based on GPT-3.5, while a paid premium subscription operates on the most advanced model, GPT-4. GPT-4 is said to be the most effective model for complex tasks that require an element of creativity and sophisticated reasoning, while GPT-3.5 is less sophisticated but operates more quickly (OpenAI, 2023). ChatGPT can generate human-like written language with remarkable fidelity, accuracy, and creativity (Perkins, 2023), and can mimic many aspects of natural dialogue and conversation between two humans, including answering follow-up queries, admitting when it has made a mistake, and challenging incorrect premises or rejecting inappropriate requests, albeit with frequent errors and mistakes (OpenAI, 2023). Additional challenges emerge due to the limitations of both current software and trained professionals in detecting when AI-generated content is present (Perkins et al., 2023; Weber-Wulff et al., 2023). The social research field regarding AI and ChatGPT is still immature, but studies investigating the influence of AI technologies in various domains, such as scientific publishing, higher education policy development, and academic integrity, have been conducted (Perkins, 2023; Hill-Yardin et al., 2023; Perkins and Roe, 2023).

Media headlines as an object of study

This investigation is underpinned by the premise that corpora based on newspapers are a relevant source of data for evaluating cultural and psychological phenomena, including changing technologies (Beelen et al., 2023). At the same time, newspaper headlines have been described as shortcuts to the content of newspaper articles, intended to attract attention even at the expense of misleading the reader about the full text’s meaning (Blom and Hansen, 2015). Although news headlines are ‘syntactically impoverished’, they have specific and unique linguistic features. In online news articles, for example, there are often elements designed to intrigue and act as ‘clickbait’, such as forward reference (‘find out here!’) to generate click-throughs to the full article (Blom and Hansen, 2015, p. 1). At the same time, newspaper headlines are often intended to be provocative, and can be simplified, spectacularised, or written to appeal to negative sentiments (Kuiken et al., 2017). As a result, pragmatics research has posited that headlines can be analysed as autonomous texts in themselves (Ifantidou, 2009).

Studies on newspaper headlines have covered diverse topics. Haider and Hussain (2020), through their study of Arabic and English newspaper corpora, found that downsizing corpora by selecting headlines produced a more workable set of data for qualitative analysis. As headlines are rarely longer than a sentence, a broader range of data can be compiled for a corpus more readily than by collecting entire texts, and this acts as an efficient method to ‘downsize’ while retaining some of the text’s meaning or subject. Other researchers have collected entire articles while focusing on headlines as a specific linguistic feature. MacRitchie and Seedat (2008), for example, conducted a discourse analysis of 52 South African newspaper articles with a focus on headlines and found that news articles created the perception that public holidays created ‘war zones’ on South African roads. In relation to the COVID-19 pandemic, Aslam et al. (2020) investigated keywords in newspaper headlines relating to the term ‘coronavirus’ across 25 highly rated English news outlets in 2020 and found that 52% of the headlines aimed to provoke negative sentiments, compared with 30% relating to positive sentiments. Larger-scale, quantitative approaches to text analysis have also been taken in this field; as an example, Ghasiya and Okamura (2021) demonstrated the applicability of sentiment analysis to large numbers of Japanese newspaper headlines regarding the Middle East. Emerging areas of study in relation to news headlines focus on the phenomenon of fake news and reader perception of accuracy (Smelter and Calvillo, 2020), sharing of misleading partisan news headlines (Ross et al., 2021), and consideration of the truth or falsehood of a headline (Fazio, 2020). In the face of declining physical newspaper readership and circulation and an increase in digital access to news media, research has also begun to focus on how readers interact with online newspaper content, participating in commenting practices and responses as socially constructed forms of knowledge and discourse (Roe, 2023).

Despite a wealth of research on newspaper headlines and a current heavy focus on AI in the news media, there are few studies on AI’s representation in the media, and none to date regarding ChatGPT or other LLMs. However, there is an emergent literature exploring the role that AI will play in journalism in the coming years and the development of such technologies in reporting the news (Veglis and Maniou, 2019; Hassan and Albayari, 2022). The most significant study on the representation of AI in news media is Brennen’s (2018) large-scale mixed-methods analysis of 760 articles regarding AI in the United Kingdom, which found that right-leaning outlets tended to focus more on economics, geography, and politics, as well as matters of national security, whereas left-leaning outlets focused more on topics such as forms of discrimination, ethics, and privacy when discussing AI. More generally, Brennen (2018) argues that news media in the United Kingdom have focused on sensationalist topics of existential threats to the world, rather than on more urgent, realistic dangers. Such messages have negative implications for society, as mistrust of technology can act as a barrier to individuals’ efficient deployment of AI tools (Jaillant and Rees, 2023). We reflect on these considerations and engage in comparative analysis of our findings in the results and discussion section of this paper.

Methods

Using Lexis Library News Search, we conducted a search across all available news media outlets in the UK for the terms ‘ChatGPT’, ‘OpenAI’, ‘AI’, ‘Artificial Intelligence’, ‘LLM’ and ‘Bard’, covering both in-print and online articles. To qualify for inclusion in the database, articles had to feature one of the keywords in their headline and be clearly related to ChatGPT, AI, Bard, or another LLM. Many results matched only because of links embedded partway through an article; these were discarded. We collected 671 qualifying articles between 1 January 2023 and 30 May 2023, ending data collection at this point to ensure a manageable data set. During the data collection process, it became apparent that a number of regional and country-specific publications (e.g. the Manchester Evening News or the Scotsman) produced fewer articles than the larger national newspapers. To deal with this, we set a cutoff of 20 articles for an outlet to be treated separately, giving a reasonable amount of data for cross-outlet comparison; outlets with fewer than 20 articles were combined into a separate category of ‘Other’. This resulted in a total of seven categories: The Times, The Independent, The Guardian, Other, The Daily Mail, The Daily Star, and The Telegraph. Sunday titles (e.g. The Sunday Times) and online titles (e.g. Mail Online) were categorised under a single news brand heading: ‘The Times’ or ‘The Daily Mail’. Identifying the political leaning of each newspaper is challenging, given the subjective interpretation of categories such as ‘right’, ‘left’ and ‘centrist’. We drew our categorisation from data collected by the National Readership Survey, which asked readers to rate newspaper outlets on a scale from ‘most right wing’ to ‘most left wing’ (YouGov, 2017). This data is summarised in Table 1.

Table 1 UK Newspaper Political Leanings as Per National Readership Survey.
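
To make the grouping rule described in this section concrete, the following Python sketch shows how outlets falling under the 20-article cutoff can be folded into the ‘Other’ category; the file and column names are hypothetical illustrations, as the underlying database is not published with this paper.

```python
import pandas as pd

# Hypothetical headline database: one row per qualifying article, with the
# news brand already normalised (e.g. Mail Online -> The Daily Mail).
df = pd.read_csv("headlines.csv")  # assumed columns: 'headline', 'outlet', 'date'

# Count qualifying articles per outlet over the collection window.
counts = df["outlet"].value_counts()

# Outlets with fewer than 20 articles are folded into a single 'Other' category.
small_outlets = counts[counts < 20].index
df["outlet_grouped"] = df["outlet"].where(~df["outlet"].isin(small_outlets), "Other")

print(df["outlet_grouped"].value_counts())
```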

Once the data collection was completed, the headlines of each article were stored in a separate database and tagged manually by month and news brand. After this, we conducted a straightforward analysis of the dispersion and frequency of news articles by each outlet per month (a sketch of this tabulation is given below). Subsequently, we conducted an inductive thematic analysis of the headlines for each newspaper outlet, before holistically developing our themes. Inductive thematic analysis was selected because it has been shown to be a flexible method (Braun and Clarke, 2006) and is effective when used to inspect headlines across media outlets (Yoon and Hernández, 2021). Furthermore, inductive analysis is often used when investigating news framing effects (Lecheler and de Vreese, 2018). We followed the six-step approach set out in Braun and Clarke’s (2006) landmark work on conducting thematic analysis, beginning by familiarising ourselves with the content of the headlines before identifying shared patterns of meaning and assigning codes. We then refined the codes in an iterative and reflexive manner. This process resulted in a final output of six shared themes for newspaper headlines describing ChatGPT and AI, which were present throughout the database across all political leanings and news brands. We then cross-referenced our findings and compared the results with those of Brennen (2018).
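
The frequency and dispersion analysis described above amounts to simple cross-tabulation. The sketch below continues the hypothetical example from earlier in this section, again using assumed file and column names; the ‘theme’ column stands in for the codes produced by the manual thematic analysis.

```python
import pandas as pd

# Continuing the hypothetical database sketch: after manual tagging, each row
# carries its grouped outlet, publication date, and assigned theme code.
df = pd.read_csv("headlines_tagged.csv")  # assumed columns: 'outlet_grouped', 'date', 'theme'

# Headlines produced per outlet per month (cf. Table 3).
df["month"] = pd.to_datetime(df["date"]).dt.to_period("M")
dispersion = pd.crosstab(df["outlet_grouped"], df["month"])

# Share of each manually assigned theme per outlet (cf. Table 4).
theme_share = pd.crosstab(df["outlet_grouped"], df["theme"], normalize="index").round(2)

print(dispersion)
print(theme_share)
```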

Results

Our six themes were labelled ‘Impending Danger’, ‘Explanation/Informative’, ‘Negative Capabilities of AI or ChatGPT’, ‘Positive Capabilities of AI or ChatGPT’, ‘Humorous/Comedic’, and ‘Experimental Reporting’, while a minority of 32 headlines were assigned to the ‘Unclassified’ category. These themes are listed in Table 2 with a brief description highlighting some of the key linguistic features and content that typify each theme.

Table 2 Headline themes across all news media outlets.

In relation to our secondary aim of understanding the dispersion of articles related to AI and ChatGPT over a short period of time, a clear pattern was found. Our results showed a slow but sustained increase across all newspaper outlets, regardless of political leaning, from January 2023 to May 2023. When all outlets were taken together, there was a five-fold increase in headlines featuring our keywords in May 2023 compared with January 2023. Six of the seven news brands reached their highest number of articles in May 2023, suggesting that the wave of attention given to the technology had not begun to decline at that point. The breakdown of headline production by month is shown in Table 3, and the percentage of headlines allocated to each theme is shown in Table 4.

Table 3 Headlines produced by month.
Table 4 Headlines collected by outlet and classified by theme.

As visible in Table 4, the majority of headlines in the data set fall under two main themes: ‘Impending Danger’ and ‘Explanation/Informative’. ‘Impending Danger’ was the most prevalent category, with 248 headlines, representing 37% of the total collected. This category includes articles that report potential dangers and disruptions that may be caused by AI technologies. These headlines tend to emphasise the potential societal damage or severe consequences that may arise from the unforeseen impacts of AI technologies. The ‘Explanation/Informative’ category, with 173 headlines (26%), is the second most common theme in the data set. This category includes headlines that generally discuss what AI is, including explaining the basic functions of artificially intelligent programs and chatbots such as ChatGPT. Examples of the ‘Impending Danger’ theme are given below in Table 5.

Table 5 Examples of headlines coded as ‘Impending Danger’.

These three headlines were coded as belonging to the ‘Impending Danger’ theme based on several distinctive attributes. The first headline is exclamatory, suggesting a coming revolution, a ‘rise’ of the robot class, which is likely to cause significant socioeconomic disruption by eliminating jobs or displacing people from them. The second headline is a report on Elon Musk discussing the launch of an alternative product, TruthGPT, while also highlighting ‘warnings’ of ‘annihilation’, which is taken to mean the mass extinction of life. Brennen (2018) also identified Elon Musk as a frequent subject in newspaper headlines on AI, suggesting a tendency to rely on ‘figureheads’ as agenda-setters in these fields. Although the headline is quoting Musk’s warning, the selection of such dramatic language places the headline clearly within the ‘Impending Danger’ theme. The third headline also quotes a prominent (yet unnamed) figure, the ‘boss’. In this headline, the quotation of ‘significant harm’, despite naming no object as the recipient of such harm, can be seen as a warning of the dangers of AI without sufficient explanatory detail, demonstrating the textual strategies that typify the genre of news headlines. The headline goes on to assert that developments in AI make catastrophic events likely. These examples indicate how the danger of AI is presented as an imminent concern. In terms of political leaning, this theme is well represented across all outlets. While the above examples come from two right-leaning newspapers (The Daily Mail and The Telegraph) and one centrist/unaffiliated newspaper (The Independent), our analysis did not indicate any differences in the way this theme was communicated in relation to political leaning. For the theme ‘Explanation/Informative’, examples from multiple newspaper outlets are provided in Table 6.

Table 6 Examples of headlines coded as ‘Explanation/Informative’.

In Table 6, headlines coded as ‘Explanation/Informative’ are given from three outlets. This theme recurred frequently across all political leanings and was often related to specific products or corporations such as Microsoft Bing, Snapchat, or ChatGPT. This could relate to macrostructural changes in journalism, as indicated by Brennen (2018), who asserts that such product-related headlines are often crafted from press releases as a result of a reduction in the provision of specialist journalists. Regardless of the cause, this theme tended to focus on explaining an update, change, or availability of features to the reader without taking a clear stance on the positive or negative societal or individual implications of the subject. The first example was coded as ‘Explanation/Informative’ because it describes the addition of a technology feature to an existing product, while also using the ‘-like’ suffix to explain what the technology is. The second headline is equally informative, describing to the reader the function of a new AI chatbot in Snapchat that is ‘similar’ to ChatGPT. These two examples aim to inform the reader and give a general update on a new event or change occurring with the technology. The third example similarly gives a brief snippet of information on an upgrade to ChatGPT’s capabilities, without any clear positive or negative evaluation or judgement.

The third most common theme in the headlines was ‘Positive Capabilities’. This theme generally describes the ways in which AI and ChatGPT could be used for the benefit of individuals or society, standing in stark contrast to the ‘Impending Danger’ and ‘Negative Capabilities’ categories. Again, this theme did not seem to vary significantly with the political leaning of the newspaper outlet, and accounted for 11% of the total headlines collected. Examples of this theme are listed in Table 7.

Table 7 Examples of headlines coded as ‘Positive Capabilities’.

The examples in Table 7 come from a traditionally right-leaning newspaper (The Daily Mail), a centrist newspaper (The Independent), and a traditionally left-leaning newspaper (The Daily Star). In the first example, AI is described as having the potential to achieve greater work-life balance for ‘us’, taken to mean people in general, and again an expert subject is drawn on to provide this insight. The second example provides a practical use case rather than a theoretical positive capability, demonstrating how ChatGPT has been used to achieve a ‘fantastic’ outcome in the typically frustrating situation of an airline delay. The third headline provides an example of how AI can be used in medical treatment in place of a human General Practitioner (doctor), carrying out this task with a better bedside manner. Each of these examples gives a use case of AI and ChatGPT with a positive impact, ranging from the major structural change of work-life balance for the population in general to the singular example of a well-written complaint email. The second example appears similar to the ‘Humorous/Comedic’ theme, except that it offers an evaluative appraisal of a task completed by ChatGPT.

The fourth most frequent theme was ‘Negative Capabilities’. A total of 77 articles were categorised under this theme, which featured in every newspaper outlet except The Times. Although related to ‘Impending Danger’, this theme specifically describes mistakes, errors, or failures of AI and ChatGPT which have already taken place, rather than hypothetical situations which create risks for humanity in general. Examples are listed in Table 8.

Table 8 Examples of headlines coded as ‘Negative Capabilities’.

In the first example, a reference is made to a mistake by Bard, Google’s competitor to ChatGPT, and the subsequent effect this had on share prices. This headline reflects an actual event that has taken place, a failure or underwhelming performance of AI which caused further negative consequences (in this case, a decrease in share price). The second example demonstrates the potential issues that can arise from the fact that LLMs such as ChatGPT are unable to define ‘truth’ (OpenAI, 2023), leading here to the consequence of a false accusation. The third example indicates that the most capable Generative AI chatbot in terms of intelligence is unable to pass an examination, using the colloquial and negatively evaluative verb ‘flunk’. None of these headlines offers a warning of impending danger regarding AI; instead, each passes a more balanced judgement or reports on a technological failure or limitation of ChatGPT’s capabilities.

The next theme was ‘Experimental Reporting’. Under this theme, an experimenter (often the journalist) describes an experience of using AI or ChatGPT to achieve a task traditionally ascribed to a human, with writing articles or acting as a journalist being frequent examples. Headlines belonging to this theme often feature forward reference, a technique associated with ‘clickbait’ (Blom and Hansen, 2015, p. 1), to engage the reader. In this case, the experimental headlines regularly ended with the phrase ‘here’s how it went’ or similar. Examples of this theme can be seen in Table 9.

Table 9 Examples of headlines coded as ‘Experimental Reporting’.

In the first example, rhetorical questions are used to provoke the reader, while in the second and third, forward reference invites the reader to find out how effectively AI and ChatGPT can assist with daily tasks. At times, the experimental theme overlapped with the sixth theme, ‘Humorous/Comedic’. Although this theme occurred far less frequently, accounting for only 4% of the total headlines collected, it was developed from headlines intended not only to inform and demonstrate the capabilities of AI or ChatGPT, but to present them in a light-hearted, playful, or entertaining manner. Examples of this theme are presented in Table 10. As with the other themes identified, no particular differences could be ascertained between right- and left-leaning political outlets.

Table 10 Examples of headlines coded as ‘Humorous/Comedic’.

In the above examples, the creativity of ChatGPT is shown in the creation of an entertaining or comedic product. In each of these, the ‘clickbait’ is a humorous product, including a stand-up comedy routine, a romantic poem, and a Eurovision song contest entry. Several of these headlines also include a quotation from ChatGPT within the headline. In the third example, this takes the form of a comparison of Grimsby and Cleethorpes (a coastal conurbation in northern England with high levels of economic deprivation) to the Principality of Monaco. Although relatively rare, this theme demonstrates that there are significant variations beyond those discussed previously. The remaining 32 headlines were categorised as ‘Unclassified’. Although some examples from this category showed affinity with the identified themes, they were deemed too ambiguous to fit fully into any theme category. Examples of unclassified headlines are listed in Table 11.

Table 11 Unclassified Headlines.

In total, only 32 of the headlines collected could not be classified as belonging to the major themes. In the above examples, there is a clear mention of AI and some suggestion of an evaluation. The first example suggests that an evaluation will be provided within the article, as a ‘plan’ is required for dealing with AI, but there is no clear sense of what the plan must address. On this basis, the headline was judged not to belong to the other themes. Likewise, the second example is a play on the common idiom ‘me, myself, and I’, but does not provide enough detail to be included in any other theme. The third example at first glance appears to describe an expert suggestion on the use of AI, but then reframes this seemingly negative capability as not a cause for concern. As a result, no conclusion can be drawn about which of the themes it belongs under.

Discussion

The findings of this research offer an initial contribution to our understanding of how AI and ChatGPT are discursively represented within the UK news media landscape. The six themes identified across different newspapers provide a broad perspective on societal attitudes towards AI and generative technologies, and do not appear to vary drastically with the political leaning of the newspaper outlet. From our results, we can infer a readership demand for greater explanation of what AI, ChatGPT, and LLMs are, given the growing number of headlines belonging to the ‘Explanation/Informative’ theme. While such headlines provide critical information for reader comprehension (Metila, 2013), even they have the power to shape the public’s perception of technology (MacRitchie and Seedat, 2008), and the lens through which AI is presented to the public is often constructed by the news outlets themselves (Brennen, 2018).

The ‘Impending Danger’ theme leans in some cases towards sensationalism, which was also found in Brennen’s (2018) investigation. This is particularly clear when complete destruction of society, or ‘annihilation’, is presented as ‘just around the corner’. Although there is ample evidence that AI presents risks, spontaneous, imminent destruction of the world and/or society does not seem to be an accurate assessment of these risks at the present time. Part of the explanation for this tendency may be that in an increasingly digital world, where the online readership of newspapers outpaces physical circulation, headlines assume a heightened role in attracting readers’ attention (Kuiken et al., 2017); furthermore, headlines have been shown in other studies to provoke negative sentiment (Aslam et al., 2020). This shift towards the digital realm may then have inadvertently encouraged a trend towards sensationalism, as shown by the prevalence of ‘Impending Danger’ headlines. Brennen (2018) identified a similar sensationalist trend in AI coverage, which may contribute to a skewed public perception of AI technologies. Such sensationalism may amplify perceptions of the risks associated with AI, such as privacy violations, accidents, discrimination, and political vulnerabilities (Cheatham et al., 2019), potentially fostering unnecessary anxiety and fear among the public.

Although we aimed to identify ideological differences between political leanings, affiliations, and headlines regarding AI and ChatGPT, we found few significant differences in our dataset. While traditionally right-leaning outlets like The Telegraph and The Times presented a higher proportion of ‘Explanation/Informative’ and ‘Impending Danger’ articles, and traditionally left-leaning outlets like The Guardian seemed to gravitate more towards ethical issues associated with AI, reflected in the substantial number of ‘Negative Capabilities’-themed articles, most themes were fairly equally represented across all of the newspaper outlets under study. We did not find significant evidence of the polarisation identified by Brennen (2018), although this could be a result of our smaller dataset and the limited time over which our study was conducted.

In relating our investigation to existing theoretical positions in the study of news media, our results can be considered in relation to agenda-setting theory (McCombs and Shaw, 1972) and framing theory (Entman, 1993). Agenda-setting theory posits that the media significantly influence which issues the public thinks are important (McCombs and Shaw, 1972). Given the salience of headlines in shaping public perception (Metila, 2013), the frequent portrayal of AI in ‘Impending Danger’ and ‘Explanation/Informative’ headlines could be influencing public discourse and potentially setting a societal agenda that views AI as a potential threat requiring immediate and wide-ranging regulation. Framing theory (Entman, 1993) is equally salient, as it offers an analytical lens for understanding our findings. The framing of AI technologies in the media is not merely an objective report of what exists but is shaped by the orientations of news outlets and their perception of what their readership would find engaging. As headlines are subject to specific discursive and framing modalities (MacRitchie and Seedat, 2008), the ‘frames’ chosen for AI stories become a crucial determinant of public understanding of and attitudes towards these technologies.

Our findings, especially the prevalence of the ‘Explanation/Informative’ and ‘Impending Danger’ themes in the headlines, can be viewed as the two primary frames employed by the media in their AI coverage. The ‘Explanation/Informative’ frame, while educating the public about AI, could also inadvertently highlight the complexity and incomprehensibility of the technology to the unfamiliar reader, which may lead to uncertainty or apprehension. The ‘Impending Danger’ frame, on the other hand, fits well with the media’s tendency towards sensationalism, perhaps aiming to evoke fear or concern among readers about the potential risks of AI (Brennen, 2018). These frames arguably have the potential to shape public discourse and societal attitudes towards AI in significant ways. Furthermore, a comparison of framing across different media outlets revealed some interesting insights. While some differences were observed in the frames used by traditionally right-leaning and left-leaning outlets, the fact that most themes were fairly represented across all outlets suggests that the framing of AI is more nuanced than a simple dichotomy between positive and negative, or between right-wing and left-wing ideologies. This points to a broader and more multifaceted discourse on AI in society, reflecting the complex and multidimensional nature of AI technologies themselves. However, to fully understand the implications of these frames, more research needs to be conducted, particularly examining how these frames interact with readers’ existing beliefs, attitudes, and consequent behaviours regarding AI technologies.

This study underscores the responsibility of media outlets, experts, and the public to foster balanced, informed, and nuanced discussions on AI and its potential repercussions. It serves as a timely reminder of the need for critical evaluation of media coverage, a shift away from sensationalism, and a movement towards comprehensive, expert-informed reporting on AI technologies.

Limitations

While we strived to present an insightful snapshot of the portrayal of AI technologies and ChatGPT in UK news media headlines, there are several limitations associated with our research. First, as a result of the sheer number of headlines being produced daily by a large and diverse news media, our data collection quickly became unmanageable. As a result, our study was restricted to the first five months of 2023. This offers a valuable temporal snapshot of media portrayal over a single short period, but does not reflect the ongoing, expansive, and evolving picture. Methodological choices also posed certain constraints. Our reliance on human interpretation during inductive thematic analysis, despite its robust nature, could introduce subtle biases, although we aimed to take a reflexive approach when coding. Finally, we intentionally narrowed our focus to the UK mediascape, thus providing a localised view of what is a global phenomenon. Consequently, the findings of this study may not be readily applicable to international contexts. To enrich our understanding of the media’s role in shaping public discourse on AI, future research could widen both the temporal and geographical scope. This would enable a more comprehensive exploration, offering a panoramic view of media portrayals of AI technologies across various cultural and political contexts and throughout their continued development and integration into society.

Conclusion

The rapid development of AI technologies such as ChatGPT has prompted an escalation in media attention and public discourse which is visible in our results, judging from the five-fold increase in headlines related to ChatGPT and AI from January to May 2023. Undertaking a rapid analysis of AI-related newspaper headlines across a diverse spectrum of UK outlets provides a foundational understanding of how AI, and more specifically ChatGPT, is being portrayed to the public. Our findings reveal that, while there is an increasing effort to demystify AI and LLMs for the public, media representations often swing between extremes of promising potential and serious impending danger, including at times references to the outright annihilation of the planet. Furthermore, such concerns and dangers are not limited to specific political orientations in our study.

When considering agenda setting (McCombs and Shaw, 1972), the at-times polarised representation of AI as both hero and demon may lead to unreasonable or inaccurate representations of the capabilities and functions of AI in society at present. The use of public opinion polls to correlate our findings with changes in popular sentiment regarding these technologies is a logical next step which could lead to a greater understanding of the effects of agenda setting (Neuman et al., 2014), while further research on the frames and framing devices that delineate the representation of AI in the news media in other geographical contexts will also help to illuminate this issue.

The major impact of this work is the identification of a broadly apolitical yet problematic and complex picture of media representation of AI, LLMs, and ChatGPT in the UK. Our research also contributes to a broader understanding of the current agendas on these topics and offers a basis for critique of current media practices. Policymakers, AI developers, and educators can use these findings to inform strategies that support public understanding of and engagement with AI technologies. Furthermore, news outlets might reflect on their practices in light of the influential role they play in shaping societal perceptions of AI. Finally, this research serves as a platform for further exploration of media representations of AI. Future research could delve deeper into how different social, cultural, and political contexts influence portrayals of AI. Similarly, longitudinal studies would be beneficial to assess changes in media representation as AI technologies continue to evolve and proliferate. Through such studies, we can aspire to a future in which the complexities of AI technologies are understood and embraced by society, facilitating their responsible and beneficial use.