Introduction

Imagine you are thirty, and you return home after a stressful day at work, wishing for a long bath to relax. After you have soaked in the tub for half an hour, an alarm suddenly goes off, and your phone starts ringing: sensors in your home have alerted your family members that you have been in the bathroom for longer than usual. Once the situation settles, you head to the kitchen for a glass of wine, only to receive a message on your smartwatch warning against it. Your digital assistant reminds you that you have had wine on previous days as well, which is bad for your health and may affect your life insurance. You are advised to return the wine to the fridge and head to bed; otherwise, your family and physician will be notified. This scenario looks far from ideal. But what if the protagonist of the story were eighty? For many, using technology to monitor and control the lives of older adults is seen as a way to ensure their safety and well-being, even if the price is a constraint on their autonomy.

Already, the question of the impact of digital technologies on older adults is becoming ever more pressing. By 2050, the percentage of people aged 65 and above is projected to double compared with 2021 figures, owing to increased life expectancy over recent decades (United Nations Department of Economic and Social Affairs, 2023, p. 18). Longer lifespans bring a higher likelihood of experiencing disabilities and various medical conditions. In the near future, the demand for care services will far surpass the supply, which is already dwindling (United Nations Department of Economic and Social Affairs, 2023, p. 113). In this context, artificial intelligence (AI) is seen as crucial in supplementing and expanding care for older adults. Although AI holds important promise in tackling the scarcity of care, it raises problems as well.

In this paper, we show that ageism can inform the development and deployment of AI technologies for aged care, and that when it does, it takes the form of technological paternalism. Ageism against older adults presupposes the existence of age-based stereotypes, which frequently rely on broad assumptions about old age, depicting older individuals as frail, vulnerable, and incompetent, although warm and friendly (Swift et al., 2021, p. 168). Ageism can damage older people’s self-esteem: they may internalize ageist stereotypes, which can become self-fulfilling prophecies resulting in social exclusion and health problems (Chasteen et al., 2020, p. 1326; Chang et al., 2020, p. 15). Although it can be difficult to avoid confronting ageist stereotypes, older adults can still overcome them and continue living their lives as they see fit. But in some instances, ageism leads to paternalistic attitudes towards older adults (Cary et al., 2017), and in these circumstances it is more insidious and difficult to resist, as it involves constraining older adults’ freedom, against their will, for their supposed well-being. Paternalism thus has direct effects on the lives of older adults, whose autonomy is constrained without their having the possibility to resist or oppose the constraint.

We start by showing how implicit age biases get embedded in AI technologies, either through designers’ ideologies and beliefs or in the data processed by AI systems. Thereafter, we argue that ageism can lead to paternalism towards older adults. We show how implicit age biases in AI development lead to the creation of paternalistic technologies designed for older adults’ care. We introduce the concept of technological paternalism and illustrate how it works in practice, by looking at AI for aged care. We end by analyzing the justifications for paternalism in the care of older adults to show that the imposition of paternalistic AI technologies to promote the overall good of older adults is not justified.

AI for aged care

AI is an umbrella term for a variety of systems that can analyze their surroundings and take actions with a degree of autonomy. These systems can be software-based, functioning in virtual spaces, such as conversational agents based on large language models, or hardware-based, operating in physical environments, such as robots. AI techniques encompass machine learning, computer vision, pattern detection, and natural language processing, among others. In aged care, AI-enhanced interventions, which often incorporate environmental sensors, are developed to support the health and independence of older individuals. The hope is that these semi- and fully autonomous systems will extend the reach of care services, enhance their efficiency, and reduce the burden on caregivers. Moreover, by supplementing (or completely replacing) caregivers, AI is hoped to improve workforce sustainability, address service disparities, and streamline information systems and data analysis for those in need of care (Loveys et al., 2022, p. e286).

AI in aged care also holds promise because of its potential to realize, in a cost-effective manner, the ideal of 4P medicine (predictive, personalized, preventive, and participatory), which is supposed to reduce the incidence of chronic diseases (Rubeis, 2020, p. 2). For example, predictive systems are already used to collect data through monitoring, surveillance, and sensors to detect abnormalities in older individuals’ behavior, such as an elevated risk of falls. Similarly, AI plays a vital role in personalized medicine, where, once again, the collection of personal data is crucial: the data is used to screen for chronic diseases and to provide tailored treatment options and health advice that take into account an individual’s specific health profile (Miura et al., 2022). Preventive AI systems, on the other hand, are used to alert healthcare providers or family members to irregular patterns in the daily activities of patients, allowing for timely risk-mitigation measures (Pilotto et al., 2018). These systems are often associated with “in-place remote healthcare assistance”, involving extensive monitoring for daily-life health support and the triggering of alarms in case of emergencies (Lee et al., 2023). Last but not least, through the participatory dimension of some AI systems, patients can read and interpret the data from wearable sensors, which can offer them a better understanding of their medical situation and, thus, a better ground for participating in decision-making regarding their own health and well-being.
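
To make the predictive dimension concrete, here is a minimal, hypothetical sketch of the two-phase heuristic that wearable fall detectors commonly rely on (a near-weightless reading followed by an impact spike). All names, thresholds, and readings are invented for illustration and are not drawn from any cited system.

```python
# Illustrative-only sketch of a two-phase fall detector; thresholds are
# assumed values, not taken from any real device.
from dataclasses import dataclass

@dataclass
class AccelSample:
    timestamp: float  # seconds since start
    magnitude: float  # acceleration magnitude in g

FREE_FALL_G = 0.4  # near-weightlessness preceding an impact (assumed)
IMPACT_G = 2.5     # spike consistent with hitting the ground (assumed)

def detect_fall(samples: list[AccelSample], window_s: float = 1.0) -> bool:
    """Flag a fall if a free-fall reading is followed by an impact spike
    within window_s seconds (the classic two-phase heuristic)."""
    for i, s in enumerate(samples):
        if s.magnitude < FREE_FALL_G:
            for later in samples[i + 1:]:
                if later.timestamp - s.timestamp > window_s:
                    break
                if later.magnitude > IMPACT_G:
                    return True
    return False

# A detected fall would then trigger the alerting pipeline (caregiver
# notification, emergency call), which is where the autonomy questions
# discussed in this paper arise.
readings = [AccelSample(0.0, 1.0), AccelSample(0.2, 0.3), AccelSample(0.5, 3.1)]
if detect_fall(readings):
    print("ALERT: possible fall detected -- notifying caregiver")
```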

While AI holds the potential to enhance the quality and breadth of aged care, it is not without its problems. In recent years, researchers have drawn attention to how AI can perpetuate biases, with a notable focus on racial and gender biases (Buolamwini and Gebru, 2018; Noble, 2018). Yet, one of the most pervasive and often unnoticed biases in most societies is ageism (Iversen et al., 2009; Nelson, 2016). This raises the question of whether AI can also embed and reinforce ageist biases.

AI ageism

A recent but still scarce body of work traces a connection between AI and ageism (Stypinska, 2023; Chu, Leslie et al., 2022; Rubeis, 2020; Neves et al., 2023; Berridge and Grigorovich, 2022). AI ageism is defined as those practices within the field of AI that contribute to discrimination against, or the exclusion and neglect of the interests of, older adults (Stypinska, 2023, p. 669). Like other AI biases, ageism can manifest through the beliefs and ideologies of those creating AI technologies, or it can be embedded in the datasets processed by AI systems (Stypinska, 2023; Rubeis, 2020; Neves et al., 2023). Ageism in AI risks perpetuating negative stereotypes and sidelining older individuals, making their active engagement with and benefit from AI technologies more difficult. AI systems are thus not neutral; they reflect the values, beliefs, and biases of their creators or those embedded in the data they process (Boyd and Crawford, 2012; Buolamwini and Gebru, 2018; Noble, 2018). How does this happen?

Age scripts

First, the perceptions of designers, developers, and programmers regarding potential technology users, usually called ‘scripts’, infuse technology development and can influence the way consumers use products (Peine and Neven, 2021, p. 2857). Users can follow these predefined scripts and use the technology as intended (as the large majority of the population does with computer operating systems), adapt the technology to better suit their needs (as people do who tinker with operating systems to match their preferences), or reject it entirely if it does not align with their preferences (as those do who refuse certain operating systems on value grounds). When it comes to technology designed for older individuals, age-specific scripts come into play, embedding societal perspectives on the aging process in technology design. These scripts, in turn, exert a normative influence on users, compelling them to adhere to prevailing expectations (Peine and Neven, 2021, p. 2858). Nonetheless, as recent research at the intersection of social gerontology and Science and Technology Studies (STS) shows, users can also challenge these scripts and reappropriate technologies to fit their needs (Loos et al., 2021). To put it simply, ageing and technology are co-constituted (Peine and Neven, 2019, p. 17).

Age scripts often portray older adults as incompetent in dealing with technology, vulnerable, or frail. They emerge because ageism is pervasive in the tech industry, which is dominated by young (often white) males who may not be immediately aware of their age-related biases. The issue is so pronounced that the tech industry has been characterized as “one of the most ageist places on Earth” (Gullette, 2017, xx). For instance, AI technologies for older adults predominantly focus on healthcare and chronic disease management, often referred to as gerontechnology, while aspects related to leisure and enjoyment are overlooked (Neves et al., 2023, p. 1275).

Older adults are often seen as “invisible users” in the development of digital technologies, leading to their exclusion from design processes (Mannheim et al., 2022, p. 1197; Ivan and Cutler, 2021). For example, in a literature review of studies documenting the design of digital technologies with older persons, Mannheim et al. found that the exclusion of older adults from design processes often takes the form of “no or low involvement, upper-age limits, and sample biases toward relatively ‘active,’ healthy and ‘tech-savvy’ older persons” (2022, p. 1188). This exclusion not only disempowers older adults but also perpetuates their marginalization from the design and use of AI technology. Developers of digital technologies and AI systems often create technologies “on behalf of older people, instead of for older people” (WHO, 2022, p. 8). This means that technology is designed on the basis of inaccurate assumptions about the lifestyles, needs, and interactions of older individuals, and treats older adults as a homogeneous group. It is important to note that this lack of consideration does not necessarily imply ill intentions on the part of developers; rather, it reflects a deficiency in awareness and reflection regarding the needs, preferences, skills, and capacities of older adults (Manor and Herscovici, 2021, p. 1088).

For instance, Neven (2015) analyzed AIMS, an in-place remote healthcare assistance system designed to make the monitoring of older persons as unobtrusive as possible, allowing them to live at home. This was achieved by installing a variety of sensors and cameras in the homes of older adults; these monitored and learned their movements, triggering an alarm upon detection of unusual behaviors. In this case, “for the older people, the script of AIMS has distinct elements of ‘giving up’—e.g. control over previously private information or access to (spare) rooms which were not equipped with sensors—and ‘putting up’—e.g. with being monitored and with changes in care—and the autonomous nature of AIMS affords very few opportunities to resist this” (Neven, 2015, p. 41). These age scripts made it nearly impossible for older adults to resist the system without triggering the alarm, or to use the system in creative and unforeseen ways of their own choosing. In this sense, older adults had to conform to the new technology and adapt their behaviors to it: some unmonitored rooms became completely off-limits, and some people stopped kneeling to pray for fear of triggering the alarm. But older adults do not always conform to technological systems. As Berridge (2017) shows, passive monitoring systems do not simply invade or respect the privacy of older adults; rather, they can provide an opportunity for older adults to negotiate what privacy means for them, where its boundaries lie, and when it may be infringed.

But the customers and the users of gerontechnology are not always the same. Customers are often family members who want to improve the life of an older relative, or large-scale care providers who want to make care more efficient through technology. This means that designers and developers of AI technologies for aged care are caught between competing interests: the customers, who pay, tend to value older adults’ safety above their autonomy, while the older adults who actually use these technologies might, on the contrary, value autonomy above safety. Because of the way markets operate, developers are incentivized to prioritize the customers’ interests over those of the users, as this is how profit gets maximized. As a result, developers of technologies for aged care have fewer incentives to create devices that can be easily adapted and creatively reappropriated by end users.

Biased data

A second notable source of bias lies in the datasets processed by AI systems, which often fail to adequately include older individuals. Even the largest datasets are not independent of the “instruments, practices, and systems of knowledge” used for data collection, processing, and analysis (Sourbati and Behrendt, 2021, p. 1401). Data is an object of power: it includes or excludes certain individuals, processes, or phenomena, making them visible or, on the contrary, invisible (Ruppert et al., 2017). Biased data can lead to discriminatory or exclusionary results for minority groups. For example, Buolamwini and Gebru (2018) showed that facial analysis AI performs far worse for darker-skinned women, which often results in discriminatory outcomes. Straw and Wu (2022) revealed that AI systems built to predict liver disease are twice as likely to miss disease in women as in men. These results can be attributed to the underrepresentation of marginalized groups in datasets, as shown by research in critical data studies (Geneviève et al., 2020; Dalton et al., 2016). Older individuals are also frequently absent from datasets used for AI development and assessment (Mannheim et al., 2019; Fernández-Ardèvol and Grenier, 2022; Rosales and Fernández-Ardèvol, 2019). This can be attributed in part to the lower likelihood of older individuals using digital technologies, itself a result of the gray digital divide (Mubarak and Suomi, 2022).

However, the underrepresentation of older adults is also due to exclusionary data collection processes (Rosales and Fernández-Ardèvol, 2019; Sourbati and Behrendt, 2021). Data collection practices, even for health and medical data from sources such as clinical trials, often prioritize younger demographics, resulting in the underrepresentation of older age groups (United Nations Independent Expert on the Enjoyment of All Human Rights by Older Persons Report, 2020). And this is due not to “explicit age exclusion but implicit age bias” (Jecker, 2020, p. 250). For example, research on osteoporosis among older age groups is rare, despite the fact that they are the population most affected by this condition. One review of randomized controlled trials found that the average age of osteoporosis study participants is 64, almost two decades younger than the average age of people with hip fractures, the most important clinical event in osteoporosis (McGarvey et al., 2017). Jecker (2020, p. 250) notes that implicit age bias resulting in the exclusion of older adults is also present in studies of stroke (Gaynor et al., 2014), cancer (Murthy et al., 2004), acute coronary syndrome (Lee et al., 2001), chronic kidney disease (O’Hare et al., 2009), diabetes (Cruz-Jentoft et al., 2013), and Parkinson’s disease (Buckley and O’Neill, 2015), to name a few. This highlights just how far-reaching implicit age bias is in clinical research. The problem is that data from these studies are used to train AI for aged care, and the resulting systems may fail to generalize to older age groups, leading to poorer performance and worse user experiences for older users (Chu et al., 2022, p. 950).

But even when older adults are represented in datasets, the data might not be disaggregated for relevant use. Disaggregated data is data broken down into sub-categories, which allows a better understanding of the trends and patterns emerging within those sub-categories. The lack of age-disaggregated health data “impedes the identification of meaningful correlations among various factors and limits the capacity for quantitative program evaluations, to assess causal inference, and to pinpoint best practices” (Diaz et al., 2021, p. e436). Data tend to be disaggregated for younger age groups but not for older ones. This can be attributed to ageist stereotypes that obscure the reality that older adults are not a homogeneous group but differ significantly from one another. The risk is that AI systems trained on data from extensive cohorts that are not disaggregated may interpret individual data in terms of the average values derived from those datasets. Because individual interests and skills are not reflected in the data, individual variations might be misinterpreted as aberrant behavior (WHO, 2022, p. 6).
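
A toy example, with invented numbers, illustrates the aggregation problem just described: a reading that is perfectly ordinary for an older subgroup is flagged as anomalous when judged against a pooled, mostly younger cohort.

```python
# Illustrative sketch only; all values are invented for the example.
import statistics

# Resting heart rates (bpm) pooled across a mostly-younger cohort.
pooled = [62, 65, 60, 63, 66, 61, 64, 59, 63, 62]
# The same metric, disaggregated for a hypothetical 80+ subgroup.
subgroup_80_plus = [70, 74, 72, 69, 73, 71]

def is_anomalous(value: float, reference: list[float], k: float = 2.0) -> bool:
    """Flag values more than k standard deviations from the reference mean."""
    mean = statistics.mean(reference)
    sd = statistics.stdev(reference)
    return abs(value - mean) > k * sd

reading = 72.0  # an unremarkable reading for this hypothetical older adult

print(is_anomalous(reading, pooled))            # True: flagged against pooled data
print(is_anomalous(reading, subgroup_80_plus))  # False: normal for the subgroup
```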

Paternalism and its technological version

As noted previously, ageism frequently relies on both positive and negative stereotypes, painting older adults as warm and likable yet also as incompetent, forgetful, and fragile (Levy, 2018; Ayalon et al., 2020; Cary et al., 2017; Chang et al., 2020). This blend of stereotypes can elicit complex emotional responses, including pity and the desire to help, which can lead to paternalistic behaviors towards older adults (North and Fiske, 2012, p. 10). For example, when positive stereotypes are in the mix, people tend to respond actively, that is, to intervene on behalf of older adults in order to help, without older adults actually asking for help (Cuddy et al., 2007, p. 109). Although helping behaviors are sometimes seen as a form of respect for older adults, when such help is accompanied by judgments of frailty and incompetence it becomes paternalistic and can undermine older adults’ independence (Sublett et al., 2022).

Paternalism, classically defined, involves interventions that restrict an individual’s liberty for their own benefit and without their consent (Dworkin, 1972, p. 67). Examples of paternalistic interventions abound in day-to-day life, such as New Zealand’s prohibition on selling tobacco to anyone born after 2008. Three crucial dimensions of paternalism emerge:

1. Paternalism entails a limitation of freedom;

2. the limitation of freedom is justified by the intention of advancing an individual’s best interests;

3. the limitation of freedom is without prior consent (Dworkin, 1972, p. 65).

Paternalistic behaviors can manifest in various ways, from assuming that older adults cannot make their own choices to making decisions on their behalf without their input. The inclination to over-help or over-protect those perceived as needing assistance or guidance leads to the creation of excessively accommodating environments that assume older adults’ dependency and fragility without considering their actual competence or interest in receiving help (Vervaecke and Meisner, 2021, p. 160). While some may argue that paternalism is driven by genuine concern for the well-being of older adults, it can actually reinforce their lower social status, leading to a cycle of disempowerment that further entrenches the stereotypes associated with old age. Furthermore, paternalistic behaviors place older individuals in a position of dependence, perpetuating the belief that they are unable to make their own informed choices (Swift and Chasteen, 2021).

Technological paternalism

As paternalism implies intention, it is usually assumed that only humans can be paternalistic and impose restrictions on other people’s freedom for their supposed good. But in recent decades, technology has played an increasingly substantial role in decision-making processes and is often used to impose certain ways of doing things or even to prohibit some actions. Examples include cars that emit warnings or refuse to start unless seatbelts are fastened, or machinery that prevents operation without safety gear (Spiekermann and Pallas, 2006). Such examples show that personal autonomy can be constrained not only through the intentional actions of other individuals but also through various social, epistemic, and material structures (Hofmann, 2003), such as technology. In this context, the concept of technological paternalism has been introduced to examine the ways in which individual freedom can be constrained by technology, often without users’ consent and for their own benefit (Millar, 2015; Spiekermann and Pallas, 2006; Hofmann, 2003; Rochi, 2023).

For the concept of technological paternalism to make sense in relation to AI technologies for aged care, the three conditions above have to be met. First, it is essential to consider whether AI technologies can interfere with users’ liberty, understood as freedom of action. AI is deployed in aged care not only to create virtual outputs but also to control physical environments. Assistive technologies such as smart homes are a case in point, showing how AI systems can exert physical control over a user’s environment by monitoring and controlling “physiological parameters (pulse, oxygen saturation, blood pressure); functionality (general activities, motion, meal intake); safety and security (automatic lighting, trip and fall reduction, hazard detection, intruder detection); social interaction (phone calls, video-mediated communication, virtual participation in groups); and cognitive/sensory assistance (medication reminder, lost key locator)” (Facchinetti et al., 2023, p. 2). With all of these sources of data, AI systems can make suggestions or offer advice to older adults, encouraging healthy habits and discouraging dangerous activities. However, these all-encompassing monitoring systems might also restrict access to certain areas or activities for safety reasons, even when older adults are capable of managing those tasks on their own. Such systems can profoundly influence older adults’ decision-making, making some actions more attractive and others less so. For instance, if an AI system controls medication schedules and dosage, older adults might not have the autonomy to adjust their treatment based on how they feel or on their preferences (Fadhil, 2018). What is more, AI systems often collect and analyze personal health data, and older adults might be uncomfortable with constant surveillance and data collection, which can limit their sense of freedom and personal space (Mannheim et al., 2022). Thus, AI systems can limit older adults’ possibilities of acting.
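
As an illustration of how such restrictions could operate in software, consider the following hypothetical sketch of a rule-based smart-home controller. The rules, function names, and scenarios are our own invented assumptions, not a model of any actual commercial system.

```python
# Hypothetical sketch: a controller that denies resident-initiated actions
# whenever a safety rule fires, with no override path for the resident.
from datetime import time

def request_action(action: str, now: time, recent_fall: bool) -> str:
    # The safety rules encode the designer's weighing of safety over autonomy.
    if action == "unlock_front_door" and (now >= time(22, 0) or now <= time(6, 0)):
        return "DENIED: night-time exit flagged as unsafe; caregiver notified"
    if action == "use_stairs" and recent_fall:
        return "DENIED: stair use blocked after recent fall event"
    return "ALLOWED"

# The resident cannot take a late walk or use their own stairs; the system
# has no mechanism to accept "I understand the risk".
print(request_action("unlock_front_door", time(23, 30), recent_fall=False))
print(request_action("use_stairs", time(14, 0), recent_fall=True))
```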

The second condition pertains to the intention behind the limitation of freedom, which should be to promote individuals’ best interests. Can we meaningfully say that technologies have intentions, and that these intentions are to protect users’ interests? While we cannot (yet) speak of AI systems as possessing intentions, they can serve as means for various parties, such as states, companies, designers, family members, or healthcare providers, to accomplish certain goals, sometimes by imposing constraints on users’ autonomy. Technologies are created to foster the accomplishment of different types of goods or, more generally, various types of purposes. In other words, technologies operate with criteria that serve as a definition of their goals (Kühler, 2022, p. 196). In AI systems, these criteria are called objective functions: the goals the system should pursue as it learns and becomes more complex (Zhang and Conitzer, 2019). Most of the time, it is parties other than the beneficiaries (that is, older adults), such as developers, healthcare providers, and family members, who operate with the notion of the good that the technology is meant to maximize; this is especially evident in healthcare systems. Thus, technologies work as if they had intentions and a conception of the good of their users (Kühler, 2022, p. 196). In a fall detection system, the objective function might be the accuracy of fall prediction, and this objective is then maximized, even at the price of other important values, such as privacy or autonomy. Another example is AI systems that dispense medications according to a strict schedule and dosage, limiting older adults’ ability to deviate from the prescribed regimen. Similarly, AI monitoring systems can automatically trigger emergency responses in situations interpreted as critical, even if the older adult wishes to manage their condition without immediate medical intervention. This limitation on their freedom is driven by the desire to prioritize their health and safety; and oftentimes health and safety, in the view of healthcare providers or family members, override older adults’ preferences. The same can be said of the developers of these technologies, whose intention is often to support older adults’ safety and well-being through technology, even if the price is sometimes a restriction of the freedom to choose (Boström et al., 2013).
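
A small sketch can make the point about objective functions concrete. Here a hypothetical fall-alert threshold is tuned to maximize detection recall alone; the false alarms that burden the resident are simply invisible to the objective. All data and names are invented for illustration.

```python
# (risk_score, actually_fell) pairs from a hypothetical fall detector.
events = [(0.9, True), (0.7, True), (0.6, False), (0.5, True),
          (0.4, False), (0.3, False), (0.2, False), (0.1, False)]

def recall(threshold: float) -> float:
    """Share of actual falls whose risk score meets the alert threshold."""
    falls = [score for score, fell in events if fell]
    return sum(score >= threshold for score in falls) / len(falls)

def false_alarms(threshold: float) -> int:
    """Non-falls that would nonetheless trigger a caregiver alert."""
    return sum(1 for score, fell in events if not fell and score >= threshold)

# Optimizing recall alone drives the threshold down...
best = max([0.1, 0.3, 0.5, 0.7, 0.9], key=recall)
print(f"threshold={best}, recall={recall(best):.2f}, false alarms={false_alarms(best)}")
# threshold=0.1, recall=1.00, false alarms=5 -- every fall is caught, but the
# resident is also flagged five times for doing nothing wrong, a cost this
# objective function never registers.
```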

Last but not least, AI systems restrict users’ freedom without users explicitly agreeing to or expecting this. Many AI applications for aged care involve surveillance technologies that collect data about users’ daily activities, from wearable sensors to smart home systems. This extensive surveillance often changes the behaviors of those being monitored. For example, the monitoring of food consumption may make individuals feel that they cannot eat what or when they would like, due to feelings of being watched and reprimanded for “bad decisions” (Kang et al., 2010, p. 1584). What is more, users typically lack the ability to opt out or to override technological decisions without compromising the technology’s functionality (Rochi, 2023). Some smart home systems grant remote control to caregivers or family members, yet older people report that they want to have control over these systems themselves (Ghorayeb et al., 2021; Demiris et al., 2009). Older adults would also prefer to have a say in what information the AI system shares with their family or caregivers (Galambos et al., 2019), a need that stems from the different understandings of privacy held by older adults and by their families and caregivers (Berridge, 2017). For example, in a scoping review of the ethical issues arising from the use of gerontechnology in the home care of older people, an ethical dilemma emerged involving the balance between paternalism and the rights of older individuals (Sundgren et al., 2020). Family members placed greater importance on the advantages of technology and viewed autonomy and privacy as secondary concerns compared to its benefits, particularly in terms of the safety of older individuals (Landau et al., 2010; Wild et al., 2008). This is consonant with previous research indicating that perceptions of risk differ between older adults and their families or caregivers (Rolison et al., 2018). What is more, relatives expressed the belief that older individuals would likely refuse technology use and, consequently, stressed that the utilization of technology could be coerced if necessary (Landau et al., 2010, p. 414). Similarly, smart fall detection systems that promptly alert caregivers without giving older adults the chance to confirm or cancel an alert can diminish their sense of autonomy and control over their own safety. All in all, older adults are concerned about their safety, but they would not increase it at any cost (Ienca et al., 2021). Thus, it is not only that AI systems are in themselves paternalistic towards older individuals; their use is also often imposed on older adults in a paternalistic manner.
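
The design alternative implied here, giving the older adult a chance to confirm or cancel before an alert escalates, is easy to express in code. The following sketch is purely illustrative; the timeout and function names are our assumptions.

```python
def escalate_alert(prompt_user, timeout_s: int = 30) -> str:
    """Ask the resident first; escalate on confirmation or on silence."""
    response = prompt_user(f"Possible fall detected. Cancel within {timeout_s}s?")
    if response == "cancel":
        return "alert cancelled by resident"
    # Silence (None) or anything else: err on the side of safety and notify.
    return "caregiver notified"

# With stubbed responses, the resident retains a veto over escalation:
print(escalate_alert(lambda msg: "cancel"))  # -> alert cancelled by resident
print(escalate_alert(lambda msg: None))      # -> caregiver notified
```

The design choice is small but consequential: the same detector either treats the resident as the first decision-maker or routes around them entirely.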

Technological paternalism is often an unintended consequence of designers’ adherence to specific normative frameworks or scripts in their approach to solving problems. The creation of AI technologies for older adults is often based on stereotypical representations of old age, depicting older adults “as a homogeneous group that can easily be linked to discourses about vulnerability and illness” (Peine and Neven, 2019, p. 58). These assumptions about old age, unintentionally embedded into AI systems, risk creating “a feedback loop that reinforces negative stereotypes” (Chu, Nyrup, et al., 2022). Negative stereotypes of aging, related to frailty and vulnerability, “have the potential to affect the holistic health (i.e., mental, physical, social, and emotional well-being) of an older person and ultimately the length and quality of their life” (Dionigi, 2015).

In sum, while technologies themselves lack intentions, they serve as tools for various entities, such as designers, caregivers, companies, and states, to impose constraints on the autonomy of older users. Many AI applications for aged care, such as home monitoring or fall detection systems, involve surveillance technologies that collect data about users’ daily activities, often without their awareness or any ability to override these technological decisions (Rubeis, 2020).

Is technological paternalism towards older adults justified?

It’s important to note that older adults constitute a diverse group. Many older adults have the ability to make informed choices about their lives, and imposing paternalistic AI on them could potentially curtail their freedom (Voinea et al., 2022). Nonetheless, certain segments of the older population, specifically those grappling with severe medical conditions that incapacitate their decision-making abilities, may need a form of paternalistic care facilitated by AI technologies. In these scenarios, the justification for AI-mediated paternalistic interventions can be compelling, rooted in the genuine need to protect those who are incapable of making sound decisions (Buchanan, 2008; Childress and Mount, 1983; Nys, 2008). The justification for embedding AI into healthcare settings for older persons arises from other practical considerations as well, such as staffing shortages and the steadily increasing number of older persons in need of care. In this context, AI-driven paternalism might, at times, seem the most pragmatic response to address the care deficit. However, it is crucial to recognize that while AI paternalism might be beneficial for specific groups, such as those with volitional disabilities, applying such systems universally to older adults capable of informed decisions may be unjust. The question arises: are the potential benefits for a subset of older adults sufficient to justify the costs of a blanket imposition of paternalistic technologies on the aging population?

To address this question, one must first understand the factors that justify paternalism. Soft paternalists advocate interference with a person’s freedom when the person lacks sufficient competence and, thus, when their actions are non-voluntary (Pedersen, 2023). Hard paternalists, on the other hand, argue that interventions impinging on a competent person’s freedom are warranted if the good resulting from the intervention outweighs the harm it causes. Thus, paternalist interventions are justified by the good they promote.

Let’s begin with soft paternalism, which focuses on competence. Competence is typically defined in terms of an individual’s decision-making capacities, specifically their ability “to receive information, express wishes, and understand potential consequences” of their actions (Pedersen, 2023, p. 43). The competence argument can be further understood either through the ‘best judge’ or the ‘personal development’ argument.

The ‘best judge’ argument was first articulated by Mill in On Liberty: individuals are, in general, the most competent and the best judges of what constitutes their own best interests. Thus, when the public does interfere with a person’s freedom for their supposed well-being, “the odds are that it interferes wrongly, and in the wrong place” ([1859] 2012, p. 98). The ‘personal development’ argument, in turn, objects to paternalistic interventions on the grounds that they limit opportunities for learning and growth. Freedom of choice, according to Mill, has a strong educative value, as people learn best through their mistakes (Mill [1859] 2012, p. 74).

However, individuals may not always be the best judges of their own interests, given the susceptibility of human judgment to errors and biases. Take smoking: people who take up smoking ignore the fact that it might result in harm, so one might argue that they are not the best judges of their own interests. Similarly, it could be argued that older people may not be the best judges of their own interests, specifically when they prefer to prioritize autonomy even at the risk of potential harm, which in the worst case may result in their death. Yet the fact that humans are not always the best judges of their own interests does not entail that others are better suited to make decisions for them (Kleinig, 1983, p. 163).

The case of older adults is paradigmatic here. Older adults possess a wealth of life experience that equips them to make informed decisions about their well-being, as opposed to younger adults, who do not yet have a crystallized view of what is best for them. As people’s experience increases with age, it becomes more and more difficult to justify paternalistic interventions on competence grounds, as people already know what is best for them, what is risky, and what is or is not worth it. For example, some older adults might reasonably prefer to keep their privacy, and hence the space of their freedom, even at the price of potential risks to their safety. They might reason that there is more to life than living isolated and cloistered, and might prefer having experiences even if these endanger or tire them. What is more, the process of acquiring the capacity to decide what is best for oneself also requires the freedom to learn from bad choices (an important objection to imposing paternalistic interventions on younger adults on competence grounds). Even if older adults have a wealth of life experience, it is reasonable to assume that they can continue to discover new experiences and experiment with them. This implies the freedom to make mistakes and to engage in risky activities. Prioritizing a life rich in experiences, even risky ones, can be, for some, more valuable than leading an overly sheltered existence devoid of experiences. Such experiences, even if potentially harmful, offer opportunities for individuals to discover more about their capacities and to make informed decisions regarding paternalistic care (i.e., whether to accept or reject it). But, more than anything, people have a right to decide how they want to spend their lives as long as that decision hurts no one but themselves.

In any case, the ability to make decisions is a spectrum, not a binary: individuals may have varying degrees of capacity, and this complexity needs to be addressed on a case-by-case basis. Any intervention that might reduce older adults’ freedom to choose for themselves should consider and analyze their diversity in cognitive abilities and decision-making skills, as older adults are not a homogeneous group (Mitnitski et al., 2017; Nguyen et al., 2021; Gray et al., 2002). It is hard to justify the position that other, often younger, persons are better placed to judge what is in an older person’s best interest, provided, of course, that the person does not suffer from medical conditions that affect their decision-making capacity. In short, old age is not by itself enough to discount older adults’ competence to make decisions about how they should spend their lives; thus, soft paternalism is not justified.

For hard paternalism, the constraint on people’s freedom is justifiable when the benefits of the intervention outweigh its costs. The only requirement is that individuals act imprudently, such that they might endanger their lives or cause harm to themselves or others. Thus, hard paternalism is contingent not on the competence or voluntariness of those involved but on their imprudent behavior, commonly understood as behavior that results in harm either to oneself or to others. Most often, though, paternalistic care is justified on the basis of preventing older adults from harming themselves due to various incapacities; harm to others is seldom, if ever, an issue in this context. The presumption would be that the good promoted by hard paternalistic care, namely the extension of older adults’ lives, outweighs the harm done, and so justifies a restriction of their freedom to choose how to live.

The justification for preventing people from harming themselves is connected to life expectancy and to what Pedersen (2023, p. 49) calls “life-year opportunities”: the years left in a person’s life in which to pursue their plans. Losing life-year opportunities seems more harmful the younger a person is, as more of the time in which to experiment and pursue life plans is lost. Consequently, paternalistic interventions are warranted for younger individuals but not for older adults, since “the additional life expectancy of young people is greater than that of older people” (Pedersen, 2023, p. 48). In other words, paternalistic interventions are more justified for younger individuals, whose longer life expectancy is taken as an indicator of the well-being protected by the intervention (Pedersen, 2023).

Pedersen thus concludes that hard paternalistic interventions should take age into account: “the number of (good) life years at risk of being lost and the number of years lived are central to assessing the potential harm involved in a given imprudent activity” (2023, p. 42). The good promoted by paternalistic interventions thus diminishes with age. Paternalistic care that reduces people’s freedom might make older adults’ lives less meaningful and pleasurable. While the imposition of paternalistic technologies might indeed promote the interests of some older adults, especially those who lack the capacity to make their own life decisions, it would negatively affect the far greater proportion who can make such decisions.

As a general rule, then, the imposition of paternalistic AI technologies to promote the overall good of older adults is not justified. While some older adults may struggle to foresee the consequences of their actions, this does not apply to all of them. Some may willingly assume certain risks, such as falling or not seeking immediate help, in order to maintain their autonomy. Therefore, AI paternalism cannot be imposed on older adults on the premise that they are incompetent judges.

Implications

In this paper, we have shown that the benefits of AI in aged care should not be taken for granted. We have revealed how the design of technology, the portrayal of older adults as potential users, and the collection and processing of data can inadvertently reinforce ageist attitudes through the creation and deployment of paternalistic AI systems for aged care.

The theoretical and practical implications of this research go hand in hand. At the theoretical level, more effort should be invested in investigating how technological systems, once created and put to use, affect the lives of older adults. In other words, we should move beyond the AI hype and look thoroughly and honestly into how AI systems affect end users. A practical implication of our research is that technology developers and designers must pay more attention to the stereotypes and preconceptions about old age that might become embedded in their products. Participatory design is one important means of mitigating the risk of perpetuating ageist biases: it presupposes the inclusion of older adults in design processes, so that their needs and expectations are known and taken into consideration from the outset. This would also help avoid catering to ‘imagined users’ while disregarding actual user contexts (Loos et al., 2021). Moreover, a case-by-case analysis is necessary when considering AI interventions, one that attends to the specific needs and capabilities of each person concerned. Age, by itself, should never be the sole criterion for deciding whether an AI intervention is justifiable; instead, the specific health conditions and decision-making abilities of each older adult should be considered. Additionally, the value trade-offs that come with technologies for aged care, such as safety versus autonomy, require careful consideration and should not be taken for granted.

However, we recognize that our diagnosis is not universal. There are instances where AI systems are developed inclusively, with older adults’ input, and where data curation is meticulous, aimed at eliminating the risks posed by ageist biases in data. Moreover, some older adults creatively adapt AI systems to their needs. And there will also be situations where neither imposition nor autonomy is fully realized, as when older adults feel that technological systems have become too intrusive. In other words, we acknowledge the diverse conditions and use contexts that arise from AI-user interactions. This paper therefore does not aim to make universalistic judgments, but only to draw attention to a facet of current technological systems that is often overlooked, namely paternalism stemming from age scripts and biased data, which has the potential to undermine older adults’ autonomy.