Introduction

Hate speech online is on the rise (Oboler, 2016; Perrigo, 2019; Pacheco and Melhuish, 2020). The response to this rise has broadly taken two approaches to harm reduction on platforms. The first approach is technical, attempting to develop software models to detect and remove problematic content. Indeed, over the last few years in particular, significant attention has been directed at abusive speech online, with huge amounts of work poured into constructing and improving automated systems (Pavlopoulos et al., 2017; Fortuna and Nunes, 2018). Articles in computer science and software engineering in particular often claim to have studied the failings of previous techniques and discovered a new method that finally solves the issue (Delort et al., 2011; Mulla and Palave, 2016; Tulkens et al., 2016). And yet the inventiveness of users and the ambiguity of language mean that toxic communication remains complex and difficult to address. Technical understanding of this content will inevitably be limited, explains researcher Robyn Caplan (quoted in Vincent, 2019), because automated systems are being asked to understand human culture—racial histories, gender relations, power dynamics and so on—“a phenomenon too fluid and subtle to be described in simple, machine-readable rules”.

The second approach is non-technical, stressing that hate speech online is a problem that only humans can address. This framing, not incorrectly, points out that automated interventions will always be inherently limited, unable to account for the nuances of particular contexts and the complexities of language. The response is to dramatically expand content moderation teams. In May 2018, for example, Facebook announced that it would be hiring 10,000 new workers into its trust and safety team (Freeman, 2018). However, the toll for those carrying out this kind of work, where hate speech, graphic images, and racist epithets must be carefully reviewed, is incredibly high, leading to depression and other mental health issues. In being forced to parse this material, workers “do not escape unscathed” (Madrigal, 2017). As well as the hazards of the content itself, employees are often under intense pressure to meet performance targets, an anxiety that only adds to the inherent psychological toll (Newton, 2019).

In addition to these two approaches, there also seems to be a popular assumption, evidenced in online comments and in more mainstream literature, that hate speech is the natural product of hateful people. One user stated that the toxic comments she encountered online were simply produced by rude and frustrated people, perhaps with a difficult background or early life, who have not been taught general manners. Another blog post blames toxic communication on an inherently toxic individual, someone with a predilection for hating or bullying, racism or sexism (Jennings-Edquist, 2014). In this understanding, hate speech results from people translating their fundamental nastiness in the offline world into the online environment.

In contrast to the approaches and assumptions discussed above, this study adopts a design-centric approach. It seeks to understand how hate might be facilitated in particular ways by hate-inducing architectures. Just as the design of urban space influences the practices within it (Jacobs, 1992; Birenboim, 2018), the design of platforms, apps and technical environments shapes our behavior in digital space. This design is not a neutral environment that simply appears, but is instead planned, prototyped, and developed with particular intentions in mind. Indeed, a platform can be conceived as a set of “core design problems” (Tura et al., 2018, Table 1).

This method thus examines a platform’s interfaces, architectures, and functionality, focusing on the types of communicative practices and social interactions they afford (Bucher and Helmond, 2017). As Gillespie (2017, n.p.) argues, these structures:

are designed to invite and shape participation toward particular ends. This includes what kind of participation they invite and encourage; what gets displayed first or most prominently; how the platforms design navigation from content to user to exchange… and how they organize information through algorithmic sorting, privileging some content over others in opaque ways. And it includes what is not permitted, and how and why they police objectionable content and behavior.

A platform’s design is the result of certain decisions, and these decisions have influence. Acknowledging this influence allows us to draw “connections between the design (technical, economic, and political) of platforms and the contours of the public discourse they host” (Gillespie, 2015, p. 2). How might the design of technical environments be promoting toxic communication?

This project examined two notable platforms: Facebook and YouTube. Both platforms count their monthly active users in the billions. Both have a global reach, with access available in countries around the world. And both have been linked to hate speech, online harassment, and more overt acts of physical violence in the “real world”. Both platforms are thus highly influential, shaping the beliefs and ideologies of individuals, their media production and consumption, and their relations to others on an everyday basis.

Following the method sketched above, this analysis meant identifying key elements of each platform’s design—the news feed or a recommendation engine, for instance. The analysis then homed in on these architectures and affordances, asking how each design operates, what its logic is, and what types of speech and behavior it encourages. While using these platforms provided insight, these questions frequently also meant drawing on secondary literature from designers, platform users, and software engineers. This core design analysis was supplemented by two unstructured interviews. The first was with a young social media user. The second was with a former online community manager, whose previous role ranged from guiding forum discussions to offering user assistance and moderating content. Both of these inputs are drawn on at several points to offer a “vernacular” perspective on design (McVeigh-Schultz and Baym, 2015)—foregrounding how it is perceived and dealt with on a practical everyday level.

While this method is novel in some ways, the attention to the design of platforms and their potential to shape behavior is not unprecedented. Over the last few years, we have witnessed a confessional moment from the designers of platforms. Designers have admitted that their systems are addictive and exploit negative “triggers” (Lewis, 2017). They have explained that Facebook’s design privileges base impulses rather than considered reflection (Bosker, 2016). Others have spoken about their tools “ripping apart the social fabric of how society works” (Vincent, 2017). And these confessions have been echoed by criticism and studies from others. Social media enables negative messages to be distributed farther and faster (Vosoughi et al., 2018), and its affordances allow anger to spread contagiously (Fan et al., 2016). The “incentive structures and social cues of algorithm-driven social media sites” amplify the anger of users over time until they “arrive at hate speech” (Fisher and Taub, 2018). In warning others of these negative social effects, designers have described themselves as canaries in the coal mine (Mac, 2019).

Indeed, we have already begun witnessing the fallout of platform-amplified hate. Shootings in El Paso, Pittsburgh, and Christchurch have been linked to users on Gab and 8chan (Mezzofiore and O'Sullivan, 2019; Silverstein, 2018). Ethnic violence against Rohingya has been connected to material circulating on Facebook (Stevenson, 2018). And anti-Muslim Tweets have been correlated with anti-Muslim hate crime (Williams et al., 2020). These overt acts of hate in the “real world” materialize this issue and highlight its significant stakes. Toxic communication is not just a nuisance or a nasty byproduct of online environments, but has more fundamental implications for human rights. “Online hate is no less harmful because it is online”, stressed a recent U.N. report (Kaye, 2019): “To the contrary, online hate, with the speed and reach of its dissemination, can incite grave offline harm and nearly always aims to silence others”. Hate forms a broad spectrum with extremist ideologies at one end. Online environments allow users to migrate smoothly along this spectrum, forming a kind of pipeline for radicalization (O’Callaghan et al., 2015; Munn, 2019). In this respect, the hate-based violence of the last few years is not random or anomalous, but a logical result of individuals who have spent years inhabiting hate-filled spaces where racist, sexist, and anti-Semitic views were normalized.

Very recently, then, a new wave of designers and technologists has begun thinking about how to redesign platforms to foster calmer behavior and more civil discourse. How might design create ethical platforms that enhance users’ wellbeing (Han, 2019)? Could technology be designed in a more humane way (Harris, 2019)? And what would be the core principles and processes of such designs (Yablonski, 2019)? Identifying a set of hate-promoting architectures would allow designers and developers to construct future platforms that mitigate communication used to harass or harm, and instead construct more inclusive and affirmative environments.

This article picks up on this nascent work, tracing the relationship between technical architectures and toxic communication. It examines two highly influential global platforms, Facebook and YouTube, unpacking the design of several key features, identifying how they are problematic, and suggesting some possible alternatives.

Platform analysis: Facebook

Facebook is the giant of social media. With 2.41 billion active users worldwide (Noyes, 2019), it is the largest platform, and arguably one of the most significant. On average, users spend 58 minutes every day on the platform (Molla and Wagner, 2018). While some signs indicate that the platform is plateauing in terms of use, these statistics remain compelling and mean that it cannot be overlooked. From the perspective of this project, Facebook is a technically mediated environment where vast numbers of people spend significant amounts of time. Yet if the platform is influential, it is also increasingly recognized as detrimental. “As Facebook grew, so did the hate speech, bullying and other toxic content on the platform”, one investigation found (Frenkel et al., 2018); “when researchers and activists in Myanmar, India, Germany and elsewhere warned that Facebook had become an instrument of government propaganda and ethnic cleansing, the company largely ignored them”. What kinds of experiences are all of these users having, and how does the design of this environment contribute to them? Rather than calm and civil, the experiences induced by the platform’s affordances, as this analysis will show, can be stressful and impulsive, establishing some of the key conditions necessary for angry communication.

A design approach to Facebook stresses that it was designed—a result of particular decisions made over time. For users, Facebook appears as a highly mature and highly refined environment. Every area has undergone meticulous scrutiny and crafting by teams of developers and designers. This provides the environment with a degree of stability and authority, even inevitability. In this sense, giants like Facebook claim a kind of de facto standard: this is the way our communication media operates. Yet Facebook has evolved significantly since its inception. Launching in 2004, the site was billed as an “online directory”; in these early days, the site emulated the approach of MySpace, where each user had a profile, populated with fields for status, education, hobbies, relationships, and so on; in 2007, Facebook added a Mini-Feed feature that listed recent changes to friends’ profiles; and in 2011 Facebook released the Timeline that “told the story of your life” as a move away from the directory or database structures of the past (Albanesius, 2014). Rather than inevitable, then, the design evolution of Facebook reminds us that it has developed through conscious decisions in response to a particular set of priorities (Fig. 1).

Fig. 1: Early Facebook Screenshot.

Early screenshot from “The Facebook” indicating its significant design progression over time.

Design-wise, the Feed remains one of the key pieces of functionality within Facebook. The Feed, or the News Feed as it is officially known, is described by the company as a “personalized, ever-changing collection of photos, videos, links, and updates from the friends, family, businesses, and news sources you’ve connected to on Facebook” (Facebook, 2019). It is the first thing that users see when bringing up the app or entering the site. It is the center of the Facebook experience, the core space where content is presented to users. What’s more, because user actions are primed by this content and linked to it—whether commenting on a post, sharing an event, or liking a status update—the Feed acts as the gateway for most user activity, structuring the actions they will perform during that particular session. Indeed, for many users, Facebook is the Feed and the Feed is Facebook (Manjoo, 2017).

Key to the Feed is the idea of automatic curation. Before the Feed, users would have to manually visit each of their friends’ profile pages in order to discover what had changed in their lives. Once introduced, the Feed carried out this onerous task for each user. “It hunts through the network, collecting every post from every connection—information that, for most Facebook users, would be too overwhelming to process themselves” (Manjoo, 2017). In this sense, the Feed provides both personalization and convenience, assembling a list of updates and bringing them together into a single location. Yet from a critical design perspective (Dunne and Raby, 2001; Dunne, 2006; Bardzell and Bardzell, 2013), this raises some fundamental questions about values, ideologies, and norms. What is prioritized in this Feed, bubbling to the top of view and clamoring for a user’s attention? What is deemphasized, only appearing after a long scroll to the bottom? And what are the factors that influence this invisible curation work? In short: what is shown, what is hidden, and how is this decided (see Fig. 2)?

Fig. 2: News Criteria.

Screenshot of Facebook page listing some of the criteria used by its News Feed.

The Feed is designed according to a particular logic. Since 2009, stories have not been sorted chronologically, an arrangement in which updates from friends were simply listed in reverse order, with the most recent appearing first (Wallaroo Media, 2019). While this change induced a degree of backlash from users, the chronological ordering itself proved overwhelming, especially given the hundreds of friends each user has. “If you have 1500 or 3000 items a day, then the chronological feed is actually just the items you can be bothered to scroll through before giving up”, explains analyst Benedict Evans (2018), “which can only be 10% or 20% of what’s actually there”. Instead, the Feed is driven by engagement. In this design, Facebook weighs dozens of factors, from who posted the content to how frequently they post and the average time spent on a piece of content. Posts with higher engagement scores are included and prioritized; posts with lower scores are buried or excluded altogether (see Fig. 3).

Fig. 3: Content Prioritization.

Diagram from Rose-Stockwell showing the change in content prioritization (reproduced with permission).
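To make the contrast between the two orderings concrete, the logic can be sketched in a few lines of Python. This is a toy illustration only: the signal names and weights below are invented for the example, not Facebook’s actual (and proprietary) ranking factors.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class Post:
    author: str
    created_at: datetime
    # Hypothetical engagement signals; the real factors are proprietary.
    likes: int = 0
    comments: int = 0
    shares: int = 0
    avg_view_seconds: float = 0.0

def chronological_feed(posts: List[Post]) -> List[Post]:
    """Pre-2009 logic: newest posts first, no weighting."""
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

def engagement_feed(posts: List[Post]) -> List[Post]:
    """Engagement-driven logic: posts that provoke reactions rise to the top."""
    def score(p: Post) -> float:
        # Illustrative weights only: shares and comments count for more than
        # likes, so posts that get argued over and forwarded are amplified.
        return p.likes + 4 * p.comments + 8 * p.shares + 0.1 * p.avg_view_seconds
    return sorted(posts, key=score, reverse=True)
```

Chronology needs no scoring at all; engagement ranking requires the platform to decide which signals count and by how much, and those decisions carry the consequences discussed below.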

The problem with such sorting, of course, is that incendiary, polarizing posts consistently achieve high engagement (Levy, 2020, p. 627). This content is meant to draw engagement, to provoke a reaction. Indeed, in 2018 an internal research team at Facebook reported precisely this finding: by design it was feeding people “more and more divisive content in an effort to gain user attention and increase time on the platform” (Horwitz and Seetharaman, 2020). However, Facebook management ignored these findings and shelved the research.

This divisive material often has a strong moral charge. It takes a controversial topic and establishes two sharply opposed camps, championing one group while condemning the other. These are the headlines and imagery that leap out at a user as they scroll past, forcing them to come to a halt. This offensive material hits a nerve, inducing a feeling of disgust or outrage. “Emotional reactions like outrage are strong indicators of engagement”, observes designer and technologist Tobias Rose-Stockwell (2018), “this kind of divisive content will be shown first, because it captures more attention than other types of content”. While speculative, perhaps sharing this content is a way to offload these feelings, to remove their burden on us individually by spreading them across our social network and gaining some sympathy or solidarity.

The design of Facebook means that this forwarding and redistribution is only a few clicks away. As the user I interviewed stated: “it is so easy to share stuff”. Moreover, the networked nature of social media amplifies this single response, distributing it to hundreds of friends and acquaintances. They too receive this incendiary content and they too share, inducing what Rose-Stockwell (2018) calls “outrage cascades — viral explosions of moral judgment and disgust”. Outrage does not just remain constrained to a single user, but proliferates, spilling out to provoke other users and appear in other online environments.

At its worst, then, Facebook’s Feed stimulates the user with outrage-inducing content while also enabling its seamless sharing, allowing such content to rapidly proliferate across the network. In increasing the prevalence of such content and making it easier to share, it becomes normalized. Outrage retains its ability to provoke engagement, but in many ways becomes an established aspect of the environment. For neuroscientist Molly Crockett, this is one of the keys to understanding the rise of hate speech online. Crockett (2017, p. 770) stresses that “when outrage expression moves online it becomes more readily available, requires less effort, and is reinforced on a schedule that maximizes the likelihood of future outrage expression in ways that might divorce the feeling of outrage from its behavioral expression”. Design, in this sense, works to reduce the barrier to outrage expression. Sharing a divisive post to an audience of hundreds or thousands is just a click away.

How might the Feed be redesigned? Essentially there are two separate design problems here. Firstly, there is the stimulus aspect—the content included in the Feed. While the Feed’s filtering operations undoubtedly remain highly technical, its logics can be understood through a design decision to elevate and amplify “engaging” content. Facebook has admitted that hate speech is a problem and has redesigned the Feed dozens of times since its debut in an effort to curtail this problem and the broader kind of misinformation that often stirs it up (Wallaroo Media, 2019). But the core logic of engagement remains baked into the design of the Feed at a deep level. Design, then, might start by experimenting quite concretely with different kinds of values. If the hyperlocal were privileged, for example, then only posts from friends or community members within a 5 km radius might be shown. This would be more mundane in many ways—everyday updates from those in our immediate vicinity rather than vicious attacks from anyone in a friend network. Or, following the success of more targeted messaging apps like Messenger and WhatsApp, the Feed might emphasize close familial or friend connections above all. This pivot to a more intimate relational sphere would certainly be quieter and less “engaging”, but ultimately more meaningful and civil.
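As a rough sketch of the hyperlocal alternative, the code below keeps only posts authored within a 5 km radius of the user and then falls back to simple chronological ordering. The field names, the dictionary structure, and the distance threshold are all assumptions made for illustration.

```python
import math
from typing import List, Tuple

def haversine_km(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def hyperlocal_feed(posts: List[dict],
                    user_location: Tuple[float, float],
                    radius_km: float = 5.0) -> List[dict]:
    """Show only posts from authors within radius_km of the user, newest first.

    Each post is assumed to be a dict with 'author_location' (lat, lon) and
    'created_at' keys; a real system would obviously need consent and privacy
    safeguards around location data.
    """
    nearby = [p for p in posts
              if haversine_km(p["author_location"], user_location) <= radius_km]
    return sorted(nearby, key=lambda p: p["created_at"], reverse=True)
```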

Secondly, there is the response aspect—the platform affordances that make outrage expression online more effortless. Such expression is often impulsive, done in the moment, and so one possible design focus would be time itself. Temporality is a key part of community, stated the community manager I interviewed. “Legacy environments” such as traditional forums simply moved slower, she recalled, and in general there was “just more oxygen between things happening”. This time gap between reading and posting provided both a kind of deceleration and de-escalation, a chance to pause and reconsider. Rather than an instant reaction, would a built-in delay add a kind of emotional weight to such an action? An interval of a few seconds, even if nominal, might introduce a micro-reflection and suggest an alternative response. As a means of combating the effortless and abstract nature of outrage expression, Rose-Stockwell (2018) suggests a number of humanizing prompts that might be designed into platforms: an “empathetic prompt” that asks whether a user really wants to post hurtful content; an “ideological prompt” that stresses how this post will never be seen by those with opposing viewpoints; and a “public/private prompt” that would allow disagreements to take place between individuals rather than in the pressurized public arena. Such design interventions, while clearly not silver bullet solutions, might contribute in their own small way towards a more civil and less reactive online environment.
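A minimal sketch of what such friction might look like in code is given below, assuming a hypothetical share handler; the delay length, the prompt wording, and the `looks_hurtful` classifier are illustrative placeholders rather than features of any existing platform.

```python
import time
from typing import Callable

EMPATHY_PROMPT = "This post contains language others may find hurtful. Share anyway?"

def share_with_friction(post_text: str,
                        looks_hurtful: Callable[[str], bool],
                        confirm: Callable[[str], bool],
                        publish: Callable[[str], None],
                        delay_seconds: float = 5.0) -> bool:
    """Insert a pause and an empathetic prompt between the click and the share.

    looks_hurtful: placeholder classifier that flags potentially hurtful text.
    confirm: shows a prompt to the user and returns True if they proceed.
    publish: actually posts the text to the network.
    """
    if looks_hurtful(post_text):
        # Empathetic prompt: ask whether the user really wants to post this.
        if not confirm(EMPATHY_PROMPT):
            return False
    # Built-in delay: a few seconds of "oxygen" between reading and posting,
    # giving an impulsive reaction a moment to cool before it goes out.
    time.sleep(delay_seconds)
    publish(post_text)
    return True
```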

Platform analysis: YouTube

YouTube remains a juggernaut of online spaces. Recently, it crossed the threshold of 2 billion logged-in users per month (Saima, 2019). Perhaps even more important for this research project is the time spent by users within this environment. Users spend around 250 million hours on the video sharing platform every day (Saima, 2019). The time “inhabiting” YouTube marks it out as distinct from Facebook, and suggests a different kind of influence over time, something slower and more subtle. Indeed, as will be discussed, radicalized individuals have noted how influential YouTube was in shifting their worldview over longer periods of time, a medial pathway that nudged them towards an angrier and more extremist stance (Roose, 2019). While this is just one highly politicized facet of YouTube, it signals the stakes involved here—not only the anger available to be tapped into, but the influence such an environment might have in shaping the ideologies of its vast population.

One key focus of recent critiques of YouTube has been its recommendation engine (Regner, 2014; Schmitt et al., 2018; Ribeiro et al., 2020). The design of the recommendation system is central to YouTube’s user experience for two reasons. Firstly, it determines the content of each user’s homepage. Upon arriving on the site, each user is presented with rows of recommended videos, with each row representing an interest (e.g. gaming), channel (e.g. the Joe Rogan Experience), or an affiliation (“users who watched X enjoyed Y”). As with similar designs such as Netflix, the YouTube homepage is the first thing that users interact with, and the primary “jumping off” point for determining what to watch.

Secondly, the YouTube recommendation system is crucial because it also determines the related videos appearing in the sidebar next to the currently playing video. By default, the Autoplay feature is on, meaning that these sidebar videos are queued to play automatically after the current video. This design feature means that, even if the user does nothing further, the next video in this queue will play. Even if the Autoplay feature has been manually turned off, this sidebar, with its dozens of large thumbnails, presents the most obvious gateway to further content. With a single click, a user can move onto a video which is related to the one they are currently viewing.

From a design perspective, the homepage and the sidebar form the crucial interfaces into content consumption. Search, while possible, is a manual process that requires more effort and has been deemphasized. Browsing recommended results, with its scrolling and tapping, provides a more frictionless user experience. It is unsurprising, then, that “we’re now seeing more browsing than searching behavior”, stated one YouTube designer (Lewandowski, 2018), “people are choosing to do less work and let us serve them”. This shift has meant an even greater role for the recommendation engine. In theory, users can watch any video on the vast platform; in practice, they are encouraged towards a very specific subset of content. Indeed, YouTube’s Chief Product Officer revealed that recommended videos account for over 70% of watching time on the platform (Solsman, 2018). This is a single algorithmic system that exerts enormous force in determining what kinds of content users are exposed to and what paths they are steered down.

How is this recommendation system designed? In a paper on its high-level workings, YouTube engineers explain that it comprises two stages. In the first stage, “the enormous YouTube corpus is winnowed down to hundreds of videos” that are termed candidates (Covington et al., 2016, p. 192). These candidates are then ranked by a second neural network, and the highest-ranked videos are presented to the user. In this way, the engineers can be “certain that the small number of videos appearing on the device are personalized and engaging for the user” (Covington et al., 2016, p. 192). Based on hundreds of signals, users are presented with content that is attractive by design: hooking into their interests, goals, and beliefs. This recommendation engine is not static, but rather highly dynamic and updated in real-time. A user’s profile incorporates not only their history but also whatever they have just watched. As YouTube’s engineers (Covington et al., 2016, p. 191) explain, it must be “responsive enough to model newly uploaded content as well as the latest actions taken by the user”. As content is consumed, an individual’s interests and ideologies are in turn shaped (Fig. 4).

Fig. 4: Recommendation System.

Diagram from YouTube engineers indicating how the recommendation system works (reproduced with permission).
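The two-stage structure that Covington et al. describe can be paraphrased in schematic Python. The scorer functions below merely stand in for the two neural networks; the real system’s features, models, and parameters are not public, so everything here is an illustrative assumption.

```python
from typing import Callable, List, Sequence

Video = dict          # placeholder: a video record with whatever features exist
UserProfile = dict    # placeholder: watch history, recent actions, context

def recommend(corpus: Sequence[Video],
              user: UserProfile,
              candidate_scorer: Callable[[Video, UserProfile], float],
              ranking_scorer: Callable[[Video, UserProfile], float],
              n_candidates: int = 200,
              n_results: int = 20) -> List[Video]:
    """Two-stage recommendation in the spirit of Covington et al. (2016).

    Stage 1 (candidate generation): winnow the enormous corpus down to a few
    hundred broadly relevant videos.
    Stage 2 (ranking): score that shortlist more finely and surface the top
    handful, optimized for expected engagement (e.g. predicted watch time).
    """
    # Stage 1: coarse retrieval over the full corpus.
    candidates = sorted(corpus, key=lambda v: candidate_scorer(v, user),
                        reverse=True)[:n_candidates]
    # Stage 2: fine-grained ranking of the shortlist.
    ranked = sorted(candidates, key=lambda v: ranking_scorer(v, user),
                    reverse=True)
    return ranked[:n_results]
```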

Of course, these technical explanations remain at a high level. The recommendation system, as a proprietary technology owned and operated by YouTube, will always remain to some extent a black box. Yet even these general principles provide insight into the system’s design. First, the system is designed to promote “engaging” videos. Which videos are most engaging? As one former developer (Chaslot, 2019) explains:

We know that misinformation, rumors, and salacious or divisive content drives significant engagement. Even if a user notices the deceptive nature of the content and flags it, that often happens only after they’ve engaged with it. By then, it’s too late; they have given a positive signal to the algorithm. Now that this content has been favored in some way, it gets boosted, which causes creators to upload more of it. Driven by AI algorithms incentivized to reinforce traits that are positive for engagement, more of that content filters into the recommendation systems. Moreover, as soon as the AI learns how it engaged one person, it can reproduce the same mechanism on thousands of users.

Recommending content based on engagement, then, often means promoting incendiary, controversial, or polarizing content. The closer a video gets to the edge of what’s allowed under YouTube’s policy, the more engagement it gets (Maack, 2019). In other words, as even Zuckerberg (2018) has admitted, borderline content is more engaging. Because of this dynamic, designing for engagement goes beyond mere customer satisfaction to deeply influence the kind of content that is promoted. As the developer quoted above suggests, the system’s design establishes a series of powerful feedback loops. Creators produce more of this toxic yet high-performing content, and the system recommends it more often to users, not only individually but at scale.

Secondly, the system is designed to be responsive, to be dynamic enough to generate new recommendations based on what was last viewed. The design challenge, as the engineers explain (Covington et al., 2016, p. 194), is to predict “the next watched video”. While again high-level, this creates a design with a degree of self-similarity, promoting more of the same kind of content. And yet even when this content stays within the same topic, it is typically more intense, more extreme. “However extreme your views, you’re never hardcore enough for YouTube”, attests one article (Naughton, 2018). Based on the strong performance of borderline content discussed earlier, YouTube’s recommendations often move from mainstream content to more incendiary media, or politically from more centrist views to right and even far-right ideologies.

The dynamism designed into the recommendation system establishes a vector, a gradual movement as each video is completed. Based on the current values designed into the system, users can be suggested material that progressively becomes more controversial, more political, more outrage-inducing, and in some cases, more explicitly racist, sexist, or xenophobic (O’Callaghan et al., 2015). Indeed, one analysis (Munn, 2019) suggests that YouTube can form a key part of an “alt-right pipeline”: users are incrementally nudged down a medial pathway towards more far-right content, from anti-SJW videos which demean so-called “social justice warriors” to gaming-related misogyny, conspiracy theories, the white supremacism of “racial realism” and thinly veiled anti-Semitism. In a recent paper analyzing 330,925 videos across 349 channels, Ribeiro et al. (2020, p. 131) found that “users consistently migrate from milder to more extreme content”, shifting from so-called Alt-Lite material to more strident Alt-Right channels.

What is particularly powerful about this design is its automatic and step-wise quality. Users do not consciously have to select the next video, nor jump suddenly into extreme material. Instead, there is a slow progression, allowing users to acclimate to these views before smoothly progressing to the next step in their journey. Recommendations are “the computational exploitation of a natural human desire: to look ‘behind the curtain’, to dig deeper into something that engages us”, observes sociologist Zeynep Tufekci (2018): “As we click and click, we are carried along by the exciting sensation of uncovering more secrets and deeper truths. YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales”. At the far end of this journey is an angry and radicalized individual, a figure that has increasingly emerged over the last few years, from Christchurch in New Zealand to El Paso, Texas and Poway, California in the United States. Yet along with these extreme examples, equally troubling is the thought of a broader, more unseen population of users who are gradually being exposed to more hateful material.

The result of these design choices is that the recommendation system emerges as a hate-inducing architecture. From a metrics point of view, the system is successful, delivering “engaging” content while ramping up view counts and watch time on the platform. And yet to do so, the system appears to consistently suggest divisive, untrue, or generally incendiary content. “YouTube drives people to the Internet’s darkest corners” warns one article (Nicas, 2018). In this sense, the design of the current recommendation system serves the company well, but not necessarily individual users or online communities, particularly those that are already marginalized (Fig. 5).

Fig. 5: YouTube Recommendations.

Screenshots showing anti-SJW (social justice warrior) and anti-LGBTQ+ recommendations in response to viewing a centrist-right video by popular talk show host Joe Rogan.

It should be noted that one recent working paper has questioned the role of the recommendation system in hate speech and far-right indoctrination. Munger and Phillips (2019) argue that the central role given to the recommendation engine is overplayed, and suggest instead a supply and demand explanation. For the duo, YouTube lowers the barriers of media production to almost zero, offers easy distribution online through hosting and sharing, and incentivizes content creation via monetization. These conditions have led to a diversification of channels that politically stretch beyond the mainstream center-left/center-right poles. As Munger and Phillips argue (2019, p. 6): “these aspects of YouTube allow new communities that cater increasingly well to audiences’ ideas to form”. The YouTube platform allows for the proliferation of niche media and a greater variety of alt-right and far-right material. The duo essentially argue that a radicalized audience already existed; it was simply constrained by too small a supply of radical material.

On the one hand, the report is a productive reminder that social media is a sociotechnical system. Technologies are never purely determinist and any analysis should strive to account for the political and cultural background of users, their relations to others in the world, and the racial and gendered worldviews that “link” content together, even without an engine or automated system. As Rebecca Lewis (2018) has shown, the network of alt-right influencers on YouTube is a social network in the conventional sense—a web of individuals who share particular ideologies, use common phrases, and even recommend each other’s channels organically through formats like the talk show.

On the other hand, however, Munger and Phillips are using a rather conventional economic model to understand online environments. Their analysis presupposes an offline, radicalized audience with their minds already made up. In doing so, it fails to register the psychological and cognitive force exerted by platform environments, a force potentially magnified both by time spent consuming media and by the young age of particular users. Contrary to the duo’s straw man caricature of such influence as a “zombie bite”, this force is not an instant contagion, but something far more drawn out and subtle, a quiet influence that alters individuals as they inhabit online spaces over the months and years. As Wendy Chun (2017, p. x) observes, media exerts force over a “creepier, slower, more unnerving time”, effectively “disappearing from consciousness”. Media derives its power precisely by catering to the curiosities and desires of the user rather than overpowering them.

Along with the recommendation engine, another problematic design element identified in this analysis is YouTube’s comment system. For years, YouTube has consistently held a reputation for being an environment with some of the most toxic and vitriolic comments online (Tait, 2016). Even those used to online antagonism admitted that “you will see racist, sexist, homophobic, ignorant, and/or horrible comments on virtually every popular post”, and yet the same post from 2013 naively claims that the problem will soon be solved with new technical features (Rose, 2013). Far from being solved, the years since have seen toxic communication on the platform proliferate and take on concerning new forms. While the platform has been regarded as a “cesspool” for over a decade, the latest indictment has been the large number of predatory and sexual comments left on videos of minors (Alexander, 2018).

Why is YouTube so toxic, so angry? One common explanation is that YouTube is simply one of the largest platforms. For some, its extremely broad demographic explains its trend towards the lowest common denominator in terms of intelligent, relevant commentary. Yet while the platform certainly has a massive user-base, there also seem to be clear design decisions exacerbating these toxic comments. “Comments are surely affected by who writes them”, admits one analysis (Polymatter, 2016), “but how a comment system is designed greatly affects what is written”. For instance, YouTube comments can be upvoted or downvoted, but downvoting doesn’t lower the number of upvotes. This suggests a design logic that favors any kind of engagement, whether positive or negative. The result is that provocative, controversial, or generally polarizing comments seem to appear towards the top of the page on every video (Fig. 6).

Fig. 6: Toxic Comment.

Screenshot showing just one example from the many toxic comments on YouTube.
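The asymmetry described above can be illustrated with a toy sorting function, contrasted with a net-score alternative closer to Reddit’s approach. The field names and numbers are invented for the example and do not describe YouTube’s actual ranking code.

```python
from typing import List

def upvote_only_rank(comments: List[dict]) -> List[dict]:
    """Sort by upvotes alone: downvotes never subtract, so any engagement,
    positive or negative, pushes a comment up the page."""
    return sorted(comments, key=lambda c: c["upvotes"], reverse=True)

def net_score_rank(comments: List[dict]) -> List[dict]:
    """Alternative: downvotes count against a comment, so content the
    community rejects sinks rather than rises."""
    return sorted(comments, key=lambda c: c["upvotes"] - c["downvotes"], reverse=True)

comments = [
    {"text": "Thoughtful reply", "upvotes": 40, "downvotes": 2},
    {"text": "Provocative insult", "upvotes": 55, "downvotes": 300},
]
print([c["text"] for c in upvote_only_rank(comments)])  # ['Provocative insult', 'Thoughtful reply']
print([c["text"] for c in net_score_rank(comments)])    # ['Thoughtful reply', 'Provocative insult']
```

In the upvote-only ordering, the provocative comment sits at the top despite being heavily downvoted; in the net-score ordering it sinks.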

The design choices built into both YouTube’s recommendation engine and its comment system might be understood as natural outcomes of an overarching set of company values. As recent articles have shown (Bergen, 2019), YouTube has purposefully ignored warnings of its toxicity for years—even from its own employees—in its pursuit of one value: engagement. Of course, this should come as no surprise for a publicly listed company driven by shareholder values and the broader dictates of capitalism. However, it raises the question of what values are prioritized within online environments and how design supports them. Rather than grand vision statements or aspirational company values, what are the incentives built into platforms at the level of design: features, metrics, interfaces, and affordances?

Echoing this low-level design influence, the community manager I interviewed underlined how the typical all-consuming focus on likes and shares could be damaging. A key part of a community manager’s role is to foster healthy relations between members, to encourage beneficial content, and to block, delete or demote toxic posts—in short, to facilitate “more of the good and less of the corrosive”. But her fellow community managers often speak of “algorithm chasing”, where they attempt to combat or counteract the features built into the systems they use. There are often “competing logics” on a platform, she explained, an opposition between the value of creating a cohesive and civil community, and the values seen as necessary for platform growth and revenue such as expanding a user-base, extending use times, and attracting advertisers. Social media and community are often an awkward fit, and “marketing efficiencies are not social efficiencies”. On YouTube specifically, these designs privilege engagement above all else, resulting in a community that can be toxic and angry. Yet design might be rethought to prioritize an alternative set of values.

How might design contribute to a calmer, more considerate and more inclusive environment? One concrete intervention would be a redesigned recommendation system. Programmer and activist Francis Irving (2018) has observed that the current system described earlier is both populist, prioritizing the popular, and short-term, using criteria to find the videos a user will watch the longest. What kind of design interventions would make it more conducive to user well-being? For one, the system could be intentionally broadened, breaking its hyper-focused bubble and instead providing access points into a range of communities and a diversity of political views—even those that run counter to the user’s own. Of course, other possibilities abound. Irving (2018) suggests one playful alternative: ask whether a YouTube user is more or less happy 6 months later, and use this signal as a way to improve video recommendations. As another option, Irving (2018) speculates about removing automated recommendations altogether and moving to a more user-centered recommendation model. Like film or music, such a model would elevate tastemakers who could curate great “playlists” of content.
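One way to picture such a “broadened” recommender is a simple re-ranking pass that rotates across communities or topics so that no single bubble fills the page. The sketch below assumes each video carries a hypothetical 'community' label; it is a thought experiment under those assumptions, not a description of any deployed system.

```python
from collections import defaultdict, deque
from typing import List

def broadened_recommendations(ranked_videos: List[dict], n_results: int = 20) -> List[dict]:
    """Re-rank an engagement-ordered list so results rotate across communities.

    Each video is assumed to carry a 'community' label (e.g. channel topic or
    cluster id). Instead of letting one hyper-focused bubble fill the page,
    the best remaining video from each community is taken in turn.
    """
    buckets = defaultdict(deque)
    for video in ranked_videos:            # preserve within-community order
        buckets[video["community"]].append(video)

    results = []
    while buckets and len(results) < n_results:
        for community in list(buckets):    # round-robin across communities
            results.append(buckets[community].popleft())
            if not buckets[community]:
                del buckets[community]
            if len(results) >= n_results:
                break
    return results
```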

Secondly, the comment system might be rethought entirely. It is clear that the current upvote/downvote binary is not working, rewarding quick, immediate comments that are provocative—at best flippant, at worst hateful or degrading. It also seems apparent that the relative anonymity of commenters and the lack of any concept of reputation mean that there is no real disincentive for consistently generating toxic comments. As one analysis noted (Polymatter, 2016): “Each comment stands on its own, attached to nothing, bringing out the worst in every commenter”. Introducing a reputation system into this environment would be one concrete design intervention. Reddit, for example, features a Karma system that rewards high-quality comments while docking points for comments that breach community guidelines. Such a system, while naturally not perfect, significantly “thickens” the identity of a user. Each user has a history of contributions and comments that persists over time. Based on this past behavior, they have a combined score that signals whether or not the community has found their contributions helpful or beneficial. Even if this score is mainly symbolic, these reputation systems hook into offline conventions of social standing within a community, introducing a degree of accountability.
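A minimal sketch of such a reputation ledger, loosely in the spirit of Reddit’s Karma, might look as follows; the point values, penalty, and threshold are invented for illustration and do not reflect any platform’s actual scoring.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class UserReputation:
    """Toy reputation ledger: contributions persist and accumulate over time,
    'thickening' an otherwise anonymous commenter into an accountable identity."""
    username: str
    karma: int = 0
    history: List[Tuple[str, int]] = field(default_factory=list)

    def record_comment(self, comment_id: str, upvotes: int, downvotes: int,
                       violates_guidelines: bool = False) -> None:
        delta = upvotes - downvotes
        if violates_guidelines:
            delta -= 50          # illustrative penalty for rule-breaking comments
        self.karma += delta
        self.history.append((comment_id, delta))

    def in_good_standing(self, threshold: int = 0) -> bool:
        # A persistent, community-visible signal of past behaviour.
        return self.karma >= threshold
```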

Conclusion

This article has asked how design might be contributing to polarizing, impulsive, or antagonistic behaviors. After selecting two global platforms, it approached the problem of online hate from a design perspective, identifying key affordances and structures, investigating how they function, and showing how they facilitate particular practices while discouraging others.

Based on engagement, Facebook’s Feed drives clicks and views, but also privileges incendiary content, setting up a stimulus–response loop where outrage expression becomes easier and even normalized. Alternative ways of prioritizing content should be explored to decrease this kind of stimuli and in general to de-escalate the user experience, providing a slower, calmer and more civil environment. In terms of user responses to this content, design interventions might be used to question, delay, or limit the scope of hateful comments. YouTube’s recommendation system is at the heart of the platform’s design, exerting enormous influence on viewing and consumption. The system’s design also privileges engagement, creating an environment criticized for leading users towards more extreme content. Both this recommendation system and YouTube’s infamous comment system need to be thoroughly redesigned, and the preceding analysis has laid out several suggestions to that end.

How feasible are such suggestions? Would these platforms realistically ever be redesigned? The prime directive of engagement, for example, is driven by monetization. It befits a corporation aiming to accelerate growth, stimulate ad revenue, and generate profits for its shareholders. After all, these platforms are a new “space of accumulation” (Fuchs, 2011), with a business model predicated on the production and extraction of data as a form of capital (Sadowski, 2019). And yet even from a purely economic perspective, engagement at any cost has been criticized. This incentive, designed deeply into the platform’s interfaces and affordances, seems to encourage profiting from hate speech and other toxic communication, with both users and advertisers leaving the platform as a result (Hern, 2020). This suggests that companies like Facebook or Google—or the future platforms that will follow them—might also be searching for alternate ways of designing their products and services.

Regardless of the likelihood of a redesign in the present, one strength of a design-focused approach is that it reminds us that redesign is possible. Despite their maturity, these objects are not fixed but fluid. Each platform is the result of a careful set of decisions made over time. Each design element had to be conceived, prototyped, coded, tested, and launched. And what has been made can be remade. In this way, design alerts us to alternatives, to other ways of keeping us informed, structuring sociality, and valuing the people and things surrounding us. It allows us to imagine a post-Facebook/post-YouTube media environment with a different set of imperatives. Design gives us permission to speculate, to ask “what if?” (Dunne and Raby, 2014, p. 141). When our dominant technical systems seem so given, this ability to speculate about other designs becomes increasingly important.

A design approach also highlights the influence design exerts on platforms. Design privileges certain forms of content, enables particular kinds of relations, and encourages specific forms of participation. For this reason, design proved to be a productive lens for understanding toxic communication. Of course, this study also had its limits. In particular, the degree to which design may influence individuals—and how that influence might be modulated by age, gender, class, or culture—has yet to be precisely determined. One path for future research would be to take up this challenge, producing a more quantitative analysis of design influence. Another path would be to apply this approach to other platforms: Reddit, TikTok, 4chan, and so on. Yet if this single study inevitably has constraints, it reaffirms the key role that design plays within online environments. As everyday life increasingly migrates online, platforms become crucial mediators for communication and key environments for inhabitation. These are spaces where time is spent, identities are forged, and ideologies are shaped. Understanding how these spaces might be redesigned in order to discourage hate speech and encourage civility and inclusivity remains an urgently needed task.