On 28 January 2022, China’s State Internet Information Office released the “Provisions on the Administration of Deep Synthesis Internet Information Services (Draft for solicitation of comments)”1. The proposal (henceforth the Provisions) is a draft of regulations for deep synthesis technology, an umbrella term covering “text, images, audio, video, virtual scenes, or other information” created with generative models (Article 2). Also known as synthetically generated content, this includes deepfakes (photos, videos or audio that depict a person doing or saying things that they have not been recorded doing or saying), generated texts such as those produced by OpenAI’s GPT-3, advanced image enhancement techniques and the construction of virtual ‘scenes’, akin to the immersive virtual environments described in the novel Ready Player One. Once enacted, the Provisions will represent a leap past the USA and the EU in deepfake regulation and a considerable advance in the Chinese government’s efforts to control the content of its domestic Internet and, more broadly, to maintain social stability.

Effect of the Provisions

The Provisions appear to be an elaboration on the 2019 “Regulations on the Administration of Online Audio and Video Information Services,” which broadly banned the use of machine-generated images, audio and video to create or spread “rumours”2,3. The new regulations are aimed at deep synthesis service providers and emphasize cybersecurity, real-name verification of users, data management, marking of synthetic content to alert viewers and “dispelling rumours”1. They expand the Chinese government’s efforts to prevent social and political disruption by increasing its control of the Internet. These efforts depend on the actions of tech platforms and companies. Article 5 encourages industry organizations to establish industry standards and self-discipline systems while “accept[ing] societal oversight”. This oversight is exercised through state regulations but requires industry cooperation.

This is an established trend: the government has been increasingly relying on tech companies to enforce new Internet regulations in service of the Chinese Communist Party’s (CCP) vision of a stable, prosperous society, with consequences for those that do not comply. For instance, after Jack Ma, co-founder and former executive chairman of Alibaba, criticized Chinese regulators for stifling innovation, the previously approved initial public offering of Alibaba’s financial technology affiliate Ant Group was scuttled, Ant was forced to restructure, and Alibaba itself was fined under new anti-monopoly rules. Ma himself vanished from the public eye for months4. In response to the fine, Alibaba acknowledged that it needed to shoulder more responsibility for China’s social and economic development5, showing that technology companies understand what is expected of them: a commitment to the Party’s image of a stable society of “common prosperity,” with severe consequences for those who do not contribute. Shortly after the IPO cancellation, Alibaba and Tencent promised to crack down on overwork culture and invested 100 billion yuan (US$15 billion) to support common prosperity, which Xi Jinping has emphasized as a way to ensure the Party’s longevity6.

Technological regulations are another means to support this drive. One example is real-name verification requirements, which have expanded in recent years: instant messaging services such as WeChat have required users to provide real names since 20147, and other major sites such as Weibo, Zhihu and Baidu followed in 2017 as a result of a law banning anonymous online commenting8. The synthetic content Provisions recruit technology companies to a new front in the CCP’s offensive. They would require all synthetic content to be labelled and would ban a wide swathe of supposedly destabilizing content, ranging from incitement to subversion, to pornography, to intellectual property violations1.

Automatically generated, highly convincing mis- and disinformation and deepfake pornography are among the main concerns raised by the increasing availability of generative technologies. The Provisions are the most far-reaching regulatory response to these concerns so far. At least two issues will determine their impact.

First, some technical problems have yet to be solved. One of the central requirements of the draft is that synthetic content produced by deep synthesis services must be labelled, yet how to ensure that such labels are created and preserved is unclear. Even whole-frame watermarks can be removed by re-encoding or by using another AI system, and metadata or accompanying documentation can be altered or omitted9.
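As a minimal illustration of why label preservation is hard, the Python sketch below (which assumes the Pillow imaging library; the “SyntheticContent” tag name is hypothetical, not part of any standard) shows that a provenance label stored only in an image’s metadata is silently lost after an ordinary re-encode; a robust labelling scheme would need marks that survive such transformations.

```python
# Minimal sketch, assuming the Pillow imaging library (pip install Pillow).
# The "SyntheticContent" tag name is hypothetical, not part of any standard.
from PIL import Image, PngImagePlugin

# 1. Save a placeholder "synthetic" image with a provenance label in its PNG metadata.
img = Image.new("RGB", (64, 64), color="gray")
meta = PngImagePlugin.PngInfo()
meta.add_text("SyntheticContent", "generated-by-model-X")
img.save("labelled.png", pnginfo=meta)

# 2. The label is readable when the PNG is reopened.
print(Image.open("labelled.png").text)  # {'SyntheticContent': 'generated-by-model-X'}

# 3. A routine re-encode to JPEG (as happens when content is re-uploaded or converted)
#    silently drops the metadata, and with it the label.
Image.open("labelled.png").convert("RGB").save("reencoded.jpg", quality=90)
print(getattr(Image.open("reencoded.jpg"), "text", {}))  # {} -- the label is gone
```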

Second, controlling the spread of deep synthesis content is intrinsically difficult. Real-name verification laws can enable authorities to track the sources and spreaders of unlabelled or misleading deep synthesis content. However, deep synthesis service providers are also instructed to immediately stop transmitting unlabelled information and to institute procedures for dispelling rumours to combat mis- and disinformation (Articles 15 and 17). This is challenging because, once a piece of content has been created, it can be decoupled from the service on which it was created and spread independently of it. Videos can be re-uploaded, audio re-recorded and images screenshotted, removing them from the originator’s control. And, as the adage goes, the Internet is forever: once information has spread online, it is extremely difficult to erase it completely. In China, as anywhere else, content may be removed by censorship, but collective memory may persist. For instance, several weeks into the Shanghai COVID-19 lockdown that began in March 2022, a video called “四月之声” (Voices of April), which chronicled events during the lockdown through audio recordings, began spreading on Weibo, reaching millions through a relay of re-uploads and copies that kept ahead of the censors for some hours. Although the copies were eventually taken down, the dead links remained as a memorial to what had happened10. As part of the censorship effort, cleanup notices were issued to all platforms requiring the total scrubbing of content related to Voices of April11.

Voices of April is not synthetic content, but it exemplifies how platforms where content is distributed (not just those where synthetic content is created) are expected to cooperate in information-quashing efforts. This mandatory, large-scale coordination shows the degree of responsibility that the government places on platforms and service providers to maintain the ‘hygiene’ of the Chinese Internet. In the September 2021 “Opinions on Further Intensifying Website Platforms’ Responsibility for Information Content,” the State Internet Information Office tasks platforms with preventing the creation and spread of “illegal” and “negative” information12. This burdens platforms and deep synthesis service providers with demanding tasks and implies a corresponding degree of government oversight. Both the Opinions and the Provisions state that platforms must maintain the “correct” values or political orientation, which implies that the government will define what counts as “correct” (likely in support of Xi Jinping Thought and common prosperity). Although these regulations are new, government efforts to control the views expressed on the Internet have a long history in China.

The promise of social stability

The technical and coordination difficulties just discussed are bound up with the imperative to maintain the “correct political orientation” in the media, which stems from “Document 9,” a document circulated secretly in April 2013, before the Third Plenum of the Eighteenth Central Committee. The document warned against the “infiltration” of Western ideas such as constitutionalism and civil society, and it was followed by a crackdown on human rights activists, media outlets and dissident academics13, as well as increased efforts to block Western websites14. Since then, more sites have been blocked, VPNs have been removed from app stores15 and real-name verification requirements have expanded8.

A frequent justification for controlling the Internet is that China needs social stability to develop16, implying that an open Internet ecosystem would cause instability. Looking at incidents such as the 6 January 2021 attack on the US Capitol, which was fuelled by misinformation circulated on social media, it is easy to see why the Chinese government perceives an uncontrolled Internet as a threat and why it takes so seriously its efforts to seal the Chinese infosphere into an impenetrable bubble. Deep synthesis services add a new destabilizing force: false or misleading information can be generated in massive quantities and distributed rapidly across the Internet to millions of users. A single fake post can be censored or officially discredited. However, a hypothetical fake conspiracy theory supported by deepfaked photographs, videos and documents may not be countered so easily; even when content is removed from the Internet, it lingers in people’s minds. For a government that relies on being able to control what information is available to its citizens, synthetic content represents an existential threat.

Paradoxically, deepfakes, which can be generated and spread with ease by almost anyone, are politically more threatening than reliable news, which can be blocked at its source more easily. This helps to explain why the Chinese government has moved to regulate deep synthesis technology more quickly and thoroughly than the United States or the EU. The Provisions will likely be revised lightly and approved by August 2022, since 4–6 months is a typical timeframe for Internet-related regulations in China17, a much faster and more comprehensive process than anything in the USA or the EU. Only a handful of American states have issued regulations for deep synthesis technology, and those focus solely on deepfake pornography or deepfakes intended to influence elections18. The First Amendment protection of freedom of speech is likely to hinder any radical regulation of deepfakes, as the US Supreme Court has ruled that non-libellous lies are constitutionally protected and that “The remedy for speech that is false is speech that is true”19, not prohibition of the false speech, representing a fundamental difference from the approach adopted by China’s government. The “DEEP FAKES Accountability Act,” which has been under consideration in the US Congress since 2019, would impose labelling requirements similar to those of the Chinese draft Provisions and establish mechanisms for redress9,20. However, it is aimed mainly at preventing “unauthorized digital recreations of people” on the grounds that those constitute unlawful impersonation (which includes generated pornography), and it has been criticized as unenforceable, partly because of the difficulty of tracing and attribution9. In the USA, most rules on synthesized media have instead come from individual platforms (including Facebook and Twitter), which have issued limited policies stating that they will label or remove content that violates their rules19. The EU has moved to require deepfake labelling from platforms through amendments to the Digital Services Act, which would take effect in 202321. However, it does not address generated pornography and, like the US regulations, it does not target deep synthesis service providers. The assumption seems to be that platforms will play cat-and-mouse with anonymous bad actors who distribute deepfakes, detecting and labelling the content as it spreads. The Chinese regulations, by contrast, address the sources of the generated content.

Beyond deepfakes

The fact that China’s proposed regulations go beyond deepfakes to include generated text, image enhancement and virtual scenes indicates that the government is thinking broadly about how emerging technology could affect the stability of its regime (even anticipating future developments in virtual reality technology). It also reflects how much more the Chinese government depends on controlling this kind of media than the liberal governments of the USA and Europe do. Because Chinese authorities have more control over what digital services people can access, such a strategy is also more feasible for them than for US or European regulators. These fundamental differences in governance structures both contribute to this dependency and shape enforcement abilities. As a “rule by law” state, where law is used as a tool of political control, China has far greater enforcement capabilities than “rule of law” jurisdictions such as the USA or the EU, where the law is intended to stand above politics22. Compliance is expected, and punishment serves as a deterrent for peer companies and individuals, whereas enforcement in the USA and the EU would likely be less capricious but also less severe. Furthermore, a single-party system has an advantage in legislative agility, as shown by the fact that these Provisions have arrived well before anything comparable in the USA or the EU.

Although the pre-emptive nature of these regulations could herald what has been termed a “Beijing Effect” on forthcoming regulations in the USA and the EU, the siloed nature of China’s Internet may stymie such an effect unless individual Western companies are inspired to take action against synthetic content in order to do business in China. Still, the Provisions are a concrete example of what is expected from technology companies as the CCP works toward its dream of a stable society of “common prosperity”, and they reflect a government commitment to maintaining and tightening control over an ever-more-siloed Internet. Indeed, the recent “2022 Special Action on the Comprehensive Governance of Algorithms”23 reinforces this trend.

At the same time, the Provisions demonstrate a prescient understanding of how new technologies could threaten social stability and thus the regime’s power. The Chinese government is likely to continue to exercise such foresight in regulating new technologies, with possible implications for personal freedoms online. The impact of the deep synthesis Provisions in particular, however, will depend on solving the technical problems of labelling and on ensuring platform cooperation and coordination. Neither is likely to happen seamlessly, but both represent new frontiers in China’s technological governance.