Whether in China, the USA, Japan, the European Union, South Korea, Taiwan, the UK, Singapore, Canada, India or Costa Rica — and surely many others — governments have concluded that semiconductors are strategic technologies: drivers of technological progress, economic prosperity and military power. This Comment explores the origins and nature of the current ‘chip war’ — a struggle between countries for market share and technological leadership. Researchers must accustom themselves to a new era in which semiconductor research and development will be highly sensitive and politicized.

In some ways, it is no surprise that governments see semiconductors as strategic technologies. The first integrated circuits were invented in the late 1950s by two companies — Fairchild Semiconductor and Texas Instruments — that provided technology to the US military during the arms race of the Cold War. In the 1960s, the first major uses of chips included guidance computers on the LGM-30 Minuteman nuclear missile and the Apollo spacecraft. Chips were a subject of Cold War spy-craft, as Western countries banned the transfer of advanced chips and chipmaking machines to the USSR, and Soviet spies mounted worldwide efforts to acquire banned technology via front companies in Austria and South Africa.

In the first decade after integrated circuits were invented, governments were the primary buyers, for space and defence systems. Governments remain important supporters of semiconductor research, via funding for university-based programs and agencies such as the US Defense Advanced Research Projects Agency (DARPA), which has funded crucial research in areas such as electronic design automation tools and FinFET transistor development.

Today, however, an estimated 98% of semiconductors are procured not for defence but for civilian applications such as smartphones, cars, computers and data centres1. Why, then, are governments interested in semiconductors? First, the economic importance of semiconductors grows each year as more devices incorporate larger numbers of chips. A car designed several decades ago might have had only a couple of semiconductors inside; today, a new car can contain a thousand chips. As the pandemic-era shortages demonstrated, the economy’s reliance on chips creates vulnerabilities when supplies are disrupted. The car industry alone lost several hundred billion dollars of sales owing to chip shortages during the COVID-19 pandemic2.

Increasing dependence on chips for a wide variety of devices has coincided with the concentration of chip manufacturing. Owing to the enormous economies of scale in semiconductor fabrication, the industry is defined today by a relatively small number of firms that produce most key types of chip. Only three companies produce cutting-edge DRAM chips, for example. Taiwan’s TSMC is estimated to produce roughly 90% of the most advanced processor chips (defined as 7 nm or below)3. Production of many types of analogue and mixed-signal semiconductor is similarly concentrated. One reason that governments are trying to reshape semiconductor supply chains is to reduce this concentration, or at least to ensure that production is concentrated in friendly countries.

A second reason that governments are focusing on semiconductors is that they believe semiconductors will be crucial to the future balance of military power. Industry officials often argue that chips produced by a given firm are used solely for civilian purposes. It is true that most chips produced end up in civilian applications. It is also true that many chips used in military systems are less advanced (although militaries use smaller volumes of high-end logic and memory chips, as well as specialized sensors).

Governments respond by making several arguments. First, they note, it is widely accepted in defence and intelligence circles that a key trend in modern warfare is the application of artificial intelligence. For example, militaries already use computer vision systems to identify threats or track targets. Recent advances in large language models will also be rapidly applied to defence and intelligence uses. Open-source analyses of government procurement find that both the Chinese and US militaries are investing heavily in artificial intelligence (AI)-enabled systems4. Intelligence agencies are almost certainly doing the same.

AI, meanwhile, has only been possible because of tremendous advances in semiconductors. Recent empirical research has found that cutting-edge AI models have been trained on quantities of data that grow exponentially, doubling every 6–9 months5. Demand for the processing power to handle training data has therefore been growing far more rapidly than Moore’s law. The fact that Nvidia GPUs — the chips most used in training large language models — have been in severe shortage in recent months is cited by some officials as evidence that processing power will be a critical limiting factor on the ability of companies and countries to develop advanced AI systems6.
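To see why such doubling outpaces Moore’s law, a back-of-the-envelope comparison helps (this calculation is illustrative, not drawn from the cited studies, and assumes a roughly 24-month doubling period for Moore’s law). A quantity that doubles every $d$ months grows by a factor of $2^{12/d}$ per year:

$$
2^{12/6} = 4\times \ (\text{6-month doubling}), \qquad
2^{12/9} \approx 2.5\times \ (\text{9-month doubling}), \qquad
2^{12/24} \approx 1.4\times \ (\text{Moore's law}).
$$

Compounded over four years, a 6–9-month doubling time implies roughly 40- to 250-fold growth in training inputs, against only about 4-fold growth in transistor density.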

A second justification for seeing semiconductors as relevant to defence and intelligence focuses not solely on the GPUs needed for AI training but on the broad applicability of semiconductors to defence systems. From the end of the Cold War until recently, almost all advanced semiconductor design and manufacturing capabilities were in the USA, Europe, Japan, South Korea or Taiwan. Only these countries had the workforces and expertise needed to deploy sophisticated chips in military systems. In addition, differences between these countries’ semiconductor industries created distinct pools of expertise to draw on. For example, Korea has an especially strong memory workforce, whereas US and European firms retain strong market positions in analogue and mixed-signal chips. In the past decade, however, China has actively tried to develop more advanced chipmaking capabilities, just as geopolitical tension between China and other major chip producers such as Taiwan, Japan, South Korea and the USA has escalated. This has raised the prospect that the USA and its partners and allies might no longer have unique advantages in cutting-edge chipmaking and that their adversaries might use chip technologies to enable their own defence systems. (The USA has formal military alliances with Japan and South Korea, and an informal but substantial partnership with Taiwan.) Now, competing military blocs are trying to build advanced semiconductor industries.

Meanwhile, governments in the Asia-Pacific region are in an arms race. China has markedly expanded its military spending in recent years, bringing online new systems such as aircraft carriers and hypersonic missiles7. Japan plans to double its defence budget as a share of GDP8. South Korea has built ballistic missile submarines9. Taiwan has tripled the length of military conscription for young men10. Australia, the USA and the UK have forged the AUKUS partnership to jointly build nuclear-powered submarines11. So long as governments in the region are fixated on building up their military capabilities, they are likely to remain focused on controlling and developing leadership positions in key component technologies, including chips.

This competition has led governments to pour money into their chip industries. China set off the current round of chip subsidies in 2014 with the launch of its National Integrated Circuit Industry Investment Fund to bankroll semiconductor investment, and soon after identified chips as key technologies of its ‘Made in China 2025’ industrial policy12. In response, Japan has offered funding for foreign chip firms such as TSMC and Micron to build new facilities or upgrade existing ones, as well as establishing a new chipmaker, Rapidus, which aims to develop cutting-edge chip manufacturing capabilities. The USA and the EU have each launched ‘Chips Acts’ to provide tens of billions of dollars in funding to attract more chip investment.

In addition, governments have imposed new restrictions. The highest-profile examples are the US decision to ban the transfer of cutting-edge GPUs to China and the trilateral Japan–Netherlands–USA deal to restrict China’s access to advanced chipmaking tools such as etch, deposition and lithography equipment. Taiwan and Korea have taken steps to stop the leakage of semiconductor expertise to China by preventing certain employees from leaving to work at Chinese firms. The USA has made it illegal for citizens and green-card holders to work with certain Chinese firms. Several governments have encouraged universities to assess more carefully the impact of research partnerships with Chinese universities.

The ‘chip war’ is reshaping the industry by bringing political and security concerns to the forefront of the chip supply chain and semiconductor research. For researchers and companies, this presents complex balancing acts. On the one hand, all of humanity benefits from further advances in chips and the many technologies that they enable. On the other, the research and production of chips are likely to become even more fragmented under the pressure of these political disputes. This is an unexpected development for many researchers, but it is also a return to the historical norm. The first chips emerged out of the arms race and space race of the Cold War era. Then, as now, semiconductors were seen not solely as an interesting or important technology, but also as key to the balance of power on the world stage.