The integrated circuit is today synonymous with the concept of technological progress. In the seven decades since the invention of the transistor at Bell Labs, relentless progress in the development of semiconductor devices — Moore’s law — has been achieved despite regular warnings from industry observers about impending limits. Here, drawing on technical and organizational archival work and oral histories, we argue that the current technological and structural challenges facing the industry are unprecedented and undermine the incentives for continued collective action in research and development, which has underpinned the past 50 years of transformational worldwide economic growth and social advance. We conclude by arguing that the lack of private incentives, due in part to a splintering of technology trajectories and the short-term private profitability of many of these new splinters, creates a case for greatly increased public funding and the need for leadership beyond traditional stakeholders.
The integrated circuit provides a superb example of what economists call a ‘general-purpose technology’ — a technology that makes possible other important technologies, products and services, which in turn contribute greatly to economic growth and welfare1. Owing to its long-term systematic declines in cost and lockstep increases in performance — that is, Moore’s law — the integrated circuit has made a dazzling array of new or advanced products possible, from intercontinental ballistic missiles to global environmental monitoring systems and from smart phones to medical implants. Indeed, so fecund and important is the integrated circuit that economists identified it as the “foundation for the American growth resurgence” in the 1990s and the leading source of worldwide economic growth during that same period2,3. Moreover, so cheap, small, fast, powerful and abundant have chips become, and so numerous are their applications, that many of their social benefits transcend economic measurement, simply raising the quality of life in ways that standard economic metrics struggle to capture.
Before Moore’s law
Ironically, this tremendous advance is based on the 1947 invention of a technology, the point-contact transistor, that was so feeble in initial performance and uncooperative in operation that it barely caught the notice of the New York Times4. Although Bell Labs’ point-contact transistor found its way into a handful of early applications — mostly within the Bell system itself — it ultimately proved to be too unstable and difficult to manufacture to be widely commercialized. Instead, the bipolar junction transistor (BJT), conceived and patented by Bell Labs’ William Shockley in 1948, emerged as the first transistor broadly adopted and manufactured by firms in the emergent semiconductor industry. The rapid diffusion of the transistor in the 1950s stemmed from an agreement reached between AT&T and the Antitrust Division of the US Department of Justice (with the Pentagon’s blessing) that forbade AT&T to manufacture transistors for applications outside its own system. The agreement also required the company to openly license its transistor patents and to hold a three-part series of conferences, beginning in 1951, which ended with licensees’ acquisition of what was colloquially called “Ma Bell’s Cookbook”. It contained papers, patents and descriptions of what Bell Labs and its co-owner Western Electric Manufacturing Company explicitly knew about transistors.
The Bell Labs transistor conferences served to diffuse knowledge about the fundamental science of semiconductors, transistor design, operation and manufacture, and modes of transistor application. Cold War military contractors and their researchers were particularly prominent among the dozens of initial transistor licensees. Military agencies provided direct funding for R&D and production facilities, served as knowledge clearing houses for disseminating best practices between firms, and defined technology needs and trajectories as an early, eager and very large customer. The stringent requirements of military applications — for example, miniaturization of circuits, low power consumption and high reliability in rugged, high-temperature environments — served to elevate silicon over germanium as the industry’s material of choice. Building on Bell Labs’ core theoretical and technological advancements, subsequent contributions to the art of semiconductor manufacturing came from both large, established companies and small, young firms, which recent research indicates uniquely benefited from access to Bell Labs’ technology made possible by the consent decree5.
By the close of the 1950s, even as semiconductor shipments averaged annual growth rates of over 50%, some researchers began to project limits to progress in BJTs and their applications, at best two orders of magnitude away6,7,8,9. Many of these early warnings rested on erroneous assumptions about limits to transistor feature size, but semiconductor electronics faced pressing reliability challenges at the circuit level. Although BJTs offered advantages in power consumption and waste heat relative to vacuum tubes, these new devices faced similar integration challenges. Operators were still hand-soldering discrete transistors into circuits, and these inconsistent connections often led to circuit failures. Like vacuum tubes, early transistors also suffered from reliability problems as electronic circuitry grew more complex; Bell Labs’ manager Jack Morton called these challenges associated with growing circuit complexity the “tyranny of numbers”10. These problems especially concerned military clients, and military agencies each sponsored different approaches to reliable circuit ‘miniaturization’ and device integration in the late 1950s and early 1960s11. By the middle of the 1960s, this diversity of approaches gave way to an emerging dominant design, the integrated circuit. Texas Instruments introduced Jack Kilby’s hybrid approach in 1959, but it was Fairchild’s monolithic integrated circuit, invented by Robert Noyce in 1960 and made possible by Jean Hoerni’s development of the planar process in 1959, that became the industry standard. Military agencies were the earliest adopters of the integrated circuit, initially for the Minuteman I ICBM project in 1961.
Moore’s law and 40 years of industry-led innovation
Well into the 1960s, sales of integrated circuits to commercial markets lagged those to the military, owing to their high cost and to commercial electronics firms’ unfamiliarity with how to design products incorporating integrated circuits (Fig. 1). As semiconductor firms began a widespread campaign to convince commercial electronics users of the maturity and cost-effectiveness of integrated circuit technology, Gordon Moore, then R&D director at Fairchild, published his fabled paper, “Cramming more components onto integrated circuits”, in the journal Electronics12. Moore stressed the maturity and stability of silicon integrated circuits, noting that considerable future progress was possible with “only engineering effort” (as opposed to advances achieved through risky or uncertain scientific research). Additionally, Moore projected that the “complexity for minimum component costs” would continue to double annually. His projection, the first formulation of his eponymous ‘law’, was consistent with contemporary assessments by other industry executives.
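Moore’s annual-doubling projection implies simple exponential growth. Anchored at roughly $2^{6} \approx 64$ components per chip in 1965 (the approximate value on Moore’s own plot), it yields the figure of about 65,000 components by 1975 that Moore stated explicitly in the 1965 paper:

```latex
N(t) \;=\; N_{1965}\cdot 2^{\,(t-1965)},
\qquad
N(1975) \;\approx\; 2^{6}\cdot 2^{10} \;=\; 65{,}536 \;\approx\; 6.5\times 10^{4}
```

The striking feature of the projection was not the arithmetic but its horizon: a thousandfold increase in complexity within a decade, achievable, in Moore’s view, with “only engineering effort”.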
Moore’s most radical projection was that continued exponential growth in integrated-circuit component density would open up entirely new applications and technological possibilities at continually decreasing prices. This proclamation rested on the integrated circuit’s ability to resolve the reliability issues of complex interconnections. Moore’s piece codified a shift in the industry’s focus from improving the operating characteristics of individual devices to increasing integrated-circuit density. Previously, transistors had been drop-in replacements for vacuum tubes, offering lower power consumption, reduced size and weight, and improved reliability. Integrated circuits offered vast new technological capabilities driven by extensive integration. In this new approach, improvements to the now-incorporated transistor would still be pursued but primarily to serve the larger goal of “cramming more components onto integrated circuits.”
The invention and adoption of a new transistor technology, the metal–oxide–semiconductor field-effect transistor (MOSFET), assisted the industry’s drive for more extensive integration. MOSFETs emerged from noteworthy research in industry laboratories, especially at Bell Labs and RCA’s corporate laboratory. Although early MOSFETs were considerably slower than BJTs, they offered advantages in miniaturization, circuit density and manufacturing cost. Consequently, MOS integrated circuits rapidly captured market share through the 1970s, growing from less than 2% of all integrated circuit sales by US firms in 1968 to over 52% a decade later (Fig. 2). By the close of the 1980s, the industry’s leading-edge products converged around complementary-MOS (CMOS) integrated circuits, a highly energy-efficient form of MOSFET-based circuits, as managing power density became a primary concern of the market.
Two publications13,14 in the 1970s provided the broad technical blueprints to maintain that trajectory over the next three decades. Named after IBM corporate researcher Robert H. Dennard, ‘Dennard scaling’ summarized the parameters (dimension, voltage and doping) available to achieve scaling and identified challenges that arose with continued scaling (minimum gate oxide thickness, interconnect resistance and non-scaling of the subthreshold slope). Although Dennard scaling called for the scaling of key parameters by the same constant, in practice the industry followed a more general form of scaling to maximize performance by scaling voltage more slowly than doping and dimension factors15. At a 1975 industry conference, Gordon Moore, now at Intel with co-founder Robert Noyce, concluded that, while die size increases and transistor shrinkage would continue unabated, density gains from “circuit cleverness” would tail off in the coming years as new devices with optimal layouts — charge-coupled devices (CCDs) — hit the market14. In fact, the slowdown occurred earlier than Moore’s revised forecast; CCDs never reached the market significance that he anticipated.
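The logic of Dennard scaling can be sketched as follows (a standard constant-field rendering with scaling factor $\kappa > 1$; our summary, not a verbatim reproduction of refs 13,14):

```latex
\begin{aligned}
  &\text{dimensions: } L,\ W,\ t_{\mathrm{ox}} \;\to\; \tfrac{1}{\kappa}
  \qquad \text{voltage: } V \;\to\; \tfrac{1}{\kappa}
  \qquad \text{doping: } N_{A} \;\to\; \kappa\, N_{A} \\[4pt]
  &\Rightarrow\quad \text{delay} \;\propto\; \frac{1}{\kappa},
  \qquad \text{power per circuit} \;\propto\; \frac{1}{\kappa^{2}},
  \qquad \text{power density} \;\propto\; 1 \;(\text{constant})
\end{aligned}
```

Because power density stays constant under ideal scaling, each generation could pack $\kappa^{2}$ more, and faster, transistors into the same area without a cooling penalty; it was the later breakdown of the voltage term in this scheme that eventually ended the trajectory.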
The 1970s marked the first of three decades of Dennard-scaling-driven rapid progress in semiconductors. The industry’s technological pacesetter in the 1970s and 1980s was a commodity product, dynamic random access memory (DRAM). Firms first to market with the next-generation DRAM captured short-term monopoly rents, but Japanese manufacturers came to dominate the DRAM market owing in part to aggressive industrial policy, including the use of collaborative research ventures16.
US semiconductor manufacturers initially responded to Japanese competition in 1977 with the formation of the Semiconductor Industry Association (SIA) to address US trade policy. With the introduction of the 64K DRAM, Japanese firms began outpacing American competitors, and the American computer and semiconductor industries embraced the ‘Japanese’ idea of collaborative research, despite long-standing anti-trust prohibitions. In 1982, SIA created the Semiconductor Research Corporation (SRC). To rectify what SIA members saw as too little university research devoted to CMOS design and manufacturing, the SRC focused on elevating the industry’s research needs into problems that leading academic researchers deemed worthy of study, and on leveraging collective industry money to fund those researchers17. Despite these early efforts, US semiconductor manufacturers continued to cede market share to Japanese firms. Concern among US producers, as well as the military, over the future of the US semiconductor industry, and, in particular, suppliers of key semiconductor equipment such as photolithography, reached fever pitch in 1987. Consequently, 14 US semiconductor manufacturers came together through the SIA and the SRC to form SEMATECH. Congress authorized US$100 million in annual funding for 5 years through the Defense Advanced Research Projects Agency (DARPA), and industry participants annually contributed another US$100 million. SEMATECH quickly shelved its original goal of horizontal research collaboration across firms and refocused the consortium’s R&D on improving the capabilities of US equipment suppliers18. SEMATECH’s equipment improvement programmes helped to cut the Japanese process yield advantage from 50% in 1985 to 9% in 199119. US equipment manufacturers improved their market share from 45% in 1990 to 51% in 199219. By 1993, US semiconductor firms had once again surpassed Japanese manufacturers in global market share.
In addition to the SEMATECH-led improvements in manufacturing capabilities, however, US firms benefited from their decisions to leave the DRAM market and focus on higher margin products (for example, microprocessors or CPUs), and from changes in US–Japan trade policy20,21,22.
During these 10 years in which the US semiconductor industry lost and regained its competitive advantage against Japan, it also underwent its first wave of vertical disintegration. Before the 1980s, chip design, manufacture and sale were all done within individual firms: that is, these firms were ‘vertically integrated’. The stabilization of MOSFET manufacturing technologies and improvements in electronic design automation software allowed designers to embed knowledge of manufacturing capabilities in their circuit designs without possessing detailed process knowledge themselves23,24. Soon, ‘fabless’ semiconductor firms — chip companies without their own fabrication facilities — entered the market, initially using spare capacity at integrated device manufacturers. Between 1978 and 1987, two-thirds of US semiconductor start-ups owned no manufacturing facilities25. In contrast to these start-ups, large firms competing in the CPU and memory markets tended not to vertically disintegrate because complementarities between design and manufacturing provided keys to their competitive advantage24. These integrated device manufacturers were more likely to patent ‘systemic innovations’ than their vertically disintegrated peers26, and they also drove the process innovations that spilled over to the rest of the industry through the industry’s collaborative research institutions27.
Throughout this period, increasing development costs, shortening product cycles, and growing domestic and foreign competition limited the ability of firms to capture the social and private returns from their R&D investments28. Although overall R&D spending by the US semiconductor industry increased substantially in the 1980s, the central research laboratories of the large semiconductor manufacturers such as IBM, Texas Instruments, RCA and AT&T faced obstacles as the decade progressed. Most firms made considerable cuts to their basic research budgets and refocused projects away from ‘blue-sky’ research toward applied research that supported existing businesses21. Meanwhile, two of the industry’s emerging juggernauts, Motorola and Intel, eschewed central research organizations entirely. Intel, famously, was guided by Robert Noyce’s principle of least (or minimum) information and focused on problem solving on the factory floor29. As the industry’s leading firms scaled back their research, the military, principally through DARPA, played the primary funding and coordinating role in non-CMOS semiconductors and alternatives to optical lithography used to manufacture CMOS chips. During the 1990s, DARPA’s ULTRA Electronics and ‘Advanced Lithography’ programmes were important funding sources for alternatives to CMOS and next-generation lithography.
Meanwhile, the predictability of semiconductor advancement — still following the Moore and Dennard et al. blueprints — enabled long-range forecasts extrapolated from the industry’s existing trajectory. In fact, the US industry’s early roadmaps extrapolated the existing Moore/Dennard trajectory almost perfectly through the first decade of the twenty-first century, and industry participants upheld the roadmap’s projections as a baseline of progress, transforming Moore’s law into a self-fulfilling prophecy30,31. The industry-led roadmap came to define the direction and focus of research efforts for the industry’s suppliers and its collaborative research organizations, including SRC and SEMATECH32. Academic researchers also began to rely on the roadmap, frequently citing issues raised in it in research grant proposals. As such, the roadmap served to tighten the focus of academic research toward the industry’s most pressing, short-term problems17,33. At the same time, with the decline of the industry’s central research laboratories, industry leaders eventually asked how they could fund much-needed longer-term research. An analysis commissioned by the SIA and the SRC following the release of the 1994 National Technology Roadmap for Semiconductors (NTRS) found industry and government funding for long-term research to be insufficient to address the technology obstacles identified by the NTRS. The study recommended that universities should be used to address much of the projected “research funding gap”34.
Beyond Moore’s law
By the late 1990s and early 2000s, the industry was beginning to reckon with many of the limits first identified several decades earlier. In a widely cited Science article, an Intel researcher highlighted three challenges from the 1998 International Technology Roadmap for Semiconductors (ITRS) as “roadblocks” with “no known solutions”: dopant clustering, electron tunnelling through the gate oxide, and dopant distribution35. As the industry entered the sub-100-nm regime, Dennard scaling failed to provide the benefits enjoyed since the 1960s, and devices became limited by materials’ shortcomings. During this time, the industry’s collaborative institutions shifted their focus away from international competition toward overcoming these new technology challenges. SEMATECH and SRC both began admitting international members, and new consortia were established to address long-term technology needs.
The industry responded by introducing a set of materials innovations that together came to be known as ‘equivalent scaling’. The term ‘equivalent scaling’ hints at the purpose of these materials innovations: to deliver performance benefits equivalent to those previously gained through Dennard scaling. Equivalent scaling improved the variables from the transistor current–voltage relationship that did not scale (such as electron mobility, μ) and addressed quantum mechanical effects not accounted for in the basic current–voltage model (tunnelling through the gate oxide, for example). Despite the success of these innovations, by the mid-2000s the industry faced departures from its 40-year performance trajectory.
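The levers of equivalent scaling are visible in the simple long-channel (square-law) model of MOSFET drive current (our illustrative sketch, not a formula taken from the roadmap documents):

```latex
I_{D,\mathrm{sat}} \;\approx\; \frac{\mu\, C_{\mathrm{ox}}}{2}\,\frac{W}{L}\,
\left(V_{GS} - V_{T}\right)^{2},
\qquad
C_{\mathrm{ox}} \;=\; \frac{\varepsilon_{\mathrm{ox}}}{t_{\mathrm{ox}}}
```

Strained-silicon channels raise the mobility $\mu$, and high-$\kappa$ gate dielectrics raise $\varepsilon_{\mathrm{ox}}$ so that $C_{\mathrm{ox}}$ can increase without further thinning $t_{\mathrm{ox}}$, which would worsen gate tunnelling; the result is a drive-current gain ‘equivalent’ to another round of dimensional scaling.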
Perhaps the most readily apparent of these departures was the abrupt plateau in microprocessor frequency (or ‘speed’) improvements and the industry’s unanticipated switch to multi-core designs. It is evident from public announcements by semiconductor firms, as well as published ITRS roadmap projections, that the steep performance wall for microprocessors was not widely anticipated36,37. As a result, recent research suggests that end-users were unable to take advantage of the performance potential of multi-core processors in the way they had exploited previous advances in integrated circuits, owing to the difficulty of parallel programming approaches36,37, and that this inability to parallelize applications may explain an overall decline in the contribution of IT-using sectors to growth in total-factor productivity36,38.
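The parallelization difficulty can be illustrated with Amdahl’s law (our illustration; the studies cited above use richer models): if only a fraction p of a program’s runtime can be parallelized, adding cores yields sharply diminishing returns, unlike the across-the-board gains from a frequency increase.

```python
# Amdahl's law: illustrative sketch, not a model from the cited studies.
# Overall speedup on n cores when only a fraction p of the runtime
# parallelizes; the serial remainder (1 - p) bounds the total gain.

def amdahl_speedup(p: float, n: int) -> float:
    """Speedup on n cores if fraction p of runtime is parallelizable."""
    return 1.0 / ((1.0 - p) + p / n)

# Even a highly parallel (90%) program is capped at 10x speedup,
# no matter how many cores are added.
for n in (2, 8, 64, 1_000_000):
    print(n, round(amdahl_speedup(0.9, n), 2))
```

A frequency doubling, by contrast, sped up serial and parallel code alike, which is why the shift to multi-core demanded changes from programmers that the earlier trajectory never did.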
Achieving continued progress in semiconductors through equivalent scaling also grew increasingly expensive: as the technological challenges grew, semiconductor R&D expenditures as a percentage of sales were, throughout the 2000s, considerably higher than in previous decades (Fig. 3). The number of firms with integrated fabrication facilities at the process frontier began to dwindle, dropping from 29 in 2001 to 8 by 2015. By the late 2000s, venture capital funding of new semiconductor firms also began to dry up, with fewer firms funded from 2007 through 2011 (36) than in 2003 alone (44). Moore’s law also began to break down. By 2013, firms at industry conferences began presenting data showing an increase in the cost per transistor at the newest process nodes (sub-28-nm), and in 2015 Intel announced a departure from its ‘tick-tock’ development strategy in which new process nodes would be followed by new architectures built on that process node.
Worsening economics of leading-edge semiconductor production, industry consolidation and shifts in the industry’s end-markets also began to fracture the industry’s collaborative research ecosystem. Full membership in the SRC (horizontal collaboration by manufacturers on pre-competitive research) has declined since the early 1990s, and partial membership has dropped since the mid-1990s (Fig. 4). In the past 2 years, SEMATECH (a vertical research collaboration between equipment suppliers and manufacturers) has ceased to exist as a stand-alone organization, and its signature initiative, the Global 450 Consortium, stalled altogether after Intel and Samsung pulled out of the consortium. Similarly, whereas the early 2000s had several consortia aimed at developing extreme ultraviolet (EUV) lithography technology to support the manufacture of next-generation transistors at small feature sizes, in 2012 Intel, TSMC and Samsung instead concentrated their R&D efforts, each taking equity ownership in ASML, the leading EUV producer. Industry has increasingly shifted to a customer–client model exemplified by Belgium’s IMEC and ‘customization’ programmes at SRC. These programmes allow member firms to identify specific projects of interest at the R&D organizations and to earmark their funds for that work. In effect, parts of the collaborative R&D organization are now doing private contract research. This structure reduces the public-goods aspect of collaborative research. The 2013 edition of the ITRS was the last edition to be supported by SEMATECH and the SRC, and the 2015 ITRS edition, released in July 2016, was the last to be sponsored by the SIA.
This fracturing of the industry’s pre-competitive collaborative research structure comes as the industry faces its greatest technological uncertainty. In the late 2000s, the ITRS adopted a three-trajectory typology: ‘More Moore’ (that is, continuing the historical trajectory of performance improvements with continued CMOS evolution), ‘More-than-Moore’ (heterogeneous functionality integration into the CMOS platform) and ‘Beyond CMOS’ (everything from a new computing element to entirely new computing architectures that are at least initially CMOS-compatible). In the roadmap presented by ITRS, CMOS remains central to computing for the foreseeable future, with eventual industry transition to Beyond CMOS toward the end of the roadmap’s 15-year horizon. Advances in More Moore, pursued by only a few firms that compete in markets for commodity products (for example DRAM, Flash, CPU), continue to form the core of semiconductor technology research and development efforts. In the short term, these advances are supplemented by More-than-Moore and Beyond CMOS technologies. In the long term, according to the roadmap, an unknown Beyond CMOS technology (or combination of technologies) eventually continues the extendibility of computing advances.
More Moore is a continuation of the industry’s historical trajectory of improvement in power (energy per switching event), performance (operating frequency), area (density) and cost per transistor. According to the ITRS, the technical drivers of advancements through 2020 will be further iterations of equivalent scaling techniques: implementation of new device geometries (for example gate-all-around transistors and nanowires), integration of new materials for the transistor channel and interconnects, and a possible switch to tunnel FETs beginning sometime after 202039. Going forward, the scope of technical challenges on this path exceeds those faced over the past decade or addressable by device-level innovation. For example, transistor leakage current and interconnect delay across a chip have continued to worsen, and commercially available transistors are close to the physical limits for subthreshold slope40. Leakage power has grown to become a substantial portion of total power consumption, and power consumption has limited the benefits of further scaling41. As a result, computing is power-constrained with consequences for system and architecture design42, and even programming. Furthermore, the increasing complexity of designing leading-edge chips and delays in integrating EUV lithography have contributed to a slowing of improvements, or outright increase, in cost per transistor at the newest nodes.
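The subthreshold-slope limit mentioned above follows from the thermionic (Boltzmann) physics of the conventional MOSFET (a standard device-physics result, stated here for context rather than drawn from ref. 40):

```latex
SS \;=\; \ln(10)\,\frac{kT}{q}\left(1 + \frac{C_{\mathrm{dep}}}{C_{\mathrm{ox}}}\right)
\;\ge\; \ln(10)\,\frac{kT}{q} \;\approx\; 60~\mathrm{mV\,per\,decade}
\quad (T = 300~\mathrm{K})
```

Because no amount of geometric scaling can push $SS$ below roughly 60 mV per decade at room temperature, lowering the supply voltage further forces a trade-off between leakage and drive current. Tunnel FETs, which inject carriers by band-to-band tunnelling rather than thermionic emission, are one proposed route around this limit.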
Although the leading-edge producers — Intel, Micron (jointly with Intel), Samsung, TSMC and GlobalFoundries — have published product roadmaps that include continued progress along More Moore for at least two more nodes, both product announcements and industry projections show only muted improvements in performance and cost relative to the industry’s historical trend. As noted earlier, the time between node introductions has increased43, and firms are reporting that the latest nodes are not more cost-effective than previous nodes44. The final version of the ITRS roadmap projected that performance improvements after 2018 would lag the historical trend39, and Intel’s 2017 manufacturing day presentation showed lower overall performance characteristics for the 10-nm node to be released in 2018 than for the ‘14nm+’ and ‘14nm++’ nodes released in 2016 and 2017 respectively45.
Unlike More Moore, which aims to continue improvements consistent with a general-purpose technology, More-than-Moore is an application-driven trajectory. European researchers first put forth a definition for More-than-Moore in the late 1990s as “functional diversification of semiconductor-based devices.” Per the ITRS, examples of More-than-Moore include the integration of new functionalities into a system, including new sensors, radio-frequency circuits or micro-electromechanical devices to meet the specialized needs of particular applications. The first commercial applications of quantum computing are also likely to fit this mould, as accelerators working alongside standard CMOS chips46. Integrating these capabilities requires the development of new manufacturing processes, design methodologies and possibly business models. Given the focus on applications, advancement in these technologies requires collaboration across industries (for example medicine and semiconductor manufacturing) and, like the firm-driven application-focused research, lacks the shared R&D platform that helped to drive progress in microelectronics historically.
We add to this trajectory the strategy of firms that are exploring an array of alternative methods to improve performance for their specific applications as the relative improvement in power, performance, cost and area from More Moore techniques has flagged. End-users have begun to integrate vertically upstream into chip design and are increasingly deploying specialized chips tailored to specific applications. Examples include search firms (for example, Microsoft for its Bing service) using field-programmable gate arrays (FPGAs) in data centres as accelerators in conjunction with CPUs, and Google’s announcement of proprietary ‘tensor-processing unit’ chips developed in-house for its deep-learning activities. To lead these developments, firms such as Apple (2008), Amazon (2015) and Google (2014) have all brought chip design in-house. As large end-users vertically integrate chip design, they begin competing with their current suppliers, re-shaping the industry’s traditional market structure and collaborative supplier–customer relationships. Finally, although these approaches have introduced real benefits for specific firms, their long-term extendibility is likely to require advances in underlying transistor process and device technology stemming from basic research for the More Moore and Beyond Moore trajectories. The software firms mentioned above are, to our knowledge, not currently involved in such research, at least through the traditional semiconductor collaborative institutions. What research exists is being conducted by fewer and fewer firms in the semiconductor industry (for More Moore, Intel, IBM, Micron, TSMC, Samsung and Texas Instruments; and for Beyond Moore, Intel, IBM, Micron and Texas Instruments).
The longer-term Beyond Moore activities in the United States are organized primarily through the Nanoelectronics Research Initiative within the SRC, and they are probably underfunded relative to the size of the challenge (an argument, and possible solutions, that we flesh out at the end of the piece).
Regardless of any More-than-Moore strategy, firms’ pursuit of divergent technological paths threatens to undermine the economy-wide contributions made by semiconductors as a general-purpose technology. As advances in semiconductors slow, and downstream firms increasingly pursue application- or domain-specific innovations, technological progress will be increasingly unevenly distributed. Additionally, the benefits of R&D in these specialized applications will accrue mostly within those firms, in contrast to the industry-wide benefits of advances in the underlying transistor technology.
Among the roadmap’s trajectories, the highest uncertainty and, arguably, the greatest payoff may lie in ‘Beyond CMOS’. Beginning in 2001, the ITRS47 included a new chapter, “Emerging research devices” (ERD), which speculated on the technological revolutions needed to extend microelectronics beyond 15 years. That chapter included a bevy of potential successors to the silicon CMOS transistor, many with research lineages dating back to the industry’s by-then-defunct basic research laboratories and kept alive by military agency funding through the 1990s and early 2000s. By 2003, the authors of the ITRS saw these devices as being central to the future of the industry, with the report’s executive summary highlighting post-CMOS devices as “pav[ing] the way to a complete technological revolution looming ahead towards the end of the next decade”48. And yet, the 2003 ITRS continues, “even if entirely different electron transport devices are invented for digital logic, their scaling for density and performance may not go much beyond the ultimate limits obtainable with CMOS technology, due primarily to limits on heat removal capacity”49. The 2003 ERD report within the ITRS concludes that none of the existing research devices are “viable emerging logic technologies for integration”48.
These conclusions elicited some action, if a paltry amount, among industry players. Over the next 2 years, industry leaders, acting under the aegis of the SIA, worked to establish a ‘nanotechnology strategy’ to address the industry’s long-term research challenges and leverage vast new public funds for nanotechnology research allocated by the 2001 National Nanotechnology Initiative. These efforts culminated in the establishment of a new consortium, the Nanoelectronics Research Initiative (NRI), co-funded by six US semiconductor manufacturers with additional public funding from both federal and state sources. Total combined funding has averaged approximately US$20 million per year since the NRI’s founding in 2005. In the ensuing decade, funding for critical research areas in nanoelectronics has improved50. Still, comparatively fewer resources are being devoted to inventing, developing and commercializing Beyond CMOS technologies than were devoted to previous industry–government partnerships with far less challenging technical problems. The latest Triennial Review of the National Nanotechnology Initiative shows that NNI funding for nanoelectronics has averaged only US$90 million annually since 201151. When adjusted for inflation, this figure represents less than half the annual public funding for SEMATECH from 1987 through 1997 (US$100 million in 1987 is over US$200 million in 2016 inflation-adjusted dollars) and a pittance compared with the industry’s own R&D spending, US$55.4 billion in 201552. Throughout, SRC remains the primary source and organizer of funding for Beyond CMOS technologies. Our interviews with university researchers indicate that industry funding for these technologies primarily comes through the SRC (rather than direct projects from industry). Although the SRC has two new programmes planned to come online in 2018, JUMP (joint with DARPA) and nCore (joint with NIST), overall increases in funding for Beyond CMOS will be minimal.
Although advances in software and in application-specific devices (C. Leiserson, et al., manuscript in preparation) may continue to lead to gains in the short to medium term, ensuring long-term advance in computing performance is likely to require the commercialization of semiconductor process and device technologies based on currently undiscovered scientific breakthroughs. A commercially successful alternative to CMOS will require bringing to commercial reality an entirely new computing element based on a new materials system that probably operates using a different set of physical phenomena from those traditionally relied on by the industry. The commercial success of such a technology will require, eventually, new manufacturing techniques and design tools to bring these devices to market. Yet, the industry’s institutions that historically shaped the evolution of basic science into technology are greatly weakened: corporate research programmes and military demand both played instrumental roles in the industry’s earliest periods, but the former have dried up, and the latter is no longer the primary source of demand. Indeed, the concomitant investments in equipment and education represent looming financial challenges to an industry grappling with the worsening economics of its current products. Further, as highlighted in the move from single- to multi-core processors, downstream technology shifts require coordination up and down the computing technology stack. Specifically, an entirely new computing element will possibly require changes in state variable, material, device, data representation, architecture and programming methodology53. The current environment simply lacks the sort of coordinating institutions that guided previous technology shifts (large corporate research laboratories, for example, in the case of the shifts from vacuum tubes to transistors to integrated circuits). Moreover, the collaborative R&D institutions that remain are being broken apart.
Although other countries may not be poised to overtake the United States’ leadership in next-generation computing in the near term, nor are they necessarily positioned, on their own, to overcome the global problem of extending Moore’s law.
Currently, there are no Chinese-owned firms with 14-nm facilities able to produce microprocessors at the technology frontier. China is committed to attracting foreign direct investment in microprocessor fabrication at the technological frontier, and TSMC is in the process of building a 16-nm fabrication plant in China to be up and running by 2018. China has also committed to spending US$20 billion annually on building wafer fabrication facilities. Microprocessors from Intel’s 14-nm facility were commercially available in 2014, whereas China’s current plans call for its first Chinese-owned 14-nm fabrication plant in 202054. Thus, rather than trying to tackle the end of Moore’s law, China is leveraging the current slow-down to catch up to the technological frontier and, it hopes, win at the game of manufacturing semiconductor chips as commodities52. This strategy seems wise for China. However, it does not overcome the potential global economic slow-down that could accompany a failure of microprocessors to continue advancing according to Moore’s law36. In addition, if China were able to produce existing-generation chips at lower cost, these cheaper commodities could further erode the profit margins that companies need to invest in the research and development required for next-generation transistors.
In Europe, IMEC — a public–private research institute with 3,500 researchers — has considerable research thrusts in Beyond CMOS, including research (paralleling efforts in the NRI) on next-generation logic devices, new memory concepts, advanced patterning and key process steps, 3D system integration, advanced nano-interconnects, neuromorphic computing and quantum computing. Faculty members from Stanford University and the University of California, Berkeley, as well as senior executives from Intel and Samsung, sit on IMEC’s advisory board. IMEC also claims to have the most advanced 300-mm cleanroom facility for research and development. IMEC’s contract work with firms generates 500 million euros in annual revenues, and the institute also enjoys strong public support through both European Council research funding and direct funding of over 100 million euros per year from the Flemish government55. In addition to the efforts at IMEC, in 2013 Europe’s Horizon 2020 programme announced 1 billion euros in funding for graphene-based electronics and for nanoelectronics research.
Despite the growing strength of IMEC and other such efforts in Europe, the firms that have the most at stake in identifying a successor to CMOS (such as Intel, IBM, Micron and Texas Instruments) are primarily US-based and have their strongest ties to US-based researchers and universities. Leading researchers on Beyond CMOS devices are also primarily located at US universities, and although they may advise or collaborate with IMEC, there are no extensive channels for IMEC funding to flow to these leading US-based researchers. A 2010 survey of nanoelectronics programmes in the United States, Japan and Europe conducted by IWPGN (and involving NRI-affiliated researchers) found limited support for the research vectors deemed most important to finding a replacement for CMOS56. A 2015 update to that survey found improved support for these programmes, but in both cases the survey results indicated that US programmes — despite their fiscal limitations — were the furthest along50.
Policy for Mo[o]re
The semiconductor industry is on the verge of entering uncharted territory for the first time in more than 50 years. The array of scientific, technological and commercial possibilities for future semiconductor and computing technologies presents a challenge to scientists, engineers and policymakers alike. As various actors make diverse efforts in Beyond CMOS, policymakers will be faced not only with questions of whom to fund, how much and in what form, but also with a need to understand which technological trajectories could have increasingly uneven returns — both across firms and across society. The magnitude of the potential social benefits, the increasing challenges in coordination and the dwindling private incentives underscore the case for public funding of fundamental research to advance Beyond CMOS technologies and their integration across the computing technology stack. The scale of the scientific and technological challenge puts the onus on policymakers to think carefully about the proper organizational form for allocating public funds.
A range of institutional and organizational forms have been used in the past to fund scientific grand challenges, ranging from the multi-firm, multi-university, vertically integrated Manhattan Project to the networked effort around the Human Genome Project57. Other innovations in organizational form have included public–private partnerships and contests. Although the NRI may be able to make progress on the next logic device with more funding, both the NRI and the SRC more broadly are limited by the fact that they are dominated by traditional semiconductor industry players. These traditional collaborative institutions have also so far failed to engage new entrants farther up the computing technology stack, such as Google, Facebook, Microsoft and Amazon (in part, potentially, because those firms lack market incentives to engage). A contest or prize for inventing the next generation of underlying transistor technologies may seem tempting, but the complexity of the goal (it would, for example, be difficult to tightly define the measures for such a prize), the extent of capital investment required and the level of coordination across the computing technology stack and multiple industry players needed to advance computing make this approach problematic. Policymakers should also resist the urge to consolidate the public’s research portfolio and should instead focus on cultivating numerous parallel approaches to Beyond CMOS58,59. Although narrowly focused public–private partnerships have been successful in the industry’s past, there is reason to be cautious about their appropriateness for an effort with this degree of scientific uncertainty. The decline of centralized corporate R&D labs and the collapse of the industry’s collaborative research ecosystem highlight the prevailing incrementalism of private R&D interests. New technologies may completely re-map the computing industry’s value chain, threatening the viability of the incumbents entrusted to bring them to market.
We can ill afford to ignore or delay path-breaking innovations simply because they are not in line with existing firms’ commercial interests. National and global strategies should unequivocally scale with the economic and social costs inherent in the end of Moore’s law and with the gains to be had from discovering and developing a robust replacement. Two avenues seem promising. The first is to pursue the research institute originally proposed by traditional semiconductor industry leaders back in 2004. Here, however, academics and government programme managers should be the key players, given that the problem is primarily one of basic research; at the same time, it would be critical that industry players across the computing technology stack (not just traditional semiconductor firms) be engaged. An alternative would be a semi-coordinated government funding effort focused on the next generation of transistor technology across all government agencies, such as was undertaken in the case of the National Nanotechnology Initiative, in which key programme managers met weekly to discuss initiatives and share insights. Such an effort would require a group focused specifically on the technological limits to semiconductors and active management by the programme managers to engage industry. A strategy pursuing these two organizational modes in parallel could also be promising.
One estimate of appropriate funding levels for any of these organizational models is the SIA Technology Strategy Committee’s recommendation of US$600 million per annum, with 90% of those funds coming from public research dollars, given the basic nature of the research needed60. Given the criticality of cutting-edge semiconductor technology to defence systems, military strength and the mitigation of cybersecurity risk52, the vast majority of these public dollars (around 70%) should probably come from the defence sector (such as DARPA and the Office of Naval Research). That said, given the importance of commercial off-the-shelf semiconductor technologies in today’s military, the extent to which semiconductor advances promise to open up new frontiers for devices and services, creating new businesses and industries52, and the fact that these commercial applications, not the military ones, will be dominant in any production facility, the remaining public dollars (around 30%) should come from the National Science Foundation (NSF) and the National Institute of Standards and Technology (NIST), to ensure dual-use developments and commercial applicability.
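The recommended split works out as follows. The percentages are those stated above (the SIA recommendation and our suggested defence/civilian division); the dollar breakdown itself is simple illustrative arithmetic, not an additional source.

```python
# Funding split implied by the recommendation discussed above.
# Percentages follow the text; the dollar breakdown is arithmetic only.
total_annual = 600e6    # SIA-recommended annual funding, in US dollars
public_share = 0.90     # share from public research dollars
defense_share = 0.70    # share of public dollars from defence agencies
civilian_share = 0.30   # share of public dollars from NSF and NIST

public = total_annual * public_share    # US$540 million
defense = public * defense_share        # US$378 million (e.g. DARPA, ONR)
civilian = public * civilian_share      # US$162 million (NSF and NIST)

print(public / 1e6, defense / 1e6, civilian / 1e6)  # 540.0 378.0 162.0
```

That is, roughly US$540 million per year in public funds, of which about US$378 million would come from defence agencies and about US$162 million from the NSF and NIST.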
The incremental increases in performance along a consistent trajectory achieved through Moore’s law and Dennard scaling may never be replicated in Beyond CMOS devices. Likewise, it is unclear whether the general-purpose technology nature of the microprocessor will in future decades be replicated. Nevertheless, the social and economic costs of not funding the basic research fundamental to ongoing advances in transistor technologies, as well as, arguably, the cost to long-term advances in computing, would be dire.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
A correction to this article is available online at https://doi.org/10.1038/s41928-018-0031-2.
We thank the NSF Graduate Research Fellowship Program, the NIST (award no. 28994.1.1080278) and the NSF’s Science of Science and Innovation Policy Program (award no. 28935.1.1121844) for funding this research. We also thank the 50 individuals from across industry, academia and government who agreed to oral histories, the many individuals around the industry who took the time to provide feedback, and the Semiconductor Research Corporation for granting us access to their archives.