How we made the microprocessor

The Intel 4004 is renowned as the world’s first commercial microprocessor. Project leader and designer of the 4004, Federico Faggin, retraces the steps leading to its invention.

Computers were, at first, a decidedly unintegrated technology. They were composed of vacuum tubes, resistors, capacitors, inductors and mercury delay lines, and as a result were huge, expensive and power-hungry. The situation improved with developments in microelectronics based on solid-state germanium transistors and diodes, which began replacing vacuum tubes in radios and phonographs, and eventually led to the first commercial transistor-based computers in 1959. That same year, the development of the planar process at Fairchild Semiconductor allowed tens of silicon transistors to be fabricated at the same time on the surface of a single-crystal silicon wafer (ten years later it would be possible to fabricate thousands of transistors). This invention was quickly followed by the commercialization of the first bipolar digital integrated circuits (ICs) in 1962, and from that point on, progress in semiconductor ICs became exponential, with the maximum number of components integrated in a silicon chip doubling every year, at least initially. But what allowed all the functions of a general-purpose computer to be integrated together was a monolithic central processing unit (CPU) — a microprocessor — and the first commercial microprocessor was born nine years later in 1971 (Fig. 1).

Fig. 1: Die shot of the Intel 4004. The chip measures 3 mm by 4 mm and contains about 2,300 MOS transistors. It is signed in the lower-right corner with the initials 'F.F.' of Federico Faggin, designer and project leader. Image courtesy of Intel Corp.

Working at Fairchild Semiconductor in 1968, I was the project leader and architect of the metal–oxide–semiconductor (MOS) silicon gate technology (SGT), a key breakthrough in the development of the first microprocessor. With SGT, new device types could be fabricated, such as dynamic random-access memories (RAM), non-volatile memories and charge-coupled device imagers, increasing the range of IC functions possible with solid-state electronics. The SGT was the first commercial self-aligned gate MOS process capable of building reliable, dense and fast MOS ICs. Compared with the incumbent aluminium-gate MOS technology, the SGT was five times faster, reduced the leakage current by more than a factor of 100, and could host twice as many random-logic transistors for the same power dissipation and chip area. The first commercial chip to use SGT was the Fairchild 3708, an 8-bit analogue multiplexer with decoding logic that I had designed. It was introduced to the market at the end of 1968.

I moved to Intel in 1970. At that time, many engineers knew how to create the logic design of a small CPU. However, no one had achieved the circuit density needed for a cost-effective general-purpose CPU with sufficient speed to handle a wide range of applications. At Intel, I refined the SGT process for random-logic design with the 'buried contact' and the silicon-gate bootstrap load that had been developed at Fairchild, which together provided, for the first time, the high speed and circuit density required for a single-chip CPU. The first microprocessor we made was a 4-bit CPU, christened the '4004'. It was custom designed for a Japanese manufacturer to build a family of electronic calculators. The first sales of the 4004 were in March 1971 and it was introduced to the general market in November 1971. It was quickly followed by the first 8-bit microprocessor — the Intel 8008 — in April 1972.

The emergence of these microprocessors was undoubtedly a turning point for the electronics industry. They fitted an entire computer system onto a small, low-cost printed circuit board by adding variable amounts of read-only memory (ROM) and RAM, plus application-specific input–output electronics. With their development, the same hardware could now be programmed to perform a variety of applications that previously required specialized custom hardware for each application. This was, of course, the key advantage of computers, but it meant that custom hardware had to be replaced with software, and this turned out to be a difficult transition — many industries that could not adapt were swept away, including electromechanical calculator manufacturers such as Marchant, Facit and Comptometer.

Microprocessors quickly appeared in applications such as traffic light controllers, automatic blood analysers, point-of-sale registers, electric toothbrushes, electric motor controllers and electronic games — applications where computers would have been considered incongruous only a few years earlier, given that mainframes and minicomputers were then much bigger than the products they were supposed to fit inside.

With microprocessors, it became feasible to add a degree of intelligence to many existing products, providing significant improvements in their performance and ease of use. Today, a smartphone houses dozens of powerful embedded computers to manage a range of specialized communication, sensing and control functions. It also contains powerful user-programmable microprocessors that offer higher speeds and more memory than most mainframe computers of the late 1970s. Looking back now, it is clear that the microprocessor was one of the technologies that helped change what a computer could be and could do, and the way computers influence our lives.

Cite this article

Faggin, F. How we made the microprocessor. Nat Electron 1, 88 (2018).
