The Hacker in Your Hardware: The Next Security Threat

As if software viruses weren't bad enough, the microchips that power every aspect of our digital world are vulnerable to tampering in the factory. The consequences could be dire

Your once reliable mobile phone suddenly freezes. The keypad no longer functions, and it cannot make or receive calls or text messages. You try to power off, but nothing happens. You remove the battery and reinsert it; the phone simply returns to its frozen state. Clearly, this is no ordinary glitch. Hours later you learn that yours is not an isolated problem: millions of other people also saw their phones suddenly, inexplicably, freeze.

This is one possible way that we might experience a large-scale hardware attack—one that is rooted in the increasingly sophisticated integrated circuits that serve as the brains of many of the devices we rely on every day. These circuits have become so complex that no single set of engineers can understand every piece of their design; instead, teams of engineers on far-flung continents design parts of the chip, and it all comes together for the first time when the chip is printed onto silicon. The circuitry is so complex that exhaustive testing is impossible. Any bug placed in the chip’s code will go unnoticed until it is activated by some sort of trigger, such as a specific date and time—like the Trojan horse, it initiates its attack after it is safely inside the guts of the hardware.
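
To make the idea concrete, here is a minimal sketch in Python (standing in for actual hardware logic; the function name and the date are hypothetical, chosen purely for illustration) of how little a dormant, date-triggered payload requires:

```python
from datetime import datetime

# Hypothetical activation date baked into the rogue block.
TRIGGER_DATE = datetime(2026, 1, 1)

def handle_keypress(key: str) -> str:
    """Behaves normally for years, then silently drops all input."""
    if datetime.now() >= TRIGGER_DATE:
        return ""          # payload active: every keypress is ignored
    return key             # before the trigger: ordinary pass-through
```

Until the trigger fires, the function is indistinguishable from a correct one under any ordinary test.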

The physical nature of hardware attacks makes them potentially more problematic than worms, viruses and other malicious software. A virus can jump from machine to machine, but it can also in principle be wiped clean from any system it infects. In contrast, there is no fix for a hardware attack short of replacing the infected units. At least, not yet.


The difficulty of fixing a systemic, malicious hardware problem keeps cybersecurity experts up at night. Anything that uses a microprocessor—which is to say, just about everything electronic—is vulnerable. Integrated circuits lie at the heart of our communications systems and the world’s electricity supply. They position the flaps on modern airliners and modulate the power in your car’s antilock braking system. They are used to access bank vaults and ATMs and to run the stock market. They form the core of almost every critical system in use by our armed forces. A well-designed attack could conceivably bring commerce to a halt or immobilize critical parts of our military or government.

Because Trojan hardware can hide for years before it is activated, it is possible—perhaps likely—that hardware bugs have already been planted. And although no large-scale hardware attacks have yet been confirmed, they are inevitable.

As we know all too well from combating software-based cyberattacks, a relatively small proportion of people who use their technical skills for malicious purposes can have a big impact. Thus, rather than asking whether hardware attacks will occur, the better questions are: What forms will these attacks take? What consequences will they have? And, perhaps most important of all, what can we do to detect and stop them or at least minimize their effects?

Block by Block
An integrated circuit, or chip, is simply an electronic circuit etched onto a single piece of a semiconductor material, most often silicon. Modern integrated circuits are physically quite small—no more than a few square centimeters and often much smaller—but can contain several billion transistors. The very complexity of modern chips creates the vulnerabilities that make Trojan attacks possible.

Modern chips are divided into subunits called blocks that perform different functions. In a mobile phone’s processor, for example, one block might be memory that can be used to store frames of video captured by the camera. A second block might compress that video into an MPEG file, and a third block might convert those files into a form that can be transmitted over the antenna. Data move among these blocks across a system bus, which acts like a highway connecting the different parts of the chip.
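
As a rough software analogy (the class and block names below are invented for illustration; real blocks are hardware, not Python objects), the arrangement looks something like this:

```python
class SystemBus:
    """The shared 'highway' that carries data between functional blocks."""
    def __init__(self):
        self.payload = None

    def write(self, data):
        self.payload = data

    def read(self):
        return self.payload

def video_pipeline(bus, frames):
    bus.write(frames)                             # memory block: store raw frames
    bus.write(f"mpeg({len(bus.read())} frames)")  # compression block: make an MPEG file
    return f"antenna<{bus.read()}>"               # radio block: prepare for transmission

print(video_pipeline(SystemBus(), ["frame0", "frame1"]))
# -> antenna<mpeg(2 frames)>
```

The key point is that every block depends on the shared bus, so a single misbehaving block can disturb all the others.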

When a company embarks on the design of a new integrated circuit, it first maps out what functional blocks the circuit will need. Some of these blocks will be designed in-house, either from scratch or as a modification of a block design used in the company’s earlier chips. Others will be licensed from third parties that might specialize in a certain type of functionality—receiving data from an antenna, for example.

The block from the third party does not come as a physical piece of silicon, because the goal in building the integrated circuit is to have all the functional blocks printed onto the same surface. Instead the block comes as a data file that fully describes how the block should be etched onto the silicon. The file can be thousands of lines long, making it a practical impossibility for a human to read the file and understand everything that is going on. The block provider will also typically supply some software that the block purchaser uses to model how the block will respond to a variety of situations. Before any circuits are printed, the lead company will join all the model blocks into a computer simulation to ensure the chip will function as expected. Only when the model passes a battery of tests will the company begin the time-consuming and expensive process of fabricating the physical integrated circuits.

Here is where the vulnerability lies: because the rogue hardware requires a specific trigger to become active, chipmakers would have to test their models against every possible trigger to ensure that the hardware is clean. This is simply not possible—the universe of possible triggers is far too large. In addition to internal triggers such as the date-based trigger described in the mobile phone example, hackers could employ external triggers such as the reception of a text or e-mail message containing a specific set of characters. Companies test as best they can, even though this necessarily means testing only a very small percentage of possible inputs. If a block behaves as expected, it is assumed to be functioning correctly.
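
Some rough arithmetic shows why. Suppose, purely for illustration, that the trigger is a 16-character text message and that a test rig can try a billion candidate messages per second:

```python
# Back-of-the-envelope size of the trigger space (all values assumed).
alphabet_size = 256                       # one byte per character
trigger_length = 16                       # assumed trigger message length
possible_triggers = alphabet_size ** trigger_length   # 2**128, about 3.4e38

tests_per_second = 1e9                    # an optimistic test rate
seconds_per_year = 60 * 60 * 24 * 365
years_needed = possible_triggers / tests_per_second / seconds_per_year
print(f"{possible_triggers:.1e} candidate triggers; "
      f"exhaustive testing would take {years_needed:.1e} years")
```

Even these wildly optimistic assumptions leave a testing time on the order of 10^22 years, roughly a trillion times the age of the universe.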

An Issue of Trust
In the early days of integrated-circuit design, no one had to worry about hackers. The first designs were created completely in-house, executed by small teams working toward a common purpose. Because of this organizational security, the designers established open protocols that assumed different parts of the chip would behave as expected. (The history echoes the choices made in the early days of the Internet, when a small academic community built an open platform that assumed everyone would behave nicely. That assumption has not withstood the growth of the Internet.)

In today’s world, however, the design process for a single, large integrated circuit can involve contributions from hundreds or even thousands of people at locations on multiple continents. As the design goes through various stages of development, portions of it are stored on many different physical platforms and repeatedly exchanged among many parties. For example, an American manufacturer might combine designs from separate branches of the company with designs from third-party vendors in the U.S., Europe and India, then fabricate the chip in a Chinese factory. These global networks have become a fact of life in recent years, and they have provided large savings in cost and efficiency. But they make security far more complicated than it was when everything happened in a single facility. Given the sheer number of people involved and the complexity of a large integrated-circuit design, there is always a risk that an unauthorized outsider might gain access and corrupt the design without detection.

A very small—but not zero—risk also exists that a design could be corrupted by someone with internal access. Although the overwhelming majority of people involved in any aspect of circuit design endeavor to deliver designs of the highest quality, as with any security issue, even a very small minority of insiders acting maliciously can create significant problems.

Ideally, would-be attackers would never get the opportunity to gain access to an integrated circuit during the design and manufacturing process, thereby ensuring that hardware attacks never occur. This is the strategy that the Defense Advanced Research Projects Agency (DARPA), the research agency run by the Pentagon, has pursued with its Trust in Integrated Circuits program. DARPA is designing processes to ensure that all the steps in the design and manufacturing chain are carried out by companies and people known to be trustworthy and working in secure environments. (In addition, the agency is funding research into new ways to test chips before they are placed into U.S. weapons systems.) Yet in the real world, actions taken to secure the design process are never perfect.

Hardware designers should also build circuits that identify and respond to attacks even as they are taking place, like an onboard police force. Although a community should certainly engage in all reasonable measures to discourage potential criminals from committing crimes, any responsible community also recognizes that such efforts, no matter how well intentioned and thorough, will never be 100 percent effective. It is critical to have a police force that can respond quickly and appropriately when crimes do occur.

Securing the Circuit
A circuit that can effectively detect and respond to attacks is called a secure circuit. These chips have a modest amount of extra circuitry specifically designed to look for behavior that may reveal a problem. If an attack is suspected, the secure circuit will identify the type of attack and attempt to minimize the resulting damage.

In the example of the frozen cell phone, the failure may have been caused by a single block that was acting out of order. That block interacts with all the other blocks over the system bus. This bus, in turn, has a bus arbiter—a traffic cop that decides what information can travel over the bus at what time. Yet the traffic cop analogy is not perfect. While a traffic cop can instruct traffic when to start and when to stop, a bus arbiter has less authority. It can grant permission for a block to start sending information through the bus, but the block can retain that access for as long as it wants—a vestige of the long-ago assumption that blocks would always behave properly. Herein lies the problem.

In a typical system, a block will retain access to the system bus for only as long as necessary before relinquishing it for use by other blocks. The bus arbiter sees that the system bus is available and then assigns it to another block. But if a block keeps control of the bus indefinitely, no further data will be able to move within the integrated circuit, and the system will freeze.
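
A toy simulation captures the failure (the arbiter and block interfaces here are invented for illustration and do not follow any real bus protocol):

```python
class BusArbiter:
    """Grants the bus to one block at a time; release is voluntary."""
    def __init__(self):
        self.owner = None

    def request(self, block_name):
        if self.owner is None:
            self.owner = block_name
            return True
        return False                    # bus busy: requester must wait

    def release(self, block_name):
        if self.owner == block_name:
            self.owner = None

arbiter = BusArbiter()
arbiter.request("rogue-block")          # the Trojan block grabs the bus...
# ...and never calls arbiter.release(), so every later request fails:
print(arbiter.request("memory-block"))  # False
print(arbiter.request("radio-block"))   # False -- the phone is frozen
```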

In contrast, a secure integrated circuit performs constant checks to ensure that the communications among different blocks have not been disrupted. When it detects one block monopolizing access to the bus, the secure integrated circuit can respond by quarantining the malicious block. It can then use its store of programmable logic hardware to replace the lost functionality. This process will likely slow the overall operation, but it will at least keep the device working.
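
One way to picture this, continuing the toy arbiter above, is a watchdog timer with the authority to evict and quarantine any block that overstays its grant (the cycle budget and quarantine mechanism are assumptions for illustration, not a description of any particular chip):

```python
MAX_HOLD_CYCLES = 1000   # assumed budget before a block is deemed rogue

class SecureBusArbiter:
    def __init__(self):
        self.owner = None
        self.hold_cycles = 0
        self.quarantined = set()

    def request(self, block_name):
        if block_name in self.quarantined or self.owner is not None:
            return False
        self.owner, self.hold_cycles = block_name, 0
        return True

    def tick(self):
        """Called every clock cycle: evict any block that monopolizes the bus."""
        if self.owner is None:
            return
        self.hold_cycles += 1
        if self.hold_cycles > MAX_HOLD_CYCLES:
            self.quarantined.add(self.owner)   # isolate the rogue block
            self.owner = None                  # spare logic can take over its job
```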

An overt attack is probably not the most pernicious threat, however. A covert attack could be much worse. In a covert attack, the device appears to operate normally, but in reality it is acting with malicious intent. A mobile phone, for instance, might secretly begin to transmit a copy of all incoming and outgoing text messages to a third party. An unsuspecting observer would not notice anything wrong, and the attack could continue indefinitely.

A secure integrated circuit would provide a critically important defense against this type of attack. The chip would constantly monitor the amount and type of data moving on and off the integrated circuit and statistically compare this movement with the expected data flows. Any anomaly would be flagged as a potential data leak, and the chip would either alert the user or begin to staunch the flow on its own.
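
In software terms, the check might look like the sketch below: learn a baseline of normal outbound traffic, then flag anything that strays too far from it (the three-sigma threshold and the sample values are illustrative assumptions):

```python
import statistics

class FlowMonitor:
    """Flags data flows that deviate from a learned baseline."""
    def __init__(self, baseline_bytes_per_sec, sigmas=3.0):
        self.mean = statistics.mean(baseline_bytes_per_sec)
        self.stdev = statistics.stdev(baseline_bytes_per_sec)
        self.sigmas = sigmas

    def is_anomalous(self, observed_bytes_per_sec):
        deviation = abs(observed_bytes_per_sec - self.mean)
        return deviation > self.sigmas * self.stdev

monitor = FlowMonitor([120, 130, 115, 125, 140])   # normal traffic samples
print(monitor.is_anomalous(128))    # False: within the expected range
print(monitor.is_anomalous(5000))   # True: possible covert data leak
```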

In addition to taking steps to counter the effects of a Trojan attack on its own operations, an integrated circuit can notify other devices of the type of assault, potentially allowing them to take preemptive actions to avoid it (or at least to minimize its effects). Such notification is not as far-fetched as it might seem given the level of network connectivity that almost all systems now have. For example, if a circuit experiencing an attack can identify the initiating trigger, it can alert other circuits to screen for that particular message.

The measures described here will be effective only if the parts of the circuit responsible for managing security are themselves secure and trustworthy. This might seem like a circular argument—another way of saying that the only way to secure a circuit is to secure a circuit—but the elements of the circuit devoted to security constitute only a small fraction of the overall design. They can be designed in-house to ensure that only highly trusted parties have access.

What to Do Next
Thanks to the efforts of governments, academic researchers and the commercial sector, enormous progress has been made in Internet security. The same cannot be said of the state of integrated-circuit security, which lies roughly where Internet security was 15 years ago: there is growing awareness that the issue is worthy of attention, but defensive strategies have not yet been fully developed, much less put into practice.

A comprehensive approach to preventing hardware attacks requires action on several levels. Strategies that aim to ensure compromised hardware never gets out the door, such as DARPA’s program, are a good start. But most important, we must begin to implement secure design measures such as the ones discussed here that can defend against attacks as they occur. These defenses will not come free. As with security in other domains, integrated-circuit security will require the expense of time, money and effort. A wide spectrum of options represents trade-offs between the effectiveness of the security and the cost of implementing it. Fortunately, it is possible to deliver effective security at modest costs.

A secure integrated circuit contains a small amount of extra logic. In research my group has conducted at the University of California, Los Angeles, we have found that the increase in integrated-circuit size is typically several percent. There is also generally a cost in operating speed, given that the steps taken to ensure that functional blocks are behaving appropriately can consume clock cycles that might otherwise be used for core operational tasks. Again, however, we have found the speed reduction to be small in relative terms, and in some cases there is no speed reduction at all if the security measures can be performed using logic and functional blocks that are temporarily dormant.

Keeping hardware secure will inevitably become an arms race requiring continual innovation to stay ahead of the latest attacks, as has been the case in the software world. New circuits cannot be downloaded over the Internet the way software patches can. But modern integrated circuits have a number of reconfigurable elements that, with appropriate steps taken during the design process, could automatically replace parts of the hardware that become incapacitated in an attack. Engineered flexibility is our best defense.

Even if hardware attacks are inevitable, that does not mean that they have to be successful.

John Villasenor is professor of law and electrical engineering at the University of California, Los Angeles, and a nonresident senior fellow at the Brookings Institution.

This article was originally published with the title “The Hacker in Your Hardware: The Next Security Threat” in Scientific American Magazine Vol. 303 No. 2.