Silicon Valley is heading towards a brick wall. The development of ever more powerful microprocessors depends on continued progress in miniaturizing their components. But if current trends continue, conventional silicon chips will reach their physical limits around 2012. By then, their transistors will be so small that current leakage will become an insurmountable problem.

Until now, the industry's development has followed the famous rule of thumb identified by Gordon Moore, co-founder of the chip manufacturer Intel, which states that the number of transistors that can be packed onto a microprocessor doubles every 18–24 months (see Figure 1). The challenge ahead is to find an alternative to the silicon chip that will allow this rise in computational power to continue. Some researchers are placing their bets on quantum computing; others believe it will be necessary to process information by manipulating photons of light. But over the past few years, companies such as IBM and Hewlett-Packard have started seriously entertaining the idea of constructing computers in which computations are performed by individual molecules.

Figure 1: Moore's Law. The growing power of silicon chips, measured in millions of instructions per second.
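To put that doubling rate in perspective (a back-of-envelope calculation, not a figure taken from Moore): a doubling every 18 months compounds to roughly a hundredfold increase in transistor count per decade, while a 24-month doubling time gives about a thirtyfold increase:

\[ 2^{120/18} \approx 1.0 \times 10^{2}, \qquad 2^{120/24} = 32 \]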

False start

Chemical futures: Williams believes molecular logic will underpin tomorrow's computers. Credit: HEWLETT-PACKARD LABORATORIES

Researchers have long been striving to create a viable ‘molecular electronics’ from molecules that act as linear wires, as transistor-like switches, and even as logic gates that process digital input signals. In 1974, Mark Ratner, now at Northwestern University in Evanston, Illinois, and Ari Aviram of IBM's Thomas J. Watson Research Center in Yorktown Heights, New York, set the ball rolling. They proposed that a molecule with a conjugated electronic structure — one in which single and double bonds alternate — could act as a rectifier1. This basic electronic device allows current to flow in one direction only. Shortly afterwards, a team led by chemist Hideki Shirakawa, now at the University of Tsukuba in Japan, discovered how to make a form of polyacetylene that conducted electricity2, raising hopes that single molecules of the polymer could act as wires.

But subsequent progress in computing with molecules was frustratingly slow. In 1992, theoretical biologist John Hopfield, now at Princeton University in New Jersey, was moved to observe: “The field suffers from an excess of imagination and a deficiency of accomplishment.” Yet in the past couple of years, the field's reputation has improved, boosted by the demonstration of molecular systems that can carry out the logic operations required of computer circuitry. That is a long way from a practical molecular computer. But it is proof of principle. “Molecular electronics offers a means for the information industry to continue the exponential increase in performance and lower cost for another two to perhaps even five decades,” says Stanley Williams, an expert in nanoscale electronics at the Hewlett-Packard Research Laboratories in Palo Alto, California.

Molecular memory: this prototype RAM is based on the protein bacteriorhodopsin.

The first step on the road to molecular computing was to produce molecules that can be switched between two stable states, capable of encoding a binary 1 or 0 in a memory device. In the early 1990s, Robert Birge and his colleagues at Syracuse University in New York State demonstrated the principle by developing optical random-access memories (RAMs) based on films of the protein bacteriorhodopsin3. This molecule can be switched from one state to another by the absorption of light. But Birge's RAMs did not work at the single-molecule level — limits to the focusing of the laser beams used for writing and reading from the devices meant that many thousands of molecules in the film were switched in unison.

In England, chemist Fraser Stoddart's studies were yielding synthetic molecules that could act as switches. With his research groups at the University of Sheffield and later at the University of Birmingham, Stoddart developed techniques for making rotaxanes, which consist of a hoop-shaped molecule threaded onto a linear ‘axle’. Bulky ‘stopper’ groups at either end of the axle keep the hoop from unthreading. In 1994, Stoddart and his colleagues made a rotaxane with an axle bearing two distinct ‘donor’ groups with which the hoop could interact. By oxidizing and reducing the molecules electrochemically, the hoop could be made to jump controllably from one group to the other4.

Making switches talk

To perform logic operations with such molecules, the switching caused by an input signal must be able to induce a change in an output signal. Only then might the switches ‘talk’ to each other or to other devices in a circuit. In most prototype molecular logic gates, the output is optical: an input signal alters the molecule's light emission or absorption properties.

As early as the 1980s, Birge devised molecular ‘NAND’ gates. These have two inputs and one output that switches off (output 0) only when both inputs are activated (inputs 1,1); any other combination of inputs gives an output of 1. In Birge's molecules, two light-sensitive groups, or chromophores, influenced the absorption of light by an output chromophore by processes involving the transfer of electrons or energy within the molecules5.
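For concreteness, the NAND truth table can be written out explicitly; the sketch below (in Python, chosen purely for illustration) captures the logic alone, not Birge's photochemistry. NAND gates matter because, in principle, any other logic function can be built from them.

```python
# NAND gate: the output is 0 only when both inputs are 1.
def nand(a: int, b: int) -> int:
    return 0 if (a, b) == (1, 1) else 1

for a in (0, 1):
    for b in (0, 1):
        print(f"inputs ({a},{b}) -> output {nand(a, b)}")
# inputs (0,0) -> 1, (0,1) -> 1, (1,0) -> 1, (1,1) -> 0
```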

More recently, researchers have created other varieties of logic gate. In 1998, a team led by Stoddart and Vincenzo Balzani at the University of Bologna in Italy created a molecular ‘XOR’ gate that used chemical input signals while producing an optical output due to the fluorescence of a chromophore. An XOR gate generates an output signal of 1 if the inputs are different (0,1 or 1,0), and an output of 0 if they are the same (0,0 or 1,1). Stoddart and Balzani achieved this in a pseudorotaxane — a threaded hoop with no end stoppers, so that the hoop, which suppresses the chromophore's fluorescence, can slip off the axle (see Figure 2)6.

Figure 2: Light switch. The addition of an acid (H+) or a base (B) to this pseudorotaxane allows the hoop to slip off the axle, which then fluoresces. When both the acid and the base are added together, the hoop remains in place and there is no fluorescence.

In April this year, chemists A. Prasanna de Silva and Nathan McClenaghan of the Queen's University of Belfast in Northern Ireland unveiled molecules with receptor groups that bind either calcium ions or protons, which together could do simple arithmetic7. The molecules carried chromophores with optical outputs that were altered by this binding. When both molecules were mixed in solution, their combined optical responses to the presence or absence of calcium ions and protons encoded binary responses to simple sums. For example, an input of ‘1,1’ gave an output of ‘10’, the binary equivalent of 2. While molecules that ‘know’ 1+1=2 might not seem like much competition for silicon circuitry, they represent a significant step in the right direction.
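In circuit terms, what the de Silva and McClenaghan pair of molecules emulates is a half-adder, in which an XOR gate supplies the sum bit and an AND gate the carry bit. A minimal sketch of that arithmetic (the logic only, not the chemistry):

```python
# Half-adder: adds two bits, giving a carry bit (AND) and a sum bit (XOR).
def half_adder(a: int, b: int) -> tuple[int, int]:
    carry = a & b   # 1 only when both inputs are 1
    total = a ^ b   # 1 when exactly one input is 1
    return carry, total

carry, total = half_adder(1, 1)
print(f"1 + 1 = {carry}{total} in binary")   # prints: 1 + 1 = 10 in binary
```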

Meanwhile, Balzani's team has described a photochemical system showing threshold behaviour, like that seen in nerve cells. This system consists of an organic molecule called 4′-methoxyflavylium and a cobalt complex, the absorption spectra of both being altered by light-driven reactions. A solution containing both molecules could ‘integrate’ two optical signals, ‘firing’ (absorbing light at a particular wavelength) only when these signals exceed a certain threshold8. This behaviour can only be achieved in conventional electronics by combining several components.
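In abstract terms, the behaviour Balzani's team reports is that of a threshold element: the output fires only when the summed inputs exceed a preset level, much as a neuron integrates its synaptic inputs. A schematic sketch with arbitrary illustrative numbers (not values from the paper):

```python
# Threshold element: 'fires' only when the combined inputs exceed a set level.
def threshold_gate(inputs, threshold=1.5):
    return 1 if sum(inputs) > threshold else 0

print(threshold_gate([1.0, 0.0]))  # 0: one input alone stays below the threshold
print(threshold_gate([1.0, 1.0]))  # 1: the two inputs together exceed it
```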

Simple molecular logic

All these examples involve molecules floating around in solution. De Silva suggests that their potential lies partly with the development of new chemical sensor systems. But solution-based molecular devices with optical outputs are of questionable value for complex information processing, because the emission or absorption of light is a general, multidirectional signal that does not readily allow one molecule to communicate specifically with another. “The significant problem we could never solve was how to connect the output of one gate to the input of another,” says Birge of his pioneering 1980s work. That problem still dogs the field. “The chaining of one gate to another is ungainly, at best,” de Silva concedes.

“Eventually, the molecular components of the system should be fixed in addressable arrays,” concludes Balzani. This forces the issue of connectivity: how to wire them up. And that points towards electronics, rather than light or chemical diffusion, as the means of controlling the interactions between individual logic gates. Williams of Hewlett-Packard goes so far as to dismiss many of the demonstrations of molecular logic as “bogus”, because the output signal is different from the input signal. “There is no way to couple circuit elements together without transducers that would completely dominate the system,” he observes.

What is needed, some researchers suggest, are molecules with electronic properties that mimic those of the workhorse of today's microprocessors: the silicon field-effect transistor (FET). This has three terminals: the source (input), drain (output) and gate (control) electrodes. In 1998, Cees Dekker and his colleagues at Delft University of Technology in the Netherlands created a rudimentary three-terminal transistor consisting of a carbon nanotube connected to two metal electrodes9. But the ability to wire up a single molecule like this is unusual; it was feasible only because the nanotubes are several micrometres long.

FETs do not merely control an output — they also introduce amplification, or gain. In theory, a molecule could do this job. In 1997, researchers led by Mark Reed of Yale University in New Haven, Connecticut, and James Tour of Rice University in Houston showed that the molecule benzene-1,4-dithiolate can transmit current across the junction between two gold electrodes10. Norton Lang of IBM's Watson research centre and his colleagues have recently shown theoretically that gain could be achieved at such a molecular junction if a third, control terminal were added11 — although adding this is likely to prove extremely difficult in practice.

Heath: fabricating solid-state molecular gates.

All the same, the field of molecular logic is starting to feel its way towards the fabrication of practical electronic devices. This was illustrated in dramatic fashion last year when Williams, working with Jim Heath and colleagues at the University of California at Los Angeles — including Stoddart, who is now based there — made solid-state logic gates based on rotaxanes12.

With these molecules, the axle was kinked into a ‘V’ shape, and the switching did not seem to depend on motion of the hoop on the axle — instead, the molecules' electrical conductivity was switched by oxidizing the axle unit. The researchers first deposited a film of the rotaxanes onto an electrode. The molecules in this monolayer were oriented so that the apex of each V pointed down at the electrode. They then deposited tiny metal wires on top of the rotaxane film. Applying a voltage to these wires oxidized the rotaxane molecules and irreversibly ‘opened’ the switch. Heath and his colleagues also showed that these switches could be connected into configurations that acted as two different varieties of logic gate. Each device comprised many thousands of rotaxane molecules. But in principle they could be scaled down to single-molecule dimensions, if electrodes could be made small enough.

Almost entirely memory

While such devices could form the basis of a molecular computer, the architecture of such a machine is likely to be quite different from that of today's microchips. “The big mistake that most people in this field are making,” says Williams, “is to attempt to reproduce the functionality of discrete silicon devices in molecules, and then string the molecules together in the same way a silicon integrated circuit is built.” Williams points out that the molecular switches he devised in collaboration with Heath's team are basically two-state memory elements. If you have enough memory, he says, computation can be largely done using ‘look-up’ tables rather than, as at present, calculating everything from scratch. “Computers based on molecular components will be almost entirely memory,” Williams predicts.
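The look-up-table idea is simple to state in software terms: compute each answer once, store it in cheap, plentiful memory, and thereafter replace calculation with retrieval. A toy sketch of the principle Williams has in mind (illustrative only):

```python
# Look-up-table computing: trade abundant memory for on-the-fly arithmetic.
# Precompute every possible 8-bit addition once...
ADD_TABLE = {(a, b): a + b for a in range(256) for b in range(256)}

# ...so that 'adding' later is just a memory read, with no arithmetic circuitry.
def add_via_lookup(a: int, b: int) -> int:
    return ADD_TABLE[(a, b)]

print(add_via_lookup(200, 55))  # 255
```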

Nanowires could control molecular devices. Credit: HEWLETT-PACKARD LABORATORIES

Phil Kuekes, a colleague of Williams who specializes in computer architecture, says that the Hewlett-Packard team is already thinking about how to wire molecular memory arrays into nanometre-scale devices. Their approach borrows from the 1970s concept of ‘programmable logic arrays’, in which single chips using conventional electronic circuitry could perform basic logic operations, such as adding two bits together. Kuekes says the goal is to reproduce this function in molecular memory circuits measuring 100 nanometres across, which could be wired up using a grid of self-assembling nanowires with the arrays sandwiched at the crossing points. Significantly, Williams and his colleagues have just demonstrated the growth of such wires13.
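The geometry Kuekes describes can be pictured as a crossbar: each molecular switch sits at the junction of one horizontal and one vertical nanowire, and is addressed by selecting that pair of wires. A schematic model of the addressing scheme (an illustration, not Hewlett-Packard's actual design):

```python
# Crossbar memory: one bit is stored at each crossing point of a row wire
# and a column wire; selecting the two wires addresses a single junction.
class Crossbar:
    def __init__(self, rows: int, cols: int):
        self.bits = [[0] * cols for _ in range(rows)]

    def write(self, row: int, col: int, value: int) -> None:
        self.bits[row][col] = value

    def read(self, row: int, col: int) -> int:
        return self.bits[row][col]

grid = Crossbar(8, 8)
grid.write(2, 5, 1)
print(grid.read(2, 5))  # 1
```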

As well as thinking more deeply about computer architecture, molecular logic researchers will need to address some fundamental obstacles inherent to the technology. For one thing, logic circuits consisting of arrays of molecules are sure to contain defects, since no chemical synthesis is perfect. But Heath and Williams argue that experience with the Teramac, a massively parallel but defect-ridden experimental computer built by Hewlett-Packard, suggests it should be possible to devise software routines to search for usable elements in an array containing many defective components14.
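The Teramac strategy amounts to testing the array after fabrication, recording which elements respond correctly, and configuring the machine to use only those. A much-simplified sketch of that defect-mapping step (the test routine here is hypothetical, not the Teramac software):

```python
import random

# Hypothetical self-test: reports whether the element at (row, col) works.
def element_works(row: int, col: int) -> bool:
    return random.random() > 0.05   # assume roughly 5% of elements are defective

# Scan the whole array once and keep a map of usable elements; subsequent
# configuration routes logic only through the elements recorded here.
def map_good_elements(rows: int, cols: int) -> set:
    return {(r, c) for r in range(rows) for c in range(cols) if element_works(r, c)}

usable = map_good_elements(16, 16)
print(f"{len(usable)} of {16 * 16} elements usable")
```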

At the scale of individual molecules, quantum effects might also limit reliability. In theory, molecular switches could operate incredibly quickly, with switching times measured in femtoseconds (10−15 seconds), rather than the nanoseconds of today's silicon FETs. At these fleeting timescales, the Uncertainty Principle will limit the precision with which the states of the switches can be measured — and thus the feasibility of storing and processing information. “This entire issue has been ignored in much of the recent work because such considerations appear to dampen the excitement,” says Birge. But he believes it should be possible to counter the problem by applying some redundancy — encoding each bit of information in more than one molecule.
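The scale of the problem can be gauged from the energy-time uncertainty relation; the numbers below are a rough order-of-magnitude estimate, not figures from Birge:

\[ \Delta E \, \Delta t \gtrsim \frac{\hbar}{2} \quad\Rightarrow\quad \Delta E \gtrsim \frac{\hbar}{2\,\Delta t} \approx \frac{6.6 \times 10^{-16}\ \text{eV s}}{2 \times 10^{-15}\ \text{s}} \approx 0.3\ \text{eV} \]

An energy uncertainty of a few tenths of an electronvolt is comparable to the energy scales that typically separate the two states of a molecular switch, which is why femtosecond-scale read and write operations would blur the stored bit unless each bit is spread over several molecules.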

Aside from the technical obstacles, there is also the inbuilt inertia of the multibillion-dollar computer chip industry, which is understandably reluctant to abandon silicon for approaches that have yet to yield a single commercial device. But the developments of the past few years have convinced some within the computer industry that molecular logic will eventually deliver the goods. “I was one of the biggest sceptics,” says Williams. “Now I believe that this is the inevitable wave of the future.”