The most sensitive phase-measuring instrument yet uses quantum trickery and a scaled-down version of the notorious Schrödinger's cat. It means that more sensitive devices for metrology and imaging could be on the way.
Elsewhere in this issue, a group of Australian researchers (Higgins et al., page 393)1 reports the construction of the most sensitive optical interferometer yet. At the heart of this success lies a cunning use of quantum feedback to minimize photon phase noise in the device — a technique that could have applications in imaging, remote sensing, gravity-wave detection and spectroscopy.
An interferometer detects differences in the phase of two waves by measuring the extent to which their amplitudes add up or cancel out when they interfere. This makes it a very useful instrument for calibrating tiny distances. The optical interferometer (that is, one using visible light) really came of age in 1887, when Albert Michelson and Edward Morley exploited its exquisite phase sensitivity to measure the speed of light2. Their result — that, however you look at it, light always travels at the same speed — was accurate enough to disprove the existence of the 'luminiferous aether', a universal medium through which light waves were thought to propagate, and thus to pave the way for Albert Einstein's special theory of relativity. The Michelson interferometer still serves today as a test bed for Einstein's general relativity: huge interferometer arms are the basis of observatories dotted around the world looking for signs of gravitational waves, predicted disturbances in the fabric of space-time.
One of these is LIGO, the Laser Interferometer Gravitational-Wave Observatory based at sites in Louisiana and Washington state. It is so sensitive that it can measure a change in the distance travelled by a photon reflected up and down the interferometer's 4-km-long arms, caused by a passing gravitational wave, as small as the diameter of a proton3. Even so, over a certain range of frequencies, LIGO is limited by 'shot noise' in the laser beams used to stake out the detector. The origin of this noise is the wave–particle duality of light, which imparts a fluctuation to the light wave's phase (Φ) that scales inversely with the square root of the number of photons (n) in the laser beam, ΔΦ ∝ 1/√n. This uncertainty determines the weakest gravity-wave amplitude that can be detected — the smaller that amplitude is, the better.
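The 1/√n shot-noise scaling is easy to reproduce with a toy Monte Carlo model. The sketch below is purely illustrative — the detection model and all names are invented for this example, and have nothing to do with LIGO's actual readout: each 'photon' clicks at one output port with probability cos²(Φ/2), and the phase is estimated from the click fraction.

```python
import math
import random

random.seed(1)

def phase_spread(phi, n, trials=2000):
    """Standard deviation of a simple shot-noise-limited phase estimate
    built from n independent photon detections."""
    p = math.cos(phi / 2) ** 2               # click probability at one output port
    estimates = []
    for _ in range(trials):
        clicks = sum(random.random() < p for _ in range(n))
        f = min(max(clicks / n, 1e-9), 1 - 1e-9)
        estimates.append(2 * math.acos(math.sqrt(f)))   # invert p = cos^2(phi/2)
    mean = sum(estimates) / trials
    return math.sqrt(sum((e - mean) ** 2 for e in estimates) / trials)

# Quadrupling the photon number should roughly halve the spread (1/sqrt(n)):
for n in (100, 400, 1600):
    print(n, round(phase_spread(1.0, n), 4))
```

Each fourfold increase in photon number roughly halves the spread of the estimate, as 1/√n predicts.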
The number of photons circulating in the interferometer is proportional to the laser intensity — so, all things being equal, the greater the circulating power, the smaller the phase uncertainty will be. But, in LIGO as in other interferometers, the power cannot be increased indefinitely: mirrors begin to shake from the photon pressure, and the noise starts going up once again. The sweet spot that minimizes noise and maximizes sensitivity gives what is known as the 'standard quantum limit' to the accuracy of an interferometer measurement. For a long time, it was considered the best you could do in experiments involving quantum particles such as photons.
But in 1981, it was pointed out that you could beat the standard limit using quantum trickery4. You could simply squeeze the noise in the photon field in such a way that, in the limit of infinite squeezing, you could lower the shot noise and make it depend not on 1/√n, but on 1/n — a quadratic improvement in accuracy at the same light intensity. This lower limit is, in fact, the best you could hope for from the photonic version of the Heisenberg uncertainty principle. This principle is the most fundamental statement of quantum uncertainty, and puts an upper limit on the accuracy to which pairs of variables such as position and momentum (or, equivalently, phase and intensity) can be known. The 1/n rule is therefore called the Heisenberg limit.
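A back-of-envelope comparison makes the quadratic improvement concrete. The function names below are just illustrative labels for the two scalings, not anyone's API:

```python
import math

def standard_quantum_limit(n):
    """Shot-noise phase uncertainty: scales as 1/sqrt(n)."""
    return 1 / math.sqrt(n)

def heisenberg_limit(n):
    """Best allowed by the uncertainty principle: scales as 1/n."""
    return 1 / n

# Same photon number, quadratically smaller uncertainty:
for n in (100, 10_000, 1_000_000):
    print(n, standard_quantum_limit(n), heisenberg_limit(n))
```

At a million photons, the Heisenberg limit buys three extra orders of magnitude in phase accuracy over the standard quantum limit, at the same light intensity.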
This all sounds very well and good, and people continue to talk about putting squeezed light into LIGO some day. But infinite squeezing is hard to come by, and the Heisenberg limit had until now never been realized in practice.
Enter, stage left, the weirdness of quantum entanglement, which occurs when the quantum states of remote particles become intertwined. In 1986, a way was proposed to get close to the Heisenberg limit not with squeezed light, but with quantum-entangled neutrons in a matter-wave interferometer5. The entanglement idea percolated along for a number of years, but really gained momentum in the past ten, when people realized that the entanglement approach to interferometry could be implemented using ideas from quantum computing such as error correction and quantum feedback6.
A quantum computer is, in essence, a big machine filled with quantum-entangled qubits. A quantum interferometer is also a big machine filled with quantum-entangled particles, and these can be treated as qubits. A popular approach to the phase-estimation problem exploits wacky beasts such as the Schrödinger's cat 'High-NOON' state7, in which all the photons are either in one arm of the interferometer or the other, but you can't tell which arm is which (Fig. 1a). In this case, a NOON state of n photons, each of wavelength λ, acts like a single high-frequency photon of wavelength λ/n. Hence, if one has ten red photons of 500 nm wavelength in an n = 10 NOON state, the result is an entangled red-photon state, but one with the resolving power of an X-ray photon of wavelength 50 nm. The shorter the wavelength, the more accurate the phase estimation. Much progress was made with such states on both the theoretical and experimental fronts, and they have got closer to the Heisenberg limit than have squeezed states. But owing to losses in the interferometer and the fragile nature of these states, they have never quite reached the mythical Heisenberg limit8.
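The λ/n behaviour follows from the fact that an n-photon NOON state accumulates phase n times faster than a single photon, so its interference fringe oscillates n times per single-photon fringe. A minimal sketch of that arithmetic (illustrative only — the fringe formula is the textbook NOON-state detection probability, not the authors' measurement):

```python
import math

wavelength_nm = 500        # the article's example photons
n = 10                     # photons in the NOON state
print(wavelength_nm / n)   # -> 50.0, the effective X-ray-like wavelength

def noon_fringe(phi, n):
    """Detection probability for an n-photon NOON state: the fringe goes
    as cos(n * phi), oscillating n times faster than a single photon's."""
    return 0.5 * (1 + math.cos(n * phi))

# A phase shift that barely moves the single-photon fringe (n = 1)
# swings the 10-photon NOON fringe from peak all the way to null:
print(noon_fringe(math.pi / 10, 1), noon_fringe(math.pi / 10, 10))
```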
Until Higgins and colleagues came along1. In January 2007, in a theoretical talk at the Physics of Quantum Electronics workshop in Snowbird, Utah, Howard Wiseman from Griffith University in Brisbane, a co-author on the paper, made the remarkable claim that you could get to the ultimate uncertainty limit by sending not Schrödinger's cat through the interferometer, but a bunch of Schrödinger's kittens — single photons. You then compensate for the lower flux and apparent lack of quantum entanglement with an elaborate quantum feedback loop (Fig. 1b). Good luck with that, I remember thinking to myself: applying a feedback loop to single photons at light speed would be technologically impossible any time soon. I am now forced to eat my hat. The authors' optical interferometer, operating at the Heisenberg limit, involves no squeezing, minimal entanglement, and no Schrödinger's cat; the quantum weirdness is in the feedback loop.
This loopy demonstration in fact implements an ingenious phase-estimation algorithm based on quantum computing9 that uses simple optics to recycle photons through the phase shift to be measured. Although the solution is too low in intensity to be of use in LIGO anytime soon — the largest number of photons the authors used was 378, whereas LIGO has a circulating power of 10¹⁴ photons per second — the work breaks new ground. It could have other, more immediate applications in areas such as quantum metrology, quantum imaging and quantum sensing.
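The flavour of such a quantum-computing-style phase-estimation algorithm can be conveyed with a classical toy simulation of Kitaev-style iterative estimation: a photon sent 2^k times through the unknown phase shift accumulates 2^k times the phase, and the reference phase is updated — 'fed back' — from the binary digits already measured. This is a sketch of the general idea only, not the authors' actual adaptive protocol; all names and parameters are invented for illustration:

```python
import math
import random

random.seed(7)

def measure(accumulated, reference):
    """One single-photon detection: click with probability
    cos^2((accumulated - reference) / 2)."""
    return random.random() < math.cos((accumulated - reference) / 2) ** 2

def kitaev_estimate(phi, bits=8, repeats=25):
    """Iterative (Kitaev-style) phase estimation: each round sends photons
    2**k times through the unknown phase, and feeds the digits already
    measured back into the reference phase."""
    est = 0.0
    for k in reversed(range(bits)):          # fine digits first, coarse digits last
        passes = 2 ** k
        clicks = sum(measure(passes * phi, passes * est) for _ in range(repeats))
        bit = 0 if clicks > repeats / 2 else 1   # majority vote on this digit
        est += bit * math.pi / passes            # refine the running estimate
    return est % (2 * math.pi)

print(round(kitaev_estimate(2.3), 3))
```

With `bits` rounds the final error is of order 2π/2^bits, while the total number of photon passes grows only linearly in the same resource count — the Heisenberg-like trade-off that recycling photons through the phase shift buys.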
So what is the immediate lesson to be learned? That tricks from quantum computing will find their practical near-term implementation in spooky gizmos with scientific and practical importance, but nothing to do with computers at all. Bravo!
Contemporary Physics (2008)