From probing living cells under a microscope to scanning the heavens for gravity waves, the limitations of precision measurements constrain our capacity to discover more about the world. But what exactly are those limits?
Just how accurate can measurements get? Whereas classical physics places no fundamental limits on how well we can do, in the quantum world it's a different story. Writing in Physical Review Letters, Giovannetti, Lloyd and Maccone^{1} derive general limits for the precision with which a single variable can be measured quantum mechanically.
But is this new? After all, Heisenberg's uncertainty principle — one of the earliest results in quantum mechanics — already places a fundamental limitation on the precision with which we can make a measurement. In its simplest form, the uncertainty principle identifies so-called complementary observables, pairs of quantities for which knowing one quantity precisely means that the other can only be poorly known. This fundamental principle makes it impossible to learn everything about a quantum-mechanical system.
If we monitor only one quantity, however, there is no such in-principle limitation. In fact, this is exactly the strategy exploited in interferometric measurements, in which light travels down a pair of distinct paths and the difference between the two path lengths leads to an observable change in the output of the device. This path difference can be measured to an arbitrary accuracy. But what if we are given some constraint, such as a total energy budget or total light intensity? We all know that it is easier to see in a well-lit room than in a dim one. Similarly, the higher the energy or light intensity in an interferometer, the higher its resolution. One may therefore ask, for a fixed budget, how small a path difference can be discerned?
Our intuition from everyday experience tells us that the most promising strategy for measuring a distance is to choose a measuring stick with marked intervals of length comparable to the distance we wish to measure. We would not, for example, choose a metre stick to measure a molecule. Following similar logic, we might choose the wavelength of light for our interferometer to be comparable to the path difference we want to measure. Surprisingly, Giovannetti and colleagues' latest result^{1} can be used to show that, for optimal quantum strategies, there is no such bias to the size of our measuring stick or the separation of its tick marks.
An optimal strategy refers to a measurement procedure that minimizes the effects of noise on a signal. Ultimately, any measurement is limited by the amount of noise in the system: to discern a signal, the signal-to-noise ratio should be around one or larger. This premise underpins all parameter-estimation theory, both classical and quantum. Classically, statistical averaging over N repeated but independent measurements will lead to a √N reduction in the noise. This improvement is known to be optimal because it achieves the bound, known as the Cramér-Rao lower bound^{2}, that expresses the best accuracy that can be accomplished in the statistical estimation of a parameter. When this classical bound is generalized to repeated quantum measurements, the analogous quantum bound provides a tighter form of the uncertainty principle recast in the language of parameter estimation^{3}. However, quantum theory allows much more freedom in choosing measurement strategies than is possible in the classical world.
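This classical √N improvement is easy to check numerically. The sketch below (a hypothetical simulation, not part of the authors' analysis) averages N independent noisy measurements of a fixed parameter and confirms that the spread of the estimate shrinks roughly tenfold when N grows from 1 to 100:

```python
import math
import random

random.seed(0)

def error_of_mean(n_samples, n_trials=2000, true_value=1.0, noise=0.5):
    """Empirical standard deviation of the mean of n_samples noisy measurements."""
    means = []
    for _ in range(n_trials):
        samples = [true_value + random.gauss(0.0, noise) for _ in range(n_samples)]
        means.append(sum(samples) / n_samples)
    mu = sum(means) / n_trials
    return math.sqrt(sum((m - mu) ** 2 for m in means) / n_trials)

# Averaging 100 samples should cut the noise by about sqrt(100) = 10.
ratio = error_of_mean(1) / error_of_mean(100)
```

The measured ratio hovers near 10, the √N prediction for N = 100.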
One of the most bizarre features of the quantum world is quantum ‘entanglement’, which allows systems to exhibit stronger correlations than are possible classically. Using entanglement and other tricks, quantum mechanics has led us to devise sophisticated information-processing algorithms that one day may lie at the heart of the enormous speed-ups promised by quantum computation. For example, searching for a needle in a haystack would be much faster — in principle — on a quantum computer than a classical one. The possibility of using entangled systems and/or entangled measurements, and sophisticated algorithms built into measurement devices, raises questions about the ultimate (most general) quantum bounds to measurement.
Giovannetti and colleagues' key insight^{1} into this question is to recast the measurement process in terms of quantum circuits, analogous to electrical circuits, with various quantum gates, similar to logic gates, representing different quantum-mechanical ‘operators’. They then introduce black-box operators that perturb the quantum state in a known fashion, but by an unknown amount. Such an operation might, for instance, be adding a phase delay along one arm of an interferometer: the unknown parameter associated with the black box thus corresponds to the parameter we would like to estimate. Once such a black box is conceptualized, it may be reused in the circuit again and again (each black box having the same unknown parameter). The beauty of this language lies in its generality, which allows a rich class of measurement strategies involving N such identical black boxes in a circuit of arbitrary design.
Using this formalism, Giovannetti et al. show that the optimal accuracy achievable in estimating the value of the black-box parameter can be obtained in a simple circuit with N black boxes, running on an N-fold entangled state. Surprisingly, recourse to entangled measurements (joint measurements of multiple paths of the circuit), or rearrangements of the circuit to correspond to sophisticated quantum-search strategies, will not lead to any further improvement.
What is this optimal performance? In fact, it depends entirely on the range of observable values of the black-box operator. In any circuit with N black boxes, the noise associated with the estimation of the black boxes' parameter will be reduced at most N-fold compared with the noise in the best circuit with only a single black box. That represents a considerable advantage over the √N improvement of the classical case. The good (and reassuring) news is that this limit is exactly what one would have expected from a naive application of the good old Heisenberg uncertainty principle: it is none other than the Heisenberg limit.
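The size of this advantage is easy to quantify. For a budget of N probes, the classical strategy reaches an error of order 1/√N, whereas the Heisenberg limit allows 1/N; a toy comparison with illustrative numbers (not from the paper):

```python
import math

N = 10**6  # resource budget, e.g. number of photons or black-box uses

classical_error = 1 / math.sqrt(N)   # shot-noise scaling, 1/sqrt(N)
heisenberg_error = 1 / N             # Heisenberg-limit scaling, 1/N

# With a million probes, the quantum strategy is a thousand times
# more precise than classical averaging for the same budget.
advantage = classical_error / heisenberg_error   # equals sqrt(N)
```

The gap grows as √N, so it widens without bound as more resources become available.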
So what relevance does all this have to the choice of size in our metre sticks? Well, let's return to our interferometer. For a given energy budget (or light intensity), but freedom in our choice of wavelength, we would naively expect the shorter wavelength to yield higher sensitivity. However, the longer the wavelength, the more photons we can squeeze into our interferometer. In other words, with the same budget, we can sample the black box exactly that many more times. Indeed, the Heisenberg-limited measurement is equally good, independent of our choice of measuring stick.
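A back-of-the-envelope argument (a sketch, not from the paper) makes this wavelength independence explicit. A fixed energy budget E buys N = E/ħω photons of angular frequency ω = 2πc/λ, so N = Eλ/(2πħc), and a Heisenberg-limited phase estimate δφ ∼ 1/N translates into a path-length error

```latex
\delta L \;=\; \frac{\lambda}{2\pi}\,\delta\phi
        \;\sim\; \frac{\lambda}{2\pi}\cdot\frac{1}{N}
        \;=\; \frac{\lambda}{2\pi}\cdot\frac{2\pi\hbar c}{E\lambda}
        \;=\; \frac{\hbar c}{E},
```

which depends only on the energy budget, not on λ: doubling the wavelength halves the resolution per photon but doubles the number of photons, and the two effects cancel exactly.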
Two limitations to the strategy of Giovannetti et al.^{1} lie in the quantum version of the Cramér-Rao bound on which it is based^{3}. First, this bound can be reached only for problems involving single-parameter estimation, so extensions to multiple parameters may lead to different results. For instance, the estimation of the orientation of quantum spins (involving two unknown angles in three-dimensional space) can be enhanced by entangled measurements^{4}. Second, the Cramér-Rao bound can be achieved only for an infinite number of repeated measurements. Thus, a result that expresses the approach to this asymptote would fill a gap in our current understanding. Indeed, it may be just this discrepancy that underlies the enhanced precision in determining the orientation of quantum spins using entangled measurements — an enhancement that vanishes in the limit of an infinite number of spins^{4}.
Currently, we are far from putting the ultimate bounds described by Giovannetti et al.^{1} into practice. One example would be the Laser Interferometer Gravitational-Wave Observatory (LIGO), an exciting experiment that aims to detect tiny ripples in the fabric of spacetime. The LIGO interferometer currently implements only classical strategies scaling as 1/√N (where N is the number of photons in the interferometer). In its current setup, LIGO requires a circulating power of 10–20 kilowatts to achieve minimal sensitivities for detecting gravity waves. In principle, if we could implement a quantum-limited scheme, a similar sensitivity could be achieved with only nanowatts. Such prospects promise an even brighter future for gravity-wave astronomy in the long term — and for precision measurement in general.
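The nanowatt figure can be checked with rough numbers (illustrative assumptions: 1064-nm laser light and sensitivities compared per unit time; the exact LIGO parameters differ). Matching a classical 1/√N sensitivity with a Heisenberg-limited strategy requires only √N photons:

```python
import math

h, c = 6.626e-34, 3.0e8              # Planck constant (J s), speed of light (m/s)
wavelength = 1.064e-6                # Nd:YAG laser wavelength (m), assumed
photon_energy = h * c / wavelength   # roughly 1.9e-19 J per photon

P_classical = 1.0e4                        # 10 kW circulating power
n_classical = P_classical / photon_energy  # photons per second, roughly 5e22

# Heisenberg scaling: the same sensitivity needs only sqrt(N) photons.
n_quantum = math.sqrt(n_classical)
P_quantum = n_quantum * photon_energy      # roughly 4e-8 W: tens of nanowatts
```

The estimate lands in the tens-of-nanowatts range, consistent with the claim in the text.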
References

1. Giovannetti, V., Lloyd, S. & Maccone, L. Phys. Rev. Lett. 96, 010401 (2006).
2. Cramér, H. Mathematical Methods of Statistics 500–504 (Princeton Univ. Press, 1946).
3. Braunstein, S. L. & Caves, C. M. Phys. Rev. Lett. 72, 3439–3442 (1994).
4. Gill, R. & Massar, S. Phys. Rev. A 61, 042312 (2000).