Computer science

Data analysis meets quantum physics

A technique that combines machine learning and quantum computing has been used to identify the particles known as Higgs bosons. The method could find applications in many areas of science. See Letter p.375

With the advent of high-performance computing and the ability to process enormous amounts of data, the need for advanced data-analysis techniques continues to grow. This is particularly true for experiments at the Large Hadron Collider near Geneva, Switzerland, where particle collisions occur up to 40 million times per second [1], generating enormous data sets. These data sets often contain only a tiny number of the particles of interest: for instance, the particles called Higgs bosons are produced approximately once every billion collisions [2,3]. On page 375, Mott et al. [4] report a data-analysis technique that unites machine learning and quantum computing, and apply it to the problem of identifying Higgs bosons. Their approach has advantages over conventional methods [5,6] and opens up further opportunities for research.

Rare events at the Large Hadron Collider, such as the production of a Higgs boson, are identified using classifiers — combinations of variables whose values depend on the particles produced in the collisions. Classifiers need to be optimized to maximize the sensitivity of the data analysis to rare events and to reject the typically abundant background events that result from ordinary particle-physics processes. Such optimization has conventionally been achieved either by testing combinations of variables manually or by using machine-learning techniques. Each of these approaches has advantages in different situations, and both were instrumental in the discovery of the Higgs boson [7,8] in 2012.
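To make the idea of a classifier concrete, here is a deliberately toy sketch (not the authors' analysis; all variable names and numbers are invented): a human-constructed classifier can be as simple as a cut on a few kinematic variables, tuned to maximize a significance estimate such as the signal count divided by the square root of the background count.

```python
import math

# Toy events: each is a dict of kinematic variables (invented values).
signal = [{"pt": 60.0, "mass": 125.1}, {"pt": 55.0, "mass": 124.8}]
background = [{"pt": 30.0, "mass": 90.0}, {"pt": 70.0, "mass": 110.0},
              {"pt": 40.0, "mass": 126.0}]

def significance(cut):
    """Approximate analysis sensitivity as s / sqrt(b), with b floored at 1."""
    s = sum(1 for event in signal if cut(event))
    b = sum(1 for event in background if cut(event))
    return s / math.sqrt(max(b, 1))

def cut(event):
    """A human-constructed classifier: high transverse momentum and a
    reconstructed mass near the Higgs boson's ~125 GeV."""
    return event["pt"] > 50 and abs(event["mass"] - 125) < 2

print(significance(cut))  # both signal events pass, no background -> 2.0
```

Optimizing such a cut by hand means scanning thresholds on each variable, which is the manual, human-time-intensive procedure described above.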

Human-constructed classifiers often have a physically intuitive meaning and can be optimized using a relatively small amount of data. However, the optimization procedure can require a substantial investment of human time. Furthermore, it rarely fully exploits correlations between variables, meaning that the data analysis is not as sensitive to the signal of interest as it could be.

By contrast, machine-learning techniques mostly need computing time rather than human time to find optimal variable combinations. In addition, they exploit both linear and non-linear correlations between variables, thereby maximizing the sensitivity of the data analysis. However, these methods require a substantial amount of data, and the optimal classifier usually does not have a clear physical meaning.

Mott and colleagues report an alternative data-analysis technique called quantum annealing for machine learning (QAML) that has the advantages of both of the conventional approaches. Rather than relying on humans to test all possible combinations of variables, the optimization problem is converted into a form that a quantum computer can understand. The computer, or a classical simulation thereof, is then tasked with finding the optimal classifier. The final classifier has a physically intuitive meaning and takes linear correlations between variables into account.
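One way to picture this conversion — a hedged sketch under simplifying assumptions, not Mott and colleagues' exact construction — is as a quadratic unconstrained binary optimization (QUBO) problem, the native input format of a quantum annealer. Deciding which candidate variables to include in the classifier becomes a search for the bit string of lowest "energy": linear terms reward variables that correlate with the signal, and quadratic terms penalize including pairs of variables that are redundant with each other. All coefficients below are invented, and the tiny problem is solved by brute-force enumeration in place of an annealer.

```python
from itertools import product

# QUBO coefficients for three candidate variables (invented numbers):
# h[i] rewards including variable i (its correlation with the signal label);
# J[(i, j)] penalizes including a correlated, redundant pair.
h = [-0.9, -0.7, -0.2]
J = {(0, 1): 0.8, (0, 2): 0.1, (1, 2): 0.1}

def energy(bits):
    """QUBO energy: sum_i h_i x_i + sum_{i<j} J_ij x_i x_j (lower is better)."""
    total = sum(h[i] * bits[i] for i in range(len(bits)))
    total += sum(J[i, j] * bits[i] * bits[j] for (i, j) in J)
    return total

# A quantum annealer searches for the lowest-energy bit string; for three
# bits we can simply enumerate all 2**3 assignments.
best = min(product((0, 1), repeat=3), key=energy)
print(best)  # -> (1, 0, 1): variable 1 is dropped as redundant with variable 0
```

The selected bit string is itself the classifier recipe — each chosen variable is used with equal weight — which is why the result retains a physically intuitive meaning.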

In the context of Higgs-boson identification, the authors demonstrate that QAML needs a relatively small data set to obtain a maximally sensitive data analysis. By contrast, they find that conventional machine-learning techniques [5,6] that require a large amount of data provide only minimal gains in sensitivity.

In principle, the advantages of QAML should be transferable to other data-analysis problems. However, Mott and colleagues' sensitivity comparison was evaluated in the case of an event that is relatively easy to identify: the decay of a Higgs boson into a pair of photons. Additional studies are needed to conclude whether or not the sensitivity of QAML data analyses is competitive with conventional approaches [5,6] when studying more-complicated physical processes.

Although it can be beneficial to determine an optimal classifier using only a small data set, many analyses in particle physics require large data sets for other reasons, such as reducing uncertainties. Therefore, if machine-learning techniques outperform QAML, particle physicists will tend to use machine learning to maximize analysis sensitivity, irrespective of the requirement for a large data set.

Instead, QAML is likely to be most useful in situations in which machine learning is either not possible or not particularly beneficial. By providing a moderate gain in analysis sensitivity, with minimal work required, and retaining physical intuition about each variable in the analysis, QAML could benefit many particle-physics studies. However, the approach will probably be even more valuable outside particle physics, in fields in which data sets are often smaller and the gains from QAML are therefore larger.

Mott et al. find no evidence that QAML implemented on a quantum computer is in any way superior to simulating the same process on an ordinary computer. In fact, the authors show that their quantum computer performed slightly worse than the simulation because of limitations in its hardware. Furthermore, the relatively small size of Mott and colleagues' quantum computer meant that the authors needed to use Boolean weights when optimizing classifiers — each variable was either used or not, and the variables that were used were given equal weight in the analysis. As noted by the authors, the sensitivity of QAML could be improved by using a more powerful quantum computer that allows variables to have different weights and exploits anti-correlations between variables.

Although the quantum-computing hardware used by the authors was a limitation in their work, the lack of advantage over a classical simulation means that researchers can benefit from QAML on ordinary computers. Furthermore, Mott and colleagues' classical simulation can already use weighted variables when optimizing classifiers, suggesting that the authors' results could be improved in the future.



References

  1. Evans, L. & Bryant, P. J. J. Instrum. 3, S08001 (2008).

  2. Antchev, G. et al. (TOTEM Collaboration) Phys. Rev. Lett. 111, 012001 (2013).

  3. de Florian, D. et al. (LHC Higgs Cross Section Working Group) CERN Yellow Report 2017-002-M (2017).

  4. Mott, A., Job, J., Vlimant, J.-R., Lidar, D. & Spiropulu, M. Nature 550, 375–379 (2017).

  5. Chollet, F. (2015).

  6. Chen, T. & Guestrin, C. Preprint (2016).

  7. Aad, G. et al. (ATLAS Collaboration) Phys. Lett. B 716, 1–29 (2012).

  8. Chatrchyan, S. et al. (CMS Collaboration) Phys. Lett. B 716, 30–61 (2012).


Author information



Corresponding author

Correspondence to Steven Schramm.

Related links in Nature Research

Artificial intelligence: A social spin on language analysis

Nobel 2013 Physics: Endowing particles with mass



Cite this article

Schramm, S. Data analysis meets quantum physics. Nature 550, 339–340 (2017).
