
Quantum circuit synthesis with diffusion models

A preprint version of the article is available at arXiv.

Abstract

Quantum computing has recently emerged as a transformative technology. Yet, its promised advantages rely on efficiently translating quantum operations into viable physical realizations. Here we use generative machine learning models, specifically denoising diffusion models (DMs), to facilitate this transformation. Leveraging text conditioning, we steer the model to produce desired quantum operations within gate-based quantum circuits. Notably, DMs allow us to sidestep, during training, the exponential overhead inherent in the classical simulation of quantum dynamics, a persistent bottleneck in preceding machine learning techniques. We demonstrate the model’s capabilities across two tasks: entanglement generation and unitary compilation. The model excels at generating new circuits and supports typical DM extensions such as masking and editing to, for instance, align circuit generation with the constraints of the targeted quantum device. Given their flexibility and generalization abilities, we envision DMs as pivotal in quantum circuit synthesis, both enhancing practical applications and providing insights into theoretical quantum computation.


Fig. 1: Quantum circuit generation pipeline summary.
Fig. 2: Entanglement generation.
Fig. 3: Masking and editing circuits.
Fig. 4: Unitary compilation.

Data availability

The weights of the models trained for entanglement generation and for unitary compilation used to produce the figures presented in this paper can be found at https://doi.org/10.5281/zenodo.10282061 (ref. 23). The datasets are not shared due to space constraints but can easily be generated with the released code. All necessary details are provided in Methods.

Code availability

All the resources necessary to reproduce the results in this paper are accessible at https://doi.org/10.5281/zenodo.10282061 (ref. 23). The code is provided as a Python library, genQC, which allows the user to train new models, generate circuits from pre-trained models and fine-tune them at will. The library also contains multiple examples that guide the user through the various applications of the proposed method.
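For orientation, here is a minimal, hypothetical usage sketch. The import path, class name, method signatures and prompt format below are illustrative assumptions, not the documented genQC API; the library's bundled example notebooks show the actual calls.

```python
# Hypothetical sketch of sampling circuits with the released genQC library.
# NOTE: the import path, class/method names and the prompt format are assumed
# for illustration only; see the library's example notebooks for the real API.
import torch
from genQC import DiffusionPipeline  # assumed import path

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the pre-trained weights released on Zenodo (ref. 23); path assumed.
pipeline = DiffusionPipeline.from_pretrained("weights/entanglement", device=device)

# Text-condition the generation, here (assumed prompt format) on a target
# Schmidt rank vector for the entanglement generation task.
circuits = pipeline.generate(prompt="Generate SRV: [2, 2, 1]", samples=16)
for circuit in circuits:
    print(circuit)
```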

References

  1. Feynman, R. P. Simulating physics with computers. Int. J. Theor. Phys. 21, 467–488 (1982).

  2. McArdle, S., Endo, S., Aspuru-Guzik, A., Benjamin, S. C. & Yuan, X. Quantum computational chemistry. Rev. Mod. Phys. 92, 015003 (2020).

  3. Cerezo, M., Verdon, G., Huang, H.-Y., Cincio, L. & Coles, P. J. Challenges and opportunities in quantum machine learning. Nat. Comput. Sci. 2, 567–576 (2022).

  4. Farhi, E., Goldstone, J. & Gutmann, S. A quantum approximate optimization algorithm. Preprint at https://arxiv.org/abs/1411.4028 (2014).

  5. Preskill, J. Quantum computing in the NISQ era and beyond. Quantum 2, 79 (2018).

  6. Arrazola, J. M. et al. Machine learning method for state preparation and gate synthesis on photonic quantum computers. Quantum Sci. Technol. 4, 024004 (2019).

  7. Bolens, A. & Heyl, M. Reinforcement learning for digital quantum simulation. Phys. Rev. Lett. 127, 110502 (2021).

  8. Melnikov, A. A. et al. Active learning machine learns to create new quantum experiments. Proc. Natl Acad. Sci. USA 115, 1221–1226 (2018).

  9. He, Z. et al. A GNN-based predictor for quantum architecture search. Quantum Inf. Process. 22, 128 (2023).

  10. Shen, Y. Prepare ansatz for VQE with diffusion model. Preprint at https://arxiv.org/abs/2310.02511 (2023).

  11. Zhang, S.-X., Hsieh, C.-Y., Zhang, S. & Yao, H. Neural predictor based quantum architecture search. Mach. Learn. Sci. Technol. 2, 045027 (2021).

  12. Fösel, T., Niu, M. Y., Marquardt, F. & Li, L. Quantum circuit optimization with deep reinforcement learning. Preprint at https://arxiv.org/abs/2103.07585 (2021).

  13. Ostaszewski, M., Trenkwalder, L. M., Masarczyk, W., Scerri, E. & Dunjko, V. Reinforcement learning for optimization of variational quantum circuit architectures. Adv. Neural Inf. Process. Syst. 34, 18182–18194 (2021).

  14. Zhang, Y.-H., Zheng, P.-L., Zhang, Y. & Deng, D.-L. Topological quantum compiling with reinforcement learning. Phys. Rev. Lett. 125, 170501 (2020).

  15. Moro, L., Paris, M. G., Restelli, M. & Prati, E. Quantum compiling by deep reinforcement learning. Commun. Phys. 4, 178 (2021).

  16. Sarra, L., Ellis, K. & Marquardt, F. Discovering quantum circuit components with program synthesis. Preprint at https://arxiv.org/abs/2305.01707 (2023).

  17. Preti, F. et al. Hybrid discrete–continuous compilation of trapped-ion quantum circuits with deep reinforcement learning. Preprint at https://arxiv.org/abs/2307.05744 (2023).

  18. Khatri, S. et al. Quantum-assisted quantum compiling. Quantum 3, 140 (2019).

  19. Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N. & Ganguli, S. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning 2256–2265 (PMLR, 2015).

  20. Rombach, R., Blattmann, A., Lorenz, D., Esser, P. & Ommer, B. High-resolution image synthesis with latent diffusion models. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition 10684–10695 (IEEE, 2022).

  21. Raussendorf, R. & Briegel, H. J. A one-way quantum computer. Phys. Rev. Lett. 86, 5188–5191 (2001).

  22. González-Cuadra, D. et al. Fermionic quantum processing with programmable neutral atom arrays. Proc. Natl Acad. Sci. USA 120, e2304294120 (2023).

  23. Fürrutter, F., Muñoz-Gil, G. & Briegel, H. J. genQC—quantum circuit synthesis with diffusion models. Zenodo https://doi.org/10.5281/zenodo.10282061 (2023).

  24. Ho, J. & Salimans, T. Classifier-free diffusion guidance. Preprint at https://arxiv.org/abs/2207.12598 (2022).

  25. Podell, D. et al. SDXL: improving latent diffusion models for high-resolution image synthesis. In Proc. Twelfth International Conference on Learning Representations (ICLR, 2024).

  26. Kong, Z., Ping, W., Huang, J., Zhao, K. & Catanzaro, B. DiffWave: a versatile diffusion model for audio synthesis. Preprint at https://arxiv.org/abs/2009.09761 (2020).

  27. Singer, U. et al. Make-A-Video: text-to-video generation without text-video data. In Proc. Eleventh International Conference on Learning Representations (ICLR, 2022).

  28. Watson, J. L. et al. De novo design of protein structure and function with RFdiffusion. Nature 620, 1089–1100 (2023).

  29. Radford, A. et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning 8748–8763 (PMLR, 2021).

  30. Chen, C.-F. R., Fan, Q. & Panda, R. CrossViT: cross-attention multi-scale vision transformer for image classification. In Proc. IEEE/CVF International Conference on Computer Vision 357–366 (IEEE, 2021).

  31. Vaswani, A. et al. Attention is all you need. In Advances in Neural Information Processing Systems 30 (NIPS, 2017).

  32. Krenn, M., Landgraf, J., Foesel, T. & Marquardt, F. Artificial intelligence and machine learning for quantum technologies. Phys. Rev. A 107, 010101 (2023).

  33. Krenn, M., Malik, M., Fickler, R., Lapkiewicz, R. & Zeilinger, A. Automated search for new quantum experiments. Phys. Rev. Lett. 116, 090405 (2016).

  34. Huber, M. & de Vicente, J. I. Structure of multidimensional entanglement in multipartite systems. Phys. Rev. Lett. 110, 030501 (2013).

  35. Lugmayr, A. et al. RePaint: inpainting using denoising diffusion probabilistic models. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition 11461–11471 (IEEE, 2022).

  36. Bäumer, E. et al. Efficient long-range entanglement using dynamic circuits. Preprint at https://arxiv.org/abs/2308.13065 (2023).

  37. Weiden, M., Younis, E., Kalloor, J., Kubiatowicz, J. & Iancu, C. Improving quantum circuit synthesis with machine learning. In IEEE International Conference on Quantum Computing and Engineering (QCE) 1–11 (IEEE, 2023).

  38. Dalzell, A. M. et al. Quantum algorithms: a survey of applications and end-to-end complexities. Preprint at https://arxiv.org/abs/2310.03011 (2023).

  39. Daley, A. J. et al. Practical quantum advantage in quantum simulation. Nature 607, 667–676 (2022).

  40. Niu, Z., Zhong, G. & Yu, H. A review on the attention mechanism of deep learning. Neurocomputing 452, 48–62 (2021).

  41. Wiegreffe, S. & Pinter, Y. Attention is not not explanation. In Proc. 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP, 2019).

  42. Hertz, A. et al. Prompt-to-prompt image editing with cross attention control. Preprint at https://arxiv.org/abs/2208.01626 (2022).

  43. Li, Y., Keuper, M., Zhang, D. & Khoreva, A. Divide & bind your attention for improved generative semantic nursing. Preprint at https://arxiv.org/abs/2307.10864 (2023).

  44. Raussendorf, R., Browne, D. E. & Briegel, H. J. Measurement-based quantum computation on cluster states. Phys. Rev. A 68, 022312 (2003).

  45. Briegel, H. J., Browne, D. E., Dür, W., Raussendorf, R. & Van den Nest, M. Measurement-based quantum computation. Nat. Phys. 5, 19–26 (2009).

  46. Krenn, M., Kottmann, J. S., Tischler, N. & Aspuru-Guzik, A. Conceptual understanding through efficient automated design of quantum optical experiments. Phys. Rev. X 11, 031044 (2021).

  47. Qiskit contributors. Qiskit: an open-source framework for quantum computing. Zenodo https://doi.org/10.5281/zenodo.2573505 (2023).

  48. Kazemnejad, A., Padhi, I., Ramamurthy, K. N., Das, P. & Reddy, S. The impact of positional encoding on length generalization in transformers. In Advances in Neural Information Processing Systems 36 (NIPS, 2024).

  49. Ilharco, G. et al. OpenCLIP. Zenodo https://doi.org/10.5281/zenodo.10037810 (2023).

  50. Ho, J., Jain, A. & Abbeel, P. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems 33, 6840–6851 (NIPS, 2020).

  51. Ning, M., Sangineto, E., Porrello, A., Calderara, S. & Cucchiara, R. Input perturbation reduces exposure bias in diffusion models. In International Conference on Machine Learning 26245–26265 (PMLR, 2023).

  52. Nichol, A. & Dhariwal, P. Improved denoising diffusion probabilistic models. In International Conference on Machine Learning 8162–8171 (PMLR, 2021).

  53. Kingma, D. P. & Ba, J. Adam: a method for stochastic optimization. Preprint at https://arxiv.org/abs/1412.6980 (2014).

  54. Smith, L. N. & Topin, N. Super-convergence: very fast training of neural networks using large learning rates. In Artificial Intelligence and Machine Learning for Multi-domain Operations Applications Vol. 11006, 369–386 (SPIE, 2019).

  55. Ruiz, N. et al. DreamBooth: fine tuning text-to-image diffusion models for subject-driven generation. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 22500–22510 (IEEE, 2023).

  56. Song, J., Meng, C. & Ermon, S. Denoising diffusion implicit models. Preprint at https://arxiv.org/abs/2010.02502 (2022).

  57. Lin, S., Liu, B., Li, J. & Yang, X. Common diffusion noise schedules and sample steps are flawed. In 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 5392–5399 (IEEE, 2024).

  58. Meng, C. et al. SDEdit: guided image synthesis and editing with stochastic differential equations. In International Conference on Learning Representations (ICLR, 2022).

Acknowledgements

G.M.-G. acknowledges funding from the European Union. H.J.B. acknowledges funding from the Austrian Science Fund (FWF) through [10.55776/F71] (BeyondC), the Volkswagen Foundation (Az: 97721), and the European Union (ERC Advanced Grant, QuantAI, no. 101055129). Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union, European Commission, European Climate, Infrastructure and Environment Executive Agency (CINEA), nor any other granting authority.

Author information

Contributions

Conceptualization: G.M.-G. and H.J.B. Methodology: F.F. and G.M.-G. Software: F.F. Formal analysis: F.F., G.M.-G. and H.J.B. Writing: G.M.-G., F.F. and H.J.B.

Corresponding author

Correspondence to Gorka Muñoz-Gil.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Machine Intelligence thanks Evert van Nieuwenburg, and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Quantum circuit tensor encoding.

(a) Schematic representation of the gate embeddings for single- and multi-qubit gates. (b) Quantum circuit encoding and decoding pipeline. For encoding (green arrows), an input quantum circuit (top left) is first tokenized according to the proposed vocabulary. The token matrix is then transformed into a continuous tensor using the chosen embeddings \(v_i\) (bottom right). To decode a continuous tensor into a circuit (blue arrows), we first use the cosine similarity between the input embeddings and those assigned to existing tokens to recover a tokenized matrix, which is then transformed back into a circuit by means of the vocabulary. The transformation between circuits and tokens depends on this vocabulary and can be changed at will to suit the desired computing framework or platform. Further details are given in the text.
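This encode/decode round trip can be made concrete with a short sketch. The following is a minimal toy version under assumed conventions (a four-token vocabulary, random unit-norm embeddings and a qubits × time steps token matrix); the paper's actual vocabulary and embeddings differ.

```python
# Toy sketch of the circuit <-> tensor encoding: tokenize, embed, then decode
# by cosine similarity. Vocabulary and embeddings here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
vocab = {0: "pad", 1: "H", 2: "CNOT_control", 3: "CNOT_target"}
dim = 8  # embedding dimension

# Assign each token a fixed, unit-norm embedding vector v_i.
embeddings = rng.normal(size=(len(vocab), dim))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

# Example 2-qubit circuit: H on qubit 0, then a CNOT with control 0, target 1.
tokens = np.array([[1, 2],
                   [0, 3]])  # shape (n_qubits, n_timesteps)

# Encode: replace every token with its embedding -> continuous tensor.
tensor = embeddings[tokens]  # shape (n_qubits, n_timesteps, dim)

# Decode: normalize each entry and take the token whose embedding has the
# highest cosine similarity (a dot product, since rows are unit-norm).
flat = tensor.reshape(-1, dim)
flat = flat / np.linalg.norm(flat, axis=1, keepdims=True)
decoded = np.argmax(flat @ embeddings.T, axis=1).reshape(tokens.shape)
assert (decoded == tokens).all()
print([[vocab[t] for t in row] for row in decoded])
```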

Extended Data Fig. 2 Entanglement generation dataset distribution.

Characteristics of the training dataset used for the entanglement generation task, sampled according to Extended Data Table 1 and balanced as described in Methods, Training section. Depending on the training step (max or bucket padding; see the aforementioned section), we sample batches either from the whole dataset or from buckets containing circuits with a fixed number of qubits (see the sketch below). (a) Number of distinct circuits as a function of the number of qubits. For lower qubit counts, fewer distinct circuits exist, resulting in an inevitably lower count in the training dataset. (b) Distribution of circuit lengths, which are in this case multiples of the U-Net scaling factor of 4, owing to the length padding explained in Methods, Pipeline and Architectures section.
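A minimal sketch of the bucket-padding sampler follows, under assumed details (one bucket per qubit count, buckets picked with probability proportional to their size); max padding would instead sample from the whole dataset and pad every tensor to the largest qubit count.

```python
# Sketch of bucket-padding batch sampling: each batch is drawn from a single
# bucket of circuits sharing the same qubit count, so tensors in a batch have
# identical shape. Details are assumed for illustration.
import random
from collections import defaultdict

def bucket_batches(dataset, batch_size, seed=0):
    """dataset: iterable of (circuit_tensor, n_qubits) pairs."""
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for circuit, n_qubits in dataset:
        buckets[n_qubits].append(circuit)
    qubit_counts = list(buckets)
    weights = [len(buckets[q]) for q in qubit_counts]
    while True:
        # Pick a bucket with probability proportional to its size, so every
        # circuit is equally likely to appear across batches.
        q = rng.choices(qubit_counts, weights=weights)[0]
        yield rng.sample(buckets[q], k=min(batch_size, len(buckets[q])))
```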

Extended Data Fig. 3 Machine learning architectures.

(a) Scheme of the denoising U-Net architecture predicting the noise \(\epsilon_\theta(x_t, t, c)\). First, we project the input tensor features into a higher-dimensional space through a convolutional layer (red) and apply a 2D sinusoidal positional encoding. A typical encoder–decoder structure follows, with skip connections scaled by \(1/\sqrt{2}\). The time-step encoding t is injected into the residual convolution layers (turquoise). The condition embeddings c are input to the residual transformer blocks (purple), as detailed in Methods, Pipeline and Architectures section; a minimal sketch of such a block follows. All transformer blocks have a residual connection. (b) Scheme of the unitary encoder used to transform input unitaries into conditionings.
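As a concrete illustration of how the condition embeddings c enter the network, here is a minimal PyTorch sketch of a residual cross-attention block and of the \(1/\sqrt{2}\)-scaled skip merge; the dimensions and layer composition are illustrative assumptions rather than the exact blocks used in the paper.

```python
# Minimal sketch of one residual transformer block with cross-attention
# conditioning, plus the scaled skip-connection merge. Sizes are illustrative.
import math
import torch
import torch.nn as nn

class ResidualCrossAttentionBlock(nn.Module):
    def __init__(self, dim: int, cond_dim: int, heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        # Queries come from the (flattened) circuit features; keys and values
        # come from the condition embeddings c.
        self.attn = nn.MultiheadAttention(dim, heads, kdim=cond_dim,
                                          vdim=cond_dim, batch_first=True)

    def forward(self, x: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim); c: (batch, cond_tokens, cond_dim)
        h, _ = self.attn(self.norm(x), c, c)
        return x + h  # residual connection around the block

def merge_skip(decoder_feat: torch.Tensor, skip_feat: torch.Tensor) -> torch.Tensor:
    # Encoder-decoder skip connections scaled by 1/sqrt(2), keeping the
    # variance of the merged features roughly constant.
    return (decoder_feat + skip_feat) / math.sqrt(2)

block = ResidualCrossAttentionBlock(dim=64, cond_dim=32)
x = torch.randn(2, 16, 64)  # e.g. flattened circuit tensor features
c = torch.randn(2, 4, 32)   # e.g. text-token condition embeddings
out = block(x, c)           # shape (2, 16, 64)
```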

Extended Data Fig. 4 Generated circuit lengths distributions.

Distribution of circuit lengths with respect to the number of entangled qubits for: (a) the training (balanced) dataset of Extended Data Fig. 2, filtered for 5-qubit circuits; (b,c) generated circuits with an input tensor constraining a maximum of 16 and 24 gates, respectively.

Extended Data Table 1 Dataset sampling parameters
Extended Data Table 2 Training parameters
Extended Data Table 3 Sampling parameters

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Fürrutter, F., Muñoz-Gil, G. & Briegel, H.J. Quantum circuit synthesis with diffusion models. Nat Mach Intell 6, 515–524 (2024). https://doi.org/10.1038/s42256-024-00831-9
