
Review Article

Physics-informed machine learning

Abstract

Despite great progress in simulating multiphysics problems using the numerical discretization of partial differential equations (PDEs), one still cannot seamlessly incorporate noisy data into existing algorithms, mesh generation remains complex, and high-dimensional problems governed by parameterized PDEs cannot be tackled. Moreover, solving inverse problems with hidden physics is often prohibitively expensive and requires different formulations and elaborate computer codes. Machine learning has emerged as a promising alternative, but training deep neural networks requires big data, which are not always available for scientific problems. Instead, such networks can be trained from additional information obtained by enforcing the physical laws (for example, at random points in the continuous space-time domain). Such physics-informed learning integrates (noisy) data and mathematical models, and implements them through neural networks or other kernel-based regression networks. Moreover, it may be possible to design specialized network architectures that automatically satisfy some of the physical invariants for better accuracy, faster training and improved generalization. Here, we review some of the prevailing trends in embedding physics into machine learning, present some of the current capabilities and limitations, and discuss diverse applications of physics-informed learning for both forward and inverse problems, including discovering hidden physics and tackling high-dimensional problems.
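
To make the training idea concrete, the following is a minimal, hypothetical sketch in PyTorch (an illustration of the general approach, not the authors' implementation; the toy Poisson problem, network size and optimizer settings are all chosen for exposition). A small network learns the solution of a one-dimensional boundary-value problem by minimizing the PDE residual at random collocation points together with the boundary conditions; automatic differentiation supplies all derivatives, so no mesh is ever generated.

```python
import math
import torch

# Minimal PINN sketch (illustrative, not the authors' code): solve
# u''(x) = -pi^2 sin(pi x) on [0, 1] with u(0) = u(1) = 0,
# whose exact solution is u(x) = sin(pi x).
torch.manual_seed(0)

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    # Random collocation points in the interior: meshless by construction.
    x = torch.rand(128, 1, requires_grad=True)
    u = net(x)
    # Automatic differentiation yields u' and u'' at the collocation points.
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    residual = d2u + math.pi**2 * torch.sin(math.pi * x)
    # Physics loss (PDE residual) plus boundary-condition loss.
    xb = torch.tensor([[0.0], [1.0]])
    loss = (residual**2).mean() + (net(xb)**2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Noisy observations, when available, simply enter as one more mean-squared term in the same loss, which is how physics-informed learning blends data with mathematical models.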

Key points

  • Physics-informed machine learning seamlessly integrates data and mathematical physics models, even in partially understood, uncertain and high-dimensional contexts.

  • Kernel-based or neural network-based regression methods offer effective, simple and meshless implementations (see the kernel regression sketch after this list).

  • Physics-informed neural networks are effective and efficient for ill-posed and inverse problems, and combined with domain decomposition are scalable to large problems.

  • Operator regression, search for new intrinsic variables and representations, and equivariant neural network architectures with built-in physical constraints are promising areas of future research.

  • There is a need for developing new frameworks and standardized benchmarks as well as new mathematics for scalable, robust and rigorous next-generation physics-informed learning machines.
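
As a concrete illustration of the kernel-based route mentioned above, here is a hypothetical sketch of plain Gaussian process (kernel) regression in NumPy (a standard textbook construction, not code from the review; the RBF kernel, length scale and noise level are illustrative choices). The posterior mean fits scattered, noisy observations using only pairwise kernel evaluations, with no mesh:

```python
import numpy as np

# Kernel-based regression sketch (illustrative): fit scattered, noisy 1D data.
rng = np.random.default_rng(0)

def rbf(xa, xb, ell=0.2):
    # Squared-exponential kernel k(x, x') = exp(-(x - x')^2 / (2 ell^2)).
    return np.exp(-0.5 * (xa[:, None] - xb[None, :]) ** 2 / ell**2)

# Noisy observations of sin(2 pi x) at 20 random locations.
x_obs = rng.uniform(0.0, 1.0, 20)
y_obs = np.sin(2 * np.pi * x_obs) + 0.1 * rng.standard_normal(20)

# Gaussian process posterior mean at query points
# (0.01 on the diagonal models the observation noise).
x_query = np.linspace(0.0, 1.0, 100)
K = rbf(x_obs, x_obs) + 0.01 * np.eye(20)
y_pred = rbf(x_query, x_obs) @ np.linalg.solve(K, y_obs)
```

Physics can enter this setting by adapting the kernel to the governing differential operator, which is the idea behind kernel-based solvers for PDEs.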


Fig. 1: Physics-inspired neural network architectures.
Fig. 2: Inferring the 3D flow over an espresso cup based on Tomo-BOS imaging data using physics-informed neural networks (PINNs).
Fig. 3: Physics-informed filtering of in-vivo 4D-flow magnetic resonance imaging data of blood flow in a porcine descending aorta.
Fig. 4: Uncovering edge plasma dynamics.
Fig. 5: Transitions between metastable states.



Acknowledgements

We thank H. Owhadi (Caltech) for his insightful comments on the connections between NNs and kernel methods. G.E.K. acknowledges support from the DOE PhILMs project (no. DE-SC0019453) and OSD/AFOSR MURI grant FA9550-20-1-0358. I.G.K. acknowledges support from DARPA (PAI and ATLAS programmes) as well as an AFOSR MURI grant through UCSB. P.P. acknowledges support from the DARPA PAI programme (grant HR00111890034), the US Department of Energy (grant DE-SC0019116), the Air Force Office of Scientific Research (grant FA9550-20-1-0060), and DOE-ARPA (grant 1256545).

Author information


Contributions

Authors are listed in alphabetical order. G.E.K. supervised the project. All authors contributed equally to writing the paper.

Corresponding author

Correspondence to George Em Karniadakis.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information

Nature Reviews Physics thanks the anonymous reviewers for their contribution to the peer review of this work.

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Related links

ADCME: https://kailaix.github.io/ADCME.jl/latest

DeepXDE: https://deepxde.readthedocs.io/

GPyTorch: https://gpytorch.ai/

NeuroDiffEq: https://github.com/NeuroDiffGym/neurodiffeq

NeuralPDE: https://neuralpde.sciml.ai/dev/

Neural Tangents: https://github.com/google/neural-tangents

PyDEns: https://github.com/analysiscenter/pydens

PyTorch: https://pytorch.org

SciANN: https://www.sciann.com/

SimNet: https://developer.nvidia.com/simnet

TensorFlow: www.tensorflow.org

Glossary

Multi-fidelity data

Data of variable accuracy.

Lax–Oleinik formula

A representation formula for the solution of the Hamilton–Jacobi equation.

Deep Galerkin method

A physics-informed neural network-like method with random sampling.

Lyapunov stability

Characterization of the robustness of dynamic behaviour to small perturbations, in the neighbourhood of an equilibrium.

Gappy data

Data sets with regions of missing data.

ReLU activation function

Rectified linear unit, ReLU(x) = max(0, x).

Double-descent phenomenon

The phenomenon in which increasing model capacity beyond the interpolation point results in improved performance.

Restricted Boltzmann machines

Generative stochastic artificial neural networks that can learn a probability distribution over their set of inputs.

Aleatoric uncertainty

Uncertainty due to the inherent randomness of data.

Epistemic uncertainty

Uncertainty due to limited data and knowledge.

Arbitrary polynomial chaos

A type of generalized polynomial chaos with measures defined by data.

Boussinesq approximation

An approximation used in gravity-driven flows, which ignores density differences except in the gravity term.

Committor function

A function used to study transitions between metastable states in stochastic systems; starting from a given state, it gives the probability of reaching one metastable set before another.
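
For concreteness, a standard formulation (a textbook definition, stated here as an assumption rather than quoted from the review): for disjoint metastable sets A and B of a process X_t, with first hitting times \tau_A and \tau_B,

```latex
q(x) = \mathbb{P}\big(\tau_B < \tau_A \,\big|\, X_0 = x\big),
\qquad q\rvert_{A} = 0, \quad q\rvert_{B} = 1.
```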

Allen–Cahn type system

A type of system with both reaction and diffusion.

hp-refinement

Dual refinement of the mesh by increasing either the number of subdomains or the degree of the approximation.

Hölder regularization

A regularization term, associated with the Hölder constants of the differential equation, that controls the derivatives of the neural network.

Rademacher complexity

A quantity that measures the richness of a class of real-valued functions with respect to a probability distribution.

Koopman model

Linear model of a (nonlinear) dynamical system obtained via Koopman operator theory.

Nesterov iterations

Iterations of Nesterov's accelerated gradient method, used here for the numerical computation of equilibria.

ISOMAP

A nonlinear dimensionality reduction technique for embedding intrinsically low-dimensional data from high-dimensional representations to lower-dimensional spaces.

t-SNE

t-distributed stochastic neighbour embedding. A nonlinear dimensionality reduction technique for embedding intrinsically low-dimensional data from high-dimensional representations to lower-dimensional spaces.

Diffusion maps

A nonlinear dimensionality reduction technique for embedding intrinsically low-dimensional data from high-dimensional representations to lower-dimensional spaces.


About this article


Cite this article

Karniadakis, G.E., Kevrekidis, I.G., Lu, L. et al. Physics-informed machine learning. Nat Rev Phys 3, 422–440 (2021). https://doi.org/10.1038/s42254-021-00314-5

