Machine learning and serving of discrete field theories

A method for machine learning and serving of discrete field theories in physics is developed. The learning algorithm trains a discrete field theory from a set of observational data on a spacetime lattice, and the serving algorithm uses the learned discrete field theory to predict new observations of the field for new boundary and initial conditions. The approach of learning discrete field theories overcomes the difficulties associated with learning continuous theories by artificial intelligence. The serving algorithm of discrete field theories belongs to the family of structure-preserving geometric algorithms, which have been proven to be superior to the conventional algorithms based on discretization of differential equations. The effectiveness of the method and algorithms developed is demonstrated using the examples of nonlinear oscillations and the Kepler problem. In particular, the learning algorithm learns a discrete field theory from a set of data of planetary orbits similar to what Kepler inherited from Tycho Brahe in 1601, and the serving algorithm correctly predicts other planetary orbits, including parabolic and hyperbolic escaping orbits, of the solar system without learning or knowing Newton’s laws of motion and universal gravitation. The proposed algorithms are expected to be applicable when the effects of special relativity and general relativity are important.


INTRODUCTION AND STATEMENT OF THE PROBLEM
Data-driven methodology has attracted much attention recently in the physics community. This is not surprising, since one of the fundamental objectives of physics is to deduce or discover the laws of physics from observational data. The rapid development of artificial intelligence technology raises the question of whether such deductions or discoveries can be carried out algorithmically by computers.
The problem addressed in this paper belongs to a new category. The method proposed learns a field theory from a given set of training data consisting of observed values of a physical field at discrete spacetime locations. The laws of physics are fundamentally expressed in the form of field theories rather than differential equations. It is thus more important to learn the underpinning field theories when possible. Since field theories are in general simpler than the corresponding differential equations, learning field theories is easier, which is true for both human intelligence and artificial intelligence. Except for the fundamental assumption that the observational data are governed by field theories, the learning and serving algorithms proposed do not assume any knowledge of the laws of physics, such as Newton's law of motion and Schrödinger's equation. This is in stark contrast to all other methodologies of machine learning in physics.
Without loss of generality, let's briefly review the basics of field theories using the example of a first-order field theory in the space R^n for a scalar field ψ. A field theory is specified by a Lagrangian density L(ψ, ∂ψ/∂x^α), where x^α (α = 1, ..., n) are the coordinates of R^n. The theory requires that, with the value of ψ fixed at the boundary, ψ(x) varies with respect to x in such a way that the action of the system,

  A[ψ] = ∫ L(ψ, ∂ψ/∂x^α) d^n x,   (1)

is minimized. Such a requirement of minimization is equivalent to the condition that the following Euler-Lagrange (EL) equation is satisfied everywhere in R^n,

  EL(ψ) ≡ ∂L/∂ψ − (∂/∂x^α)[∂L/∂(∂ψ/∂x^α)] = 0,   (2)

where summation over the repeated index α is implied. The problem of machine learning of field theories can be stated as follows.

Problem Statement 1. For a given set of observed values of ψ on a set of discrete points in R^n, find the Lagrangian density L(ψ, ∂ψ/∂x^α) as a function of ψ and ∂ψ/∂x^α, and design an algorithm to predict new observations of ψ from L.
Now it is clear that learning the Lagrangian density L(ψ, ∂ψ/∂x^α) is easier than learning the EL equation (2), which depends on ψ in a more complicated manner than L does. For example, the EL equation depends on the second-order derivatives ∂²ψ/∂x^α∂x^β, while L does not.
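As a standard one-dimensional illustration of this point (a textbook instance, not taken from the data discussed in this paper), consider a Lagrangian that contains only ψ and its first time derivative; the EL equation it generates nevertheless contains a second derivative:

```latex
% L depends only on \psi and \dot{\psi}, yet the EL equation contains \ddot{\psi}.
L(\psi,\dot{\psi}) = \tfrac{1}{2}\dot{\psi}^{2} - V(\psi)
\quad\Longrightarrow\quad
\frac{\partial L}{\partial \psi}
  - \frac{d}{dt}\frac{\partial L}{\partial \dot{\psi}}
  = -V'(\psi) - \ddot{\psi} = 0 .
```

The object to be learned, L, is a smooth function of two scalar arguments, while reproducing the dynamics through the EL equation requires handling ψ̈ as well.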
However, learning L from a given set of observed values of ψ is not an easy task either, for two reasons. Suppose that L is modeled by a neural network. First, we need to train L using the EL equation, which requires the knowledge of ∂ψ/∂x^α and ∂²ψ/∂x^α∂x^β. For this purpose, we can set up another neural network for ψ(x), which needs to be trained simultaneously with L. This is obviously a complicated situation. Alternatively, one may wish to calculate ∂²ψ/∂x^α∂x^β from the training data. But it may not be possible to calculate them with the desired accuracy, depending on the nature of the training data. Secondly, even if the optimized neural network for L is known, serving the learned field theory by solving the EL equation with a new set of boundary conditions presents a new challenge. The first-order derivatives ∂ψ/∂x^α and second-order derivatives ∂²ψ/∂x^α∂x^β are hidden inside the neural network for L, which is nonlinear and possibly deep. Solving differential equations defined by neural networks ventures into uncharted territory. As will be shown in Sec. 2, reformulating the problem in terms of discrete field theory overcomes both difficulties. Problem Statement 1 will be replaced by Problem Statement 2 in Sec. 2. To learn a discrete field theory, it suffices to learn a discrete Lagrangian density L_d, a function with n + 1 inputs, which are the values of ψ at n + 1 adjacent spacetime locations. The training of L_d is straightforward. Learning serves the purpose of serving, and the most effective way to serve a field theory with long-term accuracy and fidelity is by offering the discrete version of the theory, as has been proven by the recent advances in structure-preserving geometric algorithms. Therefore, learning a discrete field theory directly from the training data and then serving it constitutes an attractive approach for discovering physical models by artificial intelligence.
It has long been theorized, since Euclid's study of mirrors and optics, that the most fundamental law of physics is that all nature does is to minimize certain actions [84,85]. But how does nature do that? The machine learning and serving algorithms of discrete field theories proposed here may provide a clue, when incorporating the basic concept of the simulation hypothesis by Bostrom [86]. The simulation hypothesis states that the physical universe is a computer simulation, and it is being carefully examined by physicists as a possible reality [87-89]. If the hypothesis is true, then spacetime is necessarily discrete. So are the field theories of physics. It is then reasonable to suggest that some machine learning and serving algorithms of discrete field theories are what the discrete universe, i.e., the computer simulation, runs to minimize the actions.
In Sec. 2, the learning and serving algorithms of discrete field theories are developed.
Two examples of learning and predicting nonlinear oscillations in 1D are given in Sec. 3 to demonstrate the method and algorithms. In Sec. 4, I apply the methodology to the Kepler problem. The learning algorithm learns a discrete field theory from a set of observational data for the orbits of Mercury, Venus, Earth, Mars, Ceres, and Jupiter, and the serving algorithm correctly predicts other planetary orbits, including the parabolic and hyperbolic escaping orbits, of the solar system. It is worthwhile to emphasize that the serving and learning algorithms do not know, learn, or use Newton's laws of motion and universal gravitation. The discrete field theory directly connects the observational data and new predictions. Newton's laws are not needed.

Figure 1. Spacetime lattice and discrete field ψ. The discrete Lagrangian density L_d(ψ_{i,j}, ψ_{i+1,j}, ψ_{i,j+1}) of the grid cell whose lower left vertex is at the grid point (i, j) is chosen to be a function of the values of the discrete field at the three vertices marked by solid circles. The action A_d of the system depends on ψ_{i,j} through the discrete Lagrangian densities of the three neighboring grid cells indicated by gray shading.

MACHINE LEARNING AND SERVING OF DISCRETE FIELD THEORIES
In this section, I first describe the formalism of discrete field theory on a spacetime lattice, then the algorithm for learning discrete field theories from training data, and finally the serving algorithm that predicts new observations using the learned discrete field theories. The connection between the serving algorithm and structure-preserving geometric integration methods is highlighted.
To simplify the presentation and without losing generality, the theory and algorithms are given for the example of a first-order scalar field theory in R². One of the dimensions will be referred to as time, with coordinate t, and the other as space, with coordinate x.
Generalizations to high-order theories and to tensor fields or spinor fields are straightforward.
For a discrete field theory in R², the field ψ_{i,j} is defined on a spacetime lattice labeled by two integer indices (i, j). For simplicity, let's adopt the rectangular lattice shown in Fig. 1. The first index i identifies temporal grid points, and the second index j spatial grid points.
The discrete action A_d of the system is the summation of discrete Lagrangian densities over all grid cells,

  A_d = Σ_{i,j} ΔtΔx L_d(ψ_{i,j}, ψ_{i+1,j}, ψ_{i,j+1}),   (3)

where Δt and Δx are the grid sizes in time and space, respectively, and L_d(ψ_{i,j}, ψ_{i+1,j}, ψ_{i,j+1}) is the discrete Lagrangian density of the grid cell whose lower left vertex is at the grid point (i, j). I have chosen L_d to be a function of ψ_{i,j}, ψ_{i+1,j}, and ψ_{i,j+1} only, which is suitable for first-order field theories. For instance, in the continuous theory for wave dynamics, the Lagrangian density is

  L = (1/2)(∂ψ/∂t)² − (1/2)(∂ψ/∂x)².   (4)

Its counterpart in the discrete theory can be written as

  L_d(ψ_{i,j}, ψ_{i+1,j}, ψ_{i,j+1}) = (1/2)[(ψ_{i+1,j} − ψ_{i,j})/Δt]² − (1/2)[(ψ_{i,j+1} − ψ_{i,j})/Δx]².   (5)

The discrete Lagrangian density L_d defined in Eq. (5) can be viewed as an approximation of the continuous Lagrangian density L in Eq. (4). But I prefer to take L_d as an independent object that defines a discrete field theory.
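To make Eqs. (3)-(5) concrete, the following sketch evaluates the discrete action of the wave example on a lattice; the NumPy array layout psi[i, j] and the function names are illustrative choices, not part of the original formulation.

```python
import numpy as np

def L_d_wave(p00, p10, p01, dt, dx):
    """Discrete wave Lagrangian density, Eq. (5), for one grid cell with
    vertex values p00 = psi_{i,j}, p10 = psi_{i+1,j}, p01 = psi_{i,j+1}."""
    return 0.5 * ((p10 - p00) / dt) ** 2 - 0.5 * ((p01 - p00) / dx) ** 2

def discrete_action(psi, dt, dx):
    """Discrete action A_d, Eq. (3): cell volume times the sum of the
    discrete Lagrangian densities of all grid cells."""
    p00 = psi[:-1, :-1]   # psi_{i,j}
    p10 = psi[1:, :-1]    # psi_{i+1,j}
    p01 = psi[:-1, 1:]    # psi_{i,j+1}
    return dt * dx * np.sum(L_d_wave(p00, p10, p01, dt, dx))
```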
For the discrete field theory, the condition of minimizing the discrete action A_d with respect to each ψ_{i,j} demands

  ∂A_d/∂ψ_{i,j} = ΔtΔx [D_1 L_d(ψ_{i,j}, ψ_{i+1,j}, ψ_{i,j+1}) + D_2 L_d(ψ_{i−1,j}, ψ_{i,j}, ψ_{i−1,j+1}) + D_3 L_d(ψ_{i,j−1}, ψ_{i+1,j−1}, ψ_{i,j})] = 0,   (6)

where D_k denotes the partial derivative of L_d with respect to its k-th argument, and the three terms come from the three grid cells that contain ψ_{i,j} (see Fig. 1). Equation (6) is called the Discrete Euler-Lagrange (DEL) equation, for the obvious reason that its continuous counterpart is the EL equation (2) with x¹ = t and x² = x. Following the notation of the continuous theory, I also denote the expression in the square brackets of Eq. (6) by an operator EL_{i,j}(ψ), which maps the discrete field ψ_{i,j} into another discrete field.
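A direct transcription of the operator EL_{i,j}(ψ) in Eq. (6) could look as follows; here dLd is a placeholder for any model of L_d that returns its three partial derivatives (D_1 L_d, D_2 L_d, D_3 L_d), and the loop-based implementation is chosen for clarity rather than speed.

```python
import numpy as np

def del_residual(psi, dLd):
    """EL_{i,j}(psi) of Eq. (6) on the interior points of the lattice.

    dLd(a, b, c) returns the tuple (D1, D2, D3) of partial derivatives of
    L_d(a, b, c), with a = psi_{i,j}, b = psi_{i+1,j}, c = psi_{i,j+1}."""
    I, J = psi.shape
    r = np.zeros((I, J))
    for i in range(1, I - 1):
        for j in range(1, J - 1):
            d1 = dLd(psi[i, j], psi[i + 1, j], psi[i, j + 1])[0]
            d2 = dLd(psi[i - 1, j], psi[i, j], psi[i - 1, j + 1])[1]
            d3 = dLd(psi[i, j - 1], psi[i + 1, j - 1], psi[i, j])[2]
            r[i, j] = d1 + d2 + d3    # three cells sharing psi_{i,j}
    return r
```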
The DEL equation is employed to solve for the discrete field ψ on the spacetime lattice when a discrete Lagrangian density L_d is prescribed. This has been the only usage of the DEL equation in the literature so far [48,49]. I will come back to this point shortly.
For the problem posed in the present study, the discrete Lagrangian density L_d is unknown. It needs to be learned from the training data. Specifically, in terms of the discrete field theory, the learning problem discussed in Sec. 1 becomes:

Problem Statement 2. For a given set of observed data ψ̄_{i,j} on a spacetime lattice, find the discrete Lagrangian density L_d(ψ_{i,j}, ψ_{i+1,j}, ψ_{i,j+1}) as a function of ψ_{i,j}, ψ_{i+1,j}, and ψ_{i,j+1}, and design an algorithm to predict new observations of ψ_{i,j} from L_d.
Unlike the difficult situation described in Sec. 1 for learning a continuous field theory, learning a discrete field theory is straightforward. The algorithm is obvious once the problem is stated as in Problem Statement 2. We set up a function approximation for L_d with three inputs and one output, using a neural network or any other approximation scheme adequate for the problem under investigation. The approximation is optimized by adjusting its free parameters to minimize the loss function on the training data ψ̄,

  F(ψ̄) = Σ_{i,j} [EL_{i,j}(ψ̄)]²,   (7)

where the sum is over the interior grid points of the lattice, with I and J the total numbers of grid points in time and space, respectively. In Problem Statement 2 and in the definition of the loss function (7), it is implicitly assumed that the training data are available over the entire spacetime lattice.
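With the residual operator above, the loss of Eq. (7) takes one line; this snippet reuses del_residual from the previous sketch.

```python
def loss(psi_bar, dLd):
    """Loss F of Eq. (7): squared DEL residuals summed over the interior
    grid points of the training data psi_bar."""
    r = del_residual(psi_bar, dLd)
    return float(np.sum(r[1:-1, 1:-1] ** 2))
```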
Notice that according to Eqs. (6) and (7), first-order derivatives of L_d with respect to all three of its arguments are required to evaluate F(ψ̄). Automatic differentiation algorithms [29], which have been widely adopted in artificial neural networks, can be applied. To train the neural network or other approximation for L_d, established methods, including Newton's root-searching algorithm and the Adam optimizer [90], are available.
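For completeness, here is a generic optimization loop with the standard Adam update rule [90]; the finite-difference gradient is only a stand-in for automatic differentiation, and all names and hyperparameter values are illustrative.

```python
import numpy as np

def numgrad(f, theta, h=1e-6):
    """Forward-difference gradient of a scalar function f(theta); in
    practice automatic differentiation would be used instead."""
    g = np.zeros_like(theta)
    f0 = f(theta)
    for k in range(theta.size):
        t = theta.copy()
        t[k] += h
        g[k] = (f(t) - f0) / h
    return g

def adam(f, theta, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8, steps=5000):
    """Standard Adam iteration driving f(theta) toward a local minimum."""
    m = np.zeros_like(theta)
    v = np.zeros_like(theta)
    for t in range(1, steps + 1):
        g = numgrad(f, theta)
        m = beta1 * m + (1 - beta1) * g          # first-moment estimate
        v = beta2 * v + (1 - beta2) * g * g      # second-moment estimate
        mhat = m / (1 - beta1 ** t)              # bias corrections
        vhat = v / (1 - beta2 ** t)
        theta = theta - lr * mhat / (np.sqrt(vhat) + eps)
    return theta
```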
Once the discrete Lagrangian density L_d is trained, the learned discrete field theory is ready to be served to predict new observations. After boundary conditions are specified, the DEL equation (6) is solved for the discrete field ψ_{i,j}. A first-order field theory requires two boundary conditions in each dimension. As an illustrative example, let's assume that ψ_{0,j} and ψ_{1,j} are specified for all j, corresponding to two initial conditions at t = 0, and ψ_{i,0} and ψ_{i,1} are specified for all i, corresponding to two boundary conditions at x = 0. Under these boundary and initial conditions, the DEL equation (6) can be solved for the field ψ_{i,j} for all i and j as follows.

Step 1) Set index i to 1.

Step 2) Solve the DEL equation (6) at the first grid cell of the row as an algebraic equation for ψ_{2,j}; all other field values appearing in the equation are known from the initial and boundary conditions.

Step 3) Repeat Step 2) with increasing value of j to generate the solution ψ_{2,j} for all j.

Step 4) Increase index i to 2. Apply the same procedure as in Step 3) for generating ψ_{2,j} to generate ψ_{3,j} for all j.

Step 5) Repeat Step 4) with increasing i until the solution covers the entire lattice.
In a nutshell, the DEL equation at the grid cell labeled by (i, j) (see Fig. 1) is solved as an algebraic equation for ψ_{i+1,j}, as sketched in the code below. This serving algorithm propagates the solution from the initial and boundary conditions to the entire spacetime lattice, which is exactly how the physical field propagates. According to the simulation hypothesis, the algorithmic propagation and the physical propagation are actually the same thing. When different types of boundary and initial conditions are imposed, the algorithm needs to be modified accordingly, but the basic strategy remains the same. Specific cases will be addressed in future studies.
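A minimal sketch of one sweep of this propagation is given below, under the same assumptions as the earlier snippets (dLd returns the three partials of L_d); the Newton iteration, its tolerances, and the choice of initial guess are illustrative.

```python
def serve_row(psi, i, dLd, iters=50, tol=1e-12, h=1e-7):
    """Fill row i + 1 of psi by solving EL_{i,j}(psi) = 0 for psi[i+1, j],
    sweeping j upward. Rows 0 and 1 hold the initial data; columns 0 and 1
    hold the boundary data; the far column is left untouched here."""
    J = psi.shape[1]
    for j in range(2, J - 1):
        # Terms of Eq. (6) that do not contain the unknown psi[i+1, j]:
        c = (dLd(psi[i - 1, j], psi[i, j], psi[i - 1, j + 1])[1]
             + dLd(psi[i, j - 1], psi[i + 1, j - 1], psi[i, j])[2])
        u = psi[i, j]                     # initial guess for psi[i+1, j]
        for _ in range(iters):            # Newton iteration on the residual
            d1 = dLd(psi[i, j], u, psi[i, j + 1])[0]
            f = c + d1
            if abs(f) < tol:
                break
            d1h = dLd(psi[i, j], u + h, psi[i, j + 1])[0]
            if d1h == d1:                 # flat residual; stop the iteration
                break
            u -= f * h / (d1h - d1)       # finite-difference Newton step
        psi[i + 1, j] = u
```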
The above algorithms in R² can be straightforwardly generalized to R^n, where the discrete Lagrangian density L_d will be a function of n + 1 variables, i.e., ψ_{i_1,i_2,...,i_n}, ψ_{i_1+1,i_2,...,i_n}, ψ_{i_1,i_2+1,...,i_n}, ..., ψ_{i_1,i_2,...,i_n+1}. In a similar way as in R², the serving algorithm solves for ψ_{i_1,i_2,...,i_n} by propagating its values at the boundaries to the entire lattice. The algorithms can also be easily generalized to vector fields or spinor fields, as exemplified in Sec. 4. It turns out that this algorithm for serving the learned discrete field theory is a variational integrator [48,49,52]. The basic principle of variational integrators is to discretize the action and Lagrangian density instead of the associated EL equations. Methods and techniques of variational integrators have been systematically developed in the past decade [48,49]. The advantages of variational integrators over standard integration schemes based on discretization of differential equations have been amply demonstrated. For example, variational integrators in general are symplectic or multi-symplectic [48,49,51,52], and as such are able to globally bound the errors on energy and other invariants of the system for all simulation time-steps. More sophisticated discrete field theories have been designed to preserve other geometric structures of physical systems, such as the gauge symmetry [56,81] and the Poincaré symmetry [78,88,89,91]. What is proposed in this paper is to learn the discrete field theory directly from observational data and then to serve the learned discrete field theory to predict new observations.

LEARNING AND PREDICTING NONLINEAR OSCILLATIONS
In this section, I use two examples of learning and predicting nonlinear oscillations in 1D to demonstrate the effectiveness of the learning and serving algorithms. In 1D, the discrete action reduces to the summation of the discrete Lagrangian density L_d over the time grid points,

  A_d = Σ_i Δt L_d(ψ_i, ψ_{i+1}).   (8)

Here, L_d(ψ_i, ψ_{i+1}) is a function of the field at two adjacent time grid points. The DEL equation simplifies to

  EL_i(ψ) ≡ D_2 L_d(ψ_{i−1}, ψ_i) + D_1 L_d(ψ_i, ψ_{i+1}) = 0.   (9)

The training data ψ̄_i (i = 0, ..., I) form a time sequence, and the loss function on a data set ψ is

  F(ψ) = Σ_{i=1}^{I−1} [EL_i(ψ)]².   (10)

After learning L_d, the serving algorithm will predict a new time sequence for every pair of initial conditions ψ_0 and ψ_1. Note that Eq. (9) is an algebraic equation for ψ_{i+1} when ψ_{i−1} and ψ_i are known. It is an implicit two-step algorithm from the viewpoint of numerical methods for ordinary differential equations. It can be proven [49,51,52] that the algorithm exactly preserves the symplectic structure defined by

  Ω = [∂²L_d(ψ_i, ψ_{i+1})/∂ψ_i∂ψ_{i+1}] dψ_i ∧ dψ_{i+1}.   (11)

The algorithm is thus a symplectic integrator, which is able to globally bound the numerical error on energy for all simulation time-steps. Compared with standard integrators that do not possess structure-preserving properties, such as the Runge-Kutta method, variational integrators deliver much improved long-term accuracy and fidelity.

For each of the two examples, the training data taken by the learning algorithm are a discrete time sequence generated by solving the EL equation of an exact continuous Lagrangian. In 1D, the EL equation is an Ordinary Differential Equation (ODE) in time. Only the training sequence is visible to the learning and serving algorithms; the EL equation and the continuous Lagrangian are not. After learning the discrete Lagrangian from the training data, the algorithm serves it by predicting new dynamic sequences ψ_i for different initial conditions. The predictions are compared with accurate numerical solutions of the EL equation.

Before presenting the numerical results, I briefly describe how the algorithms are implemented. To learn L_d(ψ_i, ψ_{i+1}), a neural network can be set up. Since it has only two inputs and one output, a deep network may not be necessary. For these two specific examples, the functional approximation for L_d(ψ_i, ψ_{i+1}) is implemented using polynomials in terms of each of the two inputs,

  L_d(ψ_i, ψ_{i+1}) = Σ_{p=0}^{P} Σ_{q=0}^{Q} a_{pq} ψ_i^p ψ_{i+1}^q,   (12)

where a_{pq} are trainable parameters. For these two examples, I choose (P, Q) = (4, 8), and the total number of trainable parameters is 45. For high-dimensional or vector discrete field theories, such as the Kepler problem in Sec. 4, deep neural networks are probably more effective.
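To make the 1D formulation concrete, here is a self-contained sketch of the polynomial model (12) with exact partial derivatives, the loss (10), and the two-step serving iteration based on Eq. (9); the function names and Newton parameters are my own, and the training loop (e.g., the Adam sketch in Sec. 2) is omitted.

```python
import numpy as np

P, Q = 4, 8   # orders used in the text: (P + 1) * (Q + 1) = 45 parameters

def Ld(a, x, y):
    """Polynomial model (12): L_d(psi_i, psi_{i+1}) = sum a[p, q] x^p y^q."""
    return x ** np.arange(P + 1) @ a @ y ** np.arange(Q + 1)

def D1(a, x, y):
    """Exact partial derivative of L_d with respect to its first argument."""
    return (np.arange(1, P + 1) * x ** np.arange(P)) @ a[1:, :] @ y ** np.arange(Q + 1)

def D2(a, x, y):
    """Exact partial derivative of L_d with respect to its second argument."""
    return x ** np.arange(P + 1) @ a[:, 1:] @ (np.arange(1, Q + 1) * y ** np.arange(Q))

def el(a, psi):
    """1D DEL residual, Eq. (9), at the interior indices of a sequence."""
    return np.array([D2(a, psi[i - 1], psi[i]) + D1(a, psi[i], psi[i + 1])
                     for i in range(1, len(psi) - 1)])

def loss(a, psi_bar):
    """Loss of Eq. (10) on a training sequence psi_bar."""
    return float(np.sum(el(a, psi_bar) ** 2))

def predict(a, psi0, psi1, n, iters=50, h=1e-7, tol=1e-13):
    """Serve the learned L_d: from two initial values, solve Eq. (9) for
    psi_{i+1} by Newton iteration, one time step at a time."""
    seq = [psi0, psi1]
    for _ in range(n - 2):
        c = D2(a, seq[-2], seq[-1])   # part of Eq. (9) independent of the unknown
        u = seq[-1]                   # initial guess for psi_{i+1}
        for _ in range(iters):
            f = c + D1(a, seq[-1], u)
            if abs(f) < tol:
                break
            df = (D1(a, seq[-1], u + h) - D1(a, seq[-1], u)) / h
            if df == 0.0:             # flat residual; stop the iteration
                break
            u -= f / df
        seq.append(u)
    return np.array(seq)
```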
Example 1. The training data are plotted in Fig. 2 using empty square markers. They form a time sequence ψ̄_i (i = 0, ..., 50) generated by a nonlinear oscillator ODE, referred to as Eq. (13), whose restoring force contains the nonlinear factor sin ψ + 1, with initial conditions ψ(t = 0) = 1.2 and ψ′(t = 0) = 0. Here ψ′ denotes dψ/dt. The Lagrangian density for the system is of the standard form

  L = (1/2)ψ′² − U(ψ),   (14)

where U(ψ) is the potential whose derivative generates the nonlinear force in Eq. (13). The optimizer used for training the discrete Lagrangian density L_d is Newton's algorithm, and the training is stopped when the loss function F(ψ̄) on the training sequence is less than 10⁻⁷. The learned discrete field theory is then served, and its predictions are compared against the time sequences accurately solved for from the nonlinear ODE (13). The predicted sequence in Fig. 3 starts at ψ_0 = −0.6, and its dynamic characteristics are significantly different from those of the sequence in Fig. 2. The predicted sequence in Fig. 4 starts at a much smaller amplitude, i.e., ψ_0 = 0.1, and shows the behavior of linear oscillation, in contrast with the strong nonlinearity of the sequence in Fig. 2 and the mild nonlinearity of the sequence in Fig. 3. The agreement between the predictions of the learned discrete field theory and the accurate solutions of the nonlinear ODE (13) is satisfactory. These numerical results demonstrate that the proposed algorithms for machine learning and serving of discrete field theories are effective in terms of capturing the fundamental structure and predicting the complicated dynamical behavior of the physical system.

Example 2. The training data are plotted in Fig. 5 using empty square markers. They form a time sequence generated by the nonlinear ODE (15) with initial conditions ψ(t = 0) = 1.7 and ψ′(t = 0) = 0. The Lagrangian for the system is

  L = (1/2)ψ′² − V(ψ),   (16)

where V(ψ) is a nonlinear potential plotted in Fig. 6.

The learned discrete field theory predicts two very different types of dynamical sequences, shown in Fig. 7 and Fig. 8. The predicted sequences are plotted using solid circle markers, and the sequences accurately solved for from the nonlinear ODE (15) are plotted using empty square markers. The sequence predicted in Fig. 7 is a nonlinear oscillation in the small potential well between ψ = −0.1 and ψ = 1.5 on the right of Fig. 6, and the sequence predicted in Fig. 8 is a nonlinear oscillation in the small potential well between ψ = −1.3 and ψ = −0.1 on the left. For both cases, the predictions of the learned discrete field theory agree with the accurate solutions. Observe that in Fig. 6 the two small potential wells are secondary to the large potential well between ψ = ±1.6; the training sequence in Fig. 5 is an oscillation across the large well, whereas the predicted sequences in Figs. 7 and 8 are oscillations confined to the small wells.

KEPLER PROBLEM
In this section, to further demonstrate the effectiveness of the method developed, I apply it to the Kepler problem, which is concerned with the dynamics of planetary orbits in the solar system. Let us turn the clock back to 1601, when Kepler inherited the observational data of planetary orbits meticulously collected by his mentor Tycho Brahe. It took Kepler 5 years to discover his first and second laws of planetary motion, and another 78 years passed before Newton solved the Kepler problem using his laws of motion and universal gravitation [92]. Assume that we have a set of data similar to that of Kepler, as displayed in Fig. 9. For simplicity, the data are the orbits of Mercury, Venus, Earth, Mars, Ceres, and Jupiter, generated by solving Newton's equation of motion for a planet in the gravitational field of the Sun according to Newton's law of universal gravitation. The spatial and temporal normalization scale-lengths are 1 a.u. and 58.14 days, respectively, and the time-step of the orbital data is 0.05. My goal here is not to rediscover Kepler's laws of planetary motion or Newton's laws of motion and universal gravitation by machine learning. Instead, I train a discrete field theory from the orbits displayed in Fig. 9 and then serve it to predict new planetary orbits.
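Training orbits of this kind can be generated with a standard leapfrog integration of the Newtonian two-body problem; in the stated normalization (1 a.u., 58.14 days ≈ one year divided by 2π), the Sun's gravitational parameter is GM = 1, and the function name and the sample orbit below are illustrative.

```python
import numpy as np

def kepler_orbit(r0, v0, dt=0.05, n=2000, gm=1.0):
    """Leapfrog (Stoermer-Verlet) integration of r'' = -gm * r / |r|^3.

    Lengths in a.u., times in units of 58.14 days, so gm = 1 for the Sun;
    dt = 0.05 is the time-step of the orbital data quoted in the text."""
    r = np.array(r0, dtype=float)
    v = np.array(v0, dtype=float)
    orbit = np.empty((n, 2))
    a = -gm * r / np.linalg.norm(r) ** 3
    for k in range(n):
        orbit[k] = r
        v += 0.5 * dt * a                       # half kick
        r += dt * v                             # drift
        a = -gm * r / np.linalg.norm(r) ** 3
        v += 0.5 * dt * a                       # half kick
    return orbit

# Illustrative Earth-like training orbit: circular at 1 a.u.
earth = kepler_orbit(r0=(1.0, 0.0), v0=(0.0, 1.0))
```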
For this case, the discrete field theory is about a 2D vector field defined on the time grid. Denote the field as ψ_i = (x_i, y_i), where i is the index of the time grid, and x_i and y_i are the 2D coordinates of a planet in the solar system. In terms of the discrete field, the discrete action is

  A_d = Σ_i Δt L_d(ψ_i, ψ_{i+1}) = Σ_i Δt L_d(x_i, y_i, x_{i+1}, y_{i+1}),

and the DEL is a vector equation with two components,

  ∂[L_d(ψ_{i−1}, ψ_i) + L_d(ψ_i, ψ_{i+1})]/∂x_i = 0,
  ∂[L_d(ψ_{i−1}, ψ_i) + L_d(ψ_i, ψ_{i+1})]/∂y_i = 0.

The loss function on a data set ψ = (x, y) is

  F(ψ) = Σ_i { [EL_i^x(ψ)]² + [EL_i^y(ψ)]² },

where EL_i^x and EL_i^y denote the two components of the DEL. Akin to the situation in Sec. 3, the serving algorithm exactly preserves a discrete symplectic form defined by the mixed second-order derivatives of L_d, i.e., the 2D analog of Eq. (11). The served theory correctly predicts planetary orbits, including parabolic and hyperbolic escaping orbits, even though the training orbits are all elliptical; see Figs. 9 and 10. Historically, Kepler argued that escaping orbits and elliptical orbits are governed by different laws. It was Newton who discovered, or "learned", the 1/r dependency of the gravitational potential from Kepler's laws of planetary motion and Tycho Brahe's data, and unified the elliptical orbits and escaping orbits under the same law of physics. Much of what physicists have been doing since then is applying Newton's methodology to other physical phenomena. The results displayed in Figs. 11 and 12 show that the machine learning and serving algorithms solve the Kepler problem in terms of correctly predicting planetary orbits without knowing or learning Newton's laws of motion and universal gravitation.
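A sketch of the two-component DEL residual and the loss for this vector case is given below; Ld(x0, y0, x1, y1) stands for whatever learned density is used (e.g., a small neural network), and the central-difference gradient is a stand-in for automatic differentiation.

```python
import numpy as np

def grad4(Ld, z, h=1e-6):
    """Central-difference gradient of Ld(x0, y0, x1, y1) at the point z."""
    g = np.zeros(4)
    for k in range(4):
        zp, zm = z.copy(), z.copy()
        zp[k] += h
        zm[k] -= h
        g[k] = (Ld(*zp) - Ld(*zm)) / (2 * h)
    return g

def el_vec(Ld, orbit):
    """Two-component DEL residual at the interior indices of an orbit
    whose rows are (x_i, y_i)."""
    res = []
    for i in range(1, len(orbit) - 1):
        gl = grad4(Ld, np.r_[orbit[i - 1], orbit[i]])   # cell (i-1, i)
        gr = grad4(Ld, np.r_[orbit[i], orbit[i + 1]])   # cell (i, i+1)
        res.append([gl[2] + gr[0],    # derivative with respect to x_i
                    gl[3] + gr[1]])   # derivative with respect to y_i
    return np.asarray(res)

def loss_vec(Ld, orbits):
    """Loss summed over all training orbits and both components."""
    return float(sum(np.sum(el_vec(Ld, o) ** 2) for o in orbits))
```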
To complete this section, a few footnotes are in order. (i) There exist small discrepancies between the predictions from the learned discrete field theory and Newton's laws in Figs. 11 and 12 when r = √(x² + y²) ≳ 7. This is because no training orbit in this domain was provided to the learning algorithm, and the orbits predicted there are thus less accurate. (ii) The study presented is meant to be a proof of principle. Practical factors, such as three-body effects, are not included. Nevertheless, the method itself is robust against variations of the governing laws of physics, because the method does not require any knowledge of the laws of physics other than the fundamental assumption that the governing laws are field theories. In particular, the learning and serving algorithms for planetary orbits described above do not assume or make use of Newton's equation of motion and Newton's law of universal gravitation. Therefore, when the effects of special relativity or general relativity are important, the algorithms remain valid without modification. Further study will be reported in the future.

CONCLUSIONS AND DISCUSSION
In this paper, a method for machine learning and serving of discrete field theories in physics is developed. The learning algorithm trains a discrete field theory from a set of observational data of the field on a spacetime lattice, and the serving algorithm employs the learned discrete field theory to predict new observations of the field for given new boundary and initial conditions. The algorithm does not attempt to capture statistical properties of the training data, nor does it try to discover the differential equations that govern the training data. Instead, it learns a discrete field theory that underpins the observed field. Because the learned field theory is discrete, it overcomes the difficulties associated with the learning of continuous theories. Compared with continuous field theories, discrete field theories can be served more easily and with improved long-term accuracy and fidelity. The serving algorithm of discrete field theories belongs to the family of structure-preserving geometric algorithms, which have been proven to be superior to the conventional algorithms based on discretization of differential equations. The demonstrated advantages of discrete field theories relative to continuous theories in terms of machine learning compatibility are consistent with Bostrom's simulation hypothesis. The synergy between artificial intelligence and the concept of a discrete universe may bring pleasant surprises.

Finally, I should emphasize that no machine learning algorithm is meaningful or effective without presumptions. The algorithms developed here certainly do not apply to an arbitrary set of data. The data relevant to the present study are assumed to be observations of physical fields in spacetime governed by field theories.
However, the existence of a governing field theory is the only physical assumption required.Laws of physics in specific forms, such as Newton's laws of motion and gravity, special relativity and general relativity, and Schrödinger's equation, are not needed for the machine learning and serving algorithms of discrete field theories to be effective in terms of correctly predicting observations.


Figure 2. The predicted sequence ψ_i (solid circles) from the learned discrete field theory and the training sequence ψ̄_i (empty squares) are barely distinguishable in the figure. The discrete Lagrangian is trained until the loss function F(ψ̄) is less than 10⁻⁷.

Figure 3. The predicted time sequence (solid circles) agrees with the time sequence (empty squares) accurately solved for from the nonlinear ODE (13). The predicted dynamics starts at ψ_0 = −0.6, and its characteristics are significantly different from those of the training sequence in Fig. 2.

Figure 4. The predicted time sequence (solid circles) starts at ψ_0 = 0.1 and shows the behavior of linear oscillation, in contrast with the strong nonlinearity of the sequence in Fig. 2 and the mild nonlinearity of the sequence in Fig. 3. It agrees with the time sequence (empty squares) accurately solved for from the nonlinear ODE (13).

Figure 5. The training sequence (empty squares) represents a nonlinear oscillation in the large potential well between ψ = ±1.6 in Fig. 6. The trained discrete Lagrangian density L_d is accepted when the loss function F(ψ̄) on the training sequence is less than 10⁻⁷. The predicted sequence (solid circles) from the learned discrete field theory agrees very well with the training sequence.

Figure 6. The training sequence in Fig. 5 represents a nonlinear oscillation in the large potential well between ψ = ±1.6. There are two small potential wells secondary to the large potential well, one on the left between ψ = −1.3 and ψ = −0.1, and one on the right between ψ = −0.1 and ψ = 1.5.

Figure 7. The learned discrete field theory correctly predicts a nonlinear oscillation in the small potential well between ψ = −0.1 and ψ = 1.5 on the right of Fig. 6. The predicted sequence (solid circles) agrees with the accurate solution (empty squares) of the nonlinear ODE (15).

Figure 8. The learned discrete field theory correctly predicts an oscillation in the small potential well between ψ = −1.3 and ψ = −0.1 on the left of Fig. 6. The predicted sequence (solid circles) agrees with the accurate solution (empty squares) of the nonlinear ODE (15).

Figure 9. Orbits of Mercury, Venus, Earth, Mars, Ceres, and Jupiter, generated by solving Newton's equation of motion for a planet in the gravitational field of the Sun according to Newton's law of universal gravitation. These orbits are the training data for the discrete field theory.

Figure 10. Orbits of Mercury, Venus, Earth, Mars, Ceres, and Jupiter. The orbits indicated by red markers are generated by the learned discrete field theory. The orbits indicated by green markers are the training orbits from Fig. 9.

Figure 11.

Figure 12.