Magnetic field mapping of inaccessible regions using physics-informed neural networks

A difficult problem concerns the determination of magnetic field components within an experimentally inaccessible region when direct field measurements are not feasible. In this paper, we propose a new method for determining magnetic field components using non-disruptive magnetic field measurements on a surface enclosing the experimental region. Magnetic field components in the experimental region are predicted by numerically solving a set of partial differential equations (Ampere's law and Gauss' law for magnetism) with the aid of physics-informed neural networks (PINNs). Prediction errors due to noisy magnetic field measurements and a small number of magnetic field measurements are regularized by the physics information term in the loss function. We benchmark our model by comparing it with the multipole expansion method. The new method we present will be of broad interest to experiments requiring precise determination of magnetic field components, such as searches for the neutron electric dipole moment.

Magnetic field mapping is commonly used in many fields of science, medicine and technology such as particle accelerators, nuclear storage experiments 1-3 , cardiac beat detection 4 , magnetic resonance imaging (MRI) 5 and magnetic indoor positioning systems (IPS) 6,7 . For example, in nuclear and particle physics experiments, such as the search for the neutron electric dipole moment, it is often crucial to measure and control the magnetic field components in the experimental region, because these experiments are typically sensitive to perturbations in magnetic fields. An undetected disturbance in a magnetic field may introduce systematic uncertainties and limit the precision of the measured quantities. To minimize systematic uncertainties, magnetic field components should be monitored in real time and any unwanted field should be compensated during the operation of the experiment. Real-time measurement of the magnetic field in an experimental region of space is, however, not always practical or feasible. In most cases, the experimental region is not accessible due to a physical enclosure (e.g., a setup placed in a vacuum chamber), or placing a magnetic field sensor inside the experimental region is too disruptive to the system.
Several approaches in the literature can be utilized to solve the problems stated above. For instance, Solin et al. 8 make use of Gaussian processes (GPs) to interpolate/extrapolate ambient magnetic fields. They train the model using a data set collected by a magnetic field sensor at different locations in space and reconstruct the whole ambient magnetic field. Another method is proposed by Nouri et al. 9,10 . They introduced a non-disruptive magnetic field mapping method using exterior measurements at fixed locations that leverages the multipole expansion of the magnetic field vector. Expanding the magnetic field to some finite degree n = N , they provide a systematic way to optimize sensor locations and fit the unknown coefficients of the multipole expansion using the data from those exterior sensor measurements. This method is susceptible to noise in the data and, since the multipole expansion terms need to be chosen to match a specific field profile, the coefficients of the expansion terms are not regularized.
In this paper, we propose a robust way of predicting the magnetic field vector in the experimental region. To accomplish this, we utilize physics-informed neural networks (PINNs) 11 . PINNs incorporate prior physical knowledge about the system, expressed through its partial differential equations, into deep neural networks while still exploiting their universal function approximation property. With PINNs, data and mathematical models of physics are combined seamlessly, even in situations that are only partially understood, uncertain, or high-dimensional; in such noisy, high-dimensional settings, physics-informed learning can solve general inverse problems extremely well.
To illustrate, the first 10 f_n basis vector functions are listed in Table 1. The right-hand side of Eq. (6) is expanded to some finite order n = N and the magnetic field vector inside the volume can be interpolated using linear regression techniques.
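The regression step at the end of the multipole expansion approach can be sketched schematically. The example below is a toy version only: it uses low-order polynomials along one axis as stand-ins for the actual f_n basis vector functions of Eq. (6) (which are not reproduced here), and solves for the expansion coefficients by linear least squares.

```python
import numpy as np

# Toy stand-in for the expansion fit: model the field along one axis as a
# linear combination of polynomial basis functions and recover the
# coefficients by linear least squares.
z = np.linspace(-1.0, 1.0, 25)                  # sensor positions on the boundary
B_meas = 1.0 + 0.5 * z - 0.2 * z**2             # synthetic "measurements"
Phi = np.stack([np.ones_like(z), z, z**2], 1)   # design matrix (order N = 2)
coeffs, *_ = np.linalg.lstsq(Phi, B_meas, rcond=None)
print(np.round(coeffs, 3))  # → [ 1.   0.5 -0.2]
```

With noise-free data the fit recovers the generating coefficients exactly; with noisy data and a high expansion order, the unregularized coefficients can overfit, which is the weakness discussed above.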
Magnetic field prediction using PINNs. The exact values of the partial derivatives in (1) and (2) can be calculated by automatic differentiation 11 , which is implemented in well-known machine learning libraries such as TensorFlow 15 and PyTorch 16 . The neural network we train to approximate the magnetic field inside the region has the structure shown in Fig. 2. The hyperbolic tangent is used as the activation of each hidden layer; the other activation functions we tested did not perform as well for this network architecture. The number of hidden layers is chosen to be 4 or 8, each with 32 or 64 neurons. The performance of these 4 different-sized networks is discussed later.
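A minimal sketch of this architecture, assuming the layout described above (inputs (x, y, z), 4 hidden layers of 32 neurons with tanh activations, linear output (Bx, By, Bz)); plain numpy is used here in place of a deep learning framework, and the initialization scheme is our own illustrative choice:

```python
import numpy as np

def init_mlp(layers, seed=0):
    """Initialize a fully connected network, e.g. layers = [3, 32, 32, 32, 32, 3]:
    3 inputs (x, y, z), 4 hidden layers of 32 neurons, 3 outputs (Bx, By, Bz)."""
    rng = np.random.default_rng(seed)
    params = []
    for n_in, n_out in zip(layers[:-1], layers[1:]):
        # Xavier/Glorot initialization, a common choice with tanh activations
        w = rng.normal(0.0, np.sqrt(2.0 / (n_in + n_out)), (n_in, n_out))
        params.append((w, np.zeros(n_out)))
    return params

def mlp_forward(params, xyz):
    """Forward pass: tanh on hidden layers, linear output layer."""
    h = xyz
    for w, b in params[:-1]:
        h = np.tanh(h @ w + b)
    w, b = params[-1]
    return h @ w + b  # predicted (Bx, By, Bz)

params = init_mlp([3, 32, 32, 32, 32, 3])
points = np.array([[0.1, 0.2, 0.3], [0.0, 0.0, 0.0]])
B_pred = mlp_forward(params, points)
print(B_pred.shape)  # (2, 3)
```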
Then, the network can be trained with a combined loss function of data, curl and divergence losses, where the points r_B^i and r_d^i denote the positions of the magnetic sensors and the collocation points, respectively. N_B is the number of magnetic field sensors, N_f is the number of collocation points in the domain and B_s^i is the measured magnetic field vector at r_B^i. The weighting parameter in Eq. (7) can be adjusted according to the performance of the network. The collocation points, r_d^i, in Eqs. (9) and (10) are sampled from the volume enclosed by the surface S (Fig. 1) and can be chosen to be fixed throughout the training process 11 . However, randomly choosing collocation points in each epoch leads to quicker convergence as well as more accurate results. This is partly because fewer collocation points are needed, and since they are reassigned randomly in each iteration, they represent the domain better than any fixed collocation-point scheme. We use the ADAM optimizer 17 , an adaptive method for gradient-based first-order optimization, to minimize the loss function in Eq. (7). The general procedure for training is given in Algorithm 1. www.nature.com/scientificreports/

Experiments
Simulated experiment. In the following example, we demonstrate the capability of our magnetic field prediction model by placing an arbitrary number of triple-axis magnetic sensors on the surface of a cube.
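The combined loss can be sketched as follows. This is a simplified illustration, not the paper's implementation: the physics residuals are approximated here by central finite differences (the paper uses automatic differentiation), the weighting parameter is given the placeholder name `lam`, and the field function is a stand-in for the trained network.

```python
import numpy as np

def physics_residuals(field_fn, pts, h=1e-4):
    """Approximate div B and curl B at collocation points pts (N, 3) by
    central differences; the paper uses automatic differentiation instead."""
    def dB(axis):
        e = np.zeros(3); e[axis] = h
        return (field_fn(pts + e) - field_fn(pts - e)) / (2 * h)
    dBdx, dBdy, dBdz = dB(0), dB(1), dB(2)
    div = dBdx[:, 0] + dBdy[:, 1] + dBdz[:, 2]
    curl = np.stack([dBdy[:, 2] - dBdz[:, 1],
                     dBdz[:, 0] - dBdx[:, 2],
                     dBdx[:, 1] - dBdy[:, 0]], axis=1)
    return div, curl

def total_loss(field_fn, r_B, B_meas, r_d, lam=1.0):
    """Schematic combined loss: data misfit at sensor positions r_B plus
    lam-weighted curl and divergence residuals at collocation points r_d."""
    L_data = np.mean(np.sum((field_fn(r_B) - B_meas) ** 2, axis=1))
    div, curl = physics_residuals(field_fn, r_d)
    return L_data + lam * (np.mean(div ** 2) + np.mean(np.sum(curl ** 2, axis=1)))

# Sanity check with a uniform field, which is both divergence- and curl-free:
uniform = lambda p: np.tile([0.0, 0.0, 1.0], (len(p), 1))
r_d = np.random.default_rng(1).uniform(-1, 1, (100, 3))  # random collocation points
loss = total_loss(uniform, r_d[:10], uniform(r_d[:10]), r_d)
print(loss)  # → 0.0
```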
Magnetic field sensors are placed on the cube randomly and we generate training and validation data using the Biot-Savart law for circular current loop(s). In the next section, we give the analytical expression for the three-dimensional magnetic field vector of a single circular current loop and then construct a higher-order asymmetric magnetic field by placing multiple loops with different currents to benchmark our method. We begin by demonstrating the ability of our magnetic field reconstruction method by considering the magnetic field of a simple circular current loop (in arbitrary units). The magnetic field components of a circular current loop with radius a are given by 18,19, where E(k) and K(k) are complete elliptic integrals, ρ² ≡ x² + y², α² ≡ a² + r² − 2aρ, β² ≡ a² + r² + 2aρ, r ≡ √(x² + y² + z²) and z = r cos θ. In this work, we use arbitrary units by setting C = 1.
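These definitions can be evaluated directly with SciPy's complete elliptic integrals (note that `scipy.special.ellipk`/`ellipe` take the parameter m = k²). The formulas below follow the standard circular-loop expressions cited above; the on-axis check against the textbook result C·π·a²/(a² + z²)^{3/2} is our own sanity test.

```python
import numpy as np
from scipy.special import ellipk, ellipe

def loop_field(x, y, z, a=1.0, C=1.0):
    """Magnetic field of a circular current loop of radius a centered at the
    origin in the z = 0 plane (arbitrary units, C = 1 as in the text)."""
    rho = np.hypot(x, y)
    r2 = x**2 + y**2 + z**2
    alpha2 = a**2 + r2 - 2 * a * rho
    beta2 = a**2 + r2 + 2 * a * rho
    k2 = 1.0 - alpha2 / beta2              # parameter m = k^2 for ellipk/ellipe
    beta = np.sqrt(beta2)
    E, K = ellipe(k2), ellipk(k2)
    Bz = C / (alpha2 * beta) * ((a**2 - r2) * E + alpha2 * K)
    with np.errstate(divide="ignore", invalid="ignore"):
        Brho = C * z / (alpha2 * beta * rho) * ((a**2 + r2) * E - alpha2 * K)
    Brho = np.where(rho > 0, Brho, 0.0)    # on the axis, B_rho vanishes
    Bx = np.where(rho > 0, Brho * x / rho, 0.0)
    By = np.where(rho > 0, Brho * y / rho, 0.0)
    return Bx, By, Bz

# On the axis this reduces to the textbook formula C*pi*a^2 / (a^2 + z^2)^1.5:
Bx, By, Bz = loop_field(0.0, 0.0, 0.5)
print(np.isclose(Bz, np.pi / (1.0 + 0.25) ** 1.5))  # True
```

Superposing several such loops with different currents and positions then yields the higher-order asymmetric benchmark field described below.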
We want to show the potential of the network by comparing it to the multipole expansion method for various sensor counts and different types and levels of noise. To create a non-uniform higher-order magnetic field, we positioned 8 circular loops with different current values at positions (x = ±1.01, y = ±1, z = ±4), and the triple-axis magnetic sensors are placed randomly on the surface of a cube with side length L = 2 centered at the origin. The configuration is illustrated in Fig. 3. Our goal is to predict the magnetic field in the inner region of the surface.
The number of hidden layers and neurons of the network characterizes the complexity of the function it can approximate. Having more hidden layers and neurons should not negatively affect the performance, but training larger networks is slower and may require more care with initialization and regularization of the weights 20 . In this example, larger network sizes resulted in better performance, as expected and as shown in Table 2. Models trained in under 2 min in all cases on an NVIDIA RTX 3080 GPU.
Greater sensor counts give more information about the magnetic field of the system, and we would expect the network to use that information to predict the magnetic field better. As shown in Table 2, more sensory information led to better performance for all network structures. Moreover, lower sensor counts did not lead to divergence from the exact magnetic field. This is not the case for the multipole expansion method, as shown in Table 3. That method suffers with relatively few sensors, and its higher-order versions overfit the sensor data. Decreasing the order in this case leads to better results, but because lower orders have fewer basis functions, the method cannot predict the exact magnetic field as well as our network. This can also be seen in Figs. 4 and 5.
The performance of the network when Gaussian noise is introduced to the sensor readings is given in Table 4. This noise led to a further deterioration in the performance of the multipole expansion method. Our method was also affected, but it performed better across the various sensor counts.
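For reference, noisy training data of this kind can be generated in a single step; the noise level σ below is an illustrative choice, not a value from the study.

```python
import numpy as np

rng = np.random.default_rng(42)
B_clean = rng.uniform(-1.0, 1.0, size=(50, 3))  # 50 simulated triple-axis readings
sigma = 0.05                                    # illustrative noise level (arb. units)
B_noisy = B_clean + rng.normal(0.0, sigma, size=B_clean.shape)
print(B_noisy.shape)  # (50, 3)
```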

Mapping the magnetic field of a square coil system
To demonstrate our methodology using actual data, we conducted an experiment in which a Bartington triple-axis magnetic field probe (Mag-13MS1000) was moved to the locations of the training data collection points. To generate a non-uniform magnetic field, two rectangular coils were stacked vertically and driven with different current magnitudes in opposite directions (Fig. 6). Each face of the coils is a printed circuit board (PCB) with dimensions 55 cm × 16 cm containing 50 parallel line traces along the long side of the PCB. A current of magnitude 1 A flows counterclockwise in the top coil and a current of magnitude 0.6 A flows clockwise in the bottom coil. To isolate the field generated by the coils, at each measurement location the data is collected as the difference between the sensor measurements with the coil turned on and off. We then trained the network on the magnetic field data collected with the magnetic mapping system. The training domain is chosen as a cube.

Table 2. Error between the predicted and exact magnetic field in vector norm for various sensor counts and network structures.
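The on/off differencing step can be sketched as below. The `read_sensor` and `set_coil` interfaces are hypothetical helpers standing in for the actual probe and coil drive; the mock hardware exists only to make the example runnable.

```python
import numpy as np

def coil_field_measurement(read_sensor, set_coil):
    """Isolate the coils' contribution at one mapping location by
    differencing readings with the coil current on and off."""
    set_coil(on=True)
    B_on = read_sensor()    # coil field + ambient background
    set_coil(on=False)
    B_off = read_sensor()   # ambient background only
    return B_on - B_off     # coil field alone

# Mock hardware for illustration (values are arbitrary):
state = {"on": False}
ambient = np.array([0.2, -0.1, 48.0])   # e.g. Earth's field plus lab background
coil = np.array([1.5, 0.0, -0.7])       # contribution of the driven coils
read = lambda: ambient + (coil if state["on"] else 0.0)
B_coil = coil_field_measurement(read, lambda on: state.update(on=on))
print(B_coil)  # recovers the coil contribution [1.5, 0.0, -0.7]
```

This subtraction removes any static background field, so the training data contains only the field of the coil system itself.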

Conclusions
In this study, we presented an efficient and practical method for mapping the magnetic field in inaccessible regions. We encoded prior knowledge from Maxwell's equations for magnetostatics into a physics-informed neural network model for magnetic field prediction in regions where direct measurements are not possible. We provided two experiments that demonstrate the practicality of the proposed method: a simulated experiment showed the value of incorporating additional physics knowledge into the model, and mapping the magnetic field of a square coil system illustrated the effectiveness of the approximation technique in real-world applications.
Compared with the multipole expansion method, our method showed better performance across various sensor counts and noise levels, both on simulated data and on real-world measurement data.

Data availability
The datasets generated and/or analysed during the current study are available in the GitHub repository, https://github.com/ucoskun/bmapping-pinn/tree/main/data.