Prominence of the training data preparation in geomagnetic storm prediction using deep neural networks

The direct interaction between large-scale interplanetary disturbances emitted from the Sun and the Earth's magnetosphere can lead to geomagnetic storms, which represent the most severe space weather events. In general, geomagnetic activity is measured by the Dst index; consequently, its accurate prediction is one of the main subjects in space weather studies. In this scenario, we predict the Dst index during quiet and disturbed geomagnetic conditions using the interplanetary magnetic field and the solar wind parameters. To accomplish this task, we analyzed the response of a newly developed neural network using interplanetary parameters as inputs. We demonstrate that the training procedure strongly affects the capability of correctly forecasting stormy and disturbed geomagnetic periods. Indeed, the strategy proposed for creating the training and validation datasets plays a fundamental role in guaranteeing good performance of the proposed neural network architecture.

www.nature.com/scientificreports/

plasma temperature (T), density (D), total speed (V), pressure (P), and the east-west component of the electric field (Ey, derived from Bz and Vx). The dataset covers the period January 1990 to November 2019 and includes half of the 22nd solar cycle, all of the 23rd, and almost all of the 24th. To produce a robust forecast of the Dst index, it is crucial to determine how the dataset is split and processed for the training and evaluation of the model. Moreover, adopting a correct methodology for treating the data is essential to avoid bias, especially when a machine-learning approach is used to develop predictive models and the data are time series.
Periodicity and arrow of time. If the data are periodic, it is safest to train the model on at least one complete period and test it on different periods. Since the arrow of time is fixed and the future is unknown, training on points that follow the data used in the test can introduce bias. Therefore, the validation and test datasets must be constructed from points of the time series that follow those used for training. In the present case, since we have data covering roughly two complete solar cycles, the best option is to use one cycle for training and the other for both validation and test. However, such a choice forces the validation set to contain data from the first half of a solar cycle, with a distribution of Dst values and storms different from that of the test set. Therefore, in our opinion, the most effective choice is to select the points of the validation and test sets randomly from the held-out data.
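The two options just described can be sketched as follows; the array size is purely illustrative and stands in for the hourly samples of the held-out cycle:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hourly samples available in the held-out cycle (size illustrative).
n_eval = 10_000
eval_idx = np.arange(n_eval)

# Ordered split: validation is the first half of the cycle, test the second.
val_ordered, test_ordered = eval_idx[: n_eval // 2], eval_idx[n_eval // 2:]

# Random split: shuffle once, then divide into two equal halves.
shuffled = rng.permutation(eval_idx)
val_random = np.sort(shuffled[: n_eval // 2])
test_random = np.sort(shuffled[n_eval // 2:])

# The random halves interleave over the whole period, so both see early-
# and late-cycle conditions; the ordered halves do not.
```

Both splits keep every evaluation point out of training; only the random one gives validation and test similar Dst distributions.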
Forecast of rare events (storms). Training a Deep Learning (DL) model in a supervised fashion requires both a balanced sampling of data referring to quiet and storm periods and a proper choice of the metrics used to measure performance. Otherwise, the model will learn to predict only the most frequent case represented in the training set. Moreover, standard performance metrics computed on the full validation and test datasets would favor a prediction that is correct most of the time but wrong in the most relevant cases.
Taking care of these two aspects, we split the dataset using all the data before 1/1/2009 for training and the remaining part for validation and test. In this way, we have at least one solar cycle for the training and one for the evaluation of the model. As previously said, for the validation and test we can choose datasets subsequent in time (i.e., ordered) or select an equal number of points randomly from those available after 1/1/2009. The difference between random and ordered selection is displayed in Fig. 1. In panel (a), the validation data include the points in the first half of the cycle while the test set covers the other half. It is evident that the tails of the two distributions differ: in the validation dataset, events with very low Dst, which are particularly important because they are connected with storms, are missing. The situation completely changes when the points are picked randomly. In this case, the two distributions are quite similar to each other and to the training dataset, representing the best starting point for the development of a data-driven predictive model. The last problem, directly connected to the data distribution, is that there are only a few events associated with storms. In the framework used in this paper, where the algorithm learns by looking at the data, if the distribution is highly peaked around some value of the target variable, the algorithm will learn to predict only such values. To avoid this issue, we apply a re-weighting function for the sampling of the data that feed the algorithm's training. In this way, every value of Dst becomes almost equally probable. The difference between the nominal distribution and the flattened (weighted) distribution is presented in Fig. 1c.
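One common way to implement such a re-weighting, shown here as a minimal sketch with synthetic Dst-like targets (the bin count and toy distribution are assumptions, not the paper's settings), is to weight each sample by the inverse occupancy of its histogram bin and sample with replacement:

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic Dst targets: a peaked "quiet" bulk plus a rare storm tail.
dst = np.concatenate([rng.normal(-10, 10, 9_500), rng.uniform(-300, -100, 500)])

# Inverse-frequency weight per histogram bin, so every Dst value becomes
# almost equally likely to be drawn during training.
counts, edges = np.histogram(dst, bins=30)
bin_of = np.clip(np.digitize(dst, edges) - 1, 0, len(counts) - 1)
weights = 1.0 / counts[bin_of]          # each occupied bin gets equal total mass
weights /= weights.sum()

# One training epoch then samples with replacement using these weights.
epoch_idx = rng.choice(len(dst), size=len(dst), replace=True, p=weights)
resampled = dst[epoch_idx]
# Storms (Dst < -100 nT) now appear far more often than their 5% share
# of the nominal distribution.
```

The nominal sampling corresponds to drawing `epoch_idx` uniformly instead; the data themselves are identical in the two cases.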
The points discussed above also limit the applicability of the standard cross-validation methods usually recommended in machine learning applications to test the robustness of models. While specific cross-validation schemes have been developed for time series (e.g., the TimeSeriesSplit function available in the scikit-learn Python library), we prefer not to adopt this type of check because this kind of split progressively increases the size of the training dataset: in the first iterations, there are far fewer storms than in the last ones. This automatically favors the last iterations of the procedure in predicting storms, introducing an indirect bias in the interpretation of the results.
All the features are scaled linearly onto a compact range as an additional pre-processing step. The scaler is fitted on the training dataset, mapping the min and max values of the data to 0.1 and 0.9, respectively. This choice leaves some room to accommodate values smaller or larger than those available in the training dataset that can emerge in future measurements of the variables. In optimizing DL networks, two types of parameters need to be fixed: the layers' weights and the hyperparameters specifying the architecture. During training, the back-propagation procedure takes care of the former, which can number in the millions or even billions (in our case 25,244). The latter, typically limited in number (in our case 7), are usually determined manually by testing different solutions and considering only the training and validation datasets in the evaluation to avoid bias.
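The scaling step can be sketched as follows; the toy feature matrix is illustrative, and the key point is that the min/max are learned on the training split only:

```python
import numpy as np

def fit_scaler(train: np.ndarray):
    """Per-feature min/max learned on the training split only."""
    return train.min(axis=0), train.max(axis=0)

def scale(x: np.ndarray, lo: np.ndarray, hi: np.ndarray) -> np.ndarray:
    """Map the training range [lo, hi] linearly onto [0.1, 0.9]."""
    return 0.1 + 0.8 * (x - lo) / (hi - lo)

# Two hypothetical features, three training samples.
train = np.array([[0.0, -100.0], [10.0, 0.0], [5.0, -50.0]])
lo, hi = fit_scaler(train)
scaled = scale(train, lo, hi)                      # lands exactly in [0.1, 0.9]
future = scale(np.array([[12.0, 20.0]]), lo, hi)   # out-of-range values exceed 0.9
```

Because the target interval is [0.1, 0.9] rather than [0, 1], future values slightly outside the training range still map to finite, well-behaved inputs.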
We found that better predictions are obtained using the following values for the hyperparameters:
• LSTM, number of hidden layers: 2
• LSTM, size of the hidden layers: 8
• FCNN, number of layers: 4
• FCNN, number of output features for each layer: 96, 96, 48, 12
Batch normalization is applied to the input vector of the FCNN, and a ReLU activation function and a dropout layer with a drop factor of 0.2 follow every fully connected layer except the last one. The loss function minimized during the training of the network is the Mean Absolute Error (MAE), MAE = (1/N) Σ_i |Dst_i − Dst_i^pred|. We use the Adam optimizer and a learning rate of 10^−5. During training, back-propagation is applied after computing the loss on samples extracted from the dataset in batches. The procedure is repeated an arbitrary number of times. Statistics are collected after iterating back-propagation over as many samples as the number of elements in the training dataset: this is called an epoch. The training ends once the loss function stops decreasing on the validation dataset. We used batches of size 256 and stopped training after at most 10,000 epochs. Examples of the loss function behavior are presented in Fig. 3.
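A hedged PyTorch sketch of the architecture and training setup described above follows. The number of input features (8), the 12-step input window, and the reading of the final 12 outputs as the t+1 ... t+12 forecasts are assumptions for illustration, not taken from the published code (available in the repository cited below):

```python
import torch
import torch.nn as nn

class DstNet(nn.Module):
    """Sketch: a 2-layer LSTM with hidden size 8, then a 4-layer FCNN with
    96, 96, 48, 12 output features, batch norm on the FCNN input, and
    ReLU + dropout(0.2) after every fully connected layer except the last."""

    def __init__(self, n_features: int = 8, hidden: int = 8):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.bn = nn.BatchNorm1d(hidden)
        sizes = [hidden, 96, 96, 48, 12]
        layers = []
        for i, (a, b) in enumerate(zip(sizes[:-1], sizes[1:])):
            layers.append(nn.Linear(a, b))
            if i < len(sizes) - 2:            # no activation/dropout on last layer
                layers += [nn.ReLU(), nn.Dropout(0.2)]
        self.fc = nn.Sequential(*layers)

    def forward(self, x):                      # x: (batch, 12 time steps, features)
        out, _ = self.lstm(x)
        return self.fc(self.bn(out[:, -1]))    # last LSTM state -> 12 outputs

model = DstNet()
loss_fn = nn.L1Loss()                          # MAE, as in the paper
opt = torch.optim.Adam(model.parameters(), lr=1e-5)

x = torch.randn(256, 12, 8)                    # one batch of size 256
pred = model(x)                                # shape (256, 12)
loss = loss_fn(pred, torch.zeros_like(pred))   # dummy targets for illustration
loss.backward()
opt.step()
```

In a real run this step would loop over weighted-sampled batches until the validation loss stops decreasing, as the text describes.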
The code with the implementation of the network architecture and the procedure to generate the training, validation, and test datasets are available as a Python notebook in the public GitLab repository gitlab.fbk.eu/dsip/dsip_physics/dsip_ph_space/Dstw.

Baseline model and evaluation metric.
A typical baseline forecast method for time series is the persistence model. The assumption at the base of this approach is that nothing changes between the last known value and all the future points: Dst^pred(t + h) = Dst(t) for every forecast horizon h. The predictive power of this model is expected to decrease as the forecast horizon grows; on the contrary, in the short term, assuming persistence is often a good approximation of the actual trend.
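The persistence baseline amounts to shifting the observed series, as in this minimal sketch (toy values only):

```python
import numpy as np

def persistence_forecast(series: np.ndarray, horizon: int) -> np.ndarray:
    """Predict y(t + horizon) = y(t): the last known value persists.
    The result is aligned with the targets series[horizon:]."""
    return series[:-horizon]

dst = np.array([-5.0, -8.0, -30.0, -80.0, -120.0, -60.0])
pred_t1 = persistence_forecast(dst, 1)   # compare against dst[1:]
pred_t3 = persistence_forecast(dst, 3)   # compare against dst[3:]
```

At short horizons the shifted series tracks the truth closely; at long horizons it lags behind every storm onset, which is exactly the failure mode analyzed later.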
Different metrics can be considered to highlight and study the models' features and compare their predictive power. However, the focus of this work is the importance of how the training data are selected and used. This is appreciable even considering only the most common of these metrics, the Root Mean Squared Error (RMSE), defined as RMSE = sqrt( (1/N) Σ_i (Dst_i − Dst_i^pred)^2 ).
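For concreteness, here is the metric applied to the t+1 persistence forecast of a toy series (values illustrative):

```python
import numpy as np

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root Mean Squared Error over paired observations."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Persistence at t+1: predict each value with the previous one.
dst = np.array([-5.0, -8.0, -30.0, -80.0, -120.0, -60.0])
score = rmse(dst[1:], dst[:-1])
```

Note that the large jumps around the storm dominate the score, a point that becomes central when the metric is computed per activity class below.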

Results
Dst prediction. Before discussing the importance of processing the data in the training procedure and analyzing the performances obtained with the different approaches, we compared our results with similar state-of-the-art calculations available in the literature that use other networks on similar data. For such a comparison, we consider both Refs. 25 and 26. In the first, a similar way of splitting the data is used, and their test set has a consistent overlap with ours. The second uses a DL network with similarities to ours but a different approach to the splitting, so a comparison of performances in this last case is more difficult. With this in mind, Table 1 and Fig. 4 show that our network outperforms the other two approaches.
We also investigated neural networks that do not use the complete set of input parameters. We do not show the results here because this type of analysis is tangential to the paper's main topic. Still, in summary, we found agreement with Ref. 31, where it was shown that the solar wind electric field and the north-south component of the IMF have a key role in obtaining a good prediction of the Dst index. Furthermore, testing different combinations of input data for the network, we found that the temperature contributes less to the solution's performance. The SW density, instead, is more significant when combined with other parameters, such as the SW speed. Having established that our network can obtain comparable or even better performances than analogous solutions available in the literature, we now focus on the importance of training data preparation.
Table 2 reports the RMSE obtained up to t + 12 for three different models: the baseline persistence model, the Neural Network (NN) trained with the nominal sampling (Fig. 1c, blue), and the NN trained with the weighted sampling (Fig. 1c, orange). Remember that the difference is not in the data used but in how they are sampled in each epoch of the training procedure.
As a side remark, it is worth noting how close the performances obtained in validation and test are. This result is expected given the similar distribution of Dst values in the two datasets (Fig. 1b).
Moving to the comparison of the three models, the RMSE looks smaller for the nominal method, and the persistence approach shows better results than the weighted model, at least in validation and test. However, is this the correct way to determine the models' effectiveness at seeing storms coming? It is crucial to remember that "in production" we are interested in knowing in advance when a storm starts, not in predicting quiet periods.
In the validation and test datasets, most of the samples refer to low geomagnetic activity. So if our method is good at predicting those events, the RMSE will be small even if the error is significant in the much rarer and more relevant cases of storms. But this is not the aim of the project.
Therefore, it is crucial to understand how the models perform in different classes of geomagnetic activity. We consider here four classes: Dst > −20 nT, −50 nT < Dst < −20 nT, −100 nT < Dst < −50 nT, and Dst < −100 nT. To increase the statistics on low-Dst events, and since the performances are comparable between the validation and test sets, in the following we present results obtained by merging the validation and test datasets. Table 3 shows the RMSE for the three models in the four classes. For low geomagnetic activity (Dst > −20 nT and −50 nT < Dst < −20 nT) the nominal model obtains the best predictions, as expected from the discussion above. On the other hand, the weighted model performs better than the nominal one during high geomagnetic activity (−100 nT < Dst < −50 nT and Dst < −100 nT). This behavior confirms the importance of the re-weighting procedure: the possibility for the network to see storm examples more often during the training procedure considerably improves its predictions.
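The per-class evaluation can be sketched as follows; samples are binned by the true Dst value, and the boundary conventions (open vs. closed intervals) are an assumption here:

```python
import numpy as np

# The four activity classes, binned on the true Dst value.
CLASSES = {
    "Dst > -20 nT":         lambda d: d > -20,
    "-50 < Dst <= -20 nT":  lambda d: (d > -50) & (d <= -20),
    "-100 < Dst <= -50 nT": lambda d: (d > -100) & (d <= -50),
    "Dst <= -100 nT":       lambda d: d <= -100,
}

def classwise_rmse(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """RMSE computed separately within each Dst class."""
    out = {}
    for name, in_class in CLASSES.items():
        m = in_class(y_true)
        if m.any():
            out[name] = float(np.sqrt(np.mean((y_true[m] - y_pred[m]) ** 2)))
    return out

# One toy sample per class, just to show the bookkeeping.
y_true = np.array([-5.0, -30.0, -70.0, -150.0])
y_pred = np.array([-10.0, -35.0, -60.0, -110.0])
scores = classwise_rmse(y_true, y_pred)
```

A model can have a small global RMSE while its score in the last dictionary entry, the one that matters for alerts, is poor; the per-class table makes this visible.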
Dynamic time warping analysis. In Ref. 26, Dynamic Time Warping (DTW) has been suggested as a method to check the level of persistence present in models based on neural networks. DTW estimates the similarity between two time series; what characterizes it is that the measure accounts for shifts and stretches in time. Given two time series of lengths n and m, the DTW algorithm proceeds as follows:
• An n × m grid is built.
• Each point (i, j) of the grid is filled with the distance between the i-th element of the first series and the j-th element of the second. The metric can be freely chosen; we use the Euclidean distance.
• The warping path P = [p_1, p_2, ..., p_K] of length K is built that minimizes the cumulative distance Σ_{k=1}^{K} p_k, where p_k is a point of the grid.
The conditions that must be satisfied by the path P are that the beginning and end of the two series are matched, every point is mapped to at least one point of the other series, and the elements are ordered in time.
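The steps above correspond to a standard dynamic-programming implementation, sketched here (a textbook version, not the code used in Ref. 26):

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """DTW with a Euclidean point distance: fill the n x m grid of pairwise
    distances, then accumulate the cheapest warping path under the
    boundary, continuity, and monotonicity constraints."""
    n, m = len(a), len(b)
    d = np.abs(a[:, None] - b[None, :])          # |a_i - b_j| grid
    acc = np.full((n, m), np.inf)
    acc[0, 0] = d[0, 0]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            best_prev = min(
                acc[i - 1, j] if i > 0 else np.inf,        # advance in a only
                acc[i, j - 1] if j > 0 else np.inf,        # advance in b only
                acc[i - 1, j - 1] if i > 0 and j > 0 else np.inf,  # diagonal
            )
            acc[i, j] = d[i, j] + best_prev
    return float(acc[-1, -1])

# A series and its time-shifted copy warp onto each other at low cost,
# which is exactly the persistence signature DTW is used to detect.
x = np.array([0.0, 0.0, 1.0, 2.0, 1.0, 0.0])
shifted = np.array([0.0, 0.0, 0.0, 1.0, 2.0, 1.0])
```

A prediction that is merely a shifted copy of the input gives a near-zero DTW cost against the lagged truth, even though its pointwise error is large.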
Using this method, we can immediately check whether the original time series and the prediction differ only by a constant time shift, as in the case of the persistence model. Following Ref. 26, we test DTW for the nominal and weighted solutions.
From the results presented in Table 4, it emerges that a certain degree of persistence is present in the nominal case. Indeed, the largest values of the coefficients in each row are found in the corresponding time-shift column. The values are much smaller in the weighted case (Table 5), showing once again the importance of the different sampling of the events during training.

Discussion
From the perspective of developing a warning system for geomagnetic storms influencing the Earth's magnetosphere, it is interesting to convert our models into classification models that try to predict the Dst-index class. To accomplish this task, we associate the output value of the models with the corresponding Dst class. In evaluating the performance of these classifiers, accuracy (ACC) alone can be a misleading measure when facing an unbalanced dataset. For this reason, we also consider the geometric mean (G-Mean), which is more sensitive to errors in underrepresented classes. The geometric mean is defined as G-Mean = sqrt(Sensitivity × Specificity). Sensitivity, computed by counting the number of true positive (TP) and false negative (FN) cases, refers to the true positive rate and measures the ability to predict the correct class. Specificity instead focuses on how well the negative class is predicted by looking at the true negative (TN) and false positive (FP) occurrences. Combining these two scores into a single number gives the geometric mean. We indicate as 'classification score' the pair (ACC, G).
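A small binary sketch (toy counts, not the paper's results) shows why the two scores diverge on unbalanced data:

```python
import numpy as np

def g_mean(tp: int, fn: int, tn: int, fp: int) -> float:
    """Geometric mean of sensitivity (TP rate) and specificity (TN rate)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return float(np.sqrt(sensitivity * specificity))

# A classifier that calls almost everything "quiet" scores high accuracy
# on an unbalanced set while its G-Mean collapses.
tp, fn, tn, fp = 2, 8, 90, 0          # 10 storm samples, 90 quiet samples
acc = (tp + tn) / (tp + fn + tn + fp)  # 0.92: looks excellent
g = g_mean(tp, fn, tn, fp)             # ~0.45: exposes the missed storms
```

This is why the classification score reported below is the pair (ACC, G) rather than accuracy alone.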
It can be easily seen from Table 6 that, as expected, at t + 1 the persistence approach provides better predictions than the weighted model. The result changes at both t + 6 and t + 12: the persistence model completely fails in predicting storm periods, while the weighted model correctly predicts stormy periods at both horizons. Globally, for t + 12 the persistence model has an overall score of (0.84, 0.32) while the weighted model has (0.77, 0.39).
The difficulty in establishing the best model by looking simultaneously at accuracy and G-Mean shows that, for this problem with a highly unbalanced, multiclass dataset, we cannot verify the models' performance by focusing only on global scores. Instead, it is more informative to analyze the confusion matrices, from which we can extract information about the efficiency in each class. This matters especially because we are interested in quantifying how often we could issue a reliable alert using these models: a clear understanding of the goodness in the classes connected with intense storms is crucial. Figure 5 shows the confusion matrices at t + 1 (upper panels), t + 6 (middle panels), and t + 12 (lower panels) for the persistence model on the left, the nominal model in the center, and the weighted model on the right. The colors, from light to dark blue, represent the percentage of events collected in each class: light blue corresponds to 0% and dark blue to 100%.
The first row of the figure confirms that the persistence model is good at one-hour forecasting. The situation changes significantly for predictions at t + 6 and t + 12. From the confusion matrices, it is immediately clear that the better global score obtained by the persistence model was an artifact of the highly unbalanced dataset, where the first class of events is much more frequent but not interesting. Excluding this first class, the performance of the reweighted model is better in all the other three. Moreover, not only are the values on the diagonals higher (ability to predict the class correctly), but an incorrect prediction is more likely to be associated with a class contiguous to the correct one. Although the reweighted model fails most of the time to predict events in the last class, it distinguishes stormy from quiet periods much better than the persistence or nominal model. On the last class alone, the accuracy is only 11%, while summing high and intense storms we reach 57%, against 28% for the persistence model and 29% for the nominal one.
As a final step, we consider the ability of the models to predict storms when the Dst values in the input data are associated only with low or medium activity. The performance in this condition is particularly relevant because predicting a storm from data in a quiet period is much more helpful than obtaining the prediction when we are already in the middle of a dramatic event. Figure 6 shows the confusion matrix obtained when all 12 elements of the input time series have Dst > −20 nT. The persistence model fails by definition in all classes except the first. It is remarkable instead that, a non-negligible number of times, the reweighted model can forecast stormy and disturbed geomagnetic periods at both t + 6 and t + 12. The same occurs when relaxing the constraint to permit Dst > −50 nT as input data, as shown in Fig. 7.

Conclusions
In this research, we introduced a detailed data analysis plan for the development and validation of algorithms based on Deep Learning Neural Networks to predict the Dst index during both quiet and disturbed geomagnetic conditions using the interplanetary magnetic field and the solar wind parameters. Our analysis shows that both what we called the nominal and the weighted models provide better results than the persistence benchmark and other state-of-the-art neural network architectures. The response of the neural networks to training procedures that differ in the preparation of the training dataset was investigated. We demonstrated that the training procedure strongly affects the capability of correctly forecasting stormy and disturbed geomagnetic periods. Indeed, the strategy proposed for the training dataset selection and the optimization of the architecture plays a key role in the algorithm's performance.

Data availability
The datasets analysed during the current study are available in the OMNIWeb online repository, http://omniweb.gsfc.nasa.gov/.

Figure 7. Same as Fig. 6 but here, all the input elements have Dst larger than −50 nT.