Neuro-computing solution for Lorenz differential equations through artificial neural networks integrated with PSO-NNA hybrid meta-heuristic algorithms: a comparative study

In this article we examine the performance of a physics-informed neural network (PINN) approach for predicting the solution of the non-linear Lorenz differential equations. The main focus lies in leveraging unsupervised machine learning for this prediction, using artificial neural networks whose weights are optimized by a hybridization of particle swarm optimization (PSO) with the neural network algorithm (NNA), denoted ANN-PSO-NNA. In particular, we embark on a comprehensive comparative analysis, employing the Lorenz differential equations as the test case for the proposed approach. The non-linear Lorenz differential equations are a quintessential chaotic system, widely utilized in scientific investigations of dynamical behavior. The physics-informed neural network (PINN) methodology is validated via multiple independent runs, allowing the performance of the proposed ANN-PSO-NNA algorithm to be evaluated. Additionally, we present a comprehensive statistical analysis covering the minimum (min), maximum (max), average, and standard deviation (S.D.) values, together with the mean squared error (MSE). This evaluation provides sound insight into the effectiveness of the proposed ANN-PSO-NNA hybridization approach across multiple runs, ultimately improving the understanding of its utility and efficiency.


Non-linear Lorenz differential equations
The Lorenz system is a set of three non-linear differential equations that describe the behavior of a dynamical system:

$$\frac{dx(t)}{dt} = \sigma\big(y(t) - x(t)\big), \qquad \frac{dy(t)}{dt} = R\,x(t) - y(t) - x(t)\,z(t), \qquad \frac{dz(t)}{dt} = x(t)\,y(t) - B\,z(t), \qquad t \in [0, T], \tag{1}$$

with initial conditions $x(0) = c_1$, $y(0) = c_2$, $z(0) = c_3$. In this model the three variables $x(t)$, $y(t)$ and $z(t)$ can be thought of as coordinates in three dimensions. The non-linear behavior of the chaotic system is governed by the three parameters $\sigma$, $R$ and $B$, which control the intensity of the non-linear interactions between the variables.
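For reference, the system in Eq. (1) can be integrated numerically with a classical Runge-Kutta scheme. The sketch below (in Python) plays the same role as the NDsolve reference solution used later; the parameter values and the initial conditions $c_1 = c_2 = c_3 = 1$ are illustrative assumptions, not values stated here.

```python
def lorenz_rhs(state, sigma, R, B):
    # dx/dt, dy/dt, dz/dt of the Lorenz system in Eq. (1)
    x, y, z = state
    return (sigma * (y - x),
            R * x - y - x * z,
            x * y - B * z)

def rk4_step(f, state, h, *args):
    # One classical fourth-order Runge-Kutta step, used here as a
    # stand-in for the reference ODE solver
    k1 = f(state, *args)
    k2 = f([s + 0.5 * h * k for s, k in zip(state, k1)], *args)
    k3 = f([s + 0.5 * h * k for s, k in zip(state, k2)], *args)
    k4 = f([s + h * k for s, k in zip(state, k3)], *args)
    return tuple(s + h / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Integrate on t in [0, 1] with step 0.1 (the grid used in case 1),
# with assumed initial conditions c1 = c2 = c3 = 1
state = (1.0, 1.0, 1.0)
for _ in range(10):
    state = rk4_step(lorenz_rhs, state, 0.1, 0.1, 0.2, 0.3)
```

The same `rk4_step` works for any right-hand side of matching shape, which makes it easy to sanity-check against known solutions.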

Physics informed ANN based Lorenz differential equations
In recent years there has been growing interest in using machine learning and deep learning approaches to solve chaotic systems. Such algorithms, from supervised neural networks (ANN) to unsupervised physics-informed networks (PINN), have been shown to be effective in approximating the numerical solutions of these equations. A physics-informed neural network (PINN) model is developed here to solve the chaotic system. Activation functions are a major component of neural networks (NN) and introduce non-linearity into the ANN. Different types of activation functions are used in ANNs, such as sigmoid, ReLU and tanh, each with its own properties and advantages; the choice of activation function can greatly impact the performance and accuracy of an ANN. The physics-informed neural-network approximations of the Lorenz variables, in sigmoid form within the artificial neural network (ANN) architecture, are

$$x(t) = \sum_{i} \frac{a_i^{x}}{1 + e^{-(w_i^{x} t + b_i^{x})}}, \qquad y(t) = \sum_{i} \frac{a_i^{y}}{1 + e^{-(w_i^{y} t + b_i^{y})}}, \qquad z(t) = \sum_{i} \frac{a_i^{z}}{1 + e^{-(w_i^{z} t + b_i^{z})}}.$$
The ANN-based derivatives of the Lorenz differential equations follow by differentiating these expressions with respect to $t$. The ANN-based Lorenz system is solved using ten (10) neurons in one hidden layer, giving ninety (90) corresponding weights $W = [a_i^x, w_i^x, b_i^x, a_i^y, w_i^y, b_i^y, a_i^z, w_i^z, b_i^z]$, where $W$ collects the weights and biases of the unsupervised ANN, to be optimized using the hybrid PSO-NNA.
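A minimal sketch of the sigmoid-form trial solution and its exact time derivative, assuming the network input is $t$ and using hypothetical coefficient lists `a`, `w`, `b` (one entry per hidden neuron):

```python
import math

def ann_trial(t, a, w, b):
    # x(t) = sum_i a_i * sigmoid(w_i * t + b_i)
    return sum(ai / (1.0 + math.exp(-(wi * t + bi)))
               for ai, wi, bi in zip(a, w, b))

def ann_trial_dt(t, a, w, b):
    # Exact derivative using sigmoid'(u) = sigmoid(u) * (1 - sigmoid(u)),
    # which is what the physics-informed residual needs
    total = 0.0
    for ai, wi, bi in zip(a, w, b):
        s = 1.0 / (1.0 + math.exp(-(wi * t + bi)))
        total += ai * wi * s * (1.0 - s)
    return total
```

Because the derivative is available in closed form, no automatic differentiation is needed to evaluate the residual of Eq. (1).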

ANN based fitness function
The ANN-based fitness function is as follows, where $x_0$, $y_0$ and $z_0$ are the initial conditions, $\varepsilon$ is the fitness function based on the physics-informed neural network, and $N$ is the total number of initial conditions.
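One plausible reading of this fitness function, combining the mean squared residuals of Eq. (1) at the collocation points with the initial-condition error, is sketched below. The exact weighting between the two terms and the 30-weight-per-network packing of $W$ are assumptions for illustration, not taken from the paper.

```python
import math

def sig(u):
    return 1.0 / (1.0 + math.exp(-u))

def net(t, p):
    # p = (a_1..a_10, w_1..w_10, b_1..b_10); returns value and d/dt
    n = len(p) // 3
    a, w, b = p[:n], p[n:2 * n], p[2 * n:]
    val = sum(ai * sig(wi * t + bi) for ai, wi, bi in zip(a, w, b))
    der = sum(ai * wi * sig(wi * t + bi) * (1.0 - sig(wi * t + bi))
              for ai, wi, bi in zip(a, w, b))
    return val, der

def fitness(W, ts, sigma, R, B, ics):
    # W has 90 entries: 30 per network for x(t), y(t), z(t)
    px, py, pz = W[:30], W[30:60], W[60:90]
    c1, c2, c3 = ics
    err = 0.0
    for t in ts:
        x, dx = net(t, px)
        y, dy = net(t, py)
        z, dz = net(t, pz)
        err += (dx - sigma * (y - x)) ** 2          # residual of dx/dt
        err += (dy - (R * x - y - x * z)) ** 2      # residual of dy/dt
        err += (dz - (x * y - B * z)) ** 2          # residual of dz/dt
    err /= len(ts)
    x0, _ = net(0.0, px)
    y0, _ = net(0.0, py)
    z0, _ = net(0.0, pz)
    err += ((x0 - c1) ** 2 + (y0 - c2) ** 2 + (z0 - c3) ** 2) / 3.0
    return err
```

Minimizing this quantity over $W$ is exactly the task handed to the PSO-NNA optimizer below.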

Meta-heuristic optimization algorithms
Meta-heuristic algorithms have played a pivotal role in tackling a wide range of non-linear optimization problems, both constrained and unconstrained, across various engineering domains. These optimization algorithms are designed to find approximate solutions when traditional optimization techniques struggle due to the complexity or non-linearity of the objective functions and constraints involved. Over the years, researchers have developed numerous meta-heuristics, each with its own approach, methodology and strengths. Notable algorithms that have made significant contributions to the engineering field include the Genetic Algorithm, Particle Swarm Optimization, the Firefly Algorithm, the Water Cycle Algorithm, Ant Colony Optimization, Lévy Flight, the Artificial Bee Colony, Hunting Search, Simulated Annealing, and many others.

Particle swarm optimization (PSO)
Particle Swarm Optimization (PSO) is a remarkable evolutionary optimization algorithm inspired by the behavior of birds within a swarm. Its main objective is to tackle complex non-linear optimization problems that are difficult to solve using traditional approaches. PSO alters particle placements through iterative phases, based on the swarm-discovered global best solution as well as each individual best. PSO functions by sustaining a population of particles, each of which represents a potential solution to the optimization problem, much like a flock of birds adjusting to its environment. The best-performing location of each particle, called the personal best (pbest), and the best-performing position of the entire swarm, called the global best (gbest), control how the particles move toward optimal positions. Particles collaborate indirectly by continually fine-tuning their locations relative to these references, thereby traversing the optimization landscape effectively.
The use of PSO in several domains demonstrates its adaptability and usefulness. Notably, PSO has proven useful in the fields of robotics and wireless networks, where it has optimized resource allocation and network parameter setup 56,57 . In the context of power systems, PSO has played a crucial role in optimizing energy distribution and load management 58,59 . In complex scheduling problems such as job-shop scheduling, where tasks must be allocated efficiently among limited resources, PSO has showcased its prowess 60,61 . Its balance of exploration and exploitation ensures that the algorithm not only explores the solution space thoroughly but also exploits the promising areas identified during the exploration, a trait that is particularly valuable in multifaceted optimization scenarios where the landscape can be rugged and intricate. Researchers have applied PSO to a variety of complex non-linear problems [64][65][66][67][68][69][70][71][72][73] . Starting from randomly chosen particles, each iteration updates particle positions and velocities based on their latest best local position $P_{LB}^{x-1}$ and global best position $P_{GB}^{x-1}$. The continuous standard PSO framework updates particle velocities and positions using the following general formulas:

$$v_i^{x} = V\,v_i^{x-1} + c_1 r_1 \big(P_{LB}^{x-1} - X_i^{x-1}\big) + c_2 r_2 \big(P_{GB}^{x-1} - X_i^{x-1}\big), \qquad X_i^{x} = X_i^{x-1} + v_i^{x}.$$

In these equations $i$ ranges from 1 to $p$, where $p$ is the total number of particles; $X_i$ represents the position of the $i$-th particle in the swarm, while $v_i$ is its velocity vector. The framework incorporates the inertia weight $V$, which decreases linearly over $[0, 1]$, the local and global social acceleration constants $c_1$ and $c_2$, and random vectors $r_1$ and $r_2$ constrained to $[0, 1]$.
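The update rules above can be sketched as follows. This is a generic PSO step, not the authors' exact implementation; the inertia weight, acceleration constants and swarm size in the usage loop are illustrative values.

```python
import random

def pso_step(positions, velocities, pbest, gbest, V, c1, c2):
    # v_i <- V*v_i + c1*r1*(pbest_i - x_i) + c2*r2*(gbest - x_i)
    # x_i <- x_i + v_i
    for i in range(len(positions)):
        for d in range(len(positions[i])):
            r1, r2 = random.random(), random.random()
            velocities[i][d] = (V * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - positions[i][d])
                                + c2 * r2 * (gbest[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]

# Usage sketch: minimise f(x) = x^2 with 5 particles
random.seed(0)
pos = [[random.uniform(-5.0, 5.0)] for _ in range(5)]
vel = [[0.0] for _ in range(5)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=lambda p: p[0] ** 2)[:]
initial_best = gbest[0] ** 2
for _ in range(50):
    pso_step(pos, vel, pbest, gbest, 0.7, 1.5, 1.5)
    for i, p in enumerate(pos):
        if p[0] ** 2 < pbest[i][0] ** 2:
            pbest[i] = p[:]          # update personal best
    gbest = min(pbest, key=lambda p: p[0] ** 2)[:]  # update global best
```

Because `pbest` and `gbest` are only replaced by strictly better positions, the best objective value found is non-increasing across iterations.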

Neural network algorithm (NNA)
The neural network algorithm (NNA) 74 is a distinctive evolutionary approach that draws inspiration from both biological nervous systems and artificial neural networks. While ANNs serve prediction purposes, the NNA ingeniously amalgamates neural-network principles with randomness to tackle optimization problems. By exploiting the intrinsic structure of neural networks, the NNA demonstrates strong global search performance. Remarkably, the NNA sets itself apart from traditional meta-heuristic methods by relying solely on the population size and a stopping criterion, eliminating the need for additional parameters 74 . The NNA is a population-based optimization algorithm that consists of the following four key elements:

Update population
In the NNA the population at iteration $t$ is the vector $X^t = \{x_1^t, x_2^t, \ldots, x_Q^t\}$, where $Q$ denotes the size of the population and $t$ is the current iteration number. Here $x_{\mathrm{new},i}^{t}$ represents the new candidate solution for the $i$-th individual, computed from the current solutions with the corresponding weights, and $x_i^t$ represents the solution of the $i$-th individual at iteration $t$:

$$x_{\mathrm{new},i}^{t} = \sum_{j=1}^{Q} w_{ij}^{t}\, x_{j}^{t}, \qquad x_i^{t+1} = x_i^{t} + x_{\mathrm{new},i}^{t}.$$

Moreover, each weight vector $w_i^t$ is constrained so that its entries lie in $[0, 1]$ and sum to one.
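The population update can be illustrated as below. This is a sketch of the standard NNA formulation, assuming each weight row is non-negative and sums to one; it is not the authors' code.

```python
def update_population(X, W):
    # NNA population update: x_new_j = sum_i w_j[i] * x_i, then
    # x_j <- x_j + x_new_j. X is a Q x D population, W a Q x Q
    # weight matrix whose rows sum to one.
    Q, D = len(X), len(X[0])
    X_new = [[sum(W[j][i] * X[i][d] for i in range(Q)) for d in range(D)]
             for j in range(Q)]
    return [[X[j][d] + X_new[j][d] for d in range(D)] for j in range(Q)]
```

Each new individual is thus a weighted combination of the whole population, mirroring how a neuron combines its inputs.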

Update weight matrix
As depicted in Eq. (35), the weight matrix $W^t$ assumes a pivotal role within the NNA's process of generating a new population. The dynamics of the weight matrix $W^t$ can be refined through

$$w_i^{t+1} = w_i^{t} + 2\,r\big(w_{\mathrm{obj}}^{t} - w_i^{t}\big),$$

where $r$ is a random value from $[0, 1]$ and $w_{\mathrm{obj}}^t$ is the objective (fitness) weight vector. It should be highlighted that the objective weight vector $w_{\mathrm{obj}}^t$ and the target solution $x_{\mathrm{obj}}^t$ share corresponding indices. To elaborate, if $x_{\mathrm{obj}}^t$ matches $x_v^t$ for some $v \in [1, Q]$ at iteration $t$, then $w_{\mathrm{obj}}^t$ is equal to $w_v^t$.

Bias operator
The bias operator in the NNA increases the algorithm's ability to explore for the best optimal values. A modification factor, denoted $\beta_1$, determines the amount of bias introduced and is updated (reduced) over the iterations. The bias operator consists of a bias population and a bias weight matrix, each described as follows. Two quantities are involved in the bias-population operator: an index set designated $P$ and its randomly generated size $Q_p$. The lower and upper bounds of the variables are $l = (l_1, l_2, l_3, \ldots, l_D)$ and $u = (u_1, u_2, u_3, \ldots, u_D)$, respectively. $\lceil \beta_1^t \times D \rceil$, the ceiling of the product of $\beta_1^t$ and $D$, is used to calculate $M_p$, and the set $P$ consists of $Q_p$ numbers randomly chosen from $[0, D]$. As a result, the bias population is defined as

$$x_{i,P(S)}^{t} = l_{P(S)} + r\big(u_{P(S)} - l_{P(S)}\big), \qquad S = 1, 2, 3, \ldots, Q_P,$$

where $r$ is a random number from $[0, 1]$ subject to a uniform distribution.

Transfer operator (TO)
In order to reach the current optimal solution, the transfer operator (TO) drives solutions toward the best one found so far, emphasizing the NNA's local search, using

$$x_i^{t+1} = x_i^{t} + 2\,r\big(x_{\mathrm{obj}}^{t} - x_i^{t}\big),$$

where $r$ is a random value from $[0, 1]$. The NNA population is initialized as $x_i = l + r\,(u - l)$, where $r$ is a random value from $[0, 1]$.
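The three NNA operators described above (weight update, bias, transfer) can be sketched as follows. This follows the standard NNA formulation rather than the authors' code; in particular, renormalizing each weight row by the sum of absolute values is an assumption made so the rows keep summing to one.

```python
import math
import random

def update_weights(W, best_idx):
    # w_i <- w_i + 2*r*(w_obj - w_i), then renormalise each row so its
    # entries are non-negative and sum to one (assumed normalisation)
    w_obj = W[best_idx]
    for i in range(len(W)):
        r = random.random()
        row = [wi + 2.0 * r * (wo - wi) for wi, wo in zip(W[i], w_obj)]
        s = sum(abs(v) for v in row)
        W[i] = [abs(v) / s for v in row]
    return W

def bias_individual(x, lower, upper, beta):
    # Reset ceil(beta * D) randomly chosen coordinates inside [l, u]
    D = len(x)
    for d in random.sample(range(D), math.ceil(beta * D)):
        x[d] = lower[d] + random.random() * (upper[d] - lower[d])
    return x

def transfer_individual(x, x_obj):
    # Local search toward the current best: x <- x + 2*r*(x_obj - x)
    r = random.random()
    return [xi + 2.0 * r * (xo - xi) for xi, xo in zip(x, x_obj)]
```

The bias factor `beta` is typically shrunk each iteration so that exploration fades in favour of the transfer operator's exploitation.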

Results and discussion
Case 1. In this case the Lorenz differential equations in Eq. (1) are evaluated by taking fixed numerical values of the parameters σ, R and B. The ANN-based fitness function of the Lorenz differential equations for this case is written accordingly. The artificial neural network (ANN) scheme is applied to the solution of the problem taking ten (10) neurons in the hidden layer with ninety (90) weights; the fitness function is constructed using the artificial neural network for this case, taking t ∈ [0, 1] with step size 0.1. To find the optimal weights of the artificial neural network, a hybridization of particle swarm optimization (PSO) and the neural network algorithm (NNA) is used.
The fitness function shown in Eq. (25) has 90 weights and biases, denoted by the set W. In this research we introduce a novel hybridization approach that combines unsupervised artificial neural networks (ANN) with two powerful global optimization algorithms, Particle Swarm Optimization (PSO) and the Neural Network Algorithm (NNA), as ANN-PSO-NNA. This hybrid approach aims to find the optimal weights and biases of the ANN-based model of the Lorenz differential equations. Our ANN-PSO-NNA methodology involves a two-step process. First, PSO is employed to generate randomized weight sets, which serve as a promising initialization point. NNA is then employed to further fine-tune and optimize these weight sets for more accurate results. This hybrid strategy showcases the potential for achieving superior results in optimizing the ANN-based differential equations within the defined initial-condition constraints.
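The two-step pipeline can be illustrated schematically. Here a simple sphere function stands in for the ANN fitness of Eq. (25), and the second stage uses only an NNA-style transfer operator, so this is a sketch of the hybridization idea (PSO for initialization, NNA-style refinement afterwards), not the authors' full algorithm; all settings are illustrative.

```python
import random

def sphere(w):
    # Stand-in objective; in the paper this is the ANN fitness of Eq. (25)
    return sum(v * v for v in w)

def hybrid_pso_nna(f, dim, lo=-1.0, hi=1.0, pop=20, pso_iters=60, nna_iters=60):
    # Stage 1: PSO produces a promising initial weight vector
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    V = [[0.0] * dim for _ in range(pop)]
    pbest = [x[:] for x in X]
    gbest = min(pbest, key=f)[:]
    for _ in range(pso_iters):
        for i in range(pop):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * r1 * (pbest[i][d] - X[i][d])
                           + 1.5 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(pbest[i]):
                pbest[i] = X[i][:]
        gbest = min(pbest, key=f)[:]
    # Stage 2: NNA-style transfer operator fine-tunes around the PSO result
    best = gbest[:]
    P = [[b + random.gauss(0.0, 0.1) for b in best] for _ in range(pop)]
    for _ in range(nna_iters):
        for i in range(pop):
            r = random.random()
            P[i] = [p + 2.0 * r * (b - p) for p, b in zip(P[i], best)]
            if f(P[i]) < f(best):
                best = P[i][:]
    return best
```

Replacing `sphere` with the 90-dimensional ANN fitness recovers the intended use.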
The fitness function in Eq. (25) is meticulously designed to converge towards zero using these optimized sets of weights and biases. The resulting optimal weight set, denoted W, is tabulated in Tables 1, 2 and 3 and plotted in Fig. 1. ANN-PSO-NNA is then used to calculate and predict the ANN-based x(t), y(t) and z(t); these predicted values are tabulated in Tables 4, 5 and 6. Our study goes a step further by conducting a comprehensive comparative analysis, pitting the solutions from the NDsolve method against those generated by our ANN-PSO-NNA hybrid algorithm, as represented in Fig. 2. The error metrics, in particular the absolute errors (AE) inherent in these predictions, are tabulated in Tables 4, 5 and 6 and visualized graphically in Fig. 3. The crux of the proposed machine-learning evaluation lies in the accuracy and convergence evolution of ANN-PSO-NNA. To ensure robustness, the proposed ANN-PSO-NNA approach undergoes rigorous testing across one hundred (100) independent runs; this procedure establishes the fitness function of the Lorenz differential equations for two (2) separate scenarios. The effectiveness in obtaining precise and convergent solutions for case 1 is assessed by evaluating the fitness function with ANN-PSO-NNA across the one hundred (100) independent runs, as presented in Fig. 4.
Additionally, we validate the performance of the proposed ANN-PSO-NNA approach by thoroughly investigating its convergence behavior. To assess its efficacy, the mean square error (MSE) is computed numerically over the one hundred (100) independent runs. The acquired numerical MSE values offer insight into, and verification of, the capacity of the proposed hybrid ANN-PSO-NNA to converge towards optimal solutions. Plots illustrating the convergence patterns of the proposed ANN-PSO-NNA hybrid algorithm show the MSE over the (100) separate runs in Figs. 5, 6 and 7.

Case 2
In the context of this study we delve into the analysis of the Lorenz differential equation problem using machine-learning techniques. The focus of our investigation lies in evaluating the behavior of the equations under specific conditions. For case (2), fixed values are selected for the parameters σ, R and B, which play an important role in shaping the dynamics of the Lorenz equations. For case (2) we again use the hybridization approach that combines physics-informed neural networks (PINN) with the two global optimization algorithms, PSO and NNA, called ANN-PSO-NNA. This hybrid machine-learning approach aims to find the optimal weights and biases of the ANN-based model of the chaotic Lorenz differential equations. The machine-learning ANN-PSO-NNA approach involves a two-step process. First, PSO is used to generate randomized sets of ANN weights and biases, which serve as a promising initialization for NNA. NNA is then employed to further fine-tune and optimize these sets W. This hybridization strategy showcases the potential for achieving superior numerical results in optimizing the ANN-based differential equations with the associated initial conditions. The fitness (objective) function for case (2), represented in Eq. (27), has been meticulously crafted to converge steadily towards zero (0) over the (100) independent runs, as illustrated in Fig. 8. This is achieved by leveraging the best set of ANN weights and biases obtained through the PSO-NNA hybridization approach. The resulting set of optimal weights, denoted W, is meticulously organized and presented in detail in Tables 7, 8 and 9 and represented in Fig. 9. The efficacy of the ANN-PSO-NNA approach is visualized in Fig. 8, where the convergence of the fitness function is highlighted across one hundred (100) independent runs; this visual representation effectively showcases the prowess of the ANN-PSO-NNA hybrid methodology in steering the fitness function towards convergence.
The proposed ANN-PSO-NNA is then used to compute and predict the ANN-based x(t), y(t) and z(t). These predicted numerical values are thoroughly tabulated in Tables 10, 11 and 12. This study goes a step further by conducting a comprehensive comparative analysis, pitting the findings from the NDsolve method against those generated by the proposed ANN-PSO-NNA hybrid algorithm, represented in Fig. 10. The absolute errors (AE) between NDsolve and ANN-PSO-NNA are tabulated in Tables 10, 11 and 12 and visually represented in Fig. 11. We also focus on assessing the performance of the ANN-PSO-NNA algorithm through a comprehensive analysis of its convergence behavior; to ensure robustness, the algorithm is tested across one hundred (100) independent runs of the fitness function.
To quantitatively evaluate its effectiveness, the mean square error (MSE) is computed across the one hundred (100) independent runs. The obtained numerical MSE values provide a valuable check on the ability of ANN-PSO-NNA to converge towards optimal solutions. The variation of the MSE across the one hundred (100) independent runs is presented visually through plots, offering a clear depiction of the hybrid algorithm's convergence trends; the plotted MSE values for the different experimental scenarios are showcased in Figs. 12, 13 and 14 for case 2. To underline the robustness of the study, a comprehensive statistical analysis is conducted to draw insightful comparisons between the outcomes of the two methods, NDsolve and ANN-PSO-NNA. In particular, the analysis extends across an extensive set of one hundred (100) independent runs, yielding a rich array of data points for evaluation. This analysis takes into account key statistical measures, namely the minimum, maximum, average and standard deviation values, tabulated in Table 13; these measures collectively offer a picture of the algorithm's performance and its consistency.
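The statistical summary reported in Table 13 amounts to computing, per solution component, the following quantities over the 100 per-run error values. The sketch below uses Python's standard library; the input list is synthetic, for illustration only.

```python
import statistics

def summarize_runs(errors):
    # Min, max, average and (sample) standard deviation of the per-run
    # error values, as reported in the statistical analysis
    return {
        "min": min(errors),
        "max": max(errors),
        "mean": statistics.mean(errors),
        "std": statistics.stdev(errors),
    }
```

Applying this to the MSE of each of x(t), y(t) and z(t) over the 100 runs reproduces the structure of the summary table.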

Conclusion
In conclusion, the rapid evolution of unsupervised artificial neural network (ANN) techniques has opened up exciting methodologies for addressing complex non-linear differential equation problems across various engineering domains. This study has drawn on the power of machine-learning algorithms to solve the non-linear Lorenz differential equations, using a novel approach in which artificial neural networks (ANN) are combined with particle swarm optimization (PSO) hybridized with the neural network algorithm (NNA). The Lorenz differential equations, renowned for their chaotic behavior, have served as a fundamental benchmark for scientific exploration.
Using the ANN-PSO-NNA hybrid approach, our objective has been to enhance the effectiveness, validity and accuracy of solving the ANN-based Lorenz differential equations, enabling more accurate approximation. We further evaluated ANN-PSO-NNA through a comprehensive statistical analysis involving one hundred (100) independent runs, with metrics including the minimum, maximum, standard deviation, average, and mean-square-error values between NDsolve and ANN-PSO-NNA. The ANN-based fitness function optimized through the hybrid PSO-NNA algorithm achieved minimum errors of x(t), y(t) and z(t) down to 1.75 × 10⁻⁶, 1.07 × 10⁻⁷ and 3.93 × 10⁻⁷, respectively. For highly non-linear chaotic systems, PSO-NNA may achieve lower accuracy at a higher computational cost; to tackle such systems, quantum-based optimization algorithms could be used. In the future, enhancing the accuracy and efficiency of solving non-linear dynamics problems will be a priority, using other heuristic optimization algorithms such as genetic algorithms (GA), ant colony optimization (ACO), the firefly algorithm (FA) and quantum-computing-based algorithms.

Figure 2 .
Figure 2. Comparison of the solutions of the Lorenz differential equations using NDsolve and ANN-PSO-NNA for case 1.

Figure 6 .
Figure 6. Mean square error (MSE) plotted over 100 independent runs of y(t) for case 1.

Figure 7 .
Figure 7. Mean square error (MSE) plotted over 100 independent runs of z(t) for case 1.

Figure 10 .
Figure 10. Comparison of the solutions of the Lorenz differential equations using NDsolve and ANN-PSO-NNA for case 2.

Figure 12 .
Figure 12. Mean square error (MSE) plotted over 100 independent runs of x(t) for case 2.

Table 1 .
The best optimized set of weights through ANN-PSO-NNA for x(t), with σ = 0.1, R = 0.2 and B = 0.3.

Table 2 .
The

Table 4 .
An evaluation of the accuracy of x (t) using NDsolve and ANN-PSO-NNA for case 1, with σ = 0.1, R = 0.2 and B = 0.3.

Table 5 .
An evaluation of the accuracy of y (t) using NDsolve and ANN-PSO-NNA for case 1, with σ = 0.1, R = 0.2 and B = 0.3.

Table 7 .
The best optimized set of weights through ANN-PSO-NNA for x(t), with σ = 1, R = 2 and B = 3.

Table 8 .
The best optimized set of weights through ANN-PSO-NNA for y(t), with σ = 1, R = 2 and B = 3.

Table 9 .
The best optimized set of weights through ANN-PSO-NNA for z(t), with σ = 1, R = 2 and B = 3.

Table 10 .
An evaluation of the accuracy of x(t) using NDsolve and ANN-PSO-NNA for case 2, with σ = 1, R = 2 and B = 3.

Table 11 .
An evaluation of the accuracy of y(t) using NDsolve and ANN-PSO-NNA for case 2, with σ = 1, R = 2 and B = 3.

Table 12 .
An evaluation of the accuracy of z(t) using NDsolve and ANN-PSO-NNA for case 2, with σ = 1, R = 2 and B = 3.

Table 13 .
Summary of absolute errors between NDsolve and ANN-PSO-NNA across one hundred (100) independent runs for x(t), y(t) and z(t).