An artificial neural network approach for the language learning model

The current study provides numerical solutions of a language-learning model through an artificial intelligence (AI) procedure based on a scaled conjugate gradient neural network (SCJGNN). The mathematical language-learning differential model is characterized into three classes, named unknown, familiar, and mastered. A dataset is generated by using the Adam scheme, which is used to reduce the mean square error. The AI-based SCJGNN procedure works by splitting the data in the ratio of testing (12%), validation (13%), and training (75%). A log-sigmoid activation function, twelve neurons, SCJG optimization, and hidden and output layers are used in this stochastic computing work for solving the learning language model. The correctness of the AI-based SCJGNN is noted through the overlapping of the results along with the small calculated absolute error, which is around 10^-6 to 10^-8 for each class of the model. Moreover, the regression value for each case of the model is one, which indicates a perfect fit. Additionally, the dependability of the AI-based SCJGNN is confirmed using the error histogram and function fitness.

The sum of u, f, and m must equal one, as they characterize the proportions of the language classes. α and β are constants taken between 0 and 1. The parameter α shows how rapidly an individual learns new information, while β represents how quickly the person loses or forgets proficiency in information that they have already learned.
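Although the exact system is given in the paper's Eq. (1), a minimal numerical sketch can illustrate how such a three-compartment model behaves. The transfer structure and rate values below are assumptions for illustration only (they are not the paper's equations), chosen so that the derivatives sum to zero and the proportions u + f + m remain one:

```python
import numpy as np

def language_model_rhs(y, alpha=0.4, beta=0.1):
    """Hypothetical right-hand side for the (u, f, m) compartments.

    Assumed structure (NOT the paper's Eq. (1)): learning moves mass
    unknown -> familiar -> mastered at rate alpha, while forgetting moves
    mastered -> familiar and familiar -> unknown at rate beta.
    """
    u, f, m = y
    du = -alpha * u + beta * f
    df = alpha * u - alpha * f - beta * f + beta * m
    dm = alpha * f - beta * m
    return np.array([du, df, dm])

def integrate(y0, t_end=1.0, dt=0.01):
    """Simple forward-Euler integration on t in [0, t_end]."""
    y = np.array(y0, dtype=float)
    for _ in range(round(t_end / dt)):
        y = y + dt * language_model_rhs(y)
    return y

y = integrate([1.0, 0.0, 0.0])   # everyone starts in the 'unknown' class
# The three derivatives cancel term by term, so the compartments keep
# summing to one along the trajectory.
print(y, y.sum())
```

Because the transfer terms cancel in pairs, the conservation u + f + m = 1 noted in the text holds at every step of the integration.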
There are various language systems, e.g., GloVe, Word2Vec, and embeddings learned during the training of neural networks, which define words as high-dimensional vectors. Dimensionality reduction schemes can be used to visualize these embeddings in two or three dimensions. Language systems also provide attention mechanisms.

Department of Computer Science and Mathematics, Lebanese American University, Beirut, Lebanon.

The highlights of this work are:
• The solutions of the learning language differential model using the stochastic AI along with the SCJGNN solver are presented successfully.
• The competence of the Adam numerical scheme to produce the dataset is confirmed using the validation and train/test data process.
• The consistent and reducible absolute error (AE) validates the suitability of the SCJGNN method for solving the learning language differential model.
• The reliable matching of results and strong agreement with the reference measures confirm the precision of the AI along with the SCJGNN for solving the language model.
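As a sketch of the dimensionality reduction mentioned above, the following projects toy 300-dimensional "embeddings" (random stand-ins, not real GloVe or Word2Vec vectors) down to two components with a PCA computed via SVD:

```python
import numpy as np

# Toy stand-ins for learned word embeddings (real GloVe/Word2Vec vectors
# would be loaded from file; random data is used here for illustration).
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(100, 300))   # 100 words, 300-dim vectors

# PCA via SVD: center the data, then project onto the top two right
# singular vectors so the vectors can be plotted in two dimensions.
centered = embeddings - embeddings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
projected = centered @ vt[:2].T            # shape: (100, 2)

print(projected.shape)
```

The same projection with three components would give the three-dimensional view the text mentions.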
The remaining portions of the paper are organized as follows: "Designed stochastic AI along with the SCJGNN solver" describes the designed computing solver; "Numerical results of the language model" presents the output representations; the conclusions are provided in the final section.

Designed stochastic AI along with the SCJGNN solver
This section portrays the computational framework using the AI along with the SCJGNN in two steps. The SCJGNN features are stated together with the execution process in Fig. 1, while the network performance based on the multiple layers is depicted in Fig. 2. The AI along with the SCJGNN procedure for the language-based model is implemented by using 'nftool' (built-in Matlab), cross validation (n-fold), activation function (log-sigmoid), epochs (maximum 1000), tolerance (10^-7), step size (0.01), hidden layer (twelve neurons), algorithm (SCJG), and layers (output/hidden/input). The label statistics using the performance of training with input/target are obtained through the basic outputs.
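The layer configuration described above (one input, a log-sigmoid hidden layer of twelve neurons, three outputs) can be sketched as a plain forward pass. The weights here are random placeholders, not the values the SCJG optimizer would actually produce:

```python
import numpy as np

def logsig(x):
    """Log-sigmoid activation used in the hidden layer (Fig. 3)."""
    return 1.0 / (1.0 + np.exp(-x))

# Layer sizes follow the text: one input (the time variable on [0, 1]),
# twelve hidden neurons, and three outputs (u, f, m). The random weights
# stand in for the values SCJG optimization would deliver after training.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(12, 1)), np.zeros(12)   # input -> hidden
W2, b2 = rng.normal(size=(3, 12)), np.zeros(3)    # hidden -> output

def forward(t):
    """Network approximation [u(t), f(t), m(t)] at a scalar input t."""
    hidden = logsig(W1 @ np.array([t]) + b1)
    return W2 @ hidden + b2

print(forward(0.5))
```

Training would adjust W1, b1, W2, b2 so that forward(t) matches the Adam-generated reference data at the sampled points.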
Neural network construction involves several major steps, e.g., defining the architecture, stipulating the cost function, selecting the activation function, and training the model. Integrating scaling into this procedure means dealing with the input features, regularization, and weight initialization. The steps of the general structure for neural network design, with a focus on scaling, are presented as:
• Problem description: The problem is defined based on the language learning model using the process of regression, by selecting the nature of the data and the value ranges for the input features.
• Data preprocessing: Standardize or normalize the input features to a comparable scale. Preprocessing helps the model converge quicker during training and upgrades the performance.
• Design of architecture: The design of the neural network, based on the types and number of layers, uses normalization layers to mitigate the internal covariate shift that is relevant to scaling.
• Weight initialization: Select an appropriate scheme to initialize the weights; the weights are chosen with care.
• Activation functions: The activation function is also selected carefully; the log-sigmoid function, shown in Fig. 3, is selected in this study for the hidden layer.
• Loss function: The mean squared error (MSE) is selected as the loss function for regression.
• Optimization: The optimization is performed by using the SCJG.
• Training: The model is trained on the training data while its performance is observed on the validation sets.
• Evaluation: The model is evaluated on a separate test set to measure its generalization performance.
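The preprocessing, splitting, and loss steps above can be sketched as follows. The target curve is synthetic stand-in data, not the paper's Adam-generated reference solution:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic dataset standing in for the Adam-generated reference data.
t = np.linspace(0.0, 1.0, 200)
targets = np.exp(-t)                      # placeholder target curve

# Standardize the input feature to a comparable scale (preprocessing step).
t_std = (t - t.mean()) / t.std()

# Split in the paper's ratio: 75% training, 13% validation, 12% testing.
idx = rng.permutation(len(t))
n_train, n_val = int(0.75 * len(t)), int(0.13 * len(t))
train, val, test = np.split(idx, [n_train, n_train + n_val])

def mse(pred, true):
    """Mean squared error, the loss function named above."""
    return np.mean((pred - true) ** 2)

print(len(train), len(val), len(test))
```

Shuffling before splitting keeps the three subsets statistically comparable, so the validation curve is a fair proxy for generalization during training.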
Table 1 presents the adjustment of the parameters used by the SCJGNN to solve the nonlinear mathematical language model. To assess the limitations and advantages of the SCJGNN numerical scheme, the following factors are considered: how well the method approximates the problem's result; whether the method converges consistently and quickly to the solutions; whether the SCJGNN scheme is computationally efficient, particularly on large-scale problems; and whether the SCJGNN scheme is robust in dealing with numerous forms of input data.
The computational stochastic AI along with the SCJGNN solver is applied to solve the learning language differential model using twelve neurons, which is validated with the optimal balance between underfitting and overfitting using the training and validation process at epochs 49, 58 and 55. Underfitting (premature convergence) arises for small numbers of neurons, while comparable accuracy at the cost of higher complexity (overfitting) is observed for larger numbers of neurons.

Mathematical language model
A mathematical model based on the learning language differential model is categorized into three classes, named unknown, familiar, and mastered, and forms a nonlinear system of equations.


Implementation of the procedure
The AI along with the SCJGNN procedure to present the numerical solutions of the language-based model is provided by using 'nftool' (built-in Matlab), cross validation (n-fold), activation function (log-sigmoid), epochs (maximum 1000), tolerance (10^-7), step size (0.01), hidden layer (twelve neurons), algorithm (SCJG), and layers (output/hidden/input). The label data based on the training and input/target measures is obtained through the basic outputs with a testing (12%), validation (13%), and training (75%) ratio for solving the nonlinear learning language differential model. The predicted values are found on the input interval [0, 1] through the computational stochastic AI along with the SCJGNN solver. The layer performance for the learning language differential model is shown in Fig. 3. Figures 4, 5, 6, 7 and 8 represent the AI procedure along with the SCJGNN for the learning language differential model. Figure 4 presents the MSE and state of transition (SoT) obtained by applying the AI procedure along with the SCJGNN. The assessed MSE performances are illustrated in Fig. 4a-c based on the best curves, training, and validation, while the values of the SoT are given in Fig. 4d-f. The optimal outputs of the learning language differential model are achieved at epochs 49, 58 and 55, with values of 1.36973934 × 10^-13, 5.44578 × 10^-12 and 5.77556 × 10^-11. The gradients for the unknown, familiar, and mastered classes are measured as 9.5716 × 10^-8, 9.9898 × 10^-8 and 9.5247 × 10^-8. These gradient curves indicate the convergence and exactness of the AI along with the SCJGNN solver for the nonlinear learning language differential model. Figure 5 depicts the fitting curves for the learning language differential model based on the comparison of results. Figure 5d-f shows the values of the error histograms (EHs) for the learning language differential model through the stochastic performance of the AI along with the SCJGNN solver. These measures are reported as 7.73 × 10^-8 for the unknown class, 5.63 × 10^-7 for the familiar class, and 9.90 × 10^-7 for the mastered class. Figures 6, 7 and 8 present the correlation graphs for the differential form of the learning language model obtained by applying the stochastic computing AI along with the SCJGNN. These graphs show that the determination coefficient R^2 is 1 for the unknown, familiar, and mastered classes. The curves based on the testing, validation, and training representations authenticate the precision of the AI along with the SCJGNN for solving the differential form of the learning language model. The convergence of the training, epochs, validation, backpropagation, testing and complexity measures is tabulated in Table 2. The complexity performance using the AI along with the SCJGNN for solving the differential form of the learning language model demonstrates the network's training (epoch performance).
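The determination coefficient reported in the correlation graphs can be computed as follows; the sample values are illustrative only, not the paper's data:

```python
import numpy as np

def r_squared(predicted, reference):
    """Coefficient of determination; a value of 1 indicates a perfect fit,
    as reported for all three classes in Figs. 6-8."""
    ss_res = np.sum((reference - predicted) ** 2)
    ss_tot = np.sum((reference - np.mean(reference)) ** 2)
    return 1.0 - ss_res / ss_tot

# Illustrative check with made-up values (not the paper's data).
ref = np.array([0.1, 0.4, 0.35, 0.8, 0.95])
assert r_squared(ref, ref) == 1.0          # exact overlap gives R^2 = 1
print(r_squared(ref + 1e-6, ref))          # tiny error keeps R^2 near 1
```

An R^2 of exactly 1 therefore means the residual sum of squares vanishes, i.e., the network output overlaps the reference solution at every sampled point.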
Figures 9 and 10 show the comparison of outputs along with the values of the AE using the obtained and reference solutions for the learning language differential model. The calculated and reference solutions of the learning language differential model match closely. Figure 9 provides the AE measures for each class of the learning language differential model. The AE is a metric applied to assess the accuracy of the language learning model: it provides the absolute difference calculated between the obtained and actual values. The significance of the AE lies in its interpretability and simplicity. Unlike measures such as the mean squared error, the AE gives equal weight to each error and does not penalize large errors more heavily. In case 1, the AE measures for the unknown class are reported as 10^-6 to 10^-7, whereas the familiar and mastered classes perform as 10^-5 to 10^-6. In case 2, the AE is 10^-6 to 10^-7 for the unknown class, 10^-5 to 10^-7 for the familiar class and 10^-5 to 10^-6 for the mastered class of the model. In case 3, the AE measures are reported as 10^-7 to 10^-9 for the unknown class, 10^-5 to 10^-7 for the familiar class and 10^-5 to 10^-8 for the mastered dynamic of the mathematical model. These insignificant AE values represent the exactness of the AI procedure along with the SCJGNN for the learning language differential model.
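The AE metric described here is straightforward to compute. The values below are illustrative only and merely mimic the order of magnitude reported for case 1:

```python
import numpy as np

def absolute_error(obtained, reference):
    """Pointwise absolute error between network and reference solutions.

    As the text notes, every error receives equal weight; large errors
    are not penalized more heavily than small ones.
    """
    return np.abs(np.asarray(obtained) - np.asarray(reference))

# Illustrative values only (not the paper's data): deviations on the
# order of 1e-6 match the magnitudes reported for case 1.
obtained  = np.array([0.500001, 0.299999, 0.200002])
reference = np.array([0.5, 0.3, 0.2])
ae = absolute_error(obtained, reference)
print(ae.max())   # worst-case pointwise error over the sampled points
```

Plotting such pointwise errors on a log scale over the input interval [0, 1] reproduces the kind of AE curves shown in Fig. 9.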

Conclusions
The motive of the current investigation is to present the solutions of the learning language differential model by applying artificial neural networks. Differential models not only play a role in disease spread modeling, but also have a role in learning language models. Hence, a language-based differential model is numerically solved through the process of artificial intelligence along with the optimization of the scaled conjugate gradient neural network. The mathematical dynamics of the learning language differential model are characterized into three forms, called unknown, mastered, and familiar. The AI-based SCJGNN procedure has been programmed by applying the statistics of testing (12%), validation (13%), and training (75%). In the process of the neural network for solving the learning language model, a log-sigmoid transfer function, SCJG optimization, twelve neurons, and input, hidden and output layers have been used in this stochastic computing framework to present the numerical solutions. In addition, the reliability of the AI-based SCJGNN is observed by applying the function fitness, histogram, and correlation/regression of the language differential model. In future, the designed AI-based SCJGNN structure can be developed for the computational framework of mathematical models, fluid dynamics, and nonlinear models [43-52].

Table 2. AI along with the SCJGNN solver for the learning language differential model.

Figure 1. Graphical representations for the SCJGNN solver to solve the learning language differential model.

Figure 2. A layer structure of the neural network.
https://doi.org/10.1038/s41598-023-50219-9

The correctness of the AI-based SCJGNN has been observed through the overlapping of the obtained and reference (Adam) results. The negligible absolute error is around 10^-7 to 10^-9 for the respective cases of the language model.

Figure 4. Optimal verification and gradient for the learning language differential model.

Figure 9. Obtained and reference results for the learning language differential model.

Table 1. Adjustment of the parameters based on the SCJGNN.