A complex system health state assessment method with reference value optimization for interpretable BRB

Health condition assessment is the basis for formulating and optimizing maintenance strategies for complex systems and is crucial for ensuring their safe and stable operation. In complex system health condition assessment, the model must not only handle various uncertainties to ensure the accuracy of assessment results, but also provide a transparent, reasonable assessment process and interpretable, traceable assessment results. The belief rule base (BRB) has been widely used as an interpretable modeling method in health condition assessment. However, BRB-based models currently face two issues: (1) inaccuracies in expert-provided parameters can affect the model's accuracy, and (2) interpretability may be reduced after model optimization. Therefore, this paper proposes a new method for complex system health condition assessment, interpretable BRB with reference value optimization (I-BRB). Firstly, to address the issue of inaccurate reference values, a reference value optimization algorithm with interpretability constraints is designed, which optimizes the reference values without compromising expert knowledge. Secondly, the remaining parameters are optimized using the projection covariance matrix adaptation evolution strategy (P-CMA-ES) with interpretability constraints to improve the model's accuracy. Finally, a case study evaluating the bearing components of a flywheel system is conducted to validate the proposed method. Experimental results demonstrate that I-BRB achieves higher accuracy in health condition assessment.

(2) A reference value optimization algorithm is proposed to optimize the reference values while maintaining interpretability. This method addresses the issue of inaccurate parameters provided by experts and ensures the reliability of the assessment process. (3) The design of interpretability constraints for complex system health condition assessment. In the context of assessing complex system health conditions, interpretability constraints are introduced to preserve the interpretability of the models during the optimization process. These constraints ensure that the models remain transparent and explainable, facilitating the understanding and acceptance of the assessment results.
The remaining structure of the paper is organized as follows: In Section "Problem description", attention is directed towards three critical issues that need consideration when constructing models for the assessment of health conditions in complex systems. Emphasis is placed on outlining the challenges and prerequisites associated with accuracy, interpretability, and reference value optimization. In Section "Basic BRB and interpretability definitions", the basic BRB model is introduced, accompanied by a definition of interpretability. Fundamental concepts of BRB are explained, setting the foundation for the subsequent development of the I-BRB model. In Section "Inference and optimization", a reference value optimization algorithm is proposed. Detailed descriptions of the inference and optimization processes within the I-BRB model for assessing the health condition of complex systems are provided. The algorithm incorporates interpretability constraints to ensure the accuracy and interpretability of the evaluation results. In Section "Case study", a case study is presented, focusing on the health condition assessment of an aerospace engine flywheel system. This case study serves as a validation of the effectiveness and performance of the proposed I-BRB method in a practical application scenario. In Section "Conclusion", the paper concludes with a summary of the key findings and contributions of the research. Furthermore, potential directions for future work are discussed, and the significance of the proposed I-BRB method in the context of complex system health condition assessment is considered.

Problem description
To construct an interpretable I-BRB model for complex system health assessment, three key issues need to be addressed:

Problem 1: How to guarantee interpretability in complex system health state assessment models? Considering the characteristics of complex systems and the requirements of health state assessment, there is a need to design reasonable interpretability constraints to maintain the interpretability of the whole modelling, inference, and optimisation process 23. This process can be described as follows:

C = {c_1, c_2, ..., c_z},

where C is the set of interpretability constraints and z represents the number of interpretability constraints.

Problem 2: How to construct a transparent reasoning process that meets the interpretability requirements of complex system health state assessment? In building the initial BRB model for complex system health state assessment, it is important to consider parameter settings and the rationality of the reasoning process in order to maintain the interpretability of the inference results. This process can be described as follows:

s = f(data, t, ek),

where s denotes the final belief distribution, data denotes the set of evaluation indicators for health state assessment, t denotes the initial parameters given by the experts, ek denotes the expert knowledge, and f(·) denotes the inference function.
Problem 3: How to improve the accuracy of the model without compromising its interpretability? Optimizing the parameters of the complex system health state assessment model can further enhance its accuracy 11. It is therefore important to design a rational optimisation process that takes into account the interpretability constraints of the model. The interaction between the interpretability constraints and the optimisation process can be described as follows:

min_Ω MSE(Ω)  s.t.  c_1, c_2, ..., c_z,

where Ω denotes the set of parameters in the optimization process.

Basic BRB
The BRB model is based on the IF-THEN modeling approach and consists of multiple rules 28. The k-th rule in the model can be expressed as follows:

R_k: IF x_1 is RA_1^k ∧ x_2 is RA_2^k ∧ ... ∧ x_T is RA_T^k,
THEN {(D_1, β_{1,k}), (D_2, β_{2,k}), ..., (D_N, β_{N,k})},
with rule weight θ_k and attribute weights δ_1, δ_2, ..., δ_T,  (1)

where x_i (i = 1, 2, ..., T) represents the i-th indicator of the complex system health assessment, RA_i^k is the reference value provided by experts for the i-th evaluation indicator in the k-th rule, β_{n,k} (n = 1, 2, ..., N) represents the belief degree of the n-th evaluation result D_n under the k-th rule, θ_k represents the weight of the k-th rule, and δ_i represents the attribute weight of the i-th attribute.

Interpretability definitions
The importance of understanding and interpreting assessment results in complex system health assessment cannot be ignored. Decision-makers need to understand the basis and reasoning process of assessment results in order to make informed decisions and take appropriate actions. Therefore, to maintain the interpretability of the I-BRB model, it is necessary to establish a reasonable and effective definition of interpretability. In reference 11, a set of general interpretability criteria for BRB was designed and defined, and I-BRB conforms to these general criteria. Additionally, addressing the existing issues in current BRB-based complex system health assessment models, this paper specifically emphasizes criteria 1 and 8. The I-BRB interpretability criteria are illustrated in Fig. 1.
Criterion 1: The reference values of variables can be distinguished.
In BRB, the reference values represent the positions on the evaluation scale where an attribute has typical meanings 19. They should be able to differentiate different ranges of the variable space and are typically set by experts based on domain knowledge and experience. The setting of reference values should match the specific implementation objectives and application scenarios, as different domains may require different approaches for setting reference values. Therefore, it is important to reasonably divide the reference value intervals for the evaluation indicators of complex system health status and assign them to different ranges of evaluation levels. These ranges should not overlap, and the reference value ranges should encompass the meanings associated with the evaluation indicators, ensuring a clear distinction between different divisions to meet the requirements of real complex systems.
Due to the significant uncertainty in complex systems, the reference values provided by experts may not be precise enough. This could impact the accurate differentiation of system states and, consequently, hinder the understanding of the system 12. Additionally, it may limit the accuracy of the complex system health condition assessment model. Typically, reference values for technical indicators in a system can exist within a certain range. When constructing a BRB, the reference values provided by experts are often empirical values within a feasible range, rather than exact values. Therefore, to enhance the accuracy of the I-BRB model without sacrificing interpretability, it is necessary to optimize the reference values within a reasonable range. The optimal reference values should be determined within the feasible interval provided by experts, and this can be described as:

Q_i^k : RA_i^k_Min ≤ RA_i^k ≤ RA_i^k_Max,  (4)

where Q_i^k represents the interpretability constraint for the i-th reference value in the k-th rule, RA_i^k_Min and RA_i^k_Max denote the minimum and maximum acceptable values for the reference value as determined by the experts, and h represents the set of reference values satisfying these constraints. This constraint ensures that the optimized reference values remain within the acceptable physical range during the reference value optimization process. By doing so, it prevents the parameters from deviating too far from the initial values provided by the experts, thus preserving the influence of expert knowledge.
Criterion 8: The optimized rules satisfy the requirements of complex system health state assessment.
In complex system health state assessment using I-BRB, it is essential that each step can be clearly described, and there should be a reasonable cause-and-effect relationship between the inputs and outputs. This is a prerequisite to ensure that the results of the assessment are understood and accepted by decision-makers 29. In the construction of an I-BRB-based model for assessing the health status of complex systems, expert knowledge is translated into parameters and applied to the construction of rules. Therefore, the model's inference results possess interpretability. However, in practical engineering problems, optimisation algorithms are often used to enhance model assessment accuracy. The use of optimisation algorithms to optimise model parameters is stochastic, which can undermine expert knowledge and lead to unconvincing evaluation results.
For example, in the assessment of the health state of an aircraft engine, the belief distribution of the output results is given as {(excellent: 0.35) (good: 0.1) (fair: 0.1) (poor: 0.45)}. This implies that the probability of the aircraft engine being in an excellent health state is 0.35, and the probability of it being in a poor health state is 0.45. Clearly, such an assessment result is unreasonable. The correct assessment result should be able to reasonably differentiate between two conflicting health states 30.
Therefore, in order to ensure that the initial expert knowledge is not disrupted during the optimization process of the model, the following interpretability constraint is proposed:

Z^k : the belief distribution {β_{1,k}, β_{2,k}, ..., β_{N,k}} is monotonic or convex,

where Z^k represents the interpretability constraint in the k-th rule, which may vary depending on different system characteristics. However, all such constraints should respect the actual belief distribution. A reasonable belief distribution shape should be monotonic or convex. As shown in Fig. 2, the belief distributions of Output1, Output2, and Output3 are reasonable. On the other hand, the belief distributions of Output4, Output5, and Output6 are concave or non-monotonic, which clearly indicates conflicting belief distributions 11.
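As a concrete illustration, the shape requirement of criterion 8 can be checked in a few lines. The sketch below is ours, not part of the paper (the function name `is_reasonable_belief` is an assumption); it accepts monotonic or single-peaked belief distributions over ordered health grades and rejects shapes with a valley between two conflicting grades, such as the aircraft engine example above.

```python
def is_reasonable_belief(beliefs):
    """Criterion 8 (sketch): a belief distribution over ordered health grades
    should be monotonic or single-peaked, never dipping and rising again."""
    diffs = [b2 - b1 for b1, b2 in zip(beliefs, beliefs[1:])]
    if all(d >= 0 for d in diffs) or all(d <= 0 for d in diffs):
        return True  # monotonic non-decreasing or non-increasing
    # single peak: once the sequence starts falling it must never rise again
    started_falling = False
    for d in diffs:
        if d < 0:
            started_falling = True
        elif d > 0 and started_falling:
            return False
    return True
```

For instance, the conflicting distribution {0.35, 0.1, 0.1, 0.45} from the aircraft engine example fails this check, while the expert distribution {0.95, 0.05, 0, 0} passes.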
Complex system health assessment models constructed on the basis of BRB have traceable relationships between inputs and outputs, which makes the interpretability of the model an inherent feature. However, due to limited expert knowledge, experts build initial models that may not meet the requirements of the actual system and require optimisation using observed data 28. Nevertheless, optimisation algorithms introduce stochasticity, and this can compromise the interpretability of health assessment models. Given the stringent reliability requirements for health assessment results of complex systems, in order to maintain the interpretability of the BRB model, the following constraints were designed.
Constraint 1: Effective use of expert knowledge. Domain experts typically possess rich knowledge and experience, providing them with a deeper understanding of the problem domain 11. The complex system health assessment model based on BRB effectively incorporates this valuable expertise into the model, thereby enhancing its accuracy and predictive capabilities. This becomes an important source of interpretability for the BRB-based model. The optimisation process in the interpretable BRB model can be regarded as a local search guided by the initial expert judgement 17. Thus, expert knowledge is translated and incorporated into the initial population of the optimisation algorithm, providing guidance for the optimisation process and efficiently extracting useful information from the search space:

w^0 = Ω_0(θ, β, δ),

where w^g represents the parameters of the g-th generation and Ω_0(θ, β, δ) is the initial parameter set given by the experts.
Constraint 2: The optimized parameters meet the judgement of experts.
In complex system health assessment, the interpretability of the evaluation results is of paramount importance. When constructing a health assessment model using BRB, the parameters are derived from expert knowledge 11. Compared to black-box models, the evaluation results of BRB have interpretability and can be convincing to decision-makers. However, when optimizing the BRB model using optimization algorithms, it is possible for the parameters to lose their original meanings and deviate significantly from the initial expert knowledge. This can make the evaluation results difficult to trust. To address this issue, it is possible to set reasonable range constraints to ensure that the parameters vary within an acceptable physical range. This can prevent the parameters from deviating too far from the initial values provided by the experts and preserve the influence of expert knowledge. Therefore, the proposed interpretability constraint is as follows:

H_lp ≤ w ≤ H_up,  (5)

where H_lp and H_up denote the lower and upper bounds of the parameters, respectively. The parameters referred to here include rule weights, attribute weights, and belief degrees.
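Constraint 2 amounts to a simple clipping operation. The sketch below is illustrative (the function name `project_to_expert_bounds` is our assumption): each parameter that drifts outside the expert-given physical range is pulled back to the nearest bound.

```python
def project_to_expert_bounds(params, lower, upper):
    """Constraint 2 (sketch): clip each optimized parameter back into the
    expert-given physical range [H_lp, H_up] so it keeps its meaning."""
    return [min(max(p, lo), hi) for p, lo, hi in zip(params, lower, upper)]
```

In a full implementation, the belief degrees of each rule would additionally be renormalized so they still sum to one after clipping.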
In the context of complex system health assessment based on BRB, the model's rules are constructed based on the knowledge and expertise of domain experts. Each rule describes a specific decision or reasoning process under certain conditions 30. These rules can be obtained through interactions with domain experts, knowledge extraction, or rule learning techniques. The parameters in the BRB model have practical meanings and can be interpreted as weights and belief degrees assigned to rules and conditions. Furthermore, the inference process of the BRB model is interpretable, as the model can demonstrate how it performs reasoning and decision-making based on input conditions and rules 9. By tracing the inference process, users can understand the logical reasoning and basis behind the model's decisions. Such interpretability allows users to comprehend the decision-making logic and rationale of the model. These characteristics make the BRB model widely applicable in complex system health assessment, particularly in application scenarios where model interpretation and understanding are essential. To optimize the model without compromising its interpretability, it is necessary to introduce the aforementioned interpretability constraints.

Reference value optimization
Complex systems often have numerous variables and interconnected parameters, and their operating mechanisms can be complex and partially unknown. Due to the system's uncertainty, experts may have limitations in understanding the system, resulting in less accurate reference values. Furthermore, the provision of expert knowledge is often influenced by individual subjectivity and experience. Different experts may have varying viewpoints and preferences, leading to differences in the reference values they provide. In some cases, experts may also face the challenge of insufficient data. Particularly in emerging fields or complex system assessments, the available data may be limited, affecting the experts' ability to provide accurate reference values.
The accuracy of the complex system health assessment model based on BRB is influenced by the reference values, as even slight differences in reference values can impact the assessment results. Setting reference values should be meaningful and aim to activate as many rules as possible. Due to the uncertainty of complex systems, the reference values provided by experts may not be precise 22. This can impact the differentiation of system states and further affect the understanding of the system. Typically, reference values for technical indicators of a system can be a range of values. In the BRB, reference values represent the range of values for rule antecedent attributes, used to transform input data into belief distributions and support the calculation of rule activation weights 7.
The selection of reference values is crucial as it significantly influences the performance of the model. Firstly, reference values should cover all possible ranges of rule attributes. This ensures that input data falls within the range of some reference value, enabling reasonable membership degree calculations. This is critical because if reference values cannot cover the entire range of possible values, it will lead to inadequate reasoning for all input data 23. Additionally, the design of reference values should minimize overlapping regions as much as possible 20. This means that the intersection between different reference values should be kept minimal to avoid situations where input data has high membership degrees in multiple reference values, causing uncertainty in rule activation weights. Reducing intersections helps improve the stability of system decision-making. Therefore, it is necessary to optimize the reference values without compromising the model's interpretability. Based on the above analysis, this paper proposes a K-means algorithm with interpretability constraints (KA-WIC), as shown in Fig. 3.
To preserve the model's interpretability, this paper introduces certain constraint conditions in KA-WIC to guide the optimization process of the reference values. Firstly, to effectively utilize expert knowledge, the reference values provided by experts are used as the initial cluster centers. This ensures that the optimization process starts from a meaningful and expert-guided initialization point. Secondly, the optimization process incorporates the experts' prior knowledge or experience as additional constraint conditions. This helps to enforce the rationality and accuracy of the reference values under the guidance of expert knowledge. By integrating these interpretability constraints into the optimization process, the proposed approach ensures that the reference values are optimized while maintaining the interpretability of the model. This allows for a more accurate and reliable assessment of the complex system's health status, leveraging both expert knowledge and data-driven optimization techniques.
By incorporating these interpretability constraints into the K-means algorithm, it is possible to consider both the data characteristics and expert knowledge during the optimization process of the reference values, without compromising the model's interpretability. This ensures that the optimized reference values are more aligned with the actual requirements and are easier to interpret and understand. It is important to note that when introducing constraint conditions, a balance between interpretability and clustering performance needs to be struck to ensure the effectiveness and accuracy of the algorithm.
The KA-WIC algorithm clusters data points by minimizing the distance between data points and cluster centers. Therefore, each cluster's center represents the data points within that cluster. The cluster center can be seen as the average or centroid of the data points within the cluster, as these points are close to one another and exhibit higher similarity. Thus, using the cluster center as a reference value provides a holistic description of the overall characteristics of the data within that cluster.
Moreover, cluster centers can also be seen as a summary of the data distribution. By calculating the coordinates or feature values of the cluster centers, we can obtain the average or central tendencies of the data in each dimension. These tendencies can reveal the concentration, bias, or focus of the data in different dimensions. Therefore, using cluster centers as reference values provides an understanding of the overall data distribution, aiding in the comprehension of data concentration and distribution patterns.
In order to optimize the reference values of the model, the objective function is formulated as follows:

A_i^k = oa(RA_i^k, data)  s.t.  Q_i^k,

where RA_i^k represents the k-th reference value for the i-th attribute given by the expert, A_i^k represents the k-th optimized reference value for the i-th attribute, and oa(·) denotes the interpretability-constrained optimization algorithm for reference value mining. The detailed steps of the KA-WIC algorithm for mining the reference value set are as follows:

Optimized reference values
Step 1: Initialize the reference value set A by using the expert-provided reference values as the initial cluster centers:

A = {μ_1, μ_2, ..., μ_T},  c_i = ∅ (i = 1, 2, ..., T),

where c_i represents the i-th cluster, μ_i represents the i-th cluster center, and T represents the number of cluster centers.
Step 2: Calculate the Euclidean distance between each data point and each cluster center as follows:

dist(x_j, μ_i) = ||x_j - μ_i||_2,  j = 1, 2, ..., M,

where x_j represents the j-th data point of the health assessment indicator data, M represents the total number of data points, and dist(x_j, μ_i) denotes the distance from data point x_j to cluster center μ_i.
Step 3: Update the assigned cluster for each data point:

λ_j = arg min_{i ∈ {1, ..., T}} dist(x_j, μ_i),

where arg min returns the index of the minimum value, and data point x_j is assigned to cluster c_{λ_j}.
Step 4: Introduce interpretability criterion 1 to ensure that the cluster centroids are updated within a reasonable range and that the updated centroids remain distinguishable. The formula for updating the centroids is as follows:

μ_i = (1/|c_i|) Σ_{x ∈ c_i} x,  subject to Q_i,

Step 5: Minimize the objective function, the sum of squared errors within clusters:

J = Σ_{i=1}^{T} Σ_{x ∈ c_i} ||x - μ_i||^2,

where J represents the sum of squared errors within the clusters.
Repeat steps 2 to 5 until a convergence criterion is met or the maximum number of iterations is reached. The obtained cluster centroids then represent the optimized reference values:

A = {μ_1*, μ_2*, ..., μ_T*}.
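Steps 1 to 5 can be sketched as a bounded K-means for a single one-dimensional indicator. This is our illustrative reading, not the authors' implementation: the function name `ka_wic` and the per-centroid `(min, max)` bound format are assumptions; the bounds implement criterion 1 and the expert values serve as initial centroids (constraint 1).

```python
import numpy as np

def ka_wic(data, init_centers, bounds, max_iter=100, tol=1e-6):
    """Sketch of KA-WIC for one indicator.
    data:         1-D array of observed indicator values
    init_centers: expert reference values, used as initial centroids
    bounds:       (min, max) per reference value; centroid updates are
                  projected back into the expert-acceptable interval
    """
    data = np.asarray(data, dtype=float)
    centers = np.asarray(init_centers, dtype=float)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    for _ in range(max_iter):
        # Steps 2-3: assign each point to its nearest centroid
        labels = np.argmin(np.abs(data[:, None] - centers[None, :]), axis=1)
        new_centers = centers.copy()
        for i in range(len(centers)):
            pts = data[labels == i]
            if pts.size:  # Step 4: mean update, projected into its bound
                new_centers[i] = np.clip(pts.mean(), lo[i], hi[i])
        if np.max(np.abs(new_centers - centers)) < tol:
            centers = new_centers
            break
        centers = new_centers
    return np.sort(centers)  # keep reference values ordered and distinct
```

Because each centroid is clipped into its own non-overlapping expert interval, the optimized reference values cannot drift outside the physically acceptable range, which is exactly what criterion 1 demands.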

Reference value optimized BRB
To address the challenges in complex system health assessment, an I-BRB model is constructed, where the k-th rule is formulated as follows:

R_k: IF x_1 is A_1^k ∧ x_2 is A_2^k ∧ ... ∧ x_T is A_T^k, THEN {(D_1, β_{1,k}), ..., (D_N, β_{N,k})}, subject to Criterion 1,

where A_i^k is the optimized reference value obtained by KA-WIC. The overall modeling process of I-BRB is illustrated in Fig. 4.
After constructing the I-BRB model for complex system health assessment, the inference process can be performed on each model. This process is based on the ER algorithm, and the inference process is transparent and interpretable 31.
Step 1: Transforming different forms of information into belief distributions.
a_{i,j} = (A_{i,j+1} - x_i) / (A_{i,j+1} - A_{i,j}),  a_{i,j+1} = 1 - a_{i,j},  for A_{i,j} ≤ x_i ≤ A_{i,j+1},

where a_{i,j} represents the matching degree of the i-th attribute with respect to its j-th reference value, and A_{i,j} represents the corresponding reference values for that attribute.
Step 2: Calculate the activation weight ω_k for the k-th rule using the following formula:

ω_k = θ_k Π_{i=1}^{T} (a_i^k)^{δ̄_i} / Σ_{l=1}^{L} θ_l Π_{i=1}^{T} (a_i^l)^{δ̄_l},  δ̄_i = δ_i / max_i {δ_i},

where δ_i represents the attribute weight for the i-th evaluation indicator and L is the number of rules.
Step 3: Generate the inference output belief degree β_n using the analytical ER algorithm:

β_n = μ [Π_{k=1}^{L} (ω_k β_{n,k} + 1 - ω_k Σ_{j=1}^{N} β_{j,k}) - Π_{k=1}^{L} (1 - ω_k Σ_{j=1}^{N} β_{j,k})] / (1 - μ Π_{k=1}^{L} (1 - ω_k)),  (17)

μ = [Σ_{n=1}^{N} Π_{k=1}^{L} (ω_k β_{n,k} + 1 - ω_k Σ_{j=1}^{N} β_{j,k}) - (N - 1) Π_{k=1}^{L} (1 - ω_k Σ_{j=1}^{N} β_{j,k})]^{-1}.

The final assessment result is expressed as the belief distribution

S(A′) = {(D_n, β_n), n = 1, 2, ..., N},

where S(·) represents the set of belief distributions and A′ is the actual input vector.
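Steps 1 to 3 of the inference process can be sketched as follows. This is an illustrative reading under simplifying assumptions (one attribute per transformation call, complete belief distributions); the function names are ours, and the ER combination follows the standard analytical form.

```python
import numpy as np

def matching_degrees(x, refs):
    """Step 1 (sketch): transform a crisp input x into matching degrees over
    the ascending reference values refs by linear interpolation."""
    refs = np.asarray(refs, dtype=float)
    a = np.zeros(len(refs))
    x = min(max(x, refs[0]), refs[-1])  # clamp to the covered range
    j = int(np.searchsorted(refs, x, side="right")) - 1
    if j >= len(refs) - 1:              # x sits on the last reference value
        a[-1] = 1.0
    else:
        a[j + 1] = (x - refs[j]) / (refs[j + 1] - refs[j])
        a[j] = 1.0 - a[j + 1]
    return a

def activation_weights(theta, match, delta):
    """Step 2 (sketch): omega_k proportional to theta_k * prod_i a_{i,k}^d_i,
    with attribute weights delta normalized by their maximum."""
    d = np.asarray(delta, float) / max(delta)
    raw = np.asarray(theta, float) * np.prod(np.asarray(match, float) ** d, axis=1)
    return raw / raw.sum()

def er_combine(w, B):
    """Step 3 (sketch): analytical ER fusion of the activated rules.
    w: activation weight per rule; B: K x N matrix of belief degrees."""
    w = np.asarray(w, float)
    B = np.asarray(B, float)
    s = B.sum(axis=1)                   # total belief per rule
    term = np.prod(w[:, None] * B + (1 - w * s)[:, None], axis=0)
    d = np.prod(1 - w * s)
    mu = 1.0 / (term.sum() - (len(term) - 1) * d)
    return mu * (term - d) / (1 - mu * np.prod(1 - w))
```

With a single fully activated rule, `er_combine` returns that rule's beliefs unchanged, and with complete beliefs the fused distribution sums to one, which is a convenient sanity check on Eq. (17).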

Optimization of remaining parameters
In the optimal case of reference values in BRB, the optimization of the remaining parameters, including rule weights, belief degrees, and attribute weights, is equally important. Even slight differences in these parameters can significantly affect the prediction accuracy of BRB 8. In the current research stage, many high-performance algorithms are used for the optimization process of the model 29. In this paper, the P-CMA-ES algorithm is employed to optimize the remaining parameters of I-BRB, further improving the model's accuracy. To ensure the interpretability of the model is not compromised during the optimization process, interpretability constraints 1 and 2 and interpretability criterion 8 are embedded in the algorithm.
To optimize the remaining parameters of the model, including rule weights, belief degrees, and attribute weights, the objective function is formulated as follows:

min MSE(θ, β, δ)  s.t.  constraints 1 and 2 and criterion 8,

where MSE(·) represents the prediction error of the model, which can be further described as:

MSE = (1/M) Σ_{m=1}^{M} (output_forecast(m) - output_actual(m))^2,

where M represents the number of samples, output_forecast represents the model's predicted results, and output_actual represents the actual values.
The steps for running the P-CMA-ES algorithm are shown in Fig. 5, and the specific implementation process is as follows:

Step 1: To effectively utilize expert knowledge, incorporate interpretability constraint 1 during the parameter initialization step:

w^0 = Ω_0(θ, β, δ),  (21)

where the initial parameter set Ω_0(θ, β, δ) contains the parameters to be optimized. Interpretability constraint 1 incorporates expert knowledge into the initial population of the model, allowing expert knowledge to guide and improve the optimization process. Additionally, interpretability constraint 1 ensures that the optimization starts near the optimal solution of the model.
Step 2: A sampling operation is performed to obtain each generation, incorporating interpretability constraint 2. The corresponding formula is as follows:

w_i^{g+1} ~ w^g + ε^g N(0, C^g),  i = 1, 2, ..., λ,

where w_i^{g+1} represents the i-th solution evolved in the (g+1)-th generation, w^g and ε^g represent the mean value and step size in the g-th generation, C^g denotes the covariance matrix in the g-th generation, and N(·) and λ represent the normal distribution and the number of offspring, respectively. Interpretability constraint 2 ensures that the parameters do not lose their physical meaning during the optimization process, thereby maintaining the interpretability of the model.
Step 3: Criterion operation. Using interpretability criterion 8, adjust the rules that are not consistent with reality:

w_i^{g+1} ← β_i^{g+1},

where w_i^{g+1} represents the i-th solution in the (g+1)-th generation, which may not be consistent with the actual belief distribution, and β_i^{g+1} represents the reasonable belief generated under interpretability criterion 8, which replaces it through the ← operation.
Step 4: Projection operation. The solution is projected onto the feasible hyperplane to satisfy the constraint given by Eq. (30). The hyperplane can be represented by Eq. (31):

w_i^{g+1}(1, n_e × j) = w_i^{g+1}(1, n_e × j) - A_e^T × (A_e × A_e^T)^{-1} × w_i^{g+1}(1, n_e × j) × A_e,

where A_e represents the parameter vector of the equality constraint in the solution w_i^{g+1}, and n_e and j respectively denote the number of constrained variables and the number of equality constraints.
Step 5: The mean of the next generation is updated using the following formula:

w^{g+1} = Σ_{i=1}^{τ} h_i w_{i:λ}^{g+1},

where h_i represents the weight coefficient, w_{i:λ}^{g+1} is the i-th best solution among the solutions of the (g+1)-th generation, and τ represents the size of the offspring population.
Step 6: The update formula for the covariance matrix is as follows:

C^{g+1} = (1 - c_1 - c_2) C^g + c_1 P_c^{g+1} (P_c^{g+1})^T + c_2 Σ_{i=1}^{ϕ^g} h_i K_{i:λ}^{g+1} (K_{i:λ}^{g+1})^T,

where ρ^g represents the step size of the g-th generation, c_1 and c_2 represent the learning rates, P_c^{g+1} represents the evolution path of the (g+1)-th generation, ϕ^g represents the offspring population size of the g-th generation, and K_{i:λ}^{g+1} = (w_{i:λ}^{g+1} - w^g) / ρ^g represents the i-th parameter vector of the (g+1)-th generation.
Step 7: Repeat steps 2 to 6 until the best parameters are obtained.
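To make the interplay of the three interpretability mechanisms concrete, the heavily simplified evolution strategy below stands in for P-CMA-ES. It is not the authors' implementation: it omits covariance and evolution-path adaptation (using only an isotropic, decaying step size) and keeps just the interpretability-relevant parts, namely expert initialization (constraint 1), clipping into expert bounds (constraint 2), and mean recombination from the best offspring. All names are illustrative.

```python
import numpy as np

def optimize_params(objective, w0, lower, upper, sigma=0.1,
                    pop=20, parents=10, gens=50, seed=0):
    """Simplified (mu, lambda)-style evolution strategy sketch.
    w0:           expert-given parameter vector (constraint 1: initial mean)
    lower/upper:  expert bounds; offspring are clipped back into the
                  acceptable range before evaluation (constraint 2)
    """
    rng = np.random.default_rng(seed)
    mean = np.asarray(w0, dtype=float)
    lo = np.asarray(lower, dtype=float)
    hi = np.asarray(upper, dtype=float)
    for _ in range(gens):
        # Step 2: sample offspring around the current mean
        pop_w = mean + sigma * rng.standard_normal((pop, mean.size))
        pop_w = np.clip(pop_w, lo, hi)          # constraint 2 projection
        scores = np.array([objective(w) for w in pop_w])
        elite = pop_w[np.argsort(scores)[:parents]]
        # Step 5: recombine the best offspring into the next mean
        mean = elite.mean(axis=0)
        sigma *= 0.95  # crude step-size decay; CMA-ES adapts this online
    return np.clip(mean, lo, hi)
```

In the I-BRB setting, `objective` would be the MSE of the model on the training data, with criterion 8 applied as a repair step on each sampled belief vector before evaluation.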

Case study
The flywheel system is a typical complex system, and its stable operation has a significant impact on the safe operation of spacecraft in orbit. Due to the high cost of conducting experiments on the entire flywheel system and the high failure rate of bearing components, this experiment only selects the flywheel bearing component as a case to validate the effectiveness of the proposed method. In this case, the elevated bearing temperature and decreased rotational speed are taken as two input indicators, and the bearing health status is the output. The remaining parts of this section are arranged as follows: In Section "Initial I-BRB build", the optimization of reference values and the construction of the initial I-BRB model are discussed. In Section "Model optimization", the inference and optimization of the model are presented. In Section "Analysis of experimental results", the experimental results of the case study are analyzed. In Section "Contrast experiment", comparative experiments are discussed.

Initial I-BRB build
In the BRB-based health assessment of complex systems, the reference values are initially provided by experts. Expert knowledge is accumulated knowledge of the long-term operation of the actual flywheel system and is an important source of interpretability for the BRB expert system. In this experiment, the dataset contains a total of 199 samples. 30% of the data is selected for model training, and 70% of the data is used for validation. The experts have set 4 reference values for each input indicator, as shown in Table 1, resulting in a total of 16 rules being defined 29.
Among them, Z1 represents axial temperature, Z2 represents rotational speed, and H represents the health status of the bearing component. In this experiment, the health status is categorized into four levels: very poor (H1), poor (H2), fair (H3), and very good (H4). Due to the limitations of expert knowledge, the reference values provided by experts may not be sufficiently accurate. Therefore, it is necessary to optimize the reference values within a reasonable range in practical health assessment to improve the accuracy of model evaluation.
Under the constraint of interpretability criterion 1, the KA-WIC algorithm is employed to optimize the reference values. The reference points and reference value constraints are shown in Table 2, and the optimized results are presented in Fig. 6. In Fig. 6, the optimized reference values for Z1 (axial temperature) closely match the reference points in Table 1. The expert settings were derived from the analysis of multiple batches of the same model flywheels, combined with in-orbit usage and historical failure cases. In the experimental case, there is a positive correlation between the health status levels of the assessment indicators, namely the axle temperature and the rotational speed, and the health status level of the bearing. For example, when the temperature is in state H1 and the speed is in state H1, both indicators are in their worst state, indicating the poorest initial health status of the bearing. Based on their expertise, the experts set the initial belief distribution as {0.95, 0.05, 0.00, 0.00}, where the belief for the "very poor" health status assessment is 0.95, for the "poor" health status assessment is 0.05, and for the "fair" and "very good" health status assessments is 0. Due to the fuzziness and incompleteness of cognition, the initial parameter distribution provided by experts may not be perfectly accurate, but it can still provide a relatively reasonable starting point.
Combining the optimized reference values with the initial values of attribute weights, rule weights, and belief degrees provided by experts, an initial I-BRB model for the health assessment of the flywheel is constructed.

Model optimization
In the health assessment of complex systems, the initial parameters provided by experts may not be sufficiently accurate, which can affect the accuracy of the model. To improve the accuracy of the I-BRB model without compromising its interpretability, this experiment employs the P-CMA-ES algorithm with interpretability constraints 1 and 2 and interpretability criterion 8 for model optimization. The optimized belief degrees are shown in Fig. 7.
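The defining feature of P-CMA-ES is its projection step: after each generation, infeasible candidate solutions are pulled back into the feasible set. For belief degrees, feasibility means non-negative components summing to one. A minimal clip-and-renormalize sketch of such a projection (not the paper's exact operator) is:

```python
# Illustrative projection step of the kind P-CMA-ES applies to belief
# degrees: clip negatives to zero, then renormalize to sum to one.

def project_beliefs(beliefs):
    clipped = [max(b, 0.0) for b in beliefs]
    total = sum(clipped)
    if total == 0.0:
        return [1.0 / len(beliefs)] * len(beliefs)  # fall back to uniform
    return [b / total for b in clipped]

raw = [1.1, -0.2, 0.3, 0.1]  # infeasible candidate from the CMA-ES sampler
print(project_beliefs(raw))  # a valid belief distribution
```

The interpretability constraints mentioned above would add further conditions on top of this basic feasibility projection, e.g., keeping optimized beliefs close to the expert-specified distribution.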
Expert knowledge is an important source of interpretability for BRB-based complex systems, representing knowledge accumulated from the long-term operation of actual flywheel systems. Assuming that expert knowledge is authoritative and reliable, users can place a high level of trust in the initial BRB model constructed from it. When expert knowledge is used as the initial belief distribution and adjusted moderately against the data by the I-BRB model, the resulting belief distribution should not deviate excessively from the initial one. The proximity between the output belief distribution and the initial belief distribution therefore reflects the interpretability of the model: the closer the belief after data-driven correction by I-BRB is to expert knowledge, the stronger the model's interpretability.

To verify the advantages of the proposed method, comparative models were constructed for the assessment of the flywheel health status. The mean squared error (MSE) of the evaluation results is presented in Table 8. Compared with machine learning algorithms such as LR, RLR, DT, MDT, CDT, LSVM, FGP, GBT, and RF, I-BRB demonstrates better predictive accuracy and interpretability in assessing flywheel health status. Although CGP achieves higher predictive accuracy, its evaluation results lack interpretability and are therefore difficult for decision-makers to accept.
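The MSE in Table 8 compares scalar assessments against observed values; a belief distribution over H1..H4 is first collapsed into a single score via level utilities. The utilities and sample data below are hypothetical, used only to show the computation:

```python
# MSE between model assessments and observed health values. The utilities
# assigned to the four health levels are hypothetical placeholders.

def expected_health(belief, utilities=(0.0, 1.0, 2.0, 3.0)):
    """Collapse a belief distribution over H1..H4 into a single score."""
    return sum(b * u for b, u in zip(belief, utilities))

def mse(predictions, actuals):
    return sum((p - a) ** 2 for p, a in zip(predictions, actuals)) / len(actuals)

beliefs = [[0.9, 0.1, 0.0, 0.0], [0.0, 0.2, 0.7, 0.1]]  # sample model outputs
preds = [expected_health(b) for b in beliefs]
print(mse(preds, [0.0, 2.0]))
```

Note that a low MSE alone does not capture interpretability; that is assessed separately via the proximity of the output beliefs to the expert-specified distribution.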
K-P-BRB shows significantly higher accuracy than P-BRB, indicating that the KA-WIC algorithm effectively adjusts the reference values and improves the model's accuracy. Compared with K-P-BRB and P-BRB, I-BRB achieves higher accuracy while maintaining interpretability.
Based on the above comparisons, I-BRB can be effectively applied to complex system health assessment problems: it improves modeling accuracy while retaining the interpretability of the model.

Conclusion
This method provides a powerful approach for the health assessment of complex systems by comprehensively optimizing all parameters while preserving the interpretability of the BRB. By optimizing the reference values within a reasonable range, the method achieves improved accuracy while maintaining model interpretability.
The results demonstrate that the optimized reference values closely align with expert knowledge, indicating the effectiveness of the KA-WIC and P-CMA-ES algorithms in fine-tuning them. The assessment model based on the optimized reference values outperforms machine learning algorithms such as LR, RLR, DT, MDT, CDT, LSVM, FGP, GBT, and RF in both prediction accuracy and interpretability.
Furthermore, the I-BRB model surpasses the K-P-BRB and P-BRB models in accuracy and interpretability, highlighting its superiority in complex system health assessment. The CGP model exhibits higher prediction accuracy, but its lack of interpretability hinders its acceptance by decision-makers.
Overall, the proposed method, with its emphasis on reference value optimization and interpretability, offers an effective solution for complex system health assessment: it balances accuracy and comprehensibility, providing decision-makers with reliable and understandable assessment results. Future research can explore further enhancements to this method and its application in other domains to improve system reliability and decision-making processes.

Figure 1. Interpretability criteria of I-BRB, including: (2) the completeness of the rule base; (3) parameters and structures with actual meaning; (4) matching-degree standardization; (5) reasonable information transformation; (6) a transparent inference engine; and (7) the simplicity of the rule base.

Figure 4. The modeling process of the complex system health state assessment model based on I-BRB.

Figure 8. I-BRB evaluation results and actual values.

Table 2. Reference points and reference value constraints.

Table 8. Comparative experiments of different models.