Introduction

Statistical Process Control (SPC) is a widely used quality management approach that enables organizations to improve processes, minimize variability, and ensure the consistent delivery of high-quality products and services. By leveraging data-driven decision-making and driving continuous process improvement, SPC makes a vital contribution to maintaining customer satisfaction and remaining competitive in dynamic markets. Pioneered by Walter A. Shewhart and W. Edwards Deming, the fundamental principle of SPC is the ability to distinguish between common and special cause variation within processes, enabling companies to locate and remediate the sources of special cause variation and thereby optimize process control. Using statistical tools such as control charts and root cause analysis, SPC provides valuable insights for taking corrective and preventive actions, resulting in significant reductions in defects, waste, and operational costs, and further enhancing a company's overall efficiency and profitability. In short, SPC and control charts (CCs) are essential for maintaining process stability and product quality and for facilitating ongoing enhancements, enabling organizations to stay competitive and meet the demands of a dynamic market environment. Shewhart1 introduced CCs that operate without the use of historical data, relying exclusively on the current sample information to detect significant changes in production processes. In contrast, memory-type CCs, such as the cumulative sum (CUSUM) and exponentially weighted moving average (EWMA) CCs suggested by2 and3, integrate both current and historical sample data into their computations. Notably, the CUSUM and EWMA CCs exhibit heightened sensitivity to minor and moderate shifts in process parameters, surpassing the capabilities of the traditional Shewhart CC.
These memory-type CCs, especially the EWMA and CUSUM variants, are widely applied in chemical and industrial production processes. Herdiani et al.4 studied how CCs in SPC can be adapted to address the violation of the assumption of mutually independent observations, especially for interrelated processes with autocorrelation, by using time series models and the Markov chain method to evaluate the ARL performance of the EWMA mean chart in autoregressive processes. Wu et al.5 introduced a distribution-free upper-sided EWMA CC to monitor both time intervals and magnitudes of events without relying on known distributions. The method utilizes 'continuousify' and classical Markov chain analysis to ensure reliable run length properties. A numerical comparison with the parametric Shewhart TBEA chart confirms the effectiveness of this approach, demonstrated through an example using a French forest fire database. Alevizakos et al.6 presented the Triple EWMA (TEWMA) CC, which outperforms other charts in detecting small and moderate shifts in the process mean. Monte Carlo simulations confirm its effectiveness, showing superior performance for small shifts and comparable results for moderate and large shifts, with better inertia properties and robustness for small smoothing parameters. CCs serve as a beneficial tool when shift size information is available or when analyzing specific shifts. Nevertheless, in certain instances, the shift size may not be known prior to using CCs. Quality investigators can enhance the detection capability for changes of varying magnitudes by exploring double and adaptive control charting techniques. Ali et al.7 investigated the NPMEPSN chart, utilizing both simple and RSS methods to effectively detect small shifts, and demonstrated its robust performance across various distributions.
The enhanced version (NPMEPRSN) under the RSS scheme exhibited superior performance compared to alternatives, confirmed through simulations and a real dataset concerning piston ring diameter. Noor-ul-Amin and Sarwar8 presented a function-based AEWMA CC for monitoring the variance–covariance matrix in a multivariate normal process with composite observations, demonstrating superior performance in covariance matrix detection compared to existing EWMA and AEWMA charts. An example using data from the bimetal thermostat industry highlights the chart's practical effectiveness. Noor-ul-Amin and Sarwar9 proposed an AMEWMA CC for detecting mean vector variations in a multivariate normal distribution, demonstrating superior shift detection compared to MEWMA and existing AMEWMA charts through Monte Carlo simulations. A real-life case study on process capability for turning aluminum pins exemplifies the practical application of the proposed chart, monitoring six quality characteristics. Zaman et al.10 proposed an AEWMA using Huber and Tukey's bisquare functions to monitor shifts of varying sizes in manufacturing and non-manufacturing processes. Performance measures, like ARL and extra quadratic loss, show its competitive efficiency in detecting small to large shifts, validated through an illustrative example with real data. CCs have enhanced capabilities in detecting minor to moderate disturbances, but their efficiency is affected when measurement errors (ME) occur during data collection for constructing the charts. ME can cause variations in the measured study variables, leading to quality disruptions and undesirable results, consequently diminishing the ability of CCs to detect out-of-control signals. Researchers have introduced various methods to address this challenge of ME in CCs. Maravelakis et al.11 investigated the influence of ME on the EWMA CCs capability to detect out-of-control situations, particularly focusing on a shift in mean with linear covariates. 
The study also explores the impact of multiple measurements on each sampled unit and linearly increasing variance, concluding that ME significantly affects the chart's performance in detecting mean shifts. Arshad et al.12 studied the FAEWMA-ME-CV CC, which, considering ME, a linear covariate model, and multiple measurements, efficiently detects infrequent process changes in the form of parametric shifts in the process coefficient of variation (CV). Tang et al.13 investigated the impact of measurement errors on the AEWMA median chart's efficiency, proposed a parameter optimization strategy, and demonstrated its superiority over Shewhart and classical EWMA schemes, emphasizing the importance of multiple measurements per sample point. Yang et al.14 proposed an error-corrected dispersion CC, using a corrected sign statistic and an EWMA, effectively addressing MEs and improving control limits for process dispersion monitoring. Numerical analyses demonstrate the chart's capability to handle substantial measurement error levels, validated by its successful application in semiconductor data. Zaidi et al.15 worked on monitoring compositional data using the MEWMA-CoDa chart, exploring the impact of MEs and showcasing its superior performance in detecting shifts. The study assesses the influence of the device parameters (σM, b) alongside the number of independent observations (m) and variables (p), exemplified through a muesli production case. Evidence from earlier studies shows a prevalent use of the conventional methodology, which relies solely on sample data without incorporating prior knowledge. In contrast, the Bayesian approach combines sample data with prior knowledge to update and generate a posterior (P) distribution, thereby enhancing the estimation process. A Bayesian CC utilizing the P distribution to monitor the process mean was introduced by Saghir et al.16. Their methodology incorporates various LFs, providing adaptability to capture inherent process characteristics.
Noor-ul-Amin and Noor17 proposed a novel AEWMA CC for process mean monitoring using Bayesian theory, informed by different LFs, evaluated via ARL and SDRL, compared with the existing Bayesian EWMA chart, and validated with Monte Carlo simulations and a real-data case. Jones et al.18 adapted CUSUM and EWMA CCs to a Bayesian framework, utilizing P distributions and different LFs. Applied to count data, their simulations evaluate performance by considering shift size sensitivity and hyper-parameters, along with a real data application. Noor-ul-Amin and Noor19 introduced a Bayesian chart with SELF and LLF, addressing ME through P and posterior predictive (PP) distributions, evaluated via a linear covariate model, multiple measurements, and linearly increasing variance methods, and validated using run length profiles, Monte Carlo simulations, and a real-life data example. Khan et al.20 explored the performance of Bayesian-AEWMA control charts using RSS and two different LFs, incorporating MEs, a covariate model, and multiple measurement techniques. Assessment through run length profiles, alongside a semiconductor application, emphasizes its efficiency in identifying out-of-control signals, favoring the MRSS approach for managing MEs.

All the aforementioned work has been carried out for both the classical and Bayesian approaches. The main motivation of the current study is to address the issue of measurement error in CCs using Bayesian methodology. We introduce a Bayesian AEWMA CC that incorporates ME using conjugate priors and show its performance in the presence of ME. It accommodates two different LFs, i.e., the squared error loss function (SELF) and the LINEX loss function (LLF), and employs three ME methods: (i) covariate, (ii) multiple measurements, and (iii) linearly increasing variance. Performance assessment utilizes ARL and SDRL, determined through Monte Carlo simulations. The structure of this paper is as follows: In Section “Bayesian approach”, we delve into the Bayesian methodology applied to the AEWMA control chart and discuss the utilized LFs. Section “Measurement error” is dedicated to exploring ME, while in Section “Suggested Bayesian AEWMA CC using LF under ME”, we introduce the suggested implementation of the AEWMA CC with ME through Bayesian techniques. Discussions and key findings are summarized in Section “Discussion on tables and main findings”, and real-life data applications are showcased in Section “Real life data applications”. Section “Conclusion” concludes the article, while Section “Limitations of the study and future recommendations” addresses the limitations and provides recommendations.

Bayesian approach

The Bayesian methodology provides an alternative perspective compared to the frequentist (classical) approach, treating the parameter as a “random variable” following a prior distribution defined by hyperparameters. The P distribution is constructed using two categories of prior distributions: non-informative and informative. Non-informative priors, like Jeffreys and uniform priors, are frequently employed, while informative priors often rely on conjugate priors as a prominent family. Let’s examine the study variable X in an in-control process defined by parameters \(\theta\) (mean) and \(\delta^{2}\) (variance). In this context, a normal prior is utilized, with \(\theta_{0}\) and \(\delta_{0}^{2}\) serving as its corresponding parameters and defined as:

$$p\left( \theta \right) = \frac{1}{{\sqrt {2\pi \delta_{0}^{2} } }}\exp \left\{ { - \frac{1}{{2\delta_{0}^{2} }}\left( {\theta - \theta_{0} } \right)^{2} } \right\}$$
(1)

Formulating the P distribution involves combining the likelihood function of the sampling distribution and the prior distribution, resulting in a proportional relationship achieved through multiplication. Consequently, the resulting P distribution representing the unknown parameter \(\theta\), based on the observed data x, can be expressed as:

$$p\left( {\theta |x} \right) = \frac{{p\left( {x|\theta } \right)p\left( \theta \right)}}{{\int {p\left( {x|\theta } \right)p\left( \theta \right)d\theta } }}$$
(2)

The PP distribution is utilized to forecast future observations using the prior distribution while integrating data-derived information. Serving as the predictive distribution for new data Y, it enables predictions for upcoming observations while accounting for uncertainty. Integral to Bayesian theory, the PP distribution allows prior distributions to be updated with new data. Its mathematical representation is provided below:

$$p\left( {y|x} \right) = \int {p\left( {y|\theta } \right)p\left( {\theta |x} \right)d\theta }$$
(3)
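For the normal model with known variance and a normal prior used throughout this section, both the P and PP distributions have closed forms. The sketch below is an illustrative Python re-expression (the paper's own code, in Appendix A, is in R, and all numeric inputs here are hypothetical):

```python
# Closed-form posterior and posterior predictive (PP) parameters for a
# normal likelihood with known variance delta2 and a normal prior
# N(theta0, delta0_2). Inputs below are illustrative placeholders.
def posterior_params(ybar, n, delta2, theta0, delta0_2):
    """Posterior is N(theta_n, delta_n2) given a sample mean ybar of size n."""
    denom = delta2 + n * delta0_2
    theta_n = (n * ybar * delta0_2 + delta2 * theta0) / denom
    delta_n2 = (delta2 * delta0_2) / denom
    return theta_n, delta_n2

def posterior_predictive_params(ybar, n, delta2, theta0, delta0_2):
    """A new observation y has y|x ~ N(theta_n, delta2 + delta_n2)."""
    theta_n, delta_n2 = posterior_params(ybar, n, delta2, theta0, delta0_2)
    return theta_n, delta2 + delta_n2

theta_n, delta_n2 = posterior_params(ybar=0.2, n=5, delta2=1.0,
                                     theta0=0.0, delta0_2=1.0)
```

In this conjugate case the integral in Eq. (3) is evaluated analytically: a future observation is predicted from a normal distribution whose variance adds the sampling variance to the posterior variance.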

Loss functions

In the Bayesian approach, the “loss function” is essential for informed decision-making, quantifying the costs tied to choices and aiding trade-offs. It bridges statistical analysis and decision-making, guiding choices under uncertainty; integrating it enhances the holistic decision-making of the Bayesian framework, optimizing options and refining models across fields such as quality control and parameter estimation. In the current study, we consider both symmetric and asymmetric LFs. The squared error loss function (SELF) is commonly utilized as a symmetric LF in the context of Bayesian inference, as suggested by Gauss21. If X is the predictive variable and \(\hat{\theta }_{(SELF)}\) its estimate, then SELF is mathematically described as:

$$L\left( {X,\hat{\theta }_{(SELF)} } \right) = \left( {X - \hat{\theta }_{(SELF)} } \right)^{2}$$
(4)

and the Bayes estimator, which minimizes the SELF, is given below:

$$\hat{\theta }_{(SELF)} = E_{\theta /x} \left( \theta \right).$$
(5)

Varian22 introduced the LINEX loss function (LLF), an asymmetric LF tailored to scenarios where the impact of overestimation is pronounced. LLF allows a versatile assessment of the balance between underestimation and overestimation, making it valuable when one type of error holds greater importance than the other, especially in situations where the cost or consequences of overestimating a parameter or outcome are of greater concern than underestimation. The LLF is mathematically presented as:

$$L\left( {\theta ,\hat{\theta }_{(LLF)} } \right) = \exp \left( {c\left( {\theta - \hat{\theta }_{(LLF)} } \right)} \right) - c\left( {\theta - \hat{\theta }_{(LLF)} } \right) - 1$$
(6)

Using the LLF, the Bayes estimator is described as:

$$\hat{\theta }_{(LLF)} = - \frac{1}{c}\ln E_{\theta /x} \left( {e^{ - c\theta } } \right).$$
(7)
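For a normal posterior \(N(\theta_{n} ,\delta_{n}^{2})\), both estimators have closed forms: the SELF estimator of Eq. (5) is the posterior mean, while for the LLF the normal moment-generating function gives \(E_{\theta /x} (e^{ - c\theta } ) = \exp ( - c\theta_{n} + c^{2} \delta_{n}^{2} /2)\), so Eq. (7) reduces to \(\theta_{n} - c\delta_{n}^{2} /2\). A minimal Python sketch with illustrative inputs:

```python
import math

# Bayes estimators under SELF and LLF when the posterior is
# N(theta_n, delta_n2). For LLF, E[exp(-c*theta)] has a closed form
# via the normal moment-generating function.
def bayes_self(theta_n, delta_n2):
    return theta_n  # the posterior mean minimizes squared-error loss

def bayes_llf(theta_n, delta_n2, c):
    # -(1/c) * ln E[exp(-c*theta)] = theta_n - c*delta_n2/2
    return -(1.0 / c) * math.log(math.exp(-c * theta_n + 0.5 * c**2 * delta_n2))

theta_self = bayes_self(1.5, 0.04)
theta_llf = bayes_llf(1.5, 0.04, c=1.0)
```

For c > 0 the LLF estimate is pulled below the posterior mean, reflecting the asymmetric penalty on overestimation.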

Measurement error

Measurement error, arising from factors like instrument precision, human mistakes, and environmental variability, can significantly affect data accuracy across diverse fields. This phenomenon introduces discrepancies between actual values and recorded measurements, posing challenges in research by distorting relationships, statistical inferences, and conclusions. Recognizing its importance is vital for researchers and practitioners, as understanding and addressing ME impacts data validity, decision-making, and research quality, underscoring the need for robust strategies to enhance data integrity and interpretation. This study utilizes the covariate model to manage ME, employing the strategy of multiple measurements to minimize its impact. This process entails gathering multiple measurements for each observation, resulting in enhanced precision when estimating the actual underlying value. Additionally, the linearly increasing variance method is also discussed to address the issue of ME.

Covariate model

Bennett23 offered a model for assessing the influence of ME on the Shewhart control chart, described as \(Y = X + \varepsilon\). In this equation, X represents the study variable, following a normal distribution with a mean of \(\theta\) and a variance of \(\delta^{2}\). This framework pertains to the in-control process, where \(\varepsilon\) captures the stochastic error arising from measurement imprecision. Linna and Woodall24 subsequently investigated the covariate model, outlined as:

$$Y = A + BX + \varepsilon$$
(8)

The model incorporates constants A and B, alongside a normally distributed error term \(\varepsilon\) with a mean of zero and variance \(\delta_{m}^{2}\). All parameters in the model are presumed known, and X and \(\varepsilon\) are treated as independent, i.e., \(Cov\left( {X,\varepsilon } \right) = 0\); the observed variable Y is then also normally distributed, with mean \(A + B\theta\) and variance \(B^{2} \delta^{2} + \delta_{m}^{2}\).
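The stated moments of Y can be checked by simulation. The following sketch (illustrative parameter values, not taken from the paper) draws from the covariate model and compares the empirical mean and variance with \(A + B\theta\) and \(B^{2}\delta^{2} + \delta_{m}^{2}\):

```python
import random
import statistics

# Simulate the covariate ME model Y = A + B*X + eps with illustrative values:
# X ~ N(theta, delta^2) is the true quantity, eps ~ N(0, delta_m^2) is ME.
random.seed(1)
A, B = 0.0, 1.0
theta, delta = 0.0, 1.0   # in-control mean and s.d. of X
delta_m = 0.5             # measurement-error s.d.

ys = [A + B * random.gauss(theta, delta) + random.gauss(0.0, delta_m)
      for _ in range(200_000)]

mean_y = statistics.fmean(ys)      # should be near A + B*theta = 0
var_y = statistics.pvariance(ys)   # should be near B^2*delta^2 + delta_m^2 = 1.25
```

The empirical variance exceeds \(\delta^{2} = 1\), which is exactly the inflation that degrades the chart's detection ability.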

Multiple measurements method

Walden25 adopted a strategy of employing multiple measurements for each sampling unit, substituting a single measurement, thereby mitigating the variation induced by ME. As the number of multiple measurements increases indefinitely, the variance of the ME component approaches zero. Notably, implementing multiple measurements in the absence of ME does not affect the performance of CC techniques. When employing k multiple measurements per unit with a sample of size n, the variance of the overall mean can be formulated as:

$$\left( {\frac{{B^{2} \delta^{2} }}{n} + \frac{{\delta_{m}^{2} }}{nk}} \right)$$
(9)
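Eq. (9) can be evaluated directly to see that repeated measurements shrink only the ME component of the variance. A short sketch with illustrative values:

```python
# Variance of the overall mean with k repeated measurements per unit,
# per Eq. (9): B^2*delta^2/n + delta_m^2/(n*k). Values are illustrative.
def mean_variance(B, delta, delta_m, n, k):
    return (B**2 * delta**2) / n + delta_m**2 / (n * k)

v_single = mean_variance(B=1.0, delta=1.0, delta_m=0.5, n=5, k=1)  # one reading
v_repeat = mean_variance(B=1.0, delta=1.0, delta_m=0.5, n=5, k=5)  # five readings
```

As k grows, the second term vanishes and the variance tends to \(B^{2}\delta^{2}/n\), the ME-free value, which is why the method restores chart sensitivity.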

Linearly increasing variance method

Within Section “Measurement error”, a covariate model was explored, assuming a constant error variance. Let us utilize the identical model, \(Y = A + BX + \varepsilon\), wherein the error variance now alters linearly with the process level. Here, the term \(\varepsilon\) adheres to a normal distribution with a mean of 0 and a variance of \(C + D\theta\). Consequently, Y follows a normal distribution with a mean of \(A + B\theta\) and a variance of \(B^{2} \delta^{2} + C + D\theta\).

Suggested Bayesian AEWMA CC using LF under ME

The recommended Bayesian CC incorporates ME to identify unusual fluctuations in the location parameter of a normally distributed process. Let X1, X2, …, Xn denote a sequence of independent and identically distributed random variables, each adhering to a normal distribution with a mean of θ and a variance of \(\delta^{2}\). The corresponding probability density function is expressed as follows:

$$f\left( {x_{t} :\theta ,\delta^{2} } \right) = \frac{1}{{\sqrt {2\pi \delta^{2} } }}\exp \left( { - \tfrac{1}{{2\delta^{2} }}\left( {x_{t} - \theta } \right)^{2} } \right)$$
(10)

Consider the computed mean shift estimate \({\widehat{\delta }}_{t}^{*}\), an EWMA sequence originating from {Xt}, illustrated as:

$$\widehat{{\delta_{t}^{*} }} = \psi X_{t} + \left( {1 - \psi } \right)\widehat{{\delta_{t - 1}^{*} }}$$
(11)

In this scenario, with \(\widehat{{\delta_{0}^{*} }}\) set to 0 and ψ as the smoothing constant, the estimator \(\widehat{{\delta_{t}^{*} }}\) is unbiased in the in-control scenario, yet exhibits bias in an out-of-control process. To achieve unbiasedness in both cases, Haq et al.7 proposed the adoption of \(\widehat{{\delta_{t}^{**} }}\), defined as follows:

$$\widehat{{\delta_{t}^{**} }} = \frac{{\widehat{{\delta_{t}^{*} }}}}{{1 - \left( {1 - \psi } \right)^{t} }}$$
(12)

The authors recommended utilizing \(\widehat{{\delta_{t} }} = \left| {\widehat{{\delta_{t}^{**} }}} \right|\) for estimating \(\delta\).
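Eqs. (11)-(12) can be sketched as follows; for a constant input the bias-corrected estimator recovers that value exactly from the first observation, which illustrates why the correction is applied (values illustrative):

```python
# EWMA shift estimator of Eq. (11) with the bias correction of Eq. (12):
# delta_hat_t = |delta_star_t / (1 - (1 - psi)^t)|.
def shift_estimates(xs, psi):
    d_star, out = 0.0, []
    for t, x in enumerate(xs, start=1):
        d_star = psi * x + (1 - psi) * d_star          # Eq. (11)
        d_corr = d_star / (1 - (1 - psi) ** t)         # Eq. (12)
        out.append(abs(d_corr))                        # final estimate of delta
    return out

est = shift_estimates([0.5, 0.5, 0.5], psi=0.1)
```

Without the correction, the early estimates would be shrunk toward the starting value 0; with it, each estimate above equals the true constant shift 0.5.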

The offered AEWMA CC, applying the Bayesian approach for monitoring the process mean using the sequence \(\{ X_{t} \}\), is mathematically described as

$$Z_{t} = v\left( {\hat{\delta }_{t} } \right)\hat{\theta }_{(SELF)} + \left( {1 - v\left( {\hat{\delta }_{t} } \right)} \right)Z_{t - 1}$$
(13)

where \(v\left( {\hat{\delta }_{t} } \right) \in \left( {0,\left. 1 \right]} \right.\) and \(Z_{0} = 0\) such that

$$v\left( {\hat{\delta }_{t} } \right) = \left\{ {\begin{array}{*{20}l} {\frac{1}{{a\left[ {1 + \left( {\hat{\delta }_{t} } \right)^{ - c} } \right]}}} \hfill & {if} \hfill & {0 < \hat{\delta }_{t} \le 2.7} \hfill \\ 1 \hfill & {if} \hfill & {\hat{\delta }_{t} > 2.7} \hfill \\ \end{array} } \right.$$
(14)

Atif et al.26 proposed the application of the function described in Eq. (14) to dynamically modify the smoothing constant value in response to the estimated shift. The suggested values for the constants used in the function \(v(\hat{\delta }_{t} )\) are a = 7 and c = 1 when the estimated shift \(\hat{\delta }_{t}\) falls within the range of 1 to 2.7, and c = 2 for estimated shift values \(\hat{\delta }_{t}\) less than or equal to 1. Whether the process is in control or out of control depends on whether the plotting statistic of the Bayesian AEWMA chart exceeds a predetermined threshold value h; the process is considered in control as long as the plotting statistic remains below this threshold.
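The adaptive weight of Eq. (14) and the resulting chart update of Eq. (13) can be sketched as below. The limit h used here is an illustrative placeholder, not one of the paper's tabled control limits:

```python
# Adaptive weight v(delta_hat) of Eq. (14) with a = 7, c = 2 for
# delta_hat <= 1 and c = 1 for 1 < delta_hat <= 2.7.
def v(delta_hat, a=7.0):
    if delta_hat > 2.7:
        return 1.0
    c = 2.0 if delta_hat <= 1.0 else 1.0
    return 1.0 / (a * (1.0 + delta_hat ** (-c)))

# One AEWMA update, Eq. (13): a small estimated shift gives a small weight
# (smooth chart), a large one makes the chart react like a Shewhart chart.
def aewma_step(z_prev, theta_hat, delta_hat):
    w = v(delta_hat)
    return w * theta_hat + (1.0 - w) * z_prev

z = aewma_step(z_prev=0.0, theta_hat=0.4, delta_hat=0.5)
signal = z > 0.3   # compare against a chart limit h (here h = 0.3, illustrative)
```

In practice h is chosen by simulation so that the in-control ARL matches a target value.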

If both the likelihood function and the prior distribution conform to normal distributions, the resultant P distribution takes on a Gaussian form, characterized by a mean denoted as θn and a variance represented by \(\delta_{n}^{2}\). This culminates in the establishment of a probability density function, which can be expressed as follows:

$$P\left( {\theta |y} \right) = \frac{1}{{\sqrt {2\pi } \sqrt {\frac{{\delta^{2} \delta_{0}^{2} }}{{\delta^{2} + n\delta_{0}^{2} }}} }}\exp \left[ { - \frac{1}{2}\left( {\frac{{\theta - \frac{{\sum\limits_{i = 1}^{n} {y_{i} \delta_{0}^{2} } + \theta_{0} \delta^{2} }}{{\delta^{2} + n\delta_{0}^{2} }}}}{{\sqrt {\frac{{\delta^{2} \delta_{0}^{2} }}{{\delta^{2} + n\delta_{0}^{2} }}} }}} \right)^{2} } \right]$$
(15)

where \(\theta /Y \sim N\left( {\theta_{n} ,\delta_{n}^{2} } \right)\), i.e., \(\theta_{n} = \frac{{n\overline{y}\delta_{0}^{2} + \delta^{2} \theta_{0} }}{{\delta^{2} + n\delta_{0}^{2} }}\) and \(\delta_{n}^{2} = \frac{{\delta^{2} \delta_{0}^{2} }}{{\delta^{2} + n\delta_{0}^{2} }}\).

Suggested Bayesian CC under ME utilizing SELF using P and PP with covariate model

The Bayes estimator utilized in the proposed Bayesian AEWMA CC under SELF with covariate model is given by:

$$\hat{\theta }_{psc(SELF)} = \frac{{n\overline{y}\delta_{0}^{2} + \left( {B^{2} \delta^{2} + \delta_{m}^{2} } \right)\theta_{0} }}{{n\delta_{0}^{2} + B^{2} \delta^{2} + \delta_{m}^{2} }}$$
(16)

Recommended Bayesian CC under ME with SELF using P and PP for multiple measurement method

The Bayes estimator based on the suggested CC using SELF for P and PP distribution under multiple measurements method is mathematized as:

$$\hat{\theta }_{psmm(SELF)} = \frac{{n\overline{y}\delta_{0}^{2} + \left( {\frac{{B^{2} \delta^{2} }}{n} + \frac{{\delta_{m}^{2} }}{nk}} \right)\theta_{0} }}{{n\delta_{0}^{2} + \left( {\frac{{B^{2} \delta^{2} }}{n} + \frac{{\delta_{m}^{2} }}{nk}} \right)}}.$$
(17)

Proposed Bayesian AEWMA CC under ME with SELF using P and PP for linearly increasing variance method

The estimator under SELF using suggested AEWMA utilizing Bayesian approach based on the linearly increasing variance method is described as

$$\hat{\theta }_{psl(SELF)} = \frac{{n\overline{y}\delta_{0}^{2} + \left( {B^{2} \delta^{2} + C + D\theta } \right)\theta_{0} }}{{n\delta_{0}^{2} + \left( {B^{2} \delta^{2} + C + D\theta } \right)}}.$$
(18)
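Eqs. (16)-(18) share one template: the posterior mean with the in-control variance replaced by the ME-inflated variance of the relevant model. A Python sketch with illustrative placeholder inputs (the paper's own code is in R):

```python
# SELF Bayes estimator template shared by Eqs. (16)-(18): a posterior mean
# in which var_eff is the effective (ME-inflated) variance of each model.
def self_estimator(ybar, n, theta0, delta0_2, var_eff):
    return (n * ybar * delta0_2 + var_eff * theta0) / (n * delta0_2 + var_eff)

# Illustrative placeholder values, not taken from the paper's tables.
n, ybar, theta0, delta0_2 = 5, 0.2, 0.0, 1.0
B, delta, delta_m = 1.0, 1.0, 0.5
C, D, theta, k = 0.2, 0.1, 0.0, 5

cov = self_estimator(ybar, n, theta0, delta0_2,
                     B**2 * delta**2 + delta_m**2)                 # Eq. (16)
mm = self_estimator(ybar, n, theta0, delta0_2,
                    (B**2 * delta**2) / n + delta_m**2 / (n * k))  # Eq. (17)
liv = self_estimator(ybar, n, theta0, delta0_2,
                     B**2 * delta**2 + C + D * theta)              # Eq. (18)
```

The multiple-measurements estimator uses the smallest effective variance, so it weights the sample mean most heavily, consistent with its superior run-length behavior reported later.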

Appendix A contains the estimators for the proposed Bayesian AEWMA CC, which incorporates an informative prior for both the covariate model and the multiple measurements method, along with the linearly increasing variance approach under LLF. Furthermore, Appendix A contains the comprehensive R codes necessary for assessing the run length profile, thus enabling the effective use and execution of the suggested Bayesian AEWMA CC.

Discussion on tables and main findings

Tables 1, 2 and 3 serve as a platform to showcase the outcomes obtained through the application of the Bayesian AEWMA CC approach, both in scenarios with and without ME considerations. The analysis meticulously considers the influence of two distinct LFs, tailored to the P and PP distributions. These evaluations are conducted within the context of informative priors, strategically employed through the covariate method. Tables 4 and 5 present the steady-state ARL values that highlight the effectiveness of the suggested CC, emphasizing its proficiency in recognizing and addressing ME and demonstrating its superiority over other methods in the same context. In a parallel manner, Tables 7, 8, 9 and Tables 10, 11, 12 adhere to the same structured approach, yet they encompass a more comprehensive perspective by accommodating multiple measurements originating from the same sampled values. This expanded scope is complemented by the utilization of the linearly increasing variance method, adding another layer of sophistication to the analysis.

Table 1 ARL and SDRL outcomes for CC in the context of ME, within the SELF framework for the covariate model, with \(\lambda\) = 0.10.
Table 2 Using the covariate model, run-length results of the Bayesian chart in the presence of ME under LLF, with \(\lambda\) = 0.10.
Table 3 The run length profiles for AEWMA CC under ME with LLF for covariate model, with \(\lambda\) = 0.10.
Table 4 Using covariate model, steady state ARL and SDRL results of the Bayesian CC under ME with SELF at \(\lambda\) = 0.10 and cut = 5.
Table 5 Steady state ARL and SDRL in the existence of ME for covariate model using SELF, with \(\lambda\) = 0.10 at cut = 20.
Table 6 For sensitivity analysis, run length outcomes of Bayesian chart applying SELF for covariate model considering ME, with different values of smoothing constant and sample sizes.
Table 7 The run length profiles for Bayesian chart under ME under SELF for multiple measurement method, with \(\lambda\) = 0.10.
Table 8 The run length profiles for Bayesian AEWMA CC in the presence of ME using P distribution under LLF for multiple measurement method, with \(\lambda\) = 0.10.
Table 9 The run length profiles of AEWMA chart applying ME under LLF for multiple measurement method, with \(\lambda\) = 0.10.
Table 10 The run length profiles for Bayesian CC with ME applying SELF for linearly related variance, with \(\lambda\) = 0.10.
Table 11 The run length profiles for Bayesian-AEWMA chart under ME with LLF for linearly related variance, with \(\lambda\) = 0.10.
Table 12 The run length profiles for Bayesian CC applying ME using PP distribution under LLF for linearly related variance, with \(\lambda\) = 0.10.

This section of the study delves deeply into the content of these tables, embarking on a comprehensive exploration that reveals significant insights stemming from the utilization of the AEWMA CC method within the Bayesian framework. Notably, the versatility of the approach is underscored by its adaptation to diverse LFs, thereby enriching the overall understanding of the underlying phenomena. Tables 1, 2, 3, 4, 5, 6, 7, 8 and 9 indicate that as the shift magnitude increases from 0.10 to 0.20 and continues up to 3, both the ARL and SDRL values decrease. This decline suggests that even minor, moderate, and substantial changes in the process parameter are detected at an earlier stage. This conclusion is supported by the reduced ARL values for each shift compared to their previous values. This trend culminates in an ARL approaching unity at shift 3. These consistent patterns persist across all tables, irrespective of the presence of errors (ranging from none to magnitudes of 0.1, 0.2, 0.5, or 1). These observations underscore a fundamental trait of AEWMA CCs.

Shifting our focus to the impact of ME on CC efficiency, a consistent pattern emerges from the analysis of the tables. As the magnitude of error escalates from zero to 0.2 and further to a unit value, a corresponding increase is observed in the ARLs. This increase leads to delays in detecting process shifts when employing methods to address ME. Consequently, it becomes evident that ME exerts a detrimental effect on the effectiveness of Bayesian AEWMA CCs in identifying process shifts within the context of industrial production. In Table 1, we closely scrutinize the ARL outcomes of the proposed Bayesian AEWMA CC method. This table provides a comprehensive display of these outcomes based on the recommended designs utilizing SELF for the covariate model, while considering specific parameter values, namely A = 0 and B = 1. Moreover, the table presents the results across various scenarios characterized by different levels of ME, ranging from no error to ME ratios of 0.1, 0.2, 0.5, and 1. Upon closer examination, a discernible pattern becomes apparent: the run length values exhibit an upward trend as the value of ME increases. In simpler terms, as the magnitude of ME rises, the associated run length values also increase. This pattern suggests that higher levels of ME influence the detection efficiency of the Bayesian CC, potentially leading to delays in monitoring shifts in the process mean. For example, under SELF, at \(\frac{{\delta_{m}^{2} }}{{\delta^{2} }}\) = 0.0, 0.1, 0.2, 0.5 and 1 with \(\lambda = 0.10\) and shift \(\sigma = 0.30\), the ARL values are 43.86, 46.95, 49.91, 58.50 and 71.93, and for shift \(\sigma = 0.80\), the ARL values are 8.05, 8.81, 9.55, 11.76, and 15.32. The same pattern is observed in Table 2 for the P distribution and Table 3 for the PP distribution under LLF.
For instance, from Table 3, at \(\frac{{\delta_{m}^{2} }}{{\delta^{2} }}\) = 0.0, 0.1, 0.2, 0.5 and 1, using \(\lambda = 0.10\) with \(\sigma = 0.30\), the run length values are 43.99, 47.30, 52.34, 58.18, and 70.57. The same efficiency is shown in Figs. 1, 2 and 3, which present the ARL plots using the covariate method for the suggested design.
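The run-length profiles reported in these tables come from Monte Carlo simulation (the paper's R code is in Appendix A). As a rough illustration of the procedure only, the simplified Python sketch below simulates run lengths for a plain EWMA of sample means under the covariate ME model; the limit h is an illustrative placeholder, and the full Bayesian AEWMA statistic of Eq. (13) would replace the plain sample mean:

```python
import random
import statistics

# Simplified run-length simulation under the covariate ME model
# Y = A + B*X + eps: an EWMA of sample means is monitored against an
# illustrative limit h (NOT a tabled value from the paper).
def run_length(shift, lam=0.10, h=0.25, n=5, A=0.0, B=1.0,
               delta=1.0, delta_m=0.5, max_t=5000, rng=random):
    z = 0.0
    for t in range(1, max_t + 1):
        ybar = statistics.fmean(
            A + B * rng.gauss(shift, delta) + rng.gauss(0.0, delta_m)
            for _ in range(n))
        z = lam * ybar + (1 - lam) * z      # EWMA of sample means
        if abs(z) > h:                      # out-of-control signal
            return t
    return max_t                            # censored run

random.seed(7)
arl_shifted = statistics.fmean(run_length(shift=1.0) for _ in range(200))
```

Averaging many simulated run lengths gives the ARL; the SDRL is the standard deviation of the same replicates.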

Figure 1

Under covariate model, the ARL plots for the suggested CC using SELF consider values of \(\frac{{\delta_{m}^{2} }}{{\delta^{2} }}\).

Figure 2

Utilizing covariate model, ARL plots for the suggested CC using LLF with various values of \(\frac{{\delta_{m}^{2} }}{{\delta^{2} }}\).

Figure 3

Using covariate model, ARL plots for the suggested CC for P and PP distribution using various values of \(\frac{{\delta_{m}^{2} }}{{\delta^{2} }}\).

The data in Tables 7, 8 and 9 illustrate the significant impact of conducting multiple measurements on the same sample in mitigating the ME effect. Comparing the run length outcomes of Tables 7, 8 and 9 with those of Tables 1, 2 and 3 demonstrates that the utilization of multiple measurements effectively minimizes the ME effect. This enhancement in chart efficiency enables the early detection of process shifts. As an illustration, under SELF in Table 7, when \(\frac{{\delta_{m}^{2} }}{{\delta^{2} }}\) = 0.0, 0.1, 0.2, 0.5, and 1 are applied in conjunction with \(\lambda = 0.10\) and shift \(\sigma = 0.30\), the corresponding ARL values are 43.64, 44.66, 45.83, 47.39, and 50.04. Notably, these values are substantially lower than those detailed in Table 1, as discussed previously. Figures 4, 5 and 6 show the ARL plots for the multiple measurements method and indicate the same efficiency. Tables 10, 11 and 12 present the ARL and SDRL values for the Bayesian AEWMA CC applied to a scenario involving linearly increasing variance. The experiments entail varying the parameter D across different LFs. Notably, in Table 10, the ARL values at δ = 0.3 are observed to be 179.79 and 263.31 for D = 0 and 3, respectively, within the SELF condition. This trend suggests a decrease in the efficiency of the Bayesian AEWMA control chart as the value of D increases. Additionally, Figs. 7, 8 and 9 also indicate an increase in ARL values with an increase in the value of D. Table 6 provides insights into the sensitivity of the suggested CC when confronted with ME while examining various smoothing constant values and sample sizes. The data clearly illustrate that higher smoothing constant values correspond to increased ARL values, indicating the CC's inefficiency in handling ME under larger smoothing constants. Moreover, an increase in the sample size leads to a noticeable decline in the resulting ARL values across different shift values.

Figure 4

Using multiple measurements, the ARL plots for the proposed CC using SELF consider values of \(\frac{{\delta_{m}^{2} }}{{\delta^{2} }}\).

Figure 5

Utilizing multiple measurements, ARL plots for the recommended CC using LLF with different values of \(\frac{{\delta_{m}^{2} }}{{\delta^{2} }}\).

Figure 6

Under LLF, ARL plots for the suggested CC for P and PP distribution using distinct values of \(\frac{{\delta_{m}^{2} }}{{\delta^{2} }}\).

Figure 7

ARL plots for the suggested CC under SELF with the linearly increasing variance method, for different values of D.

Figure 8

Using LLF, ARL plots for the suggested CC applying linearly increasing variance method with values of D.

Figure 9

Using LLF, ARL plots for the suggested CC applying linearly increasing variance method with values of D for P and PP distribution.

Based on the preceding discussion, the key findings can be summarized as follows.

  • The efficacy of AEWMA CCs in detecting minor to moderate process shifts is apparent from the ARL and SDRL values presented in all nine tables for the proposed strategy.

  • The previous discussion also addressed the negative effect of ME on the effectiveness of the proposed CCs.

  • The decrease in error effect resulting from multiple measurements is apparent from the ARL values in Tables 7, 8 and 9 and the earlier discussions about these tables. This confirms the effectiveness of utilizing multiple measures to alleviate the error effect in our recommended CCs.

  • By analyzing P and PP distributions using the Bayesian framework, it becomes clear that the newly introduced Bayesian AEWMA CC, when implemented with multiple measurements and in the presence of ME, shows reduced vulnerability to ME compared to the other two methods. These observations stem from the utilization of informative priors and the integration of both LFs.
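As a concrete illustration of the three ME-handling strategies compared above, the following sketch generates contaminated observations under each model. These are typical forms from the ME literature; the parameter names (A, B, C, D, sigma_m) and the exact model equations are assumptions for illustration, not the paper's precise formulation.

```python
import numpy as np

rng = np.random.default_rng(7)

def covariate(x, A=0.0, B=1.0, sigma_m=0.5):
    """Covariate (additive/linear error) model: observed = A + B*true + error."""
    return A + B * x + rng.normal(0.0, sigma_m, size=np.shape(x))

def multiple_measurements(x, k=3, sigma_m=0.5):
    """Average k repeated readings per item; the error variance
    contribution shrinks from sigma_m**2 to sigma_m**2 / k."""
    reps = x[..., None] + rng.normal(0.0, sigma_m, size=(*np.shape(x), k))
    return reps.mean(axis=-1)

def linearly_increasing_variance(x, C=0.1, D=1.0):
    """Error variance grows linearly with the process level: C + D*|mu|."""
    mu = float(np.mean(x))
    return x + rng.normal(0.0, np.sqrt(C + D * abs(mu)), size=np.shape(x))
```

The shrinking error variance under `multiple_measurements` is exactly why the ARL values in Tables 7, 8 and 9 sit below their single-measurement counterparts, while the D-dependence of `linearly_increasing_variance` mirrors the ARL inflation seen in Tables 10, 11 and 12.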

Real life data applications

This article illustrates the practical implementation of the recommended Bayesian AEWMA CC in the context of ME. The dataset utilized for this purpose is drawn from Montgomery27 and concerns the hard-bake process in semiconductor production. It consists of 45 samples of 5 wafers each, for a total of 225 data points, each a measurement of flow width expressed in microns, with a uniform one-hour interval between samples. The initial 35 samples (175 observations) represent the in-control process and are designated as the phase-I dataset, while the remaining 10 samples (50 observations) reflect the out-of-control process and are labeled as the phase-II dataset. The phase-I dataset is employed to estimate the parameters necessary for determining the control limits. Typically, these limits are derived from the stable data collected during phase I, and the same dataset is then used to construct the chart. If all the points fall within the established control limits, these limits are carried over to phase II. In cases where points extend beyond the control limits, the out-of-control points are identified and removed, after which new control limits are constructed for the phase-II implementation.
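The phase-I/phase-II procedure described above can be sketched as follows. This is a minimal Shewhart-style illustration on synthetic data, since the hard-bake measurements are not reproduced here; the iterative trimming loop and the rough estimate of the subgroup-mean standard deviation are assumptions of this sketch.

```python
import numpy as np

def phase1_limits(samples, L=3.0, max_iter=10):
    """Trial control limits from phase-I subgroups (rows = subgroups).
    Subgroups plotting outside the limits are dropped and the limits
    re-estimated until all remaining points plot in control."""
    for _ in range(max_iter):
        means = samples.mean(axis=1)
        center = means.mean()
        n = samples.shape[1]
        # rough sd of the subgroup mean from the average within-subgroup sd
        sigma_xbar = samples.std(axis=1, ddof=1).mean() / np.sqrt(n)
        lcl, ucl = center - L * sigma_xbar, center + L * sigma_xbar
        keep = (means >= lcl) & (means <= ucl)
        if keep.all():
            break
        samples = samples[keep]
    return center, lcl, ucl

# Demo: 35 in-control subgroups (phase I), 10 shifted subgroups (phase II).
rng = np.random.default_rng(42)
phase1 = rng.normal(0.0, 1.0, size=(35, 5))
phase2 = rng.normal(2.0, 1.0, size=(10, 5))   # mean shift in phase II
center, lcl, ucl = phase1_limits(phase1)
flags = (phase2.mean(axis=1) > ucl) | (phase2.mean(axis=1) < lcl)
```

In the real application, `phase1` would hold the first 35 hard-bake samples and `phase2` the remaining 10, with the limits estimated exactly as described in the text.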

To apply the suggested Bayesian AEWMA chart using the covariate model with SELF, we examine the error ratio values \(\frac{{\delta_{m}^{2} }}{{\delta^{2} }}\) = 0.0, 0.3, and 0.5. The results of the recommended chart under the covariate model with SELF are displayed in Figs. 10, 11, and 12, respectively. After careful examination, it becomes clear that the process strays from its controlled state at the 35th, 39th, and 41st samples.

Figure 10

Applying covariate model, Bayesian chart with SELF at \(\frac{{\delta_{m}^{2} }}{{\delta^{2} }} = 0\).

Figure 11

Utilizing covariate model, Bayesian CC by utilizing SELF at \(\frac{{\delta_{m}^{2} }}{{\delta^{2} }} = 0.3.\)

Figure 12

Suggested CC for covariate model using SELF at \(\frac{{\delta_{m}^{2} }}{{\delta^{2} }} = 0.5.\)

Figures 13, 14, and 15 showcase the execution of the suggested CC using the multiple measurements method with the SELF, for error ratios of 0.0, 0.3, and 0.5. A detailed examination of these charts reveals that the process shows signs of being out of control at the 36th, 38th, and 40th samples. Additionally, Figs. 16, 17, and 18 provide a visual representation of the performance of the suggested CC under the linearly increasing variance method with the SELF; these figures clearly demonstrate that the process exhibits indications of being out of control at the 37th, 40th, and 42nd samples in the same context. This highlights that the multiple measurements method is the most effective of the three schemes in terms of the efficacy of the CCs for detecting process shifts during industrial production.

Figure 13

Utilizing SELF, AEWMA CC for multiple measurements method at \(\frac{{\delta_{m}^{2} }}{{\delta^{2} }} = 0\).

Figure 14

AEWMA CC with multiple measurement method under SELF for \(\frac{{\delta_{m}^{2} }}{{\delta^{2} }} = 0.3.\)

Figure 15

Bayesian CC by using SELF for multiple measurement method at \(\frac{{\delta_{m}^{2} }}{{\delta^{2} }} = 0.5.\)

Figure 16

Bayesian CC using linearly increasing variance with SELF for D = 1.

Figure 17

Bayesian AEWMA CC applying linearly increasing variance under SELF for D = 3.

Figure 18

Suggested CC applying linearly increasing variance under SELF for D = 5.

Conclusion

This study focuses on examining how ME affects the AEWMA CC using Bayesian techniques, specifically incorporating various LFs such as SELF and LLF. To assess the performance of the proposed CC with ME, two metrics are employed: the ARL and the SDRL. The ARL results offer valuable information on the simulation outcomes of the Bayesian CCs employing three different strategies to address ME: the covariate method, multiple measurements, and linearly increasing variance. The results indicate that the suggested Bayesian AEWMA CC, when implemented with the multiple measurements design, demonstrates improved effectiveness compared to the alternative methods in scenarios involving ME. As a result, the research recommends adopting the suggested chart with multiple measurements for robust monitoring of shifts in the process mean, particularly when ME is a contributing factor.

Limitations of the study and future recommendations

Implementing Bayesian CCs in the presence of ME often encounters computational challenges, particularly with large sample sizes, requiring resource-intensive procedures. Moreover, the reliance on carefully selected prior distributions for the process mean and variance can introduce subjectivity and potential bias within the Bayesian approach. Furthermore, the assumption of data exchangeability in the Bayesian-AEWMA CC with ME can yield unreliable results, demanding a thorough evaluation of data exchangeability prior to implementation. The proposed Bayesian CCs, when applied under ME, offer a versatile approach that can be extended to other memory-type control charts, allowing the adaptation of the methodology to accommodate diverse probability distributions beyond the normal distribution, such as Poisson or binomial distributions. Nevertheless, these extensions necessitate modifications to the likelihood function during the Bayesian updating procedure. By incorporating these adjustments, the methodology becomes adaptable to a broader range of data distributions, enhancing its applicability and potential impact across various industries. This adaptability is particularly valuable in sectors like healthcare, finance, and manufacturing, where diverse data distributions are encountered, enabling more effective and efficient quality control processes.
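As a minimal illustration of the Bayesian updating step that such extensions would modify, the following sketch performs the conjugate normal update with an error-inflated likelihood variance and returns the Bayes estimates under SELF (the posterior mean) and under LLF, here interpreted as the LINEX loss with coefficient c. The parameterization and the LINEX interpretation are assumptions of this sketch, not the paper's exact derivation; extending to Poisson or binomial data would replace the normal likelihood in this update.

```python
import numpy as np

def posterior_normal(xbar, n, sigma2, sigma2_m, theta0, delta0_sq):
    """Conjugate update for a normal mean when observations carry
    additive ME: the likelihood variance is inflated by sigma2_m."""
    s2 = sigma2 + sigma2_m                   # total variance of one observation
    prec = n / s2 + 1.0 / delta0_sq          # posterior precision
    mean = (n * xbar / s2 + theta0 / delta0_sq) / prec
    return mean, 1.0 / prec

def self_estimator(mean, var):
    """Posterior mean minimizes the squared-error loss function (SELF)."""
    return mean

def llf_estimator(mean, var, c=1.0):
    """Bayes rule under LINEX loss: -(1/c) * log E[exp(-c * theta)],
    which for a normal posterior equals mean - c * var / 2."""
    return mean - 0.5 * c * var
```

Note how increasing `sigma2_m` widens `s2`, pulling the posterior mean toward the prior mean `theta0`; this shrinkage under informative priors is one reason the Bayesian charts retain some robustness to ME.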