Introduction

Statistical process monitoring is used to monitor variation in a process and to ensure the delivery of good-quality outputs (products/services). Control charts are the main tools for this purpose. The first control chart was introduced by Shewhart in 1924. Since this control chart is memory-less, it is slow in detecting small and moderate shifts. To improve the sensitivity of the Shewhart control chart, different approaches have been proposed; two of the main ones are memory-type and adaptive schemes.

Whereas the statistic in a memory-less control chart depends only on the current sample, the statistic in a memory-type control chart also depends on the previous samples (its value is updated based on both the current and the previous samples). The main memory-type control charts are the EWMA (exponentially weighted moving average) and CUSUM (cumulative sum) control charts.

On the other hand, in adaptive schemes, at least one of the chart’s parameters (sampling interval, sample size, and control limits) is allowed to vary from sample to sample (usually between two possible values). The main adaptive schemes are VSI (Variable Sampling Interval), VSS (Variable Sample Size), VSSI (Variable Sample Size and Sampling Interval), and VP (variable parameters). Studies such as Sabahno et al.1 have shown that the VP scheme, in which all the chart parameters are allowed to vary, is the best-performing scheme. For some notable works regarding adding different adaptive schemes to different control charts, we refer interested readers to Sabahno et al.1,2,3,4, Sabahno & Celano5, and Sabahno6.

Although adaptive schemes were initially developed to be used in memory-less control charts, studies such as Perdikis & Psarakis7 have shown that combining both approaches (using memory-type and adaptive control charts) further improves the chart’s sensitivity and performance. Nonetheless, there are also studies such as Amir et al.8 and Abbas et al.9 that have used auxiliary information to increase the sensitivity of memory-type control charts.

Profile monitoring is a special case of statistical process monitoring in which, instead of quality characteristics, the relationship between some dependent and independent variables, in the form of a regression model, is monitored. Maintaining this relationship ensures the process quality in this case. The idea of profile monitoring was introduced by Kang & Albin10, who presented simple linear profiles with two main applications in semiconductor and food manufacturing and used the memory-less Hotelling's T2 and the memory-type EWMA control charts. A significant development in monitoring simple linear profiles was made by Kim et al.11, who developed a monitoring scheme using three EWMA statistics. Another notable work was proposed by Zou et al.12, who used a control chart based on a change-point model to monitor linear profiles. Other notable works that have extended Kang & Albin10's research are Yeh et al.13, Noorossana et al.14, Eyvazian et al.15, Hosseinifard et al.16, and Zou et al.17.

Simultaneous monitoring of the normal process parameters (the mean and variability) usually results in better overall performance. There are two main simultaneous monitoring approaches, one using a single chart and the other using two charts; the former is preferred due to its simplicity of use. Some notable works that have considered simultaneous monitoring of profile parameters are Zhang et al.18, Eyvazian et al.15, Khedmati & Niaki19, Ghashghaei & Amiri20,21, Mahmood et al.22, Saeed et al.23, Ghashghaei et al.24, Malela-Majika et al.25, Abbasi et al.26, Sabahno & Amiri27, and Sherwani et al.28. Ghashghaei & Amiri20 developed two memory-type control charts using a max operator, namely the Max-MEWMA (multivariate EWMA) and Max-MCUSUM (multivariate CUSUM) control charts, for simultaneous monitoring of the mean vector and variance-covariance matrix of multivariate multiple linear profiles. Ghashghaei & Amiri21 did the same but using an SS (squared sum) operator, calling their control charts SS-EWMA and SS-CUSUM. In addition to the control charts for monitoring the profile parameters mentioned above, they also developed control charts for monitoring the residuals. Both these studies used simulation to compute the performance measures. Mahmood et al.22 developed SS-type and Max-type EWMA control charts using three EWMA statistics and showed superior performance over using those three EWMA statistics separately (EWMA3), three separate Hotelling T2 charts, and EWMA-R charts. Saeed et al.23 developed a scheme using three progressive statistics that were monitored separately and showed superior performance over the existing charts, including EWMA3, EWMA-R, Hotelling T2, and a scheme with three separate Shewhart-type charts. Abbas et al.29 investigated Bayesian EWMA and MEWMA control charts for monitoring linear profiles when the explanatory variables are random; however, in most studies, including ours, the explanatory variables are assumed to be fixed.

Moreover, the following studies have considered adaptive schemes in profile monitoring. Li & Wang30 developed an EWMA scheme with variable sampling intervals (VSI) for monitoring linear profiles. Abdella et al.31 developed a Hotelling T2 scheme with varying sample sizes and sampling intervals (VSSI). Ershadi et al.32 investigated the economic-statistical design of an EWMA scheme with variable sampling intervals (VSI) for linear profile monitoring. Magalhaes & Von Doellinger33 developed a variable sample size (VSS) χ2 scheme for linear profile monitoring. Kazemzadeh et al.34 developed EWMA3 and MEWMA schemes with variable sample sizes. Ershadi et al.35 investigated the economic-statistical design of an EWMA scheme with variable sample sizes (VSS) for linear profile monitoring. Darbani & Shadman36 developed a generalized likelihood ratio control chart with variable sampling intervals for monitoring linear profiles. Yeganeh et al.37 developed an adaptive MEWMA control chart based on the ratio of samples. Haq38 developed adaptive MEWMA charts, varying the smoothing parameter, for monitoring linear profiles. Sabahno & Amiri27 developed a VP memory-less Max-type control chart for simultaneous monitoring of the mean vector and the variance-covariance matrix in multivariate multiple linear profiles, evaluating the chart performance using a dedicated Markov chain model. Sabahno & Amiri39 developed memory-less machine-learning-based control charts and compared them to the best available statistical control charts for monitoring generalized linear regression profiles' parameters in both fixed and variable parameters schemes.

According to the literature and the review paper of Perdikis & Psarakis7, VP adaptive schemes have not yet been developed for memory-type control charts. In this paper, we develop VP schemes for four memory-type control charts: the Max-MEWMA and Max-MCUSUM charts for monitoring the regression parameters (from Ghashghaei & Amiri20), and two SS-type control charts for monitoring the residuals (from Ghashghaei & Amiri21). Note that we only use Ghashghaei & Amiri21's SS-type control charts for the residuals, because they found them to be more effective than the SS-type charts for monitoring the regression parameters. We also develop a computer algorithm to compute different performance measures of the developed control charts, which can be used for any other memory-type VP control chart as well. Furthermore, we use a real case to show how the proposed control charts can be implemented in practice. To do so, we use a dataset from the national Swedish Stroke Register, Riksstroke. After estimating two correlated multiple profiles, we develop and implement the proposed control charts to monitor the stroke care-related relationships.

This paper is structured as follows. Multivariate multiple linear profiles are described in Section "Multivariate multiple linear profiles". The Max-type and SS-type memory-type control charts for simultaneous monitoring of multivariate multiple linear profiles are described in Section "Max-type and SS-type memory-type control charts". In Section "Design parameters in a variable parameters scheme", VP adaptive schemes are developed for control charts described in Section "Max-type and SS-type memory-type control charts". Section "Performance measures" contains the proposed algorithm to compute the performance measures of the proposed control charts. Section "Simulation studies" contains extensive simulation studies and numerical analyses to evaluate the proposed control charts’ performance using the proposed performance measure algorithm, under different shift types and sizes. Our real case illustrative example is presented in Section "A real case". Concluding remarks and suggestions for future developments of the paper are mentioned in Section "Conclusions".

Multivariate multiple linear profiles

For the kth sample of size n, with p response variables (profiles), \({\mathbf{Y}}_{k}\) is an \(n\times p\) matrix. \({\mathbf{Y}}_{k}\) is a linear function of a set of independent variables x, so that:

$${\mathbf{Y}}_{k}=\mathbf{X}\mathbf{B}+{\mathbf{\rm E}}_{k},$$
(1)

where \(\mathbf{X}\) is an \(n\times \left(q+1\right)\) matrix of explanatory (independent) variables, \(q\) is the number of independent variables, \(\mathbf{B}\) is a \(\left(q+1\right)\times p\) matrix of regression parameters, and \({\mathbf{\rm E}}_{k}\) is an \(n\times p\) matrix of correlated error terms (\(\varepsilon\)) whose rows follow a multivariate normal distribution with mean vector \(\mathbf{0}\) and covariance matrix \({\varvec{\Sigma}}=\left(\begin{array}{cccc}{\sigma }_{11}& {\sigma }_{12}& \cdots & {\sigma }_{1p}\\ \vdots & \vdots & \ddots & \vdots \\ {\sigma }_{p1}& {\sigma }_{p2}& \cdots & {\sigma }_{pp}\end{array}\right)\), where \({\sigma }_{gh}\) denotes the covariance between the error terms of the gth and the hth response variables at each observation.

Therefore, we can write Eq. (1) as:

$$\left(\begin{array}{cccc}{y}_{11k}& {y}_{12k}& \cdots & {y}_{1pk}\\ \vdots & \vdots & \ddots & \vdots \\ {y}_{n1k}& {y}_{n2k}& \cdots & {y}_{npk}\end{array}\right)=\left(\begin{array}{cccc}1& {x}_{11}& \cdots & {x}_{1q}\\ \vdots & \vdots & \ddots & \vdots \\ 1& {x}_{n1}& \cdots & {x}_{nq}\end{array}\right)_{n\times \left(q+1\right)}\left(\begin{array}{cccc}{\beta }_{01}& {\beta }_{02}& \cdots & {\beta }_{0p}\\ \vdots & \vdots & \ddots & \vdots \\ {\beta }_{q1}& {\beta }_{q2}& \cdots & {\beta }_{qp}\end{array}\right)_{\left(q+1\right)\times p}+\left(\begin{array}{cccc}{\varepsilon }_{11k}& {\varepsilon }_{12k}& \cdots & {\varepsilon }_{1pk}\\ \vdots & \vdots & \ddots & \vdots \\ {\varepsilon }_{n1k}& {\varepsilon }_{n2k}& \cdots & {\varepsilon }_{npk}\end{array}\right)_{n\times p}.$$
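To make the data-generating model concrete, the following minimal sketch (in Python with NumPy, our choice) simulates one in-control sample from Eq. (1); the specific X, B, and Σ values are illustrative and match the p = 2 setting used later in the simulation studies.

```python
# A sketch of generating one in-control sample Y_k = X B + E_k (Eq. 1);
# the numeric values are the p = 2 illustration used in the simulation section.
import numpy as np

rng = np.random.default_rng(1)

X = np.array([[1, 2, 1],
              [1, 4, 2],
              [1, 6, 3],
              [1, 8, 2]], dtype=float)      # n x (q+1) design matrix (n=4, q=2)
B = np.array([[3.0, 2.0],                   # intercepts of the two profiles
              [2.0, 1.0],                   # coefficients of x1
              [1.0, 1.0]])                  # coefficients of x2 -> (q+1) x p
Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])              # p x p error covariance matrix

n, p = X.shape[0], B.shape[1]
E_k = rng.multivariate_normal(np.zeros(p), Sigma, size=n)   # n x p correlated errors
Y_k = X @ B + E_k                                           # n x p responses of sample k
```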

Max-type and SS-type memory-type control charts

In this section, we describe two memory-type Max-type control charts, namely the Max-MEWMA and Max-MCUSUM charts, which form a single statistic by taking the maximum of the absolute values of two statistics (one for the process mean vector and the other for the process variability). We also describe two memory-type SS-type control charts, namely the SS-EWMAe and SS-CUSUMe charts, which form a single statistic by adding the squared values of two statistics (again, one for the process mean vector and the other for the process variability). As mentioned in the Introduction, these Max-type and SS-type control charts were proposed by Ghashghaei & Amiri20 and Ghashghaei & Amiri21, respectively.

Memory-type control charts using a Max operator

In this section, we describe two memory-type Max-type control charts for the regression parameters, namely the Max-MEWMA and Max-MCUSUM charts, which were introduced by Ghashghaei & Amiri20.

Max-MEWMA control chart

In this section, we detail a single control chart for simultaneous monitoring of the process mean vector and variability (the \(\mathbf{B}\) and \({\varvec{\Sigma}}\) matrices), assuming that their in-control values are known.

First, we need to develop a statistic representing the process mean vector. To monitor changes in the \(\mathbf{B}\) matrix, a Hotelling-type \({T}_{k}^{2}\) statistic can be used. The sample estimate of the \(\mathbf{B}\) matrix, \({\widehat{\mathbf{\rm B}}}_{k}\), is computed as:

$${\widehat{\mathbf{\rm B}}}_{k}={\left[{\mathbf{X}}^{T}\mathbf{X}\right]}^{-1}{\mathbf{X}}^{T}{\mathbf{Y}}_{k}.$$
(2)

By rearranging \({\widehat{\mathbf{\rm B}}}_{k}\) into a \(p(q+1)\times 1\) vector, we have:

$${\widehat{{\varvec{\beta}}}}_{k}={\left({\widehat{\beta }}_{01k},{\widehat{\beta }}_{11k},\dots ,{\widehat{\beta }}_{q1k},\dots \dots ,{\widehat{\beta }}_{0pk},{\widehat{\beta }}_{1pk},\dots ,{\widehat{\beta }}_{qpk}\right)}^{T}.$$

Next, we need to compute its average and variance-covariance matrix. For an in-control process, and since we assume that the parameters’ values are known, the expected value of \({\widehat{\boldsymbol{\rm B}}}_{k}\) is equal to \({\varvec{B}}\). Therefore, we have:

$${\varvec{\beta}}=E({\widehat{{\varvec{\beta}}}}_{k})={\left({\beta }_{01},{\beta }_{11},\dots ,{\beta }_{q1},\dots \dots ,{\beta }_{0p},{\beta }_{1p},\dots ,{\beta }_{qp}\right)}^{T}.$$

For its variance-covariance matrix, we have:

$${{\varvec{\Sigma}}}_{{\widehat{{\varvec{\beta}}}}_{k}}={\left(\begin{array}{cccc}{{\varvec{\Sigma}}}_{11}& {{\varvec{\Sigma}}}_{12}& \cdots & {{\varvec{\Sigma}}}_{1p}\\ \vdots & \vdots & \ddots & \vdots \\ {{\varvec{\Sigma}}}_{p1}& {{\varvec{\Sigma}}}_{p2}& \cdots & {{\varvec{\Sigma}}}_{pp}\end{array}\right)}_{p\left(q+1\right)\times p\left(q+1\right)},$$

where \({\boldsymbol{\Sigma }}_{gh}\) is a \(\left(q+1\right)\times \left(q+1\right)\) matrix equal to \({\left[{{\varvec{X}}}^{T}{\varvec{X}}\right]}^{-1}{\sigma }_{gh}\).

Next, we define:

$${{\varvec{z}}}_{k}=\lambda \left({\widehat{{\varvec{\beta}}}}_{k}-{\varvec{\beta}}\right)+\left(1-\lambda \right){{\varvec{z}}}_{k-1},$$
(3)

where \({{\varvec{z}}}_{0}\) is the starting value and is equal to zero, and \(\lambda\) is the smoothing parameter, whose value can vary between 0 and 1; most commonly, \(\lambda =0.2\) is used.

Then, the following statistic is defined for monitoring the process mean vector:

$${C}_{k}={\boldsymbol{\Phi }}^{-1}\left[{{\varvec{H}}}_{\left(q+1\right)p}\left\{\frac{2-\lambda }{\lambda }{{\varvec{z}}}_{k}^{T}{\Sigma }_{\widehat{{\varvec{\beta}}}}^{-1}{{\varvec{z}}}_{k}\right\}\right],$$
(4)

where \({{\varvec{H}}}_{\left(q+1\right)p}\left(.\right)\) is the chi-square cumulative distribution function with \(\left(q+1\right)p\) degrees of freedom, and \(\Phi (.)\) is the standard normal cumulative distribution function.
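The following sketch (NumPy/SciPy; the function names are ours) puts Eqs. (2)-(4) together: it computes the vectorized coefficient estimate, builds \({{\varvec{\Sigma}}}_{\widehat{{\varvec{\beta}}}}\) as the Kronecker product of \({\varvec{\Sigma}}\) and \({\left[{\mathbf{X}}^{T}\mathbf{X}\right]}^{-1}\) (whose (g, h) block is \({\sigma }_{gh}{\left[{\mathbf{X}}^{T}\mathbf{X}\right]}^{-1}\), as above), updates the MEWMA vector \({{\varvec{z}}}_{k}\), and returns the mean-monitoring statistic \({C}_{k}\).

```python
# A sketch of the mean-monitoring statistic of the Max-MEWMA chart, Eqs. (2)-(4).
import numpy as np
from scipy.stats import chi2, norm

def beta_hat_vec(X, Y_k):
    """Eq. (2), rearranged into the p(q+1) x 1 ordering used in the text."""
    B_hat = np.linalg.solve(X.T @ X, X.T @ Y_k)        # (q+1) x p
    return B_hat.T.reshape(-1)                         # (b_01..b_q1, b_02..b_q2, ...)

def sigma_beta(X, Sigma):
    """Covariance of the vectorized estimate: block (g, h) equals sigma_gh (X'X)^-1."""
    return np.kron(Sigma, np.linalg.inv(X.T @ X))

def mewma_mean_statistic(z_prev, beta_hat, beta0, Sigma_b, lam=0.2):
    """Eqs. (3)-(4): one EWMA update and the transformed statistic C_k."""
    z_k = lam * (beta_hat - beta0) + (1 - lam) * z_prev
    quad = (2 - lam) / lam * z_k @ np.linalg.solve(Sigma_b, z_k)
    C_k = norm.ppf(chi2.cdf(quad, df=Sigma_b.shape[0]))   # (q+1)p degrees of freedom
    return z_k, C_k
```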

To construct the statistic for monitoring the process variability, first, we define \({W}_{k}\) as:

$${W}_{k}={\sum }_{i=1}^{n}\left({y}_{ik}-{x}_{i}\mathbf{B}\right){{\varvec{\Sigma}}}^{-1}{({y}_{ik}-{x}_{i}\mathbf{B})}^{T}.$$
(5)

Thus, for an in-control process, \({W}_{k}\) has a chi-square distribution with \(np\) degrees of freedom. Then, we define:

$${g}_{k}=\left(1-\lambda \right){g}_{k-1}+\lambda {\boldsymbol{\Phi }}^{-1}\left[{{\varvec{H}}}_{np}\left\{{W}_{k}\right\}\right],$$
(6)

where \({g}_{0}\) is the starting point and is equal to zero. The statistic for monitoring the variability is defined as:

$${S}_{k}=\sqrt{\frac{2-\lambda }{\lambda }}{g}_{k},$$
(7)

Finally, the \(M{E}_{k}\) statistic is formed by combining \({C}_{k}\) and \({S}_{k}\):

$$M{E}_{k}=max\left\{\left|{C}_{k}\right|,\left|{S}_{k}\right|\right\}.$$
(8)

Since this statistic only generates positive values, we only need an upper control limit (UCL) for this control chart, and its value is obtained using simulation to achieve any desired ARL performance.
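A companion sketch (same assumptions and names as above) covers the variability statistic of Eqs. (5)-(8); \({W}_{k}\) uses the known in-control \(\mathbf{B}\) and \({\varvec{\Sigma}}\), and \(M{E}_{k}\) combines the two statistics.

```python
# A sketch of the variability statistic and the combined Max-MEWMA statistic, Eqs. (5)-(8).
import numpy as np
from scipy.stats import chi2, norm

def w_statistic(Y_k, X, B, Sigma):
    """Eq. (5): sum over the sample of the Mahalanobis-type error terms."""
    R = Y_k - X @ B                                     # n x p in-control error estimates
    return np.einsum('ij,jk,ik->', R, np.linalg.inv(Sigma), R)

def mewma_variability_statistic(g_prev, W_k, n, p, lam=0.2):
    """Eqs. (6)-(7): EWMA of the inverse-normal transform of W_k, then scaling."""
    g_k = (1 - lam) * g_prev + lam * norm.ppf(chi2.cdf(W_k, df=n * p))
    S_k = np.sqrt((2 - lam) / lam) * g_k
    return g_k, S_k

def max_mewma_statistic(C_k, S_k):
    """Eq. (8): the plotted Max-MEWMA statistic."""
    return max(abs(C_k), abs(S_k))
```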

Max-MCUSUM control chart

The statistic for monitoring the process mean vector for the Max-MCUSUM control chart is defined as:

$${U}_{k}=max\left\{0,{U}_{k-1}+{Z}_{k}-0.5D\right\},$$
(9)

where \({Z}_{k}=a{\left({\widehat{{\varvec{\beta}}}}_{k}-{{\varvec{\beta}}}_{g}\right)}^{T}\), \(a={\frac{\left({{\varvec{\beta}}}_{b}-{{\varvec{\beta}}}_{g}\right){\boldsymbol{\Sigma }}_{\widehat{{\varvec{\beta}}}}^{-1}}{\sqrt{\left({{\varvec{\beta}}}_{b}-{{\varvec{\beta}}}_{g}\right){\boldsymbol{\Sigma }}_{\widehat{{\varvec{\beta}}}}^{-1}{\left({{\varvec{\beta}}}_{b}-{{\varvec{\beta}}}_{g}\right)}^{T}}}}\), \({{\varvec{\beta}}}_{g}\) is the good \({\varvec{\beta}}\) (in-control mean vector), \({{\varvec{\beta}}}_{b}\) is the bad \({\varvec{\beta}}\) (out-of-control mean vector), D=\(\sqrt{\left({{\varvec{\beta}}}_{b}-{{\varvec{\beta}}}_{g}\right){\Sigma }_{\widehat{{\varvec{\beta}}}}^{-1}{\left({{\varvec{\beta}}}_{b}-{{\varvec{\beta}}}_{g}\right)}^{T}}\) and \({U}_{0}=0\).

The statistic for monitoring the process variability is:

$${L}_{k}=max\left\{0,{L}_{k-1}+\left({\widehat{{\varvec{\beta}}}}_{k}-{{\varvec{\beta}}}_{g}\right){\boldsymbol{\Sigma }}_{\widehat{{\varvec{\beta}}}}^{-1}{\left({\widehat{{\varvec{\beta}}}}_{k}-{{\varvec{\beta}}}_{g}\right)}^{T}-v\right\},$$
(10)

where \(v={\text{log}}\left(\tau \right)\left(\frac{\tau }{\tau -1}\right)\), \(\tau\) is the multiplier by which the variance-covariance matrix shifts (\(\boldsymbol{\Sigma }\to \tau \boldsymbol{\Sigma })\); following Ghashghaei & Amiri20, we set \(\tau =1.1\), and \({L}_{0}=0\).

Finally, \(M{C}_{k}\) is formed by combining \({U}_{k}\) and \({L}_{k}\), as:

$$M{C}_{k}=max\left\{{U}_{k},{L}_{k}\right\}.$$
(11)

Again, since this statistic only generates positive values, we only need a UCL for this control chart, and its value is obtained using simulation to achieve any desired ARL performance.
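A minimal sketch of one Max-MCUSUM update (Eqs. (9)-(11)) follows; \({{\varvec{\beta}}}_{g}\), \({{\varvec{\beta}}}_{b}\), and \({{\varvec{\Sigma}}}_{\widehat{{\varvec{\beta}}}}\) are as defined above, and \(v\) is computed exactly as written in the text, with \(\tau =1.1\) by default. The function name is ours.

```python
# A sketch of one update of the Max-MCUSUM statistics, Eqs. (9)-(11).
import numpy as np

def max_mcusum_step(U_prev, L_prev, beta_hat, beta_g, beta_b, Sigma_b, tau=1.1):
    d = beta_b - beta_g                               # tuned (out-of-control) shift direction
    Sinv_d = np.linalg.solve(Sigma_b, d)
    D = np.sqrt(d @ Sinv_d)
    Z_k = (Sinv_d / D) @ (beta_hat - beta_g)          # the projection a (beta_hat - beta_g)^T
    U_k = max(0.0, U_prev + Z_k - 0.5 * D)            # Eq. (9): mean statistic
    v = np.log(tau) * tau / (tau - 1.0)               # reference value as given in the text
    dev = beta_hat - beta_g
    L_k = max(0.0, L_prev + dev @ np.linalg.solve(Sigma_b, dev) - v)   # Eq. (10)
    MC_k = max(U_k, L_k)                              # Eq. (11): plotted statistic
    return U_k, L_k, MC_k
```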

Memory-type control charts using a SS operator

In this section, we describe the SS-type control charts for the residuals introduced by Ghashghaei & Amiri21. They developed such control charts for monitoring both the regression parameters and the residuals (four control charts in total). However, we only use the ones for monitoring the residuals, mainly because we already use the Max-type charts for monitoring the regression parameters, and also because they concluded that in most situations their residual-monitoring charts perform better than the others.

SS-EWMAe control chart

To compute the statistic for monitoring the mean vector (\({P}_{k})\), we first define:

$${T}_{k}={\boldsymbol{\Phi }}^{-1}\left[{{\varvec{H}}}_{p}\left\{{{\varvec{z}}}_{k}^{T}{\Sigma }_{\widehat{{\varvec{\beta}}}}^{-1}{{\varvec{z}}}_{k}\right\}\right],$$
(12)

where \({{\varvec{H}}}_{p}\left(.\right)\) is the chi-square cumulative distribution function with \(p\) degrees of freedom, \(\Phi (.)\) is the standard normal cumulative distribution function, \({\varvec{z}}_{k} = \lambda \overline{e}_{k} + \left( {1 - \lambda } \right){\varvec{z}}_{k - 1}\), \({{\varvec{z}}}_{0}=0\), \(\overline{e}_{k} = \left( {\overline{e}_{1k} ,\overline{e}_{2k} , \ldots ,\overline{e}_{pk} } \right)\) is the average residual vector in the sample k, and \(\lambda\) is the smoothing parameter.

Then, we have:

$${P}_{k}=\lambda {T}_{k}+\left(1-\lambda \right){P}_{k-1},$$
(13)

where \({P}_{0}\) is equal to zero.

To compute the statistic for monitoring the variability, we first have:

$${F}_{k}={\boldsymbol{\Phi }}^{-1}\left[{{\varvec{H}}}_{np}\left\{{f}_{k}\right\}\right],$$
(14)

where \({f}_{k}={\sum }_{i=1}^{n}{e}_{ik}{{\varvec{\Sigma}}}^{-1}{e}_{ik}^{T}\), \({e}_{ik}\) is the residual vector of the ith observation in sample k, and \({\varvec{\Sigma}}\) is the variance-covariance matrix of the error terms.

Then we have:

$${V}_{k}=\lambda {F}_{k}+\left(1-\lambda \right){V}_{k-1},$$
(15)

where \({V}_{0}=0\).

Finally, the SS-type statistic in the EWMAe scheme is defined as:

$$EW{e}_{k}={P}_{k}^{2}+{V}_{k}^{2}.$$
(16)

As in the case of the Max-type control charts, since this statistic only generates positive values, we only need a UCL for this control chart, and its value is obtained using simulation to achieve any desired ARL performance.
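A minimal sketch of one SS-EWMAe update (Eqs. (12)-(16)), computed from the n × p residual matrix of sample k, is given below. Note one assumption we make for illustration: since \({{\varvec{z}}}_{k}\) here is p-dimensional, the sketch uses \({\varvec{\Sigma}}/n\) as the covariance matrix of the average residual vector in the quadratic form of Eq. (12); readers should substitute the exact covariance used by Ghashghaei & Amiri21 if it differs.

```python
# A sketch of one update of the SS-EWMAe statistics, Eqs. (12)-(16).
# Assumption: Cov(e_bar_k) is taken as Sigma / n for the quadratic form in Eq. (12).
import numpy as np
from scipy.stats import chi2, norm

def ss_ewmae_step(state, e_k, Sigma, lam=0.2):
    """state = (z_prev, P_prev, V_prev); e_k is the n x p residual matrix of sample k."""
    z_prev, P_prev, V_prev = state
    n, p = e_k.shape
    e_bar = e_k.mean(axis=0)                                   # average residual vector
    z_k = lam * e_bar + (1 - lam) * z_prev
    cov_ebar = Sigma / n                                       # assumed covariance of e_bar
    T_k = norm.ppf(chi2.cdf(z_k @ np.linalg.solve(cov_ebar, z_k), df=p))   # as in Eq. (12)
    P_k = lam * T_k + (1 - lam) * P_prev                       # Eq. (13)
    f_k = np.einsum('ij,jk,ik->', e_k, np.linalg.inv(Sigma), e_k)          # residual form
    F_k = norm.ppf(chi2.cdf(f_k, df=n * p))                    # Eq. (14)
    V_k = lam * F_k + (1 - lam) * V_prev                       # Eq. (15)
    EWe_k = P_k**2 + V_k**2                                    # Eq. (16): plotted statistic
    return (z_k, P_k, V_k), EWe_k
```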

SS-CUSUMe control chart

The statistic for monitoring the mean vector in this scheme is:

$${M}_{k}=max\left\{{D}_{k}^{-},{D}_{k}^{+}\right\},$$
(17)

where \({D}_{k}^{-}=max\left\{0,-{T}_{k}-{k}_{1}+{D}_{k-1}^{-}\right\}\), \({D}_{k}^{+}=max\left\{0,{T}_{k}-{k}_{1}+{D}_{k-1}^{+}\right\}\), \({D}_{0}^{-}=0, {D}_{0}^{+}=0,\) \({k}_{1}\) is the reference value, and \({T}_{k}\) was defined in the previous chart.

Similarly, the statistic for monitoring the process variability is:

$${N}_{k}=max\left\{{B}_{k}^{-},{B}_{k}^{+}\right\},$$
(18)

where \({B}_{k}^{-}=max\left\{0,-{F}_{k}-{k}_{2}+{B}_{k-1}^{-}\right\}\), \({B}_{k}^{+}=max\left\{0,{F}_{k}-{k}_{2}+{B}_{k-1}^{+}\right\}\), \({B}_{0}^{-}=0, {B}_{0}^{+}=0,\) \({k}_{2}\) is the reference value, and \({F}_{k}\) was defined in the previous chart.

Finally, the SS-type statistic in the CUSUMe scheme is defined as:

$$CU{e}_{k}={M}_{k}^{2}+{N}_{k}^{2}.$$
(19)

Note that following Ghashghaei & Amiri21, in this paper we choose \({k}_{1}=1\) and \({k}_{2}=1.5\).
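A minimal sketch of one SS-CUSUMe update (Eqs. (17)-(19)) is shown below; \({T}_{k}\) and \({F}_{k}\) are the transformed inputs defined for the SS-EWMAe chart, and the defaults \({k}_{1}=1\) and \({k}_{2}=1.5\) follow the choices above. The function name and state layout are ours.

```python
# A sketch of one update of the SS-CUSUMe statistics, Eqs. (17)-(19).
def ss_cusume_step(state, T_k, F_k, k1=1.0, k2=1.5):
    """state = (D_minus, D_plus, B_minus, B_plus); returns the updated state and CUe_k."""
    D_minus, D_plus, B_minus, B_plus = state
    D_minus = max(0.0, -T_k - k1 + D_minus)           # lower-sided mean CUSUM
    D_plus = max(0.0, T_k - k1 + D_plus)              # upper-sided mean CUSUM
    M_k = max(D_minus, D_plus)                        # Eq. (17)
    B_minus = max(0.0, -F_k - k2 + B_minus)           # lower-sided variability CUSUM
    B_plus = max(0.0, F_k - k2 + B_plus)              # upper-sided variability CUSUM
    N_k = max(B_minus, B_plus)                        # Eq. (18)
    CUe_k = M_k**2 + N_k**2                           # Eq. (19): plotted statistic
    return (D_minus, D_plus, B_minus, B_plus), CUe_k
```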

Design parameters in a variable parameters scheme

The adaptive scheme in which all the control chart (design) parameters are allowed to vary from sample to sample is called a VP (Variable Parameters) scheme. In this paper, we consider two sample sizes with \({n}_{1}<{n}_{2}\), two sampling intervals with \({t}_{2}<{t}_{1}\), and two Type-I error probabilities with \({\alpha }_{1}<{\alpha }_{2}\). In addition to these parameters, we have to define two upper control limits \(UC{L}_{1}\) and \(UC{L}_{2}\) with \(UC{L}_{2}<UC{L}_{1}\), as well as two upper warning limits \(UW{L}_{1}\) and \(UW{L}_{2}\) satisfying \(UW{L}_{1}<UC{L}_{1}\) and \(UW{L}_{2}<UC{L}_{2}\).

In a VP scheme, the following three constraints (each related to one design parameter) must be satisfied:

$$E(n)={n}_{1}{P}_{0}+{n}_{2}(1-{P}_{0}),$$
(20)
$$E(t)={t}_{1}{P}_{0}+{t}_{2}(1-{P}_{0}),$$
(21)
$$E(\alpha )={\alpha }_{1}{P}_{0}+{\alpha }_{2}(1-{P}_{0}).$$
(22)

By solving Eqs. (20)–(22) together, \({P}_{0}\), \({t}_{1}\) and \({\alpha }_{2}\) are obtained as:

$${P}_{0}=\frac{E(n)-{n}_{2}}{{n}_{1}-{n}_{2}},$$
(23)
$${t}_{1}=\frac{E(t)({n}_{1}-{n}_{2})-{t}_{2}({n}_{1}-E(n))}{E(n)-{n}_{2}},$$
(24)
$${\alpha }_{2}=\frac{E(\alpha )({n}_{1}-{n}_{2})-{\alpha }_{1}(E(n)-{n}_{2})}{{n}_{1}-E(n)}.$$
(25)
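The sketch below evaluates Eqs. (23)-(25); the numeric call uses the parameter values chosen later in the simulation studies (n1 = 4, n2 = 8, E(n) = 6, t2 = 0.1, E(t) = 1, α1 = 0.004, E(α) = 0.005) and reproduces P0 = 0.5, t1 = 1.9, and α2 = 0.006. The function name is ours.

```python
# A sketch of the VP design-parameter relations, Eqs. (23)-(25).
def vp_design(n1, n2, En, t2, Et, alpha1, Ealpha):
    P0 = (En - n2) / (n1 - n2)                                       # Eq. (23)
    t1 = (Et * (n1 - n2) - t2 * (n1 - En)) / (En - n2)               # Eq. (24)
    alpha2 = (Ealpha * (n1 - n2) - alpha1 * (En - n2)) / (n1 - En)   # Eq. (25)
    return P0, t1, alpha2

print(vp_design(4, 8, 6, 0.1, 1.0, 0.004, 0.005))   # ≈ (0.5, 1.9, 0.006)
```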

Note that \({P}_{0}\) is the conditional probability of the statistic falling in the safe zone while the process is in-control. After determining the values of the UCLs and UWLs (we show later how to determine them via Algorithms 1 and 2), we use the following sampling strategy in a VP scheme (a code sketch of this rule follows the list):

  • If at sample k, the statistic’s output \(\in \left[0,UW{L}_{(k)}\right]\), then the process is declared as being in-control and the parameters for the next sample must be \({n}_{1},{t}_{1},UC{L}_{1},UW{L}_{1}\).

  • If at sample k, the statistic’s output \(\in \left(UW{L}_{(k)},UC{L}_{(k)}\right]\), then the process is also declared as being in-control, but the parameters for the next sample must be \({n}_{2},{t}_{2},UC{L}_{2},UW{L}_{2}\).

  • If at sample k, the statistic’s output \(\in \left(UC{L}_{(k)},\infty \right)\), then the process is declared as being out-of-control and corrective actions may be required,

where \(UC{L}_{(k)}\in \left\{UC{L}_{1},UC{L}_{2}\right\}\) and \(UW{L}_{(k)}\in \left\{UW{L}_{1},UW{L}_{2}\right\}\) are the upper control and warning limits used for sample k=1,2,..., respectively.
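As an illustration only (the function and tuple layout are ours), the three-zone rule above can be coded as a single check that returns both the process status and the design parameters to use for the next sample:

```python
# A sketch of the three-zone VP sampling rule described above.
def vp_next_design(stat, UWL_k, UCL_k, design1, design2):
    """design1 = (n1, t1, UCL1, UWL1) and design2 = (n2, t2, UCL2, UWL2)."""
    if stat <= UWL_k:                  # safe zone: keep the relaxed design
        return "in-control", design1
    elif stat <= UCL_k:                # warning zone: switch to the tightened design
        return "in-control", design2
    else:                              # action zone: signal
        return "out-of-control", None
```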

For determining the UCL values, we assume an FP scheme and set each UCL separately (one UCL for an FP control chart, and UCL1 and UCL2 for a VP control chart), using the following algorithm. Note that in all the following algorithms, the average sampling interval is assumed to be equal to one time unit, which results in ARL=ATS; otherwise, ATS=t \(\times\) ARL. Also, all the algorithms in this section should be run in an in-control state; indeed, the values of all the control chart parameters should be obtained while the process is in-control.

Algorithm 1: Adjusting the UCL values

Step 1-Choose a value for \(\alpha\) (the probability of Type-I error) and the sample size n.

Note that for the FP schemes there is only one \(\alpha\) and one n. However, for the VP schemes we have two \(\alpha\)s and two ns, therefore we set UCL1 by using \({\alpha }_{1}\) and \({n}_{1}\), and UCL2 by using \({\alpha }_{2}\) and \({n}_{2}\).

Step 2-Choose a statistic (ME, MC, EWe, or CUe).

Step 3-Obtain an initial value for the UCL by generating 10000 in-control values of the chosen statistic, sorting them in increasing order, and selecting the [10000(1-\(\alpha\))]th value.

Step 4-Run 10000 simulations and adjust the UCL so that you get ARL=\(\frac{1}{\alpha }\).
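A simulation-based sketch of Algorithm 1 follows (the names and the simple bisection search are ours, not the authors' exact adjustment rule): the caller supplies run_length(ucl), a function that simulates one in-control run of the chosen chart (ME, MC, EWe, or CUe) with the given UCL and returns its run length.

```python
# A sketch of Step 4 of Algorithm 1: adjust the UCL until the in-control ARL is 1/alpha.
import numpy as np

def calibrate_ucl(run_length, alpha, ucl_lo, ucl_hi, n_rep=10000, tol=0.01, max_iter=30):
    """Bisection on the UCL between ucl_lo and ucl_hi (e.g., around the Step-3 value)."""
    target = 1.0 / alpha
    ucl = 0.5 * (ucl_lo + ucl_hi)
    for _ in range(max_iter):
        ucl = 0.5 * (ucl_lo + ucl_hi)
        arl = np.mean([run_length(ucl) for _ in range(n_rep)])
        if abs(arl - target) / target < tol:
            break
        if arl < target:
            ucl_lo = ucl          # too many false alarms: raise the limit
        else:
            ucl_hi = ucl          # too few signals: lower the limit
    return ucl
```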

After determining the values of the UCLs, we should obtain the values of the UWLs for the VP scheme (recall that the FP scheme has no UWL). To obtain the UWL values, we can assume that we only have a variable sampling interval (VSI) scheme and compute each UWL separately with its corresponding parameters using the following algorithm (as would be done in a VSI scheme).

Algorithm 2: Adjusting the UWL values

Step 1-Choose the values of \(\alpha\), t2, E(t), the corresponding UCL value obtained via the previous algorithm, and the corresponding sample size (\({n}_{1}\)&UCL1 if \(\alpha\) =\({\alpha }_{1}\), and \({n}_{2}\)&UCL2 if \(\alpha\) =\({\alpha }_{2}\)).

Step 2-Compute \({P}_{0}\) using Eq. (23) and t1 using Eq. (24).

Step 3-Choose the same statistic used for determining the corresponding UCL value.

Step 4-Run 10000 simulations and adjust each UWL with its corresponding UCL so that \({P}_{0}\) equals the value obtained in Step 2 and, at the same time, ARL=\(\frac{1}{\alpha }\) and ATS= E(t)ARL=\(E(t)\frac{1}{\alpha }\) (note that if E(t)=1, as in our case, then ARL= ATS=\(\frac{1}{\alpha }\)).
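A hedged sketch of the core of Algorithm 2: one simple way (our simplification) to meet the \({P}_{0}\) target is to take the UWL as the empirical \({P}_{0}\)-quantile of the in-control statistic values that fall below the already-calibrated UCL; the authors additionally verify the ARL and ATS targets by simulation.

```python
# A sketch of setting a UWL so that the in-control safe-zone probability equals P0.
import numpy as np

def calibrate_uwl(in_control_stats, P0, ucl):
    """in_control_stats: simulated in-control chart statistics (e.g., 10000 values)."""
    below_ucl = np.asarray([s for s in in_control_stats if s <= ucl])
    return np.quantile(below_ucl, P0)     # empirical P0-quantile -> candidate UWL
```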

Performance measures

Run-length-based and time-to-signal-based measures are the two most important performance measures for control charts. In an FP scheme, computing the run-length-based measures is enough, since one can multiply the average run length by the (fixed) sampling interval to obtain the average time to signal. In a VP scheme, however, the time-to-signal measures must be computed separately. Both the average and the standard deviation of the run length (ARL and SDRL), and of the time to signal (ATS and SDTS), are important to consider. While the SDRL and SDTS should always be low, the ARL and ATS should be as high as possible when the process is in-control and as low as possible when it is out-of-control.

To compute the performance measures for an FP scheme, Algorithm 1 can still be used; the only difference is that the UCL value has already been obtained, and we are now only interested in computing the ARL and SDRL (and hence the ATS and SDTS) in an out-of-control situation.

To compute the performance measures for a VP control chart, the following computer algorithm is developed and can be used. Note that, to reduce the paper size, we only present the algorithm for the case of the Max-MEWMA control chart, but it can easily be modified for the other proposed control charts as well.

Algorithm 3 Computing the performance measures in a memory-type VP scheme.


As can be seen in this algorithm, the final statistic is labeled ‘ME1’ or ‘ME2’ depending on the zone (safe or warning) that determined the current sample’s design. This applies to all the control charts, whether memory-type or memory-less. However, for a memory-type control chart, the memory-type statistics (the g and z statistics in the Max-MEWMA chart) must also be split into two streams: the statistic values obtained in region 1 (2) cannot be used for the next sample taken in region 2 (1). In other words, region-1 values should only be used when we are in region 1, and likewise for region-2 values; otherwise, the false alarm rate increases significantly.
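The following sketch (Python; all names are ours) captures the essential logic of Algorithm 3 for the Max-MEWMA chart: separate memory states are kept for region 1 and region 2, the design in force determines the sample size and sampling interval, and the run terminates on the first signal. draw_stats is an assumed user-supplied function (for example, built from the earlier Max-MEWMA sketches) that draws one sample of size n from the (possibly shifted) process, updates the memory state, and returns the new state together with the plotted ME statistic.

```python
# A sketch of Algorithm 3: run length and time to signal of a memory-type VP chart.
def vp_run(draw_stats, design1, design2, lam=0.2, max_samples=10**6):
    """design_j = dict(n=..., t=..., UCL=..., UWL=...); returns (run_length, time_to_signal)."""
    states = {1: None, 2: None}       # separate memory (z, g) per region; None = start at zero
    region = 1                        # starting design (an assumption; not stated in the text)
    rl, tts = 0, 0.0
    while rl < max_samples:
        design = design1 if region == 1 else design2
        rl += 1
        tts += design["t"]            # accumulate the elapsed time
        states[region], ME = draw_stats(states[region], design["n"], lam)
        if ME > design["UCL"]:
            return rl, tts            # out-of-control signal
        region = 1 if ME <= design["UWL"] else 2   # zone of the statistic sets the next design
    return rl, tts

# Averaging (run_length, time_to_signal) over many replications yields the ARL/SDRL
# and ATS/SDTS reported in the following sections.
```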

Simulation studies

In this section, we perform numerical analyses and simulation studies to evaluate the developed control charts and compare them to one another under both adaptive (VP) and non-adaptive (FP) conditions. We evaluate the performance of the proposed control charts under different shift scenarios and process dimensions.

Although we report the values of the ATS, SDTS, ARL, and SDRL in the following tables, the comparisons are mainly made using the ATS values. All the simulation settings and the chosen values for the process and chart parameters, as well as the shift sizes, are the same as in Sabahno & Amiri27. The in-control ARL and ATS for all the considered control charts are set to 200 samples and 200 hrs (α = 0.005), respectively. The analysis is conducted for the cases of two and six response variables (p=2 and p=6), i.e. two and six multiple linear profiles. The following multiple regression models are used for the case of p=2: \({y}_{1}=3+2{x}_{1}+{x}_{2}+{\varepsilon }_{1}\) and \({y}_{2}=2+{x}_{1}+{x}_{2}+{\varepsilon }_{2}.\)

The error’s variance-covariance matrix for this case is assumed to have the following elements: \(\boldsymbol{\Sigma }=\left[\begin{array}{cc}{\sigma }_{1}^{2}& \rho {\sigma }_{1}{\sigma }_{2}\\ \rho {\sigma }_{1}{\sigma }_{2}& {\sigma }_{1}^{2}\end{array}\right]\), where \({\sigma }_{1}\) and \({\sigma }_{2}\) are the standard deviations of the first and second profiles, respectively, and \(\rho\) is their correlation. For its in-control value, we have: \({\boldsymbol{\Sigma }}_{0}=\left[\begin{array}{cc}1& 0.5\\ 0.5& 1\end{array}\right].\)

The sample size for the non-adaptive (FP) scheme is 4 and the sampling interval is 1 hr. For the adaptive VP control charts, which require two values for each chart parameter, the following values are chosen: \({n}_{1}=4\), \({n}_{2}=8\), \(E\left(n\right)=6\), \(E\left(\alpha \right)=0.005\), \({\alpha }_{1}=0.004\), \(E\left(t\right)=1\) hr, and \({t}_{2}=0.1\) hr. \({t}_{1}\) is computed using Eq. (24) as 1.9 hrs, and the upper control and warning limits are computed using the algorithms outlined in Section "Design parameters in a variable parameters scheme".

For each sample size, a set of explanatory variables is required. The \(\mathbf{X}\) matrix for \(n=4\) is assumed to be \(\mathbf{X}=\left(\begin{array}{ccc}1& 2& 1\\ 1& 4& 2\\ 1& 6& 3\\ 1& 8& 2\end{array}\right)\), and for sample size 8 (only needed for the VP charts) it is \(\mathbf{X}=\left(\begin{array}{ccc}1& 2& 1\\ 1& 4& 2\\ 1& 6& 3\\ 1& 8& 2\\ 1& 9& 3\\ 1& 10& 1\\ 1& 9& 2\\ 1& 11& 1\end{array}\right)\).

We also assume that every element of the error’s variance-covariance matrix shifts by the same multiplier \(\tau\). In addition, for the p=6 case, the correlations between the responses are assumed to be equal and fixed at \(\rho =0.5\).

The results for the p=2 case are presented in Table 1 (which contains separate and simultaneous shifts in the variability and the intercepts), Table 2 (which contains separate and simultaneous shifts in the variability and the first slopes), and Table 3 (which contains separate and simultaneous shifts in the variability and the second slopes).

Table 1 ATS=ARL, SDTS=SDRL for FP and ATS, SDTS (ARL, SDRL) for VP schemes for shifts in the intercept vector (\({{\varvec{\beta}}}_{0}\)) and the error variation (\(\tau\)), when p=2 and q=2.
Table 2 ATS=ARL, SDTS=SDRL for FP and ATS, SDTS (ARL, SDRL) for VP schemes for shifts in the first slope vector (\({{\varvec{\beta}}}_{1}\)) and the error variation (\(\tau\)), when p=2 and q=2.
Table 3 ATS=ARL, SDTS=SDRL for FP and ATS, SDTS (ARL, SDRL) for VP schemes for shifts in the second slope vector (\({{\varvec{\beta}}}_{2}\)) and the error variation (\(\tau\)), when p=2 and q=2.

The results in these three tables show that, for all the FP and VP control charts, as the intercept/slope or variability shift increases, the charts signal faster. Moreover, if the number of profiles whose intercepts/slopes shift increases from one to two, i.e. from (0.2, 0) to (0.2, 0.2) in Table 1, only the Max-MCUSUM chart shows a significant performance improvement (decrease in the ATS value); the performance of the other charts remains more or less the same. In addition, comparing the FP and VP schemes shows that all the charts improve significantly when the VP scheme is used (by more than 70% in some cases). Comparing the different control charts, the Max-type control charts mostly perform better than the SS-type control charts (the SS-type charts perform better only when a single profile shifts, and even then only in some cases of no or low variability shift). As for the Max-type control charts, the Max-MEWMA chart mostly performs better as the variability shift increases, and the Max-MCUSUM chart mostly performs better as the mean shift increases.

For the case of p=6, the real case adopted in Sabahno & Amiri27 with the following model is used:

$${y}_{1}=-0.05+10{x}_{1}-0.01{x}_{2}-0.03{x}_{3}+0.26{x}_{4}+0{x}_{5}+0.03{x}_{6}+{\varepsilon }_{1},$$
$${y}_{2}=0.48+0.24{x}_{1}+21.01{x}_{2}-0.09{x}_{3}+0.03{x}_{4}-0.12{x}_{5}+0.01{x}_{6}+{\varepsilon }_{2},$$
$${y}_{3}=0.37+0.09{x}_{1}+0.01{x}_{2}+6.81{x}_{3}+0.04{x}_{4}+0.02{x}_{5}-0.03{x}_{6}+{\varepsilon }_{3},$$
$${y}_{4}=0.04+0{x}_{1}+0{x}_{2}+0{x}_{3}+10.53{x}_{4}-0.47{x}_{5}+0.21{x}_{6}+{\varepsilon }_{4},$$
$${y}_{5}=0.09-0.021{x}_{1}+0{x}_{2}+0.01{x}_{3}+0.02{x}_{4}+7{x}_{5}-0.34{x}_{6}+{\varepsilon }_{5},$$
$${y}_{6}=0.09+0.04{x}_{1}+0{x}_{2}-0.01{x}_{3}+0.18{x}_{4}-0.34{x}_{5}+11.46{x}_{6}+{\varepsilon }_{6}.$$

Since the adaptive scheme uses two sample sizes, we set \({n}_{1}=8\), \({n}_{2}=16\), and \(E\left(n\right)=12\). Therefore, two sets of values for the explanatory variables are needed as well. Again, we use the same value sets as Sabahno & Amiri27, which we do not include in this paper to save space.

The results of this case are presented in Tables 4, 5 and 6, for separate and simultaneous shifts in the variability and the intercepts, separate and simultaneous shifts in the variability and the first slopes, and separate and simultaneous shifts in the variability and the second slopes, respectively.

Table 4 ATS=ARL, SDTS=SDRL for FP and ATS, SDTS (ARL, SDRL) for VP schemes for shifts in the intercept vector (\({{\varvec{\beta}}}_{0}\)) and the error variation (\(\tau\)), when p=6 and q=6.
Table 5 ATS=ARL, SDTS=SDRL for FP and ATS, SDTS (ARL, SDRL) for VP schemes for shifts in the first slope vector (\({{\varvec{\beta}}}_{1}\)) and the error variation (\(\tau\)), when p=6 and q=6.
Table 6 ATS=ARL, SDTS=SDRL for FP and ATS, SDTS (ARL, SDRL) for VP schemes for shifts in the second slope vector (\({{\varvec{\beta}}}_{2}\)) and the error variation (\(\tau\)), when p=6 and q=6.

The results for the p=6 case show that, while for slope shifts (Tables 5 and 6) the conclusions are almost the same as in the previous case (p = 2), the same does not fully apply to intercept shifts (Table 4). The Max-type control charts’ performance mostly gets worse (or remains rather unchanged) as the shift in the intercept increases, with the Max-MCUSUM chart being the worse of the two. On the contrary, the SS-type charts’ performance mostly gets better (or remains rather unchanged) in the same situation. In addition, except for some cases of no/low variation shifts, the Max-MEWMA control chart performs better than the other control charts. Moreover, the VP adaptive control charts are still mostly much faster than the FP charts.

Comparing the p = 6 case with the p = 2 case, we see that in the case of no variation shift (\(\tau\) = 1), all the charts perform worse when the mean shift is in the intercept. However, if the mean shift is in the slopes, and also in the case of a low variation shift (\(\tau\) = 1.1) with no mean (intercept/slope) shift, all the charts perform better; but as the mean shift increases, the charts mostly tend to perform worse as the process dimension increases. Furthermore, in the cases of moderate/large variation shifts (\(\tau\) = 1.3 and 2), all the charts perform better for p = 6 than for p = 2.

A real case

To illustrate how one can implement the proposed control charts in practice, we study a real healthcare-related case. Stroke is one of the most common causes of death and disability in the world. In addition to severe consequences for individuals, stroke imposes a high financial burden on societies. Intravenous thrombolysis within 4.5 hours of stroke onset is an established treatment for ischemic stroke. The benefit of treatment decreases with every minute of delay (Darehed et al.40); thrombolysis delay times are therefore key quality indicators of stroke care and essential to monitor in order to maintain a good quality of stroke care.

We study a dataset containing all stroke patients who received thrombolysis in Sweden from 2016 until 2020 and were registered in the national quality register for stroke care in Sweden (Riksstroke; Asplund et al.41). The study was performed in accordance with the relevant guidelines/regulations, and the Declaration of Helsinki was followed. Patients were informed about registration in Riksstroke, that their registered data could be used for research purposes, and about their right to remove themselves from the registry at any time (opt-out consent). According to the Swedish Patient Data Act, data from national quality registers may be processed for research purposes without additional individual consent if the processing has been approved by an Ethics Review Board in accordance with the Ethical Review Act. The use of Riksstroke data for this study was approved by the Swedish Ethical Review Authority (reference no. 2021-06152-01).

The objective of this study is to monitor the efficiency of the stroke care process in terms of thrombolysis treatment delays. Therefore, we investigate whether the relationship between two correlated responses, y1 = the time from stroke onset until treatment (onset-to-needle time, ONT) and y2 = the time from hospital admission until treatment (door-to-needle time, DNT), and three crucial covariates (patient characteristics), x1 = Age, x2 = Sex, and x3 = Stroke Severity (as measured by the NIH stroke scale, NIHSS), remains stable over time. x1 and x3 are modeled as continuous variables and x2 as a binary variable (0 for male and 1 for female).

We used data from the two years 2016 and 2017 (considered stable years in the stroke care system), from all hospitals in Sweden, to estimate this relationship (profile).

After cleaning the dataset by removing missing data (the proportion of missing data was relatively low compared to the overall dataset) and erroneous entries (times longer than 4.5 hours, which indicated that they had been entered incorrectly), our first analysis was to check whether ONT and DNT were normally distributed. The analysis revealed that their distributions were skewed and resembled lognormal distributions, as is commonly expected for time-related variables influenced by many factors. Consequently, we applied a logarithmic transformation (base 10) to the data in order to approximate a normal distribution. The variance-covariance matrix of these response variables was estimated as: \(\Sigma =\left[\begin{array}{cc}0.0399& 0.0207\\ 0.0207& 0.0743\end{array}\right]\).

Then, we estimated the multivariate multiple regression model using R. The results were as follows.

$${\text{log}}\left({y}_{1}\right)=2.0457+0.00084{x}_{1}+ 0.01597{x}_{2}-0.0045{x}_{3},$$
$${\text{log}}\left({y}_{2}\right)=1.6797-0.00079{x}_{1}+0.01558{x}_{2}-0.0032{x}_{3}.$$

We used the VP adaptive control charts (Max-MEWMA, Max-MCUSUM, SS-EWMAe, and SS-CUSUMe) to monitor these regression models over time. The design parameters for the control charts are the same as in the simulation study section, except for the sampling intervals: \({n}_{1}=4\), \({n}_{2}=8\), \(E\left(n\right)=6\), \(E\left(\alpha \right)=0.005\), \({\alpha }_{1}=0.004\), \(E\left(t\right)=2\) months, and \({t}_{2}=1\) month. The other chart parameters are computed using Eqs. (23)-(25) and the algorithms proposed in Section "Design parameters in a variable parameters scheme", and can be seen in Tables 7, 8, 9 and 10 for each control chart.

Table 7 Details of process monitoring from Jan 2018 to the end of 2020 in the real case for the Max-MEWMA control chart.
Table 8 Details of process monitoring from Jan 2018 to the end of 2020 in the real case for the Max-MCUSUM control chart.
Table 9 Details of process monitoring from Jan 2018 to the end of 2020 in the real case for the SS-EWMAe control chart.
Table 10 Details of process monitoring from Jan 2018 to the end of 2020 in the real case for the SS-CUSUMe control chart.

After designing the control charts, we first checked whether the process was really in control in Phase-I (during the years 2016 and 2017); otherwise, the estimated regression models, and consequently the developed control charts, would not be valid for Phase-II. Researchers usually use non-adaptive (FP) schemes for this purpose. The results showed that the process was in-control according to all the control charts (the details of this analysis are not included in this paper but can be requested from the corresponding author).

For Phase-II, we employed the developed control charts for the years 2018 and 2019 to see if any assignable causes could be detected in those years. The control charts for the Max-MEWMA, Max-MCUSUM, SS-EWMAe, and SS-CUSUMe schemes can be seen in Figs. 1, 2, 3 and 4, respectively. More details regarding each sample can be seen in Tables 7, 8, 9 and 10.

Figure 1 Max-MEWMA VP control chart in the illustrative example.

Figure 2 Max-MCUSUM VP control chart in the illustrative example.

Figure 3 SS-EWMAe VP control chart in the illustrative example.

Figure 4 SS-CUSUMe VP control chart in the illustrative example.

Tables 7, 8, 9 and 10 report, from left to right: the sample number (k); the sample size (the number of patients investigated in the sample); the cumulative number of samples up to the current sample; the sampling interval (in months) used to reach the current sample; the cumulative number of sampling intervals up to the current sample (the time from the start of process monitoring); the means of the current sample's first and second response variables; the values of the sample statistics for the mean and the variability; the value of the final statistic; the UWL and UCL values used for the current sample; and the status of the process based on the current sample. Recall that each control chart has two statistics, one for monitoring the mean vector and one for monitoring the variability; by using the Max or SS operator, a single final statistic is formed and plotted in each control chart (the statistic in column ten).

As can be seen in Figs. 1, 2, 3 and 4 as well as Tables 7, 8, 9 and 10, the Max-MEWMA control chart detected the out-of-control situations at the eleventh (November 2019) and twelfth (December 2019) samples; it signaled first after 22 months and after observing 60 patients. The Max-MCUSUM control chart detected the out-of-control situations at the ninth (September 2019) and eleventh (December 2019) samples; it signaled first after 20 months and after observing 44 patients. The SS-EWMAe control chart signaled at the 14th (October 2019), 15th (November 2019), and 16th (December 2019) samples; it signaled first after 21 months and after investigating 90 patients. Finally, the SS-CUSUMe control chart signaled at the 13th (November 2019) and 14th (December 2019) samples; it signaled first after 22 months and after investigating 80 patients.

One clear observation from these results is that, in all the control charts, the statistic responsible for monitoring the mean vector (column eight) caused the signal, meaning the shift occurred in the coefficient values rather than in the responses’ variability.

Further investigation is required to discover the reasons for these signals and to determine which ones correspond to real assignable causes and which ones are outliers that can be ignored. Undesirable assignable causes should then be removed. The profile’s parameters might even need to be updated if the assignable causes turn out to be desirable and should be kept.

Conclusions

In this paper, we improved the performance of four memory-type control charts for monitoring multivariate multiple linear profiles. These are the Max-MEWMA and Max-MCUSUM control charts, which use a single Max-type statistic and monitor the regression parameters, and the SS-EWMAe and SS-CUSUMe control charts, which use an SS-type statistic and monitor the residuals. We designed a VP adaptive scheme for all these control charts, in which all the design parameters can vary throughout process monitoring, to increase their capability in detecting shifts. After that, we developed an algorithm with which the time-to-signal and run-length-based performance measures of these charts can be computed. Then, we performed extensive simulations to evaluate the charts’ performance under different shift sizes and types as well as different process dimensions. Two cases were studied in this paper: two profiles (p = 2) with two covariates (q = 2), and six profiles (p = 6) with six covariates (q = 6).

The results in the p = 2 and q = 2 case showed that: (i) as the intercept/slope or variability shift increases, all the FP and VP charts signal faster; (ii) if the number of profiles whose intercepts/slopes shift increases, only the Max-MCUSUM chart shows a significant performance improvement; (iii) all the charts show significant performance improvements if the VP scheme is utilized; (iv) the Max-type control charts mostly perform better than the SS-type control charts; and (v) the Max-MEWMA chart mostly performs better as the variability shift increases, while the Max-MCUSUM chart mostly performs better as the mean shift increases. The results in the p = 6 and q = 6 case showed that, in the case of slope shifts, the conclusions are more or less the same as in the case of p = 2 and q = 2. However, in the case of intercept shifts, the main difference is that the Max-type control charts’ performance mostly gets worse (or remains rather unchanged) as the shift in the intercept increases (with the Max-MCUSUM chart being the worse of the two), whereas the SS-type charts’ performance mostly gets better (or remains rather unchanged).

Finally, we used a real dataset to estimate two profiles in a stroke care process and then developed and applied the VP control charts to monitor those profiles over time, showing how these charts can be implemented in practice.

For future studies, implementing similar adaptive strategies for more advanced profiles, such as non-parametric and nonlinear profiles, can be suggested. Furthermore, developing and implementing other control charts to improve healthcare-related processes in general, and the stroke care process in particular, would be a valuable contribution, considering that very few studies exist in this regard.