Mild cognitive impairment (MCI) is a syndrome defined by cognitive decline that may affect daily activities. The amnestic subtype of MCI carries a high risk of progression to Alzheimer’s disease and is often regarded as a prodromal stage of that disorder1. The Alzheimer’s disease assessment scale-cognitive (ADAS-cog) subscale measures the progression of MCI in 11 relevant fields, namely spoken language ability, comprehension of spoken language, recall of test instructions, word-finding difficulty, following commands, naming, constructions, ideational praxis, orientation, word recall, and word recognition. Detailed information on the ADAS-cog subscale can be found in Rosen et al.2.

The Institute of Clinical Pharmacology at Xiyuan Hospital conducted a phase III randomized clinical trial to evaluate the efficacy of a traditional Chinese prescription for MCI. The double-blinded randomized clinical trial was conducted in eight qualified medical centres across China, with 216 patients allocated to the treatment arm and 108 to the control arm. Two patients dropped out from each arm, leaving 320 complete observations in the final dataset. The difference between the final and baseline ADAS-cog scores (ADASFA) was recorded for the efficacy study. Previous literature has used ADASFA in efficacy evaluations of MCI or Alzheimer’s disease3,4. The most intuitive approach is to test whether the treatment and control means are equal. However, Morgan and Rubin5 argued that baseline equivalence is not guaranteed even when allocation is randomized. Imbalance in baseline covariates could confound the statistical test when comparing ADASFA between the two arms. Ten variables, specifically age, height, weight, gender, education, ethnicity, occupation, centre, drug (whether the patient took a drug for MCI in the past three months), and ADAS1 (the baseline record of ADAS-cog6), were recorded as potential covariates. Table 1 shows descriptive statistics of these variables. The exploratory covariance analysis presented in Table 2 indicates that centre and ADAS1 may confound the efficacy evaluation of ADASFA. This implies that a linear regression model should be used rather than a simple statistical test in this study, that is,

$$y={\beta }_{0}+{\beta }_{1}{x}_{1}+{\beta }_{2}{x}_{2}+\cdots +{\beta }_{p}{x}_{p}+\varepsilon $$

where p is the number of covariates. We denote by xj a binary variable with xj = 1 for the treatment arm and xj = 0 for the control arm, so that efficacy can be evaluated through the corresponding coefficient βj7. Ordinary least squares (OLS) is the standard parameter estimation method for linear regression and yields the best linear unbiased estimator under independent, identically normally distributed errors:

$$\varepsilon |x \sim N(0,{\sigma }^{2}).$$
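As an illustration of this baseline approach, an OLS fit can be sketched with NumPy on simulated data; the covariates, sample size, and coefficient values below are purely illustrative, not the trial data:

```python
import numpy as np

def ols_fit(X, y):
    """Ordinary least squares via numpy's least-squares solver."""
    # Prepend an intercept column so beta_0 is estimated jointly.
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                   # three illustrative covariates
beta_true = np.array([1.0, 2.0, -1.5, 0.5])     # intercept + three slopes
y = beta_true[0] + X @ beta_true[1:] + rng.normal(scale=1.0, size=200)
beta_hat = ols_fit(X, y)
```

Under the normal-error assumption above, `beta_hat` recovers the coefficients up to sampling noise; the next section examines what happens when that assumption fails.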
Table 1 Descriptive statistics of variables.
Table 2 An example of covariance analysis.

However, the QQ plot in Fig. 1 shows that the residuals in the MCI dataset may not follow a normal distribution, and a Shapiro-Wilk test (W = 0.9283, p-value = 2.799e-11) suggests the same. The contaminated, non-normal part may come from either measurement error or a mixed distribution8, both of which are commonly present in medical studies9,10. This could lead to inefficient efficacy estimation with OLS, since the contaminated part is not addressed11. A more robust estimation method for linear regression is therefore required in such studies.
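The Shapiro-Wilk diagnostic can be reproduced in outline as follows; the residuals here are simulated stand-ins (90% standard normal plus 10% heavy-tailed contamination), not the actual MCI residuals:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical residuals: 270 clean N(0, 1) draws plus 30 t(1) outliers.
clean = rng.normal(size=270)
contaminated = rng.standard_t(df=1, size=30)
residuals = np.concatenate([clean, contaminated])

# Shapiro-Wilk test of normality: W near 1 supports normality,
# a small p-value rejects it.
w_stat, p_value = stats.shapiro(residuals)
```

With this level of contamination the test rejects normality decisively, mirroring the behaviour reported for the MCI residuals.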

Figure 1

QQ plot of residuals in the MCI study using OLS.

Many robust methods have been discussed in the literature. Bao12 developed a rank-based estimator for linear regression. Wang et al.13 proposed robust estimation via least absolute deviation, while Wang et al.14 introduced an exponential squared loss (ESL) to select variables robustly. Since the breakdown point of the ESL estimator is nearly 50%, we adopt it in the MCI efficacy evaluation study. Numerical studies show that the proposed method achieves more accurate estimation when a large proportion of the dataset is contaminated, and that its estimates are consistent with OLS when the contamination proportion is relatively low. It can therefore serve as a complementary efficacy evaluation method in real-world clinical studies, whether or not contamination is present.



Suppose there are n subjects, denoted as \({\{({x}_{i},{y}_{i})\}}_{i=1}^{n}\) where yi is the outcome and xi = (xi1, …, xip)T is a p-dimensional vector of covariates. A linear regression model is,

$${y}_{i}={x}_{i}^{T}\beta +{\varepsilon }_{i},i=1,2,\,\cdots ,\,n$$

where β is a p-dimensional vector of unknown parameters and the εi are independent and identically distributed from some unknown distribution satisfying E(εi) = 0, with εi independent of xi.

The ESL function has been used successfully in AdaBoost for classification problems15. Wang et al.14 extended the ESL function to robust variable selection. We now use it to estimate parameters in linear regression without a sparsity assumption. The ESL function is defined as

$${{\rm{\Phi }}}_{\gamma }(t)=1-\exp (-\,\frac{{t}^{2}}{\gamma }),$$

which is a function of t with tuning parameter γ. To estimate the model parameters β, ESL maximizes the objective function

$${l}_{n}(\beta )=\sum _{i=1}^{n}\,\exp (-\,\frac{{({y}_{i}-{x}_{i}^{T}\beta )}^{2}}{\gamma }).$$
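Maximizing this objective for a fixed γ can be sketched with `scipy.optimize` on simulated data. The fixed γ = 2 and the OLS starting value are illustrative simplifications (the paper selects γ by a data-driven criterion, discussed below), and the shifted outliers are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

def esl_objective(beta, X, y, gamma):
    """Negative l_n(beta): scipy minimizes, so we negate the ESL objective."""
    r = y - X @ beta
    return -np.sum(np.exp(-r**2 / gamma))

rng = np.random.default_rng(2)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([1.0, 1.2, 1.4])
eps = rng.normal(size=n)
eps[:30] += 15.0                     # 10% of errors grossly shifted: outliers
y = X @ beta_true + eps

beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)    # OLS as a starting value
res = minimize(esl_objective, beta_ols, args=(X, y, 2.0), method="BFGS")
beta_esl = res.x
```

Because exp(−r²/γ) is essentially zero at the outliers, they contribute nothing to the objective, and the maximizer is driven by the clean observations; OLS, by contrast, absorbs the shift into its intercept.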

The tuning parameter γ controls the degree of robustness of the estimator. With a relatively large γ the proposed estimator approaches the OLS estimator, while a smaller γ limits the influence of contamination on the estimator. Since γ governs the trade-off between robustness and efficiency, a data-driven procedure that yields both high robustness and high efficiency is used to select an appropriate γ. The calculation follows the procedure proposed in Wang et al.14:

  1.

    Find the pseudo outlier set of the sample. Let Dn = {(x1, y1), …, (xn, yn)}. Then calculate \({r}_{i}({\hat{\beta }}_{n})={y}_{i}-{x}_{i}^{T}{\hat{\beta }}_{n},\,i=1,2,\,\cdots ,\,n\) and \({S}_{n}=1.4826\times {{\rm{median}}}_{i}|{r}_{i}({\hat{\beta }}_{n})-{{\rm{median}}}_{j}({r}_{j}({\hat{\beta }}_{n}))|\). Take the pseudo outlier set as \({D}_{m}=\{({x}_{i},{y}_{i}):|{r}_{i}({\hat{\beta }}_{n})|\ge 2.5{S}_{n}\}\), where m is the cardinality of Dm, and let \({D}_{n-m}={D}_{n}\backslash {D}_{m}\).

  2.

    Update the tuning parameter γ. Let γ be the minimiser of \(\det (\hat{V}(\gamma ))\) in the set \(G=\{\gamma :\zeta (\gamma )\in (0,1]\}\), where \(\zeta (\gamma )=2m/n+(2/n){\sum }_{i=m+1}^{n}\,{{\rm{\Phi }}}_{\gamma }({r}_{i}({\hat{\beta }}_{n}))\), \(\hat{V}(\gamma )={\{{\hat{I}}_{1}({\hat{\beta }}_{n})\}}^{-1}\hat{{\rm{\Sigma }}}{\{{\hat{I}}_{1}({\hat{\beta }}_{n})\}}^{-1}\), det(·) denotes the determinant, and

    $$\begin{array}{rcl}{\hat{I}}_{1}({\hat{\beta }}_{n}) & = & \frac{2}{\gamma }\{\frac{1}{n}\sum _{i=1}^{n}\,\exp (\,-\,{r}_{i}^{2}({\hat{\beta }}_{n})/\gamma )(\frac{2{r}_{i}^{2}({\hat{\beta }}_{n})}{\gamma }-1)\}\times (\frac{1}{n}\sum _{i=1}^{n}\,{x}_{i}{x}_{i}^{T})\\ \hat{{\rm{\Sigma }}} & = & {\rm{cov}}\{\exp (\,-\,{r}_{1}^{2}({\hat{\beta }}_{n})/\gamma )\frac{2{r}_{1}({\hat{\beta }}_{n})}{\gamma }{x}_{1},\,\cdots ,\,\exp (\,-\,{r}_{n}^{2}({\hat{\beta }}_{n})/\gamma )\frac{2{r}_{n}({\hat{\beta }}_{n})}{\gamma }{x}_{n}\}\end{array}$$
  3.

    Update \({\hat{\beta }}_{n}\). After selecting γ in Step 2, update \({\hat{\beta }}_{n}\) by maximizing the objective \({l}_{n}(\beta )\).

We set the MM estimator16 \({\tilde{\beta }}_{n}\) as the initial estimator. The algorithm iterates the steps above. To attain high efficiency, the tuning parameter γ is chosen by minimizing the determinant of the asymptotic covariance matrix, as in Step 2. Since the calculation of \(\det (\hat{V}(\gamma ))\) depends on the current estimate of β, we update \({\hat{\beta }}_{n}\) in Step 3 and repeat the algorithm until the convergence condition \(\Vert {\hat{\beta }}_{n}^{old}-{\hat{\beta }}_{n}^{new}\Vert < {10}^{-2}\) is satisfied.
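The three-step iteration can be sketched end-to-end as follows. This is a simplified illustration, not the paper’s exact recipe: OLS replaces the MM initial estimator, and γ is taken as the smallest value on a small illustrative grid satisfying the feasibility constraint ζ(γ) ≤ 1, rather than minimizing det(V̂(γ)):

```python
import numpy as np
from scipy.optimize import minimize

GAMMA_GRID = (0.5, 1.0, 2.0, 5.0, 10.0)   # illustrative grid, an assumption

def fit_esl(X, y, gamma_grid=GAMMA_GRID, tol=1e-2, max_iter=50):
    """Simplified ESL iteration over Steps 1-3.

    Shortcuts vs. the paper: OLS replaces the MM initial estimator, and
    gamma is the smallest grid value with zeta(gamma) <= 1 instead of the
    det(V-hat) minimiser."""
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # initial estimate
    gamma = gamma_grid[-1]
    for _ in range(max_iter):
        r = y - X @ beta
        # Step 1: pseudo-outlier set via the MAD-based scale S_n.
        s_n = 1.4826 * np.median(np.abs(r - np.median(r)))
        outlier = np.abs(r) >= 2.5 * s_n
        m = int(outlier.sum())
        # Step 2 (simplified): smallest gamma with zeta(gamma) in (0, 1].
        def zeta(g):
            phi = 1.0 - np.exp(-r[~outlier] ** 2 / g)
            return 2.0 * m / n + (2.0 / n) * phi.sum()
        feasible = [g for g in gamma_grid if zeta(g) <= 1.0]
        gamma = feasible[0] if feasible else gamma_grid[-1]
        # Step 3: update beta by maximising l_n (minimise its negative).
        res = minimize(
            lambda b: -np.sum(np.exp(-(y - X @ b) ** 2 / gamma)),
            beta, method="BFGS")
        new_beta = res.x
        converged = np.linalg.norm(new_beta - beta) < tol
        beta = new_beta
        if converged:
            break
    return beta, gamma

# Illustration on simulated data with 15% shifted outliers.
rng = np.random.default_rng(3)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([1.0, 1.5, -2.0])
eps = rng.normal(size=n)
eps[:45] += 20.0
y = X @ beta_true + eps
beta_hat, gamma_hat = fit_esl(X, y)
```

Even though the OLS start is pulled toward the outliers, the downweighting by exp(−r²/γ) lets the iteration settle near the clean-data coefficients within a few passes.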


To verify the performance of the introduced method, we conduct numerical studies comparing the bias and mean squared error (MSE) of the estimators from our algorithm (ESL) with those from ordinary least squares (OLS).

We simulate data \({\{({x}_{i},{y}_{i})\}}_{i=1}^{n}\) as follows, where xi = (xi1, xi2, …, xip)T, i = 1, 2, …, n with p = 7 and n = 300. The first six covariates are continuous, that is, xij ~ N(0, 1) for j = 1, 2, …, 6, and xi7 is categorical, taking values in {1, 2, 3, 4}. We convert xi7 into three binary variables zi1, zi2, zi3, where zij indicates whether xi7 belongs to the j-th category and zi1 = zi2 = zi3 = 0 means xi7 belongs to the last category. Thus, xi = (xi1, xi2, …, xi6, zi1, zi2, zi3)T. Let β = (β0, β1, …, β9)T = (1, 1.2, 1.4, 1.6, 1.8, 2, 2.2, 2.4, 2.6, 2.8)T. The error term of contaminated observations (outliers) follows a t(1) distribution, while the error term of non-outliers follows the standard normal distribution N(0, 1). The contamination proportions considered are 10%, 20%, and 30%. For each proportion, the average mean, bias, standard deviation (SD), and MSE of ESL and OLS over 100 replications are reported in Table 3.
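This data-generating design can be transcribed directly; the seed below is arbitrary:

```python
import numpy as np

def simulate(n=300, contam=0.10, seed=0):
    """Generate one replication of the simulation design."""
    rng = np.random.default_rng(seed)
    x_cont = rng.normal(size=(n, 6))                # six continuous N(0,1) covariates
    x_cat = rng.integers(1, 5, size=n)              # categorical in {1, 2, 3, 4}
    z = np.zeros((n, 3))
    for j in range(3):                              # dummy-code the first three levels
        z[:, j] = (x_cat == j + 1).astype(float)
    X = np.column_stack([np.ones(n), x_cont, z])    # intercept + 9 covariates
    beta = np.array([1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4, 2.6, 2.8])
    n_out = int(contam * n)
    eps = rng.normal(size=n)                        # clean errors: N(0, 1)
    eps[:n_out] = rng.standard_t(df=1, size=n_out)  # contaminated errors: t(1)
    y = X @ beta + eps
    return X, y, beta

X, y, beta = simulate()
```

Calling `simulate` with `contam` set to 0.10, 0.20, and 0.30 over 100 seeds reproduces the three scenarios summarised in Table 3.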

Table 3 Average results over 100 replications of ESL and OLS for 10%, 20%, and 30% contamination proportions, respectively.

Figures 2–4 show error bars of ESL and OLS for the three contamination proportions, where the triangular points represent the true parameter values, the circular points represent the estimator means, and the vertical lines represent standard deviations. ‘truej’, ‘eslj’, and ‘olsj’ refer to the true value, ESL estimate, and OLS estimate of parameter j, respectively. The error bars for ESL are considerably shorter than those for OLS, which implies that the standard deviation of the ESL estimator is much smaller than that of OLS and that our method is more robust.

Figure 2

Error bars of ESL and OLS with 10% contamination.

Figure 3

Error bars of ESL and OLS with 20% contamination.

Figure 4

Error bars of ESL and OLS with 30% contamination.


In the MCI study, the linear regression model is specified as follows:

$$\begin{array}{rcl}ADASFA & = & {\beta }_{1}age+{\beta }_{2}bmi+{\beta }_{3}ADAS1+{\beta }_{4}centre2+{\beta }_{5}centre3\\ & & +\,{\beta }_{6}centre4+{\beta }_{7}centre5+{\beta }_{8}centre6+{\beta }_{9}centre7+{\beta }_{10}centre8\\ & & +\,{\beta }_{11}group+{\beta }_{12}gender+{\beta }_{13}education+{\beta }_{14}occupation+{\beta }_{15}drug+\varepsilon \end{array}$$

We include these variables in the model based on our clinical experience and the existing literature. In addition, we combine weight and height into a new variable, BMI, since there is ongoing discussion of whether BMI affects MCI. We exclude ethnicity and marital status from the model mainly because these two variables are extremely unbalanced between the treatment and control arms, and also because almost no literature suggests that they have an effect on MCI. Since 11 of the 16 variables are categorical, we do not consider interaction effects. The by-centre descriptive analysis is presented in Tables 4 and 5.
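The BMI transformation mentioned above is straightforward; a sketch with hypothetical records, where the column names and units (weight in kg, height in cm) are assumptions:

```python
import pandas as pd

# Hypothetical patient records; values, names, and units are illustrative.
df = pd.DataFrame({"weight": [70.0, 55.0, 82.0],    # kg (assumed)
                   "height": [175.0, 160.0, 168.0]})  # cm (assumed)

# BMI = weight (kg) / height (m)^2
df["bmi"] = df["weight"] / (df["height"] / 100.0) ** 2
```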

Table 4 Continuous variable descriptive statistics.
Table 5 Discrete variable descriptive statistics.

Table 6 shows the parameter estimates obtained with ESL and OLS. The empirical 95% confidence intervals are calculated by the bootstrap approach. When a bootstrap confidence interval does not include 0, the corresponding covariate has a significant effect on the primary outcome. Note that there are some differences between the ESL and OLS estimates; for example, the estimated effects of centre5 and centre8 on ADASFA have opposite signs. Given the non-normal residuals, the ESL estimates are the more accurate. From the results, we can conclude that

  (1)

    ADAS1 and centre6 have significant influences on ADASFA, since their bootstrap confidence intervals do not contain 0. From the medical viewpoint, a higher ADAS1 means a patient is in a worse health state, which can have a positive effect on ADASFA.

  (2)

    Both ESL and OLS show that ADAS1 has a positive effect on the decrease in ADAS-cog. For age, ESL shows no effect on the decrease in ADAS-cog, because its bootstrap confidence interval contains 0, while OLS shows a negative effect. From a medical viewpoint17, age is known to have a significant effect on MCI, and prior work has demonstrated that rates of dementia increase exponentially with age18,19. However, a significant effect of age on MCI does not mean that age also influences the treatment effect.

  (3)

    The ESL coefficient for group is −0.141 and its bootstrap confidence interval contains 0. This result makes sense because the project is a non-inferiority trial, and the treatment group was not worse than the control group.

  (4)

    ESL shows that centres 3, 6, and 7 have significant effects on the outcome, whereas OLS shows that only centre 6 does and that centres 3 and 7 do not. According to Table 4, the average ADAS1 of centre 6 is much lower than that of centres 3 and 7, which implies that patients in centres 3 and 7 are in worse condition. Moreover, patients in different centres may have different levels of non-compliance, which may also contribute to some centres having significant effects on the outcome while others do not.

  (5)

    Since we have shown that the residuals are not normally distributed, we can place greater confidence in the ESL results.

Table 6 Estimation results in MCI study using ESL and OLS.
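The percentile-bootstrap confidence intervals used above can be sketched generically as follows, with OLS standing in for the fitting routine (the same wrapper applies unchanged to the ESL fit); the data, names, and replication count are illustrative:

```python
import numpy as np

def bootstrap_ci(X, y, fit, n_boot=500, alpha=0.05, seed=0):
    """Empirical percentile bootstrap CI for regression coefficients."""
    rng = np.random.default_rng(seed)
    n = len(y)
    draws = np.empty((n_boot, X.shape[1]))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)    # resample subjects with replacement
        draws[b] = fit(X[idx], y[idx])
    lo = np.percentile(draws, 100 * alpha / 2, axis=0)
    hi = np.percentile(draws, 100 * (1 - alpha / 2), axis=0)
    return lo, hi

# Illustration with OLS as the fitting routine on simulated data.
ols = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
rng = np.random.default_rng(4)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
beta_true = np.array([0.5, 1.0, 0.0])
y = X @ beta_true + rng.normal(size=200)
lo, hi = bootstrap_ci(X, y, ols, n_boot=200)
significant = (lo > 0) | (hi < 0)   # CI excluding 0 => significant effect
```

The `significant` mask implements exactly the decision rule described in the text: a covariate is flagged when its bootstrap interval does not contain 0.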


In this paper, we discuss a method to evaluate efficacy in a randomized controlled MCI study. Because many covariates may influence the outcome, a linear regression model is considered rather than comparing group means with a t test or ANOVA. An exponential squared loss function, which is superior to OLS in the presence of non-normal residuals, is introduced in this study. Simulation results show that the ESL model yields more efficient estimates than OLS on non-normal data, and the proposed method is also robust to outliers; these advantages become more noticeable as the contamination percentage increases. Since it does not require the normality assumption, the proposed method offers practical researchers a new option for efficacy evaluation.