Introduction

The use of additional available information on auxiliary variables at the estimation stage in survey sampling has been thoroughly discussed. The search for precise estimators of the population parameters of a study variable Y that utilize available information on an auxiliary variable X has attracted much attention from survey statisticians. In this context, the literature provides several procedures, such as ratio, regression, product, ratio-type and product-type exponential estimators, and logarithmic ratio- and product-type estimators, along with their ramified versions, for precisely estimating the parameters under investigation.

These estimation procedures have been proposed under the assumption that the gathered observations are free from measurement error (ME). In most practical situations, this assumption does not hold. Real data generally contain observational errors owing to several factors, including memory failure, excessive or insufficient reporting, prestige bias, etc. The reader is referred to the books by Cochran1, Murthy2, Carroll et al.3, Singh4, Fuller5 and Cheng and Van Ners6, among others. Measurement error is the difference between the true value and the observed value, and it influences the findings of real-world surveys. We usually assume that all recorded and processed data are accurate, though this is rarely the case in surveys carried out in real life. A variety of factors, such as interviewer and respondent bias along with errors arising during data collection and processing, can lead to measurement error. It is therefore important to account for measurement error, because these issues are likely to arise in any kind of survey.

The values of the variable are recorded with some measurement errors (MEs) rather than as the actual values of the variable under consideration. Estimates that ignore the MEs are therefore incomplete and can mislead the inference drawn from the study. Various authors, including Shalabh7, Manisha and Singh8, Singh and Karpe9,10,11,12,13, Diana and Giordan14, Gupta et al.15, Tariq et al.16,17 and Singh et al.18, have focused on the estimation of various parameters such as the population mean, total, ratio, product and variance under MEs.

In the aforesaid studies, the authors have considered only the case of uncorrelated measurement errors (UMEs) in both the study and auxiliary variables. In practice, however, the UME situation rarely holds. For example, the same survey personnel usually collect data on both the study and auxiliary variables, so it may not be reasonable to presume that the MEs in the two variables are independent. Rather, they will be dependent (i.e., correlated), and this dependence in the MEs may arise from the hidden intrinsic tendencies of the surveyor. For further illustration, readers are referred to Shalabh and Tsai19, pp. 5567–5568. Shalabh and Tsai19 were the first to discuss the impact of correlated measurement errors (CME) on the performance of the ordinary ratio and product estimators of the population mean. Later, Boniface et al.20, Bhushan et al.21,22 and Kumar et al.23 evaluated the performance of some estimators of the population mean under CME.

Motivated by the work of Diana and Perri24 and Shalabh and Tsai19, we develop a modified correlated MEs model and propose ratio and product estimators under this model.

The remaining sections of this article are organized as follows. The Shalabh and Tsai19 correlated MEs model, along with the corresponding ratio and product estimators, is introduced in section "Shalabh and Tsai (2017) correlated MEs model’s characteristics". In section "Description of modified correlated MEs model and the proposed estimators", we develop the modified correlated MEs model and propose ratio and product estimators in this scenario; the properties of the suggested estimators are examined up to the first order of approximation (foa). The bias and MSE comparisons of the suggested mean per unit, ratio and product estimators with the usual mean per unit estimator as well as the ratio and product estimators given by Shalabh and Tsai19 are covered in sections "Bias comparisons of \(t_{R} ,\,\,t_{R}^{*} ,\,\,t_{P}\) and \(t_{P}^{*}\)" and "Comparison of MSEs of \((t_{R}^{*} ,t_{P}^{*} )\) with \((\overline{y},\,\,\overline{y}^{*} ,\,\,t_{R} ,\,\,t_{P} )\)", respectively, where the theoretical efficiency conditions of the proposed estimators are also obtained. In section "Special case", a special case of the recommended ratio and product estimators under the modified correlated MEs model is discussed.

In Section "Empirical study ", an empirical study is also provided for assessing the efficiency of proposed estimators. In section "Simulation study ", a simulation study has also been performed in R software to strengthen the current study. The results and discussion followed by conclusion of the current study are summarized in sections “Results and discussions” and “Conclusion”, respectively.

Shalabh and Tsai (2017) correlated MEs model’s characteristics

Let \(\Omega = (\Omega_{1} ,\,\Omega_{2} ,\,...,\,\Omega_{N} )\) be a finite population of size N and let a sample of size n be selected from the population \(\Omega\) using the SRSWOR scheme. Assume that the true values of the ith unit of \(\Omega\) on the auxiliary and study variables are denoted by Xi and Yi, respectively.

These true values are, however, not observable; instead, the values \(y_{i}\) and \(x_{i}\) are recorded with MEs \(u_{i}\) and \(v_{i}\), respectively. Shalabh and Tsai19 assumed that these values can be expressed in the additive form:

$$y_{i} = Y_{i} + u_{i} ,\;\;\;x_{i} = X_{i} + v_{i} \,;\,\,\,\,i = 1,2, \ldots ,n$$
(1)

The MEs \(u_{i}\) and \(v_{i}\) are unobservable and are assumed to have mean 0 (zero) and different variances \(\sigma_{u}^{2} \,\) and \(\sigma_{v}^{2}\), respectively, with correlation coefficient \(\rho_{uv} .\) Moreover, the MEs are assumed to be uncorrelated with the true values. Suppose that \(\mu_{Y} \,\) and \(\mu_{X}\) are the population means, \(\sigma_{Y}^{2}\) and \(\sigma_{X}^{2}\) are the population variances, \(C_{Y}\) and \(C_{X}\) are the population coefficients of variation, while \(\rho_{YX}\) is the population correlation coefficient. Further, consider \(\overline{y} = \frac{1}{n}\sum\limits_{i = 1}^{n} {y_{i} }\) and \(\overline{x} = \frac{1}{n}\,\sum\limits_{i = 1}^{n} {x_{i} }\) as the sample means of the observed values.

Assuming known population mean \(\mu_{X}\) of the auxiliary variable X, Shalabh and Tsai19 proposed ratio as well as product estimators for the population mean \(\mu_{Y}\) of the study variable Y given as:

$$t_{R} = \overline{y}\left( {\mu_{X} /\overline{x}} \right)$$
(2)
$$and\;\;t_{P} = \overline{y}\left( {\overline{x}/\mu_{X} } \right)$$
(3)

Assuming that the population size N is large enough relative to n, the sampling fraction \(f = \frac{n}{N} \cong 0\) and hence the finite population correction (fpc) term \(\left( {1 - f} \right) \cong 1\).

It is easy to see that \(\overline{y}\) is an unbiased estimator of \(\mu_{Y}\) and its variance/mean squared error (MSE) is given as:

$$Var(\overline{y}) = MSE(\overline{y}) = \frac{1}{n}\left( {\sigma_{Y}^{2} + \sigma_{u}^{2} } \right)$$
(4)
$${\text{Similarly}},\;\;Var(\overline{x}) = MSE(\overline{x}) = \frac{1}{n}\left( {\sigma_{X}^{2} + \sigma_{v}^{2} } \right)$$
(5)
$$and\;\;Cov(\overline{y},\overline{x}) = \frac{1}{n}\,\left( {\rho_{YX} \sigma_{Y} \sigma_{X} + \rho_{uv} \sigma_{u} \sigma_{v} } \right)$$
(6)

The bias and MSE of \(t_{R}\) and \(t_{P}\) up to first order of approximation (foa), are respectively, given as:

$$B\left( {t_{R} } \right) = \frac{1}{n}\left( {B_{R} + B_{RM} } \right),$$
(7)
$$B\left( {t_{P} } \right) = \frac{1}{n}\left( {B_{P} + B_{PM} } \right),$$
(8)
$$MSE\left( {t_{R} } \right) = \frac{1}{n}\left( {V_{R} + V_{RM} } \right),$$
(9)
$$MSE(t_{P} ) = \frac{1}{n}\left( {V_{P} + V_{PM} } \right),$$
(10)

where \(B_{R} = \frac{{\sigma_{Y} \sigma_{X} }}{{\mu_{X} }}\left( {\frac{{\mu_{Y} \sigma_{X} }}{{\mu_{X} \sigma_{Y} }} - \rho_{YX} } \right) = \mu_{Y} C_{X}^{2} (1 - K_{YX} ),\) \(B_{P} = \frac{{\rho_{YX} \sigma_{Y} \sigma_{X} }}{{\mu_{X} }} = \mu_{Y} C_{X}^{2} K_{YX} ,\)

$$B_{RM} = \frac{{\sigma_{u} \sigma_{v} }}{{\mu_{X} }}\left( {\frac{{\mu_{Y} \sigma_{v} }}{{\mu_{X} \sigma_{u} }} - \rho_{uv} } \right) = \frac{{R\sigma_{v}^{2} }}{{\mu_{X} }}\left( {1 - K_{uv} } \right),$$
$$B_{PM} = \frac{1}{{\mu_{X} }}\rho_{uv} \sigma_{u} \sigma_{v} = \frac{{R\sigma_{v}^{2} }}{{\mu_{X} }}K_{uv} ,$$
$$V_{R} = \sigma_{Y}^{2} \left[ {1 - 2\left( {\frac{{\mu_{Y} \sigma_{X} }}{{\mu_{X} \sigma_{Y} }}} \right)\rho_{YX} + \left( {\frac{{\mu_{Y} \sigma_{X} }}{{\mu_{X} \sigma_{Y} }}} \right)^{2} } \right] = \mu_{Y}^{2} \left[ {C_{Y}^{2} + C_{X}^{2} \left( {1 - 2\,K_{YX} } \right)} \right],$$
$$V_{P} = \sigma_{Y}^{2} \left[ {1 + 2\left( {\frac{{\mu_{Y} \sigma_{X} }}{{\mu_{X} \sigma_{Y} }}} \right)\rho_{YX} + \left( {\frac{{\mu_{Y} \sigma_{X} }}{{\mu_{X} \sigma_{Y} }}} \right)^{2} } \right] = \mu_{Y}^{2} \,\left[ {C_{Y}^{2} + C_{X}^{2} \left( {1 + 2\,K_{YX} } \right)} \right],$$
$$V_{RM} = \sigma_{u}^{2} \left[ {1 - 2\left( {\frac{{\mu_{Y} \sigma_{v} }}{{\mu_{X} \sigma_{u} }}} \right)\rho_{uv} + \left( {\frac{{\mu_{Y} \sigma_{v} }}{{\mu_{X} \sigma_{u} }}} \right)^{2} } \right] = \left[ {\sigma_{u}^{2} + R^{2} \sigma_{v}^{2} \left( {1 - 2\,K_{uv} } \right)} \right],$$
$$V_{PM} = \sigma_{u}^{2} \left[ {1 + 2\left( {\frac{{\mu_{Y} \sigma_{v} }}{{\mu_{X} \sigma_{u} }}} \right)\rho_{uv} + \left( {\frac{{\mu_{Y} \sigma_{v} }}{{\mu_{X} \sigma_{u} }}} \right)^{2} } \right] = \left[ {\sigma_{u}^{2} + R^{2} \sigma_{v}^{2} \left( {1 + 2K_{uv} } \right)} \right],$$
$$R = \mu_{Y} /\mu_{X} ,\,\,K_{YX} = \rho_{YX} \left( {C_{Y} /C_{X} } \right) = \beta_{YX} /R,\,\,K_{uv} = \left( {\beta_{uv} /R} \right),$$
$$\,\beta_{YX} = \rho_{YX} \left( {\sigma_{Y} /\sigma_{X} } \right),\,\,\beta_{uv} = \rho_{uv} \,\left( {\sigma_{u} /\sigma_{v} } \right).$$
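For illustration, the first-order bias and MSE of \(t_{R}\) can be evaluated directly from the quantities defined above. The following is a minimal R sketch of our own (not part of the original derivation); the function name is ours, and the example call uses the Population I parameters reported later in the empirical study.

```r
# Minimal sketch (ours, for illustration): first-order bias and MSE of the ratio
# estimator t_R under the Shalabh-Tsai correlated MEs model (1).
bias_mse_tR <- function(n, mu_Y, mu_X, sig_Y, sig_X, sig_u, sig_v, rho_YX, rho_uv) {
  R    <- mu_Y / mu_X
  K_YX <- rho_YX * sig_Y / (sig_X * R)                  # K_YX = beta_YX / R
  K_uv <- rho_uv * sig_u / (sig_v * R)                  # K_uv = beta_uv / R
  B_R  <- mu_Y * (sig_X / mu_X)^2 * (1 - K_YX)          # true-value part of the bias
  B_RM <- R * sig_v^2 * (1 - K_uv) / mu_X               # ME part of the bias
  V_R  <- mu_Y^2 * ((sig_Y / mu_Y)^2 + (sig_X / mu_X)^2 * (1 - 2 * K_YX))
  V_RM <- sig_u^2 + R^2 * sig_v^2 * (1 - 2 * K_uv)      # ME part of the MSE
  c(bias = (B_R + B_RM) / n, mse = (V_R + V_RM) / n)
}
# Example with the Population I parameters reported in the empirical study:
bias_mse_tR(n = 4, mu_Y = 127, mu_X = 170, sig_Y = sqrt(1278), sig_X = sqrt(3300),
            sig_u = 6, sig_v = 6, rho_YX = 0.964, rho_uv = 0.800)
```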

Description of modified correlated MEs model and the proposed estimators

We define the following correlated MEs model for expressing the observed values \(y_{i}^{*}\) and \(x_{i}^{*}\) in the additive form of true values (denoted by Xi and Yi) and the MEs (denoted by \(u_{i}\) and \(v_{i}\), respectively) as:

$$\left. \begin{gathered} y_{i}^{*} = Y_{i} + \alpha u_{i} ,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\hfill \\ x_{i}^{*} = X_{i} + \eta v_{i} ;\,\,\,\,\,i = 1,\,\,2,\,\,...,\,\,n \hfill \\ \end{gathered} \right\}$$
(11)

where \(\left( {\alpha ,\eta } \right)\) are constants; conditions on \(\left( {\alpha ,\eta } \right)\) are to be determined so that the model (11) is superior to the Shalabh and Tsai19 model defined in (1).

The sample means denoted by \(\overline{y}^{*}\) and \(\overline{x}^{*}\) under the model (11) are defined as:

$$\overline{y}^{*} = \frac{1}{n}\,\sum\limits_{i = 1}^{n} {y_{i}^{*} } \;\;and\;\;\overline{x}^{*} = \frac{1}{n}\,\sum\limits_{i = 1}^{n} {x_{i}^{*} } .$$

It can easily be shown that \(\overline{y}^{*}\) and \(\overline{x}^{*}\) are unbiased estimators of the population means \(\mu_{Y}\) and \(\mu_{X} ,\) respectively.

The variances/MSEs of \(\overline{y}^{*}\) and \(\overline{x}^{*}\) and the covariance between them under SRSWOR, ignoring the fpc term, are respectively given by

$$Var\,(\overline{y}^{*} ) = MSE(\overline{y}^{*} ) = \frac{1}{n}\,\left( {\sigma_{Y}^{2} + \alpha^{2} \sigma_{u}^{2} } \right)$$
(12)
$$Var\,(\overline{x}^{*} ) = MSE(\overline{x}^{*} ) = \frac{1}{n}\,\left( {\sigma_{X}^{2} + \eta^{2} \sigma_{v}^{2} } \right)$$
(13)
$$Cov\,\left( {\overline{y}^{*} ,\overline{x}^{*} } \right) = \frac{1}{n}\left( {\rho_{YX} \sigma_{Y} \sigma_{X} + \alpha \eta \,\,\rho_{vu} \sigma_{u} \sigma_{v} } \right)$$
(14)

From (4) and (12), we have \(MSE(\overline{y}^{*} ) < MSE(\overline{y})\) if

$$(1 - \alpha^{2} ) > 0$$
$${\text{i}}.{\text{e}}.,{\text{ if}}\;\left| \alpha \right| < 1$$
(15)

Similarly, from (5) and (13), we note that

\(MSE(\overline{x}^{*} ) < MSE(\overline{x})\,\) if

$$(1 - \eta^{2} ) > 0$$
$${\text{i}}.{\text{e}}.,{\text{ if}} \;\left| \eta \right| < 1$$
(16)

Thus, the resulting modified correlated MEs model is:

$$\left. \begin{gathered} y_{i}^{*} = Y_{i} + \alpha \,u_{i} ,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \hfill \\ x_{i}^{*} = X_{i} + \eta v_{i} ,\,\,\,\,\,i = 1,\,2,...,n \hfill \\ \end{gathered} \right\}$$
(17)

with \(\left| \alpha \right| < 1\) and \(\left| \eta \right| < 1\).

Here we note that \(\alpha\) and \(\eta\) may take the values of \(\rho_{YX}\) and \(\rho_{uv}\) as \(\left| {\rho_{YX} } \right| < 1\) and \(\left| {\rho_{uv} } \right| < 1\).

Now we define the ratio (\(t_{R}^{*}\)) and product (\(t_{P}^{*}\)) estimators for population mean \(\mu_{Y}\) of Y under the model (17) as

$$t_{R}^{*} = \frac{{\overline{y}^{*} }}{{\overline{x}^{*} }}\,\mu_{X}$$
(18)
$$and\;\;t_{P}^{*} = \frac{{\overline{y}^{*} \overline{x}^{*} }}{{\mu_{X} }}$$
(19)
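As an illustration (not from the paper), the estimators in (18) and (19) can be computed directly from the observed sample; the object names `y_star`, `x_star` and `mu_X` below are assumptions.

```r
# Sketch (assumed inputs): ratio and product estimators (18) and (19) under the
# modified model; y_star, x_star hold the observed samples y_i^*, x_i^*.
t_R_star <- function(y_star, x_star, mu_X) mean(y_star) * mu_X / mean(x_star)
t_P_star <- function(y_star, x_star, mu_X) mean(y_star) * mean(x_star) / mu_X
```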

To study the properties of the estimators \(t_{R}^{*}\) and \(t_{P}^{*}\) under the model (17), we write

$$w_{y} = \frac{1}{\sqrt n }\sum\limits_{i = 1}^{n} {(Y_{i} - \mu_{Y} ),\,\,\,w_{x} = \frac{1}{\sqrt n }\,\sum\limits_{i = 1}^{n} {(X_{i} - \mu_{X} )} } ,\;w_{u} = \frac{1}{\sqrt n }\,\sum\limits_{i = 1}^{n} {u_{i} } \;and\;w_{v} = \frac{1}{\sqrt n }\sum\limits_{i = 1}^{n} {v_{i} } .$$
$$Thus\;\;\overline{y}^{*} = \mu_{Y} + \frac{1}{\sqrt n }(w_{y} + \alpha w_{u} ),$$
(20)
$$\overline{x}^{*} = \mu_{X} + \frac{1}{\sqrt n }\left( {w_{x} + \eta w_{v} } \right),$$
(21)

We note that \(e_{0}^{*} = (\overline{y}^{*} - \mu_{Y} ) = \frac{1}{\sqrt n }\,(w_{y} + \alpha w_{u} )\)

$$e_{1}^{*} = (\overline{x}^{*} - \mu_{X} ) = \frac{1}{\sqrt n }\,(w_{x} + \eta w_{v} )$$

such that \(E(e_{0}^{*} ) = E(e_{1}^{*} ) = 0\), \(E(e_{0}^{*2} ) = \frac{1}{n}\,\left( {\sigma_{Y}^{2} + \alpha^{2} \sigma_{u}^{2} } \right)\), \(E(e_{1}^{*2} ) = \frac{1}{n}\left( {\sigma_{X}^{2} + \eta^{2} \sigma_{v}^{2} } \right)\) and \(E\left( {e_{0}^{*} e_{1}^{*} } \right) = \frac{1}{n}\left( {\rho_{YX} \sigma_{Y} \sigma_{X} + \alpha \eta \,\,\rho_{uv} \,\sigma_{u} \sigma_{v} } \right)\).

Expressing \(t_{R}^{*}\) in terms of \(e_{0}^{*} \,\) and \(e_{1}^{*}\), we have

$$t_{R}^{*} = \mu_{Y} \left( {1 + \frac{{e_{0}^{*} }}{{\mu_{Y} }}} \right)\,\left( {1 + \frac{{e_{1}^{*} }}{{\mu_{X} }}} \right)^{ - 1}$$
(22)

Assuming \(\left| {\frac{{e_{1}^{*} }}{{\mu_{X} }}} \right| < 1\), the term \(\left( {1 + \frac{{e_{1}^{*} }}{{\mu_{X} }}} \right)^{ - 1}\) can be expanded. Expanding (22) and retaining terms up to the second power of the \(e^{*}\)'s, we have

$$t_{R}^{*} = \mu_{Y} \left[ {1 + \frac{{e_{0}^{*} }}{{\mu_{Y} }} - \frac{{e_{1}^{*} }}{{\mu_{X} }} + \frac{{e_{1}^{*2} }}{{\mu_{X}^{2} }} - \frac{{e_{0}^{*} e_{1}^{*} }}{{\mu_{Y} \mu_{X} }}} \right]$$
$$or\;\left( {t_{R}^{*} - \mu_{Y} } \right) = \mu_{Y} \left( {\frac{{e_{0}^{*} }}{{\mu_{Y} }} - \frac{{e_{1}^{*} }}{{\mu_{X} }} + \frac{{e_{1}^{*2} }}{{\mu_{X}^{2} }} - \frac{{e_{0}^{*} e_{1}^{*} }}{{\mu_{Y} \mu_{X} }}} \right)$$
(23)

We obtain the bias of \(t_{R}^{*}\) up to foa by taking expectation of (23) which is given as:

$$B\left( {t_{R}^{*} } \right) = \frac{1}{n}\,\left( {B_{R} + B_{RM}^{*} } \right),$$
(24)
$${\text{where}}\;B_{RM}^{*} = \frac{{\eta \,R\sigma_{v}^{2} }}{{\mu_{X} }}\,\left( {\eta - \alpha \,K_{uv} } \right)$$
(25)

It is observed from (24) that the bias of \(t_{R}^{*}\) will vanish when sample size n is sufficiently large. Further, the bias of \(t_{R}^{*}\) at (24) can be re-expressed as:

$$B\left( {t_{R}^{*} } \right) = \frac{R}{{n\mu_{X} }}\,\left[ {\sigma_{X}^{2} (1 - K_{YX} ) + \eta \sigma_{v}^{2} \,(\eta - \alpha K_{uv} )} \right]$$

which will be zero, if

$$\beta _{{YX}} = R\;\;and\;\;\alpha \beta _{{uv}} = \eta R.$$

Thus, under the conditions, \(\beta_{YX} = R\) and \(\alpha \beta_{uv} = \eta R\), the proposed ratio estimator \(t_{R}^{*}\) is almost unbiased.

Squaring (23) and ignoring terms of the \(e^{*}\)'s with power greater than two, we obtain

$$\left( {t_{R}^{*} - \mu_{Y} } \right)^{2} = \,\,\left( {e_{0}^{*2} - 2R\,e_{0}^{*} e_{1}^{*} + R^{2} e_{1}^{*2} } \right)$$
(26)

The expectation of (26) provides the mean squared error of \(t_{R}^{*}\) up to foa given as:

$$MSE\left( {t_{R}^{*} } \right) = \frac{1}{n}\,\left( {V_{R} + V_{RM}^{*} } \right),$$
(27)
$${\text{where}}\;\;V_{RM}^{*} = \left[ {\alpha^{2} \sigma_{u}^{2} + R^{2} \,\sigma_{v}^{2} \left( {\eta^{2} - 2\alpha \eta \,K_{uv} } \right)} \right]$$
(28)

We note that if \(\rho_{uv}\) is positive, then we select \((\alpha ,\eta )\) in such a way that the quantity \(\alpha \eta\) is also positive (so that the term \( - 2\alpha \eta \,K_{uv}\) in (28) reduces the MSE). Alternatively, if \(\rho_{uv}\) is negative, then we choose \(\left( {\alpha ,\eta } \right)\) in such a manner that the quantity \(\alpha \eta\) is negative. We also note that \(\left( {\alpha ,\eta } \right)\) may take the values of the correlation coefficients \(\rho_{YX}\) and \(\rho_{uv}\). Proceeding as for \(t_{R}^{*} ,\) the expressions for the bias and MSE of the product estimator \(t_{P}^{*}\) up to foa are obtained as follows:

$$B\left( {t_{P}^{*} } \right) = E\left( {t_{P}^{*} - \mu_{Y} } \right) = \frac{1}{n}\,\left( {B_{P} + B_{PM}^{*} } \right),$$
(29)
$$MSE(t_{P}^{*} ) = E(t_{P}^{*} - \mu_{Y} )^{2} = \frac{1}{n}\,(V_{P} + V_{PM}^{*} ),$$
(30)
$${\text{where}}\;\;\;B_{PM}^{*} = \frac{{R\sigma_{v}^{2} \,}}{{\mu_{X} }}\alpha \eta \,K_{uv} ,$$
(31)
$$V_{PM}^{*} = \left[ {\alpha^{2} \sigma_{u}^{2} + R^{2} \sigma_{v}^{2} \,\left( {\eta^{2} + 2\alpha \eta \,K_{uv} } \right)} \right]$$
(32)

Expression (29) clearly shows that the bias of \(t_{P}^{*}\) tends to zero for sufficiently large n. The bias of \(t_{P}^{*}\) at (29) can be re-written as:

$$B\left( {t_{P}^{*} } \right) = \frac{1}{{n\mu_{X} }}\,\left[ {\sigma_{X}^{2} \beta_{YX} + R\sigma_{v}^{2} \alpha \eta K_{uv} } \right]$$

The above expression clearly indicates that the product estimator \(t_{P}^{*}\) is unbiased if \(\rho_{YX} = 0\) and \(\rho_{uv} = 0,\) i.e., if the correlation between the two variables Y and X is zero and the measurement error variables u and v are uncorrelated.

Further, the bias of the ratio estimator \(t_{R}^{*}\) (and of the product estimator \(t_{P}^{*}\)) decreases as the sample size n increases, and it can easily be seen that the proposed ratio (product) estimator \(t_{R}^{*} \,(t_{P}^{*} )\) is consistent.

We note from (32) that if \(\rho_{uv}\) is positive, then to get large efficiency we select \((\alpha ,\eta )\) in such a way that the quantity \(\alpha \eta\) is negative. On the other hand, if \(\rho_{uv}\) is negative, then we choose \(\left( {\alpha ,\eta } \right)\) in such a manner that the quantity \(\alpha \eta\) is positive.
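To make this sign guidance concrete, the ME contributions \(V_{RM}^{*}\) in (28) and \(V_{PM}^{*}\) in (32) can be tabulated over candidate values of \((\alpha ,\eta )\); the following R sketch is our own illustration, with assumed argument names.

```r
# Sketch: ME contributions to the MSEs of t_R^* and t_P^*, eqs. (28) and (32).
# With K_uv > 0, V_RM_star is reduced by taking alpha*eta > 0, and V_PM_star by
# taking alpha*eta < 0 (and vice versa for K_uv < 0).
V_RM_star <- function(alpha, eta, R, sig_u, sig_v, K_uv)
  alpha^2 * sig_u^2 + R^2 * sig_v^2 * (eta^2 - 2 * alpha * eta * K_uv)
V_PM_star <- function(alpha, eta, R, sig_u, sig_v, K_uv)
  alpha^2 * sig_u^2 + R^2 * sig_v^2 * (eta^2 + 2 * alpha * eta * K_uv)
```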

It can be noticed from the expressions of the bias and MSE of the estimators \((t_{R} ,t_{P} ,t_{R}^{*} ,t_{P}^{*} )\) that the presence of MEs in the data leads to a supplementary term in each case. This additional term disappears when there are no MEs in either variable.

This study can also be extended along the lines of Shahzad et al.25 and Ali et al.26.

Bias comparisons of \(t_{R} ,\,\,t_{R}^{*} ,\,\,t_{P}\) and \(t_{P}^{*}\)

Based on the results obtained in sections "Shalabh and Tsai (2017) correlated MEs model’s characteristics" and "Description of modified correlated MEs model and the proposed estimators", the estimators \(\overline{y}\) and \(\overline{y}^{*}\) are unbiased, whereas \(t_{R} ,t_{R}^{*} ,t_{P}\) and \(t_{P}^{*}\) are biased estimators of the population mean \(\mu_{Y}\) of Y. This holds whether or not MEs exist.

  • From (7) and (24), we have that \(\left| {B(t_{R}^{*} )} \right| < \left| {B(t_{R} )} \right|\) if

    $$\left| {\eta \left( {\eta - \alpha K_{uv} } \right)\,} \right| < \left| {(1 - K_{uv} )} \right|$$
    $${\text{i}}.{\text{e}}.,{\text{ if}}\;\;\left[ {(1 - \eta^{4} ) + (1 - \alpha^{2} \eta^{2} )\,K_{uv}^{2} - 2K_{uv}^{{}} (1 - \alpha \eta^{3} )} \right] > 0$$
    (33)

This inequality will usually be met in survey situations, as \(\left| \alpha \right| < 1\) and \(\left| \eta \right| < 1\).

Now, we consider the two situations:

  1. If \(\eta = \alpha\), then the inequality (33) reduces to

    $$\left( {1 - \alpha^{4} } \right)\,\,(1 - K_{uv} )^{2} > 0$$
    (34)

    which always holds good as \(\left| \alpha \right| < 1\). Thus when \(\eta = \alpha\) the proposed ratio estimator \(t_{R}^{*}\) is always less biased than the Shalabh and Tsai19 ratio estimator \(t_{R}\).

  2. If \(\rho_{uv} = 0\), i.e., the MEs \(u_{i}\) and \(v_{i}\) are not correlated, then inequality (33) boils down to:

    $$\left( {1 - \eta^{4} } \right) > 0$$
    (35)

    which again holds good as \(\left| \eta \right| < 1\).

Thus, in this situation \((\rho_{uv} = 0),\) the suggested estimator \(t_{R}^{*}\) is less biased than the ratio estimator \(t_{R}\) of Shalabh7 and Shalabh and Tsai19. Here we would like to mention that the properties of \(t_{R}\) were studied by Shalabh7 for the case of no correlation between the MEs.

  • From (8) and (29), we note that

    $$\begin{gathered} \left| {B(t_{P}^{*} )} \right| < \left| {B(t_{P} )} \right|\;\;{\text{if}} \hfill \\ \left| {(B_{P} + B_{PM}^{*} )} \right| < \left| {(B_{P} + B_{PM} )} \right| \hfill \\ {\text{i}}.{\text{e}}.,{\text{ if}}\left| {B_{PM}^{*} } \right| < \left| {B_{PM} } \right| \hfill \\ {\text{i}}.{\text{e}}.,{\text{ if}}\left| {\alpha \eta K_{uv} } \right| < \left| {K_{uv} } \right| \hfill \\ {\text{i}}.{\text{e}}.,{\text{ if}}\left| {\alpha \eta } \right| < 1 \end{gathered}$$
    (36)

This is always true because \(\left| \alpha \right| < 1\) and \(\left| \eta \right| < 1\).

Comparison of MSEs of \((t_{R}^{*} ,t_{P}^{*} )\) with \((\overline{y},\,\,\overline{y}^{*} ,\,\,t_{R} ,\,\,t_{P} )\)

  • From (4), (9), (12) and (27) it is noted that

  1. The suggested estimator \(t_{R}^{*}\) is more efficient than the conventional unbiased estimator \(\overline{y}\) and the suggested estimator \(\overline{y}^{*}\), respectively, if

    $$R^{2} \left[ {\sigma_{X}^{2} (1 - 2K_{YX} ) + \sigma_{v}^{2} \eta \,(\eta - 2\alpha \,K_{uv} )} \right] < (1 - \alpha^{2} )\,\sigma_{u}^{2} ,$$
    (37)
    $$and\;\;\left[ {\sigma_{X}^{2} (1 - 2\,K_{YX} ) + \sigma_{v}^{2} \eta \,(\eta - 2\alpha \,K_{uv} )} \right] < 0$$
    (38)
  2. The proposed estimator \(\overline{y}^{*}\) is more precise than the Shalabh and Tsai19 estimator \(t_{R}\) if

    $$\left[ {R^{2} \left\{ {\sigma_{X}^{2} (1 - 2\,K_{YX} ) + \sigma_{v}^{2} (1 - 2\,K_{uv} )} \right\} + (1 - \alpha^{2} )\,\sigma_{u}^{2} } \right] > 0$$
    (39)
  3. The developed estimator \(t_{R}^{*}\) has smaller MSE than the Shalabh and Tsai19 estimator \(t_{R}\) if

    $$V_{RM}^{*} < V_{RM}$$
    $${\text{i}}.{\text{e}}.,{\text{ if}} \;\left[ {\sigma_{u}^{2} (1 - \alpha^{2} ) + R^{2} \sigma_{V}^{2} \left\{ {(1 - \eta^{2} ) - 2K_{uv} (1 - \alpha \eta )} \right\}} \right] > 0$$
    (40)

Now we consider the two situations:

  (a) If \(\rho_{uv} = 0\), then inequalities (37)–(40), respectively, reduce to:

    $$R^{2} \left[ {\sigma_{X}^{2} (1 - 2K_{YX} ) + \sigma_{v}^{2} \eta^{2} } \right] < (1 - \alpha^{2} )\,\sigma_{u}^{2}$$
    (41)
    $$\left[ {\sigma_{X}^{2} (1 - 2\,K_{YX} ) + \sigma_{v}^{2} \eta^{2} } \right] < 0$$
    (42)
    $$\left[ {R^{2} \left\{ {\sigma_{X}^{2} (1 - 2\,K_{YX} ) + \sigma_{v}^{2} } \right\} + \left( {1 - \alpha^{2} } \right)\,\sigma_{u}^{2} } \right] > 0$$
    (43)
    $$\left[ {\sigma_{u}^{2} (1 - \alpha^{2} ) + R^{2} \sigma_{v}^{2} (1 - \eta^{2} )} \right] > 0$$
    (44)

From (44) it is clear that when \(\rho_{uv} = 0,\) the recommended ratio estimator \(t_{R}^{*}\) is always better than the ratio estimator \(t_{R}\) of Shalabh7 and Shalabh and Tsai19, as in this case the inequality (44) always holds good. If \(K_{YX} < \frac{1}{2}\), then the inequality (43) holds true, i.e., the suggested estimator \(\overline{y}^{*}\) is more efficient than the Shalabh and Tsai19 estimator \(t_{R} ,\) while for \(K_{YX} < \frac{1}{2},\) the inequality (42) does not hold good, i.e., the suggested ratio estimator \(t_{R}^{*}\) is inferior to the proposed estimator \(\overline{y}^{*} .\) If \(K_{YX} < \frac{1}{2},\) inequality (41) is not hard to meet in survey situations, which suggests that the offered ratio estimator \(t_{R}^{*}\) is better than the conventional unbiased estimator \(\overline{y}\).

  (b) If \(\eta = \alpha\), then inequalities (37), (38) and (40), respectively, boil down to:

    $$R^{2} \left[ {\sigma_{X}^{2} (1 - 2\,K_{YX} ) + \sigma_{v}^{2} \eta^{2} (1 - 2K_{uv} )} \right] < (1 - \alpha^{2} )\,\sigma_{u}^{2} ,$$
    (45)
    $$\left[ {\sigma_{X}^{2} (1 - 2\,K_{YX} ) + \sigma_{v}^{2} \eta^{2} (1 - 2\,K_{uv} )} \right] < 0,$$
    (46)
    $$(1 - \alpha^{2} )\,\left[ {\sigma_{u}^{2} + R^{2} \sigma_{v}^{2} (1 - 2\,K_{uv} )} \right] > 0$$
    (47)

It is observed from (45)–(47) that the proposed ratio estimator \(t_{R}^{*}\) is more efficient than the estimator:

  (i) \(\overline{y}\), if the inequality (45) holds good.

  (ii) \(\overline{y}^{*}\), if \(K_{YX} > \frac{1}{2}\) and \(K_{uv} > \frac{1}{2}\,.\)

  (iii) the Shalabh and Tsai19 ratio estimator \(t_{R}\), as \(\left| \alpha \right| < 1\).

  • From (4), (10), (12) and (30), we observe that the offered product estimator \(t_{P}^{*}\) is better than the estimator:

    $$\overline{y}\,\; {\text{if}}\;R^{2} \left[ {\sigma_{X}^{2} (1 + 2\,K_{YX} ) + \sigma_{v}^{2} \left( {\eta^{2} + 2\alpha \eta \,K_{uv} } \right)} \right] < (1 - \alpha^{2} )\,\sigma_{u}^{2}$$
    (48)
    $$\overline{y}^{*} \;\;{\text{if}}\;\left[ {\sigma_{X}^{2} (1 + 2K_{YX} ) + \sigma_{v}^{2} \left( {\eta^{2} + 2\alpha \eta \,K_{uv} } \right)} \right] < 0$$
    (49)

\(t_{P}\) (due to Shalabh and Tsai19) if

$$\left[ {\sigma_{u}^{2} (1 - \alpha^{2} ) + R^{2} \sigma_{v}^{2} \left\{ {(1 - \eta^{2} ) + 2(1 - \alpha \eta )\,K_{uv} } \right\}} \right] > 0$$
(50)

It is further observed from (10) and (12) that \(MSE(\overline{y}^{*} ) < MSE(t_{P} )\), if

$$\left\{ {R^{2} \left[ {\sigma_{X}^{2} (1 + 2\,K_{YX} ) + \sigma_{v}^{2} (1 + 2\,K_{uv} )} \right] + \sigma_{u}^{2} (1 - \alpha^{2} )} \right\} > 0$$
(51)

Now we discuss two cases:

  (c) If \(\rho_{uv} = 0\), then conditions (48)–(51), respectively, reduce to:

    $$R^{2} \left[ {\sigma_{X}^{2} (1 + 2\,K_{YX} ) + \sigma_{v}^{2} \eta^{2} } \right]\, < (1 - \alpha^{2} )\,\sigma_{u}^{2} ,$$
    (52)
    $$\left[ {\sigma_{X}^{2} (1 + 2K_{YX} ) + \eta^{2} \sigma_{v}^{2} } \right] < 0,$$
    (53)
    $$\left[ {\sigma_{u}^{2} \left( {1 - \alpha^{2} } \right) + R^{2} \sigma_{v}^{2} (1 - \eta^{2} )} \right] > 0\,,$$
    (54)
    $$\left\{ {R^{2} \left[ {\sigma_{X}^{2} (1 + 2\,K_{YX} ) + \sigma_{v}^{2} } \right] + \sigma_{u}^{2} (1 - \alpha^{2} )} \right\} > 0$$
    (55)

Inequality (54) clearly shows that the recommended product estimator \(t_{P}^{*}\) is better than the Shalabh and Tsai19 product estimator \(t_{P}\), as \(\left| \alpha \right| < 1\) and \(\left| \eta \right| < 1\). Further, \(t_{P}^{*}\) is more efficient than \(\overline{y}\) and \(\overline{y}^{*}\) provided that the inequalities (52) and (53) hold, respectively. If condition (55) is satisfied, then the suggested estimator \(\overline{y}^{*}\) is better than the product estimator \(t_{P}\) due to Shalabh and Tsai19.

  (d) If \(\eta = \alpha\), then the inequalities (48)–(51), respectively, reduce to:

    $$R^{2} \left[ {\sigma_{X}^{2} (1 + 2\,K_{YX} ) + \sigma_{v}^{2} \alpha^{2} (1 + 2\,K_{uv} )} \right] < (1 - \alpha^{2} )\,\sigma_{u}^{2}$$
    (56)
$$\left[ {\sigma_{X}^{2} (1 + 2\,K_{YX} ) + \alpha^{2} \sigma_{v}^{2} (1 + 2\,K_{uv} )} \right] < 0$$
    (57)
    $$(1 - \alpha^{2} )\,\left[ {\sigma_{u}^{2} + R^{2} \sigma_{v}^{2} (1 + 2\,K_{uv} )} \right] > 0$$
    (58)
    $$\left[ {R^{2} \left\{ {\sigma_{X}^{2} (1 + 2K_{YX} ) + \sigma_{v}^{2} (1 + 2\,K_{uv} )} \right\} + \sigma_{u}^{2} (1 - \alpha^{2} )} \right] > 0$$
    (59)

From (58) it follows that the proposed product estimator \(t_{P}^{*}\) is better than the Shalabh and Tsai19 product estimator \(t_{P}\) as long as \(\left[ {\sigma_{u}^{2} + R^{2} \sigma_{v}^{2} \left( {1 + 2\,K_{uv} } \right)} \right] > 0\). The proposed product estimator \(t_{P}^{*}\) will be more efficient than \(\overline{y}\) and \(\overline{y}^{*}\) if the conditions (56) and (57), respectively, hold good. Further, the estimator \(\overline{y}^{*}\) is superior to the Shalabh and Tsai19 product estimator \(t_{P}\) as long as the inequality (59) is satisfied.

  (e) We now compare the estimators \(t_{R}^{*}\) and \(t_{P}^{*}\). From (27) and (30), we have that

\(MSE(t_{R}^{*} ) < MSE(t_{p}^{*} ),{\text{ if}}\)

$$(V_{R} + V_{RM}^{*} ) < (V_{P} + V_{PM}^{*} )$$
$${\text{i}}.{\text{e}}.,{\text{ if}}\,\,\,\rho_{YX} + \alpha \eta \left( {\frac{{\sigma_{u} }}{{\sigma_{Y} }}} \right)\left( {\frac{{\sigma_{v} }}{{\sigma_{X} }}} \right)\rho_{uv} > 0$$
(60)

provided that the ratio \(R = \frac{{\mu_{Y} }}{{\mu_{X} }}\) is non-negative and (α, η) have the same signs.

When there are no measurement errors in the auxiliary variable and/or the measurement errors associated with the study and auxiliary variables are uncorrelated, the condition (60) boils down to \(\rho_{YX} > 0,\) which is the usual condition derived under the specification of no measurement errors.

If we set α = η = 1 in (60), then we have

$$\rho_{YX} + \left( {\frac{{\sigma_{u} }}{{\sigma_{Y} }}} \right)\left( {\frac{{\sigma_{v} }}{{\sigma_{X} }}} \right)\rho_{uv} > 0$$
(61)

which is due to Shalabh and Tsai19.
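In applications, condition (60) can be checked numerically from rough guesses of the model quantities. The following R sketch is our own illustration (argument names are assumptions); it returns TRUE when the ratio estimator is to be preferred.

```r
# Sketch: condition (60). Returns TRUE when t_R^* is preferred to t_P^*,
# assuming R = mu_Y / mu_X > 0.
prefer_ratio <- function(alpha, eta, rho_YX, rho_uv, sig_Y, sig_X, sig_u, sig_v)
  rho_YX + alpha * eta * (sig_u / sig_Y) * (sig_v / sig_X) * rho_uv > 0
```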

Special case

For \(\eta = \alpha ,\) we define the ratio and product estimators for \(\mu_{Y}\) under modified correlated MEs, respectively, as:

$$t_{R}^{**} = \frac{{\overline{y}^{*} }}{{\overline{x}^{**} }}\mu_{X} ,$$
(62)
$$and\;\;t_{P}^{**} = \frac{{\overline{y}^{*} \overline{x}^{**} }}{{\mu_{X} }},$$
(63)

where \(\overline{y}^{*} = \left( {\overline{Y} + \alpha \overline{u}} \right)\) and \(\overline{x}^{**} = \left( {\overline{X} + \alpha \overline{v}} \right)\) with \(\left| \alpha \right| < 1\).

Putting \(\eta = \alpha\) in (24), (29), (27) and (30), we derive the bias and MSE of \(t_{R}^{**}\) and \(t_{P}^{**}\) up to foa, respectively, as:

$$B\left( {t_{R}^{**} } \right) = \frac{1}{n}\left( {B_{R} + B_{RM}^{**} } \right),$$
(64)
$$B\left( {t_{P}^{**} } \right) = \frac{1}{n}\left( {B_{P} + B_{PM}^{**} } \right),$$
(65)
$$MSE\left( {t_{R}^{**} } \right) = \frac{1}{n}\left( {V_{R} + V_{RM}^{**} } \right),$$
(66)
$$MSE\left( {t_{P}^{**} } \right) = \frac{1}{n}\left( {V_{P} + V_{PM}^{**} } \right),$$
(67)

where \(B_{RM}^{**} = \frac{{\alpha^{2} R\sigma_{v}^{2} }}{{\mu_{X} }}(1 - K_{uv} ),\,\)\(B_{PM}^{**} = \frac{{\alpha^{2} R\sigma_{v}^{2} }}{{\mu_{X} }}K_{uv} ,\)

\(V_{RM}^{**} = \alpha^{2} V_{RM} = \alpha^{2} \left[ {\sigma_{u}^{2} + R^{2} \sigma_{v}^{2} (1 - 2K_{uv} )} \right],\,\) and \(V_{PM}^{**} = \alpha^{2} V_{PM} = \alpha^{2} \left[ {\sigma_{u}^{2} + R^{2} \sigma_{v}^{2} (1 + 2K_{uv} )} \right]\).

From (7)–(10), (64)–(67), it can be easily proved that the proposed ratio (or, product) estimator \(t_{R}^{**} \,(or,\,\,t_{P}^{**} )\) at (62) (or, (63)) is less biased as well as more efficient than Shalabh and Tsai19 ratio (or, product) estimator \(t_{R} (or,\,\,t_{P} )\) at (2) (or, (3)) under the restriction \(\left| \alpha \right| < 1\).

From (4) and (66), we have that

$$MSE\left( {t_{R}^{**} } \right) < MSE(\overline{y})\;\;if\;\;\alpha^{2} < \frac{{\left\{ {\sigma_{u}^{2} + R^{2} \sigma_{X}^{2} (2K_{YX} - 1)} \right\}}}{{\left\{ {\sigma_{u}^{2} + R^{2} \sigma_{v}^{2} (1 - 2\,K_{uv} )} \right\}}}$$
(68)

Further from (12) and (66), we observe that \(MSE(t_{R}^{**} ) < MSE(\overline{y}^{*} )\,\) if

$$\alpha^{2} < \frac{{\sigma_{X}^{2} (2K_{YX} - 1)}}{{\sigma_{v}^{2} (1 - 2K_{uv} )}}$$
(69)

Hence, the recommended estimator \(t_{R}^{**}\) is better than \(\overline{y}\) and \(\overline{y}^{*}\), respectively, if the inequalities (68) and (69) are satisfied.

From (4) and (67), it is reflected that

$$MSE(t_{P}^{**} ) < MSE(\overline{y})\,\,\;\;if\;\;\alpha^{2} < \frac{{\left\{ {\sigma_{u}^{2} - R^{2} \sigma_{X}^{2} (1 + 2K_{YX} )} \right\}}}{{\left\{ {\sigma_{u}^{2} + R^{2} \sigma_{v}^{2} (1 + 2K_{uv} )} \right\}}}$$
(70)

Further from (12) and (67), we have that

\(MSE(t_{P}^{**} ) < MSE(\overline{y}^{*} )\) if

$$\alpha^{2} < - \frac{{\sigma_{X}^{2} (1 + 2K_{YX} )}}{{\sigma_{v}^{2} (1 + 2K_{uv} )}}$$
(71)

Thus, the recommended product estimator \(t_{P}^{**}\) is better than \(\overline{y}\) and \(\overline{y}^{*}\) provided that the inequalities (70) and (71) are satisfied, respectively.
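In practice one may want to know how small \(\alpha^{2}\) must be for these comparisons to favour \(t_{R}^{**}\) and \(t_{P}^{**}\); the R sketch below (our illustration, with assumed argument names) evaluates the upper bounds implied by (68)–(71).

```r
# Sketch: upper bounds on alpha^2 implied by (68)-(71), assuming the denominators
# are positive; a negative bound means the corresponding comparison cannot hold.
alpha_sq_bounds <- function(R, sig_X, sig_u, sig_v, K_YX, K_uv) {
  c(tR_vs_ybar      = (sig_u^2 + R^2 * sig_X^2 * (2 * K_YX - 1)) /
                      (sig_u^2 + R^2 * sig_v^2 * (1 - 2 * K_uv)),              # (68)
    tR_vs_ybar_star = sig_X^2 * (2 * K_YX - 1) / (sig_v^2 * (1 - 2 * K_uv)),   # (69)
    tP_vs_ybar      = (sig_u^2 - R^2 * sig_X^2 * (1 + 2 * K_YX)) /
                      (sig_u^2 + R^2 * sig_v^2 * (1 + 2 * K_uv)),              # (70)
    tP_vs_ybar_star = -sig_X^2 * (1 + 2 * K_YX) / (sig_v^2 * (1 + 2 * K_uv)))  # (71)
}
```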

Empirical study

To judge the performance of the recommended estimators, we have performed an empirical study using two real populations earlier considered by Bhushan et al.21,22.

Population I: Source: Gujarati and Sangeetha (2007).

\(Y_{i} \,\,and\,\,y_{i}\) are true and measured consumption expenditure, respectively.

\(X_{i} \,\,and\,\,x_{i}\) are true and measured income, respectively.

Population II: Source: The book of U.S. Census Bureau (1986).

\(Y_{i} \,\,and\,\,y_{i}\) are true and measured value of the product sold, respectively.

\(X_{i} \,\,and\,\,x_{i}\) are true and measured size of farms, respectively.

Population | N | n | \(\mu_{Y}\) | \(\mu_{X}\) | \(\sigma_{Y}^{2}\) | \(\sigma_{X}^{2}\) | \(\sigma_{u}^{2}\) | \(\sigma_{v}^{2}\) | \(\rho_{YX}\) | \(\rho_{uv}\)
I | 10 | 4 | 127 | 170 | 1278 | 3300 | 36 | 36 | 0.964 | 0.800
II | 56 | 15 | 61.59 | 75.79 | 577.44 | 155.5 | 16 | 16 | − 0.508 | − 0.418

We have used the following formulae for computing percent relative efficiencies (PREs) of various estimators of \(\mu_{Y}\) with respect to \(\overline{y}:\)

\(PRE(\overline{y}^{*} ,\,\,\overline{y}) = \frac{{\left( {\sigma_{Y}^{2} + \sigma_{u}^{2} } \right)}}{{\left( {\sigma_{Y}^{2} + \alpha^{2} \sigma_{u}^{2} } \right)}} \times 100\), \(PRE(t_{R} ,\overline{y}) = \frac{{\left( {\sigma_{Y}^{2} + \sigma_{u}^{2} } \right)}}{{\left( {V_{R} + V_{RM} } \right)}} \times 100\),

\(PRE\left( {t_{R}^{**} ,\overline{y}} \right) = \frac{{\left( {\sigma_{Y}^{2} + \sigma_{u}^{2} } \right)}}{{\left( {V_{R} + \alpha^{2} V_{RM} } \right)}} \times 100\), \(PRE(t_{P} ,\overline{y}) = \frac{{\left( {\sigma_{Y}^{2} + \sigma_{u}^{2} } \right)}}{{\left( {V_{P} + V_{PM} } \right)}} \times 100\) and

\(PRE(t_{P}^{**} ,\overline{y}) = \frac{{\left( {\sigma_{Y}^{2} + \sigma_{u}^{2} } \right)}}{{\left( {V_{P} + \alpha^{2} V_{PM} } \right)}} \times 100\).

These values are displayed in Tables 1, 2, 3 and 4.
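These PREs can be reproduced directly from the population parameters tabulated above. The R sketch below is our own illustration (function and argument names are ours); the example call uses Population I and an arbitrary \(\alpha\).

```r
# Sketch: PREs (with respect to ybar) computed from the population parameters;
# Population I is used in the example call, alpha chosen for illustration.
pre_all <- function(alpha, mu_Y, mu_X, s2_Y, s2_X, s2_u, s2_v, rho_YX, rho_uv) {
  R    <- mu_Y / mu_X
  K_YX <- rho_YX * sqrt(s2_Y / s2_X) / R
  K_uv <- rho_uv * sqrt(s2_u / s2_v) / R
  V_R  <- s2_Y + R^2 * s2_X * (1 - 2 * K_YX); V_RM <- s2_u + R^2 * s2_v * (1 - 2 * K_uv)
  V_P  <- s2_Y + R^2 * s2_X * (1 + 2 * K_YX); V_PM <- s2_u + R^2 * s2_v * (1 + 2 * K_uv)
  V0   <- s2_Y + s2_u                              # n * Var(ybar)
  100 * c(ybar_star = V0 / (s2_Y + alpha^2 * s2_u),
          t_R       = V0 / (V_R + V_RM),
          t_R_2star = V0 / (V_R + alpha^2 * V_RM),
          t_P       = V0 / (V_P + V_PM),
          t_P_2star = V0 / (V_P + alpha^2 * V_PM))
}
pre_all(alpha = 0.5, mu_Y = 127, mu_X = 170, s2_Y = 1278, s2_X = 3300,
        s2_u = 36, s2_v = 36, rho_YX = 0.964, rho_uv = 0.800)
```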

Table 1 Biases, MSEs and PREs (with respect to \(\overline{y}\)) of estimators \(t_{R} ,t_{P}\) for Populations I and II.
Table 2 Variances/MSEs and PREs (with respect to \(\overline{y}\)) of \(\overline{y}^{*}\) for several values of \(\alpha\).
Table 3 Biases, MSEs and PREs (with respect to \(\overline{y}\)) of \(t_{R}^{**}\) for several values of \(\alpha\).
Table 4 Biases, MSEs and PREs (with respect to \(\overline{y}\)) of \(t_{P}^{**}\) for several values of \(\alpha.\)

Computation time for the empirical study: We have noted down the computation time for the numerical study. The time taken for each value of α is 0.073 s while the total computation time is 1.022 s (= 0.073 × 14).

Simulation study

Following Shalabh and Tsai19, we conducted a Monte Carlo simulation study in R software to judge the performance of the suggested estimators. We considered the following combinations of the parameters: n = 20 and 100; \(\mu_{X} = 20;\) \(\mu_{Y} = 30;\)\(\sigma_{X}^{2} = 1;\) \(\sigma_{Y}^{2} = 1;\)\(\sigma_{u}^{2} = 1;\)\(\sigma_{v}^{2} = 1;\) α = η = 1, 0.5, 0.1, 0.05 and − 0.5; \(\rho_{XY}\)  =  − 0.9, − 0.5, 0, 0.5, 0.9 and \(\rho_{uv}\)  =  − 0.9, − 0.5, 0, 0.5, 0.9. For these combinations, we followed the steps given below (a condensed R sketch of these steps is provided after the list):

  1. Generated data on X, Y, u, and v using a four-variate normal distribution with mean vector \(\left( {\mu_{X} , \, \mu_{Y} , \, 0, \, 0} \right)^{\prime }\) and variance–covariance matrix given as:

    $$\left( {\begin{array}{*{20}c} {\sigma_{X}^{2} } & {\rho_{XY} \sigma_{X} \sigma_{Y} } & 0 & 0 \\ {\rho_{XY} \sigma_{X} \sigma_{Y} } & {\sigma_{Y}^{2} } & 0 & 0 \\ 0 & 0 & {\sigma_{u}^{2} } & {\rho_{uv} \sigma_{u} \sigma_{v} } \\ 0 & 0 & {\rho_{uv} \sigma_{u} \sigma_{v} } & {\sigma_{v}^{2} } \\ \end{array} } \right)$$
  2. Estimated the values of the suggested estimators \((t_{R}^{**} \,and\,t_{P}^{**} )\) as well as \(\overline{y},\,\overline{y}^{*} ,\,t_{R} \,and\,t_{P}\) on the basis of the generated data for both sample sizes.

  3. Computed the values of the empirical biases and mean squared errors (MSEs) of all estimators by considering 5000 replications.
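The following condensed R sketch of these steps is our own illustration, not the authors' code; it uses MASS::mvrnorm for step 1, covers a subset of the estimators, and uses a reduced number of replications in the example call.

```r
# Condensed sketch of the simulation steps (our illustration, not the authors' code).
library(MASS)   # for mvrnorm()

sim_bias_mse <- function(n, alpha, eta, rho_XY, rho_uv, B = 5000,
                         mu_X = 20, mu_Y = 30, s2_X = 1, s2_Y = 1, s2_u = 1, s2_v = 1) {
  Sigma <- matrix(c(s2_X, rho_XY * sqrt(s2_X * s2_Y), 0, 0,
                    rho_XY * sqrt(s2_X * s2_Y), s2_Y, 0, 0,
                    0, 0, s2_u, rho_uv * sqrt(s2_u * s2_v),
                    0, 0, rho_uv * sqrt(s2_u * s2_v), s2_v), nrow = 4)
  est <- replicate(B, {
    d <- mvrnorm(n, mu = c(mu_X, mu_Y, 0, 0), Sigma = Sigma)          # step 1
    X <- d[, 1]; Y <- d[, 2]; u <- d[, 3]; v <- d[, 4]
    ys <- mean(Y + alpha * u); xs <- mean(X + eta * v)                # modified model (17)
    c(ybar_star = ys, t_R_2star = ys * mu_X / xs, t_P_2star = ys * xs / mu_X)  # step 2
  })
  list(bias = rowMeans(est) - mu_Y, mse = rowMeans((est - mu_Y)^2))   # step 3
}
sim_bias_mse(n = 20, alpha = 0.5, eta = 0.5, rho_XY = 0.9, rho_uv = 0.9, B = 1000)
```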

The biases and MSEs of the estimators for the no measurement errors case, i.e., \(\sigma_{u}^{2}\)  = 0 and \(\sigma_{v}^{2}\)  = 0, for n = 20 and 100 are noted in Table 5. The biases of all estimators are given in Tables 6 and 8 for n = 20 and 100, respectively, while the MSEs of these estimators are recorded in Tables 7 and 9 for n = 20 and 100, respectively, for various combinations of α, η, \(\rho_{XY}\) and \(\rho_{uv}\).

Table 5 Bias and MSE of the estimators for no measurement errors’ case (i.e., for \(\sigma_{u}^{2}\)  = 0 and \(\sigma_{v}^{2}\) = 0).
Table 6 Bias of the estimators for n = 20 and several values of \(\alpha\) and η.
Table 7 MSE of the estimators for n = 20 and several values of \(\alpha\) and η.
Table 8 Bias of the estimators for n = 100 and several values of \(\alpha\) and η.
Table 9 MSE of the estimators for n = 100 and several values of \(\alpha\) and η.

Computation time for the simulation study: We have also noted down the computation time for the simulation study. The time taken for one iteration (for each combination of α, \(\rho_{xy}\) and \(\rho_{uv}\)) is 2.138 s, while the total computation time is 4.632333 min (= 2.138 × (5 × 5 × 5 + 5)/60).

Results and discussions

From Tables 1, 2, 3 and 4, we observe the following:

  1. The proposed ratio estimator \(t_{R}^{**}\) has a bias only marginally larger than that of the ratio estimator \(t_{R}\) in Population I, while it is less biased than \(t_{R}\) for Population II. Further, the suggested product estimator \(t_{P}^{**}\) is less biased (in the sense of absolute bias) than the product estimator \(t_{P}\) for both Populations I and II.

  2. The recommended unbiased estimator \(\overline{y}^{*}\) is more efficient than the conventional unbiased estimator \(\overline{y}\), with a marginal gain in efficiency, for \(\left| \alpha \right| < 1\) in Populations I and II.

  3. The recommended ratio estimator \(t_{R}^{**}\) is more efficient than \(\overline{y},\,\,\overline{y}^{*} ,\,t_{P}\) and the Shalabh and Tsai19 ratio estimator \(t_{R}\), with a considerable gain in efficiency, under the condition \(\left| \alpha \right| < 1\) in Population I. In Population II it is inferior to \(\overline{y}\) and \(\overline{y}^{*}\) owing to the negative correlation between \(\left( {Y\& X} \right)\) and \((u_{i} \, \& \, v_{i} )\), but \(t_{R}^{**}\) remains superior to the Shalabh and Tsai19 product estimator \(t_{P}\).

  4. The recommended product estimator \(t_{P}^{**}\) is better than the estimators \(\overline{y},\,\,\overline{y}^{*}\) and the Shalabh and Tsai19 product estimator \(t_{P}\) under the condition \(\left| \alpha \right| < 1\) in Population II. In Population I it is inferior to the estimators \(\overline{y}\) and \(\overline{y}^{*}\) owing to the positive correlation between \(\left( {Y\& X} \right)\) and \(\left( {u_{i} \, \& \, v_{i} } \right)\), but it is superior to \(t_{P}\) with a very marginal gain in efficiency; this happens because of the moderate correlation between \((X\& Y)\) and \((u_{i} \, \& \, v_{i} )\) in Population I.

Similarly, from Tables 5, 6, 7, 8 and 9, we can compare the biases and MSEs both in the absence and in the presence of measurement errors. From these tables, we note the following:

  1. Tables 5, 6, 7, 8 and 9 clearly reveal higher values of bias and variance/MSE in the presence of measurement errors, i.e., \(\sigma_{u}^{2} = 1\,\,and\, \, \sigma_{v}^{2} = 1\), than under no measurement errors, i.e., \(\sigma_{u}^{2} = 0\,\,and\, \, \sigma_{v}^{2} = 0\). This indicates that the properties of the estimators are affected by the presence of measurement errors.

  2. The proposed unbiased estimator \(\overline{y}^{*}\) has smaller bias and MSE than the conventional unbiased estimator \(\overline{y}\) for \(\left| \alpha \right| < 1\) and both sample sizes, i.e., n = 20, 100.

  3. From Tables 6 and 8, the biases of the suggested estimators \(t_{R}^{**}\) and \(t_{P}^{**}\) can be compared in the presence of measurement errors. The biases of \(t_{R}^{**}\) and \(t_{P}^{**}\) are clearly affected by the value of ρuv and differ substantially between ρuv = 0 and ρuv =  ± 0.9, indicating the significant impact of correlated measurement errors. The bias decreases as the sample size increases, but there is no apparent reduction in the differences between the bias values for ρuv = 0 and ρuv =  ± 0.9. We can therefore conclude that correlated measurement errors influence the bias of the estimators relative to uncorrelated measurement errors.

  4. From Tables 7 and 9, we can observe a clear impact of the sign of the correlation between measurement errors on the MSE values of the estimators \(t_{R}^{**}\) and \(t_{P}^{**}\). The MSE of \(t_{R}^{**}\) (when the study and auxiliary variables are highly positively correlated, i.e., ρXY = 0.9) is lowest for positively correlated measurement errors, i.e., for ρuv = 0.9. The MSE of \(t_{R}^{**}\) decreases as ρXY increases for ρXY > 0, although the extent of ρuv also affects the rate of decrease and the value of the MSE. As expected, the MSE decreases as the sample size increases for all the parameter values considered with ρXY > 0.

  5. In the same way, the MSE of the estimator \(t_{P}^{**}\) (when the study and auxiliary variables are highly negatively correlated, i.e., ρXY = − 0.9) is lowest for negatively correlated measurement errors, i.e., for ρuv = − 0.9. The MSE of \(t_{P}^{**}\) decreases as the magnitude of ρXY increases for ρXY < 0. This clearly indicates that the presence of measurement errors affects the MSEs of \(t_{R}^{**}\) and \(t_{P}^{**}\).

  6. Tables 5, 6, 7 and 8 clearly depict that the biases and MSEs of the suggested estimators are lowest at α = η = 0.05.

Thus, the recommended ratio \((t_{R}^{**} )\) and product \((t_{P}^{**} )\) estimators are useful in practice.

Conclusion

This paper has introduced a modified correlated MEs model. The proposed model involves a constant \(\alpha\) with the restriction \(\left| \alpha \right| < 1,\) termed the ‘error control parameter’; choosing \(\alpha\) close to zero controls (dampens) the errors in the observations. For \(\alpha = 1,\) the proposed correlated MEs model reduces to the Shalabh and Tsai19 model. We have suggested ratio and product estimators for the population mean (\(\mu_{Y}\)) of the study variable Y in the presence of an auxiliary variable X when correlated MEs contaminate the observations on both the study and auxiliary variables.

The expressions for the bias and MSE of the recommended ratio and product estimators are determined up to foa under the SRSWOR sampling scheme. Realistic conditions are derived under which the recommended ratio and product estimators perform better than the conventional unbiased estimators \((\overline{y},\,\,\overline{y}^{*} )\) and the Shalabh and Tsai19 ratio \((t_{R} )\) and product \((t_{P} )\) estimators. An empirical study and a simulation study have also been performed in R software to exhibit the performance of the recommended ratio and product estimators over the usual unbiased estimators and the ratio and product estimators due to Shalabh and Tsai19. It is observed that when the ‘error control parameter’ is close to zero, the recommended ratio and product estimators yield a larger gain in efficiency. Thus, we recommend the proposed estimators for use in practice.