Abstract
In this article, we suggest a new improved estimator of the finite population variance under simple random sampling, which uses two auxiliary variables to improve efficiency. The expressions for the bias and mean square error are derived up to the first order of approximation. To evaluate the efficiency of the new estimator, we conduct a numerical study using four real data sets together with a simulation study. The results show that the suggested estimator attains the minimum mean square error and the highest percentage relative efficiency among all the existing estimators considered. These findings demonstrate the practical value of incorporating auxiliary variables into the estimation process, highlight potential applications in various fields, and point to directions for future research.
Introduction
The use of auxiliary information can increase the precision of population parameter estimates in survey sampling, provided that it is strongly associated with the study variable. To obtain precise estimates of unknown population characteristics, auxiliary information is essential both at the design stage and at the estimation stage. Estimating the population variance is especially important for populations that are likely to be skewed, and the accuracy of this estimation has been the focus of many research studies. By reducing the dispersion of the estimates obtained from different samples, we obtain a more accurate representation of the population. Variance is a characteristic that describes a statistical population, and considerable effort has been made to estimate it as accurately as possible. The goal is to reduce the scattering of the calculated values, and appropriate use of auxiliary variables in survey sampling results in a significant decrease in the variance of the estimator of the unknown population parameter(s). In many situations, it is necessary to estimate the population variance of the study variable. The variance estimate is crucial in a number of disciplines, such as agriculture, health, biology, and business, where we confront populations that are likely to be skewed. Variability surrounds everything we do, and it is widely held that no two things or persons are exactly alike. For instance, a farmer must be well aware of how the climate varies over time in order to decide how to sow his land, and a clinician must have a thorough understanding of the variation in human blood pressure and fever in order to administer the appropriate care.
The estimation of population variance has been a matter of interest for several researchers over the years. Garcia and Cebrian1 suggested a ratio estimator for the population variance. Upadhyaya et al.2 recommended a broad class of estimators for the variance of the ratio estimator. Chandra and Singh3 discussed a family of estimators for the population variance. Arcos et al.4 suggested incorporating the available auxiliary information in variance estimation. Kadilar and Cingi5 discussed improvement in variance estimation in simple random sampling. Grover6 discussed a correction note on improvement in variance estimation using auxiliary information. Sharma and Singh7 discussed a generalized class of estimators for the finite population variance. Singh and Solanki8 suggested improved estimation of the finite population variance using auxiliary information. Yadav and Kadilar9 recommended a two-parameter variance estimator using auxiliary information. Adichwal et al.10 proposed a generalized class of estimators for the population variance using an auxiliary attribute. Adichwal et al.11 recommended a generalized class of estimators for the population variance using information on two auxiliary variables. Lone and Tailor12 suggested estimation of the population variance in simple random sampling. Singh and Khalid13 discussed an effective estimation strategy for the population variance in two-phase successive sampling under random non-response. Audu et al.14 proposed difference-cum-ratio estimators for estimating the finite population coefficient of variation in simple random sampling. Shahzad et al.15 suggested new variance estimators for the calibration approach under stratified random sampling. Singh and Khalid16 suggested a composite class of estimators to deal with variance estimation under random non-response in two-occasion successive sampling. Zaman and Bulut17 suggested a new class of robust ratio estimators for the finite population variance.
Shahzad et al.18 discussed the use of calibration constraints and linear moments for variance estimation under stratified adaptive cluster sampling. Ashutosh et al.19 suggested a calibration approach for variance estimation in small domains. Ahmad20 discussed improved estimation of the population variance under stratified random sampling. Ahmad21 discussed an improved variance estimator using a dual auxiliary variable under simple random sampling. Ahmad et al.22 suggested an enhanced generalized class of estimators under simple random sampling.
Numerous authors have contributed to the estimation of population variance, as mentioned above. In this article, we suggest an enhanced estimator that makes a valuable contribution and highlights the importance of auxiliary variables in estimating the population variance. Using actual data and a simulation study, we show that the suggested estimator has the minimum mean square error and the highest percentage relative efficiency compared with recent well-known existing estimators.
The primary goals of the current work are:
-
1.
To propose a new estimator for finite population variance using two auxiliary variables under simple random sampling.
-
2.
The properties, i.e., the bias and mean squared error, of the proposed estimator are derived up to the first order of approximation.
-
3.
Through the use of actual data and a simulation study, the application of the proposed estimator is illustrated.
-
4.
The proposed estimator is compared with the existing estimators in terms of minimum mean square error.
-
5.
Our proposed estimator provides a novel and valuable contribution to the field of population variance estimation, and we believe it has the potential to be useful in a wide range of applications.
Notations and symbols
Consider a population \(\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}\) = \(\left({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{1},{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{2},\dots ,{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{\mathrm{N}}\right)\) of size N, from which a sample of size n is drawn by simple random sampling without replacement (SRSWOR). Let Y be the study variable, and let \({X}_{1}\) and \({X}_{2}\) denote the two auxiliary variables. The population variances of Y, \({X}_{1}\), and \({X}_{2}\) are denoted by \({S}_{y}^{2}\), \({S}_{x1}^{2}\), and \({S}_{x2}^{2}\). Let \({s}_{y}^{2}\) = \(\frac{{\sum_{i=1}^{n}\left({y}_{i}-\overline{y }\right)}^{2}}{n-1}\), \({s}_{x1}^{2}\) = \(\frac{{\sum_{i=1}^{n}\left({x}_{1i}-{\overline{x} }_{1}\right)}^{2}}{n-1}\), and \({s}_{x2}^{2}\) = \(\frac{{\sum_{i=1}^{n}\left({x}_{2i}-{\overline{x} }_{2}\right)}^{2}}{n-1}\) be the sample variances, and let \({S}_{y}^{2}\) = \(\frac{{\sum_{i=1}^{N}({y}_{i}-\overline{Y })}^{2}}{N-1}\), \({S}_{x1}^{2}\) = \(\frac{{\sum_{i=1}^{N}\left({x}_{1i}-{\overline{X} }_{1}\right)}^{2}}{N-1}\), and \({S}_{x2}^{2}\) = \(\frac{{\sum_{i=1}^{N}\left({x}_{2i}-{\overline{X} }_{2}\right)}^{2}}{N-1}\) be the population variances. Let \(\overline{y }\), \({\overline{x} }_{1}\), and \({\overline{x} }_{2}\) denote the sample means.
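As a concrete illustration of this notation, the sketch below builds a small synthetic population (the variables and parameter values are invented purely for illustration) and computes the population variances, the sample variances under SRSWOR, and the factor \(\left(\frac{1}{n}-\frac{1}{N}\right)\):

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 200, 30
y = rng.normal(10.0, 3.0, N)        # study variable Y
x1 = y + rng.normal(0.0, 1.0, N)    # auxiliary variable X1
x2 = y + rng.normal(0.0, 2.0, N)    # auxiliary variable X2

# Population variances S_y^2, S_x1^2, S_x2^2 (divisor N - 1, as defined above)
S2_y, S2_x1, S2_x2 = (v.var(ddof=1) for v in (y, x1, x2))

# One SRSWOR sample and the sample variances s_y^2, s_x1^2, s_x2^2
idx = rng.choice(N, size=n, replace=False)
s2_y, s2_x1, s2_x2 = (v[idx].var(ddof=1) for v in (y, x1, x2))

lam = 1.0 / n - 1.0 / N             # the factor (1/n - 1/N)
```

Here `ddof=1` gives the n − 1 (or N − 1) divisor used in the definitions above.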
To derive the bias and mean square error, we consider the following error terms:
where \({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{lmn}^{*}\) = \(\left({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{lmn} - 1\right)\) and \({\reflectbox{$\lambda$}}\) = \(\left(\frac{1}{n}-\frac{1}{N}\right)\); here l, m, and n are non-negative integers, \({\upupsilon }_{200}\), \({\upupsilon }_{020}\), and \({\upupsilon }_{002}\) are the second-order central moments of Y, \({X}_{1}\), and \({X}_{2}\), and \({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{lmn}\) is the moment ratio.
The article has been structured as follows:
In Section "Introduction", we provide a brief overview of the topic and highlight the importance of estimating the population variance. In Section "Notations and symbols", we introduce the notation used throughout the article. In Section "Existing estimators", we review some of the existing estimators of the population variance. In Section "Proposed estimator", we propose a new estimator of the population variance under simple random sampling. In Section "Efficiency assessment", we compare the proposed estimator with the existing estimators in terms of MSE. In Section "Numerical study", we present a numerical study that illustrates the performance of the proposed estimator against the existing estimators. In Section "Simulation study", we perform simulations to validate the results of the numerical study and provide additional insight into the performance of the estimators. In Section "Discussion", we discuss the results, the limitations of the proposed estimator, and the scope for future research, and provide concluding remarks.
Existing estimators
In this section, we study different variance estimators based on simple random sampling that are available in the literature.
-
(i)
The usual estimator \({\mathrm{T}}_{0}\), is given by:
$${\mathrm{T}}_{0}=\frac{{\sum_{i=1}^{n}\left({y}_{i}-\overline{y }\right)}^{2}}{n-1}$$(1)The variance of \({\mathrm{T}}_{0}\) is given by:
$$\mathrm{Var}\left({\mathrm{T}}_{0}\right)={\reflectbox{$\lambda$}}{S}_{y}^{4}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}$$(2) -
(ii)
Isaki23 suggested a ratio estimator, which is given by:
$${\mathrm{T}}_{1}={s}_{y}^{2}\left(\frac{{S}_{x}^{2}}{{s}_{x}^{2}}\right)$$(3)The bias and mean square error of \({\mathrm{T}}_{1}\) are given as:
$$\mathrm{Bias}\left({\mathrm{T}}_{1}\right)={\reflectbox{$\lambda$}}{S}_{y}^{2}\left\{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}\right\},$$and
$$\mathrm{MSE}\left({\mathrm{T}}_{1}\right)\cong {\reflectbox{$\lambda$}}{S}_{y}^{4}\left\{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}+{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-2{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}\right\}.$$(4) -
(iii)
The difference type estimator is given by:
$${\mathrm{T}}_{2}={s}_{y}^{2}+\mathrm{k}\left({S}_{x}^{2}-{s}_{x}^{2}\right).$$(5)where \(\mathrm{k}\) is a constant whose optimal value is \(\mathrm{k}\) = \(\left[\frac{{S}_{y}^{2}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}}{{S}_{x}^{2}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}}\right]\).
The minimum variance at the optimal value of \(\mathrm{k}\) is given by:
$$\mathrm{Var}\left({\mathrm{T}}_{2}\right)={S}_{y}^{4}{{\reflectbox{$\lambda$}}{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}\left(1-{\rho }^{2}\right),$$(6)where \(\rho\) = \(\frac{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}}{{\left({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}\right)}^\frac{1}{2}{\left({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\right)}^\frac{1}{2}}\) .
-
(iv)
The difference type estimator is given by:
$${\mathrm{T}}_{3}=\left\{{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{1}{s}_{y}^{2}+{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{2}\left({S}_{x}^{2}-{s}_{x}^{2}\right)\right\},$$(7)where \({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{1}\) and \({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{2}\) are unknown constants, whose optimal values are:
$${{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{1}=\frac{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}}{{\reflectbox{$\lambda$}}\left\{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*2}\right\}+{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}},$$and
$${{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{2}=\frac{{S}_{y}^{2} {{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}}{{S}_{x}^{2}\left\{{\reflectbox{$\lambda$}}\left({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*2}\right)+{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\right\}}.$$The bias and minimum MSE at the optimal value of \({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{1}\) and \({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{2}\), are given by:
$$\mathrm{Bias}\left({\mathrm{T}}_{3}\right)={S}_{y}^{2}\left\{\frac{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}}{{\reflectbox{$\lambda$}}\left\{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*2}\right\}+{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}}-1\right\},$$and
$$\mathrm{MSE}\left({\mathrm{T}}_{3}\right)=\frac{{\reflectbox{$\lambda$}}{S}_{y}^{4} \left[{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*2}\right]}{{\reflectbox{$\lambda$}}\left[{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*2}\right]+{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}}$$(8) -
(v)
The exponential ratio and product type estimators of Singh et al.24 are given by:
$${{\mathrm{T}}_{4}=s}_{y}^{2}\mathrm{exp}\left\{\frac{{S}_{x}^{2}-{s}_{x}^{2}}{{S}_{x}^{2}+{s}_{x}^{2}}\right\}$$(9)$${{\mathrm{T}}_{5}=s}_{y}^{2}\mathrm{exp}\left\{\frac{{s}_{x}^{2}-{S}_{x}^{2}}{{s}_{x}^{2}+{S}_{x}^{2}}\right\}$$(10)The bias and MSE of \({\mathrm{T}}_{4}\) and \({\mathrm{T}}_{5}\) are given by:
$$\mathrm{Bias}\left({\mathrm{T}}_{4}\right)={\reflectbox{$\lambda$}}{S}_{y}^{2}\left\{\frac{3}{8}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-\frac{1}{2}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}\right\},\mathrm{MSE}\left({\mathrm{T}}_{4}\right)={\reflectbox{$\lambda$}}{S}_{y}^{4}\left[{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}+\frac{1}{4}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}+\frac{1}{4}\right]$$(11)and
$$\mathrm{Bias}\left({\mathrm{T}}_{5}\right)={\reflectbox{$\lambda$}}{S}_{y}^{2}\left\{\frac{1}{2}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}-\frac{1}{8}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\right\},\mathrm{MSE}\left({\mathrm{T}}_{5}\right)={\reflectbox{$\lambda$}}{S}_{y}^{4}\left[{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}+\frac{1}{4}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}+\frac{9}{4}\right]$$(12) -
(vi)
Grover and Kaur25 suggested the following estimator:
$${\mathrm{T}}_{6}=\left[{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{3}{s}_{y}^{2}+{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{4}\left({S}_{x}^{2}-{s}_{x}^{2}\right)\right]\mathrm{exp}\left(\frac{{S}_{x}^{2}-{s}_{x}^{2}}{{S}_{x}^{2}+{s}_{x}^{2}}\right)$$(13)The bias of \({\mathrm{T}}_{6}\), is given by:
$$\mathrm{Bias}\left({\mathrm{T}}_{6}\right)=\left[-{S}_{y}^{2}+{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{3}{S}_{y}^{2}\left\{1+\frac{3}{8} {{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-\frac{1}{2}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}\right\}+\frac{1}{2}{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{4}{S}_{x}^{2}{\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\right],$$The minimum values of \({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{3}\) and \({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{4}\) are given by:
$${{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{3}=\frac{\left\{8{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*2}\right\}}{8\left[{\reflectbox{$\lambda$}}\left({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*2}\right)+{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\right]},$$and
$${{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{4}=\frac{{S}_{y}^{2}\left({\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*2}-{\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}+8{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}-4\left\{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{\reflectbox{$\lambda$}}\left({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*2}\right)\right\}\right)}{8{ S}_{x}^{2}\left[{\reflectbox{$\lambda$}}\left\{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*2}\right\}+{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\right]}.$$The minimum MSE at the optimum values of \({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{3}\) and \({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{4}\), are given by:
$$\mathrm{MSE}\left({\mathrm{T}}_{6}\right)=\frac{{S}_{y}^{4}\left[64\left\{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*2}\right\}-{\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*3}-16{\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*} \left\{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*2}\right\}\right]}{64\left\{{\reflectbox{$\lambda$}}\left({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*2}\right)+{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\right\}}.$$(14)
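To make these formulas concrete, the following sketch computes the point estimates of Eqs. (3), (5), (9), and (10) on a synthetic population (an assumption of ours, not one of the data sets used later) and verifies numerically that the minimum variance of Eq. (6) cannot exceed the variance of the usual estimator in Eq. (2):

```python
import numpy as np

rng = np.random.default_rng(42)
N, n = 600, 50
X = rng.gamma(4.0, 2.0, N)                 # auxiliary variable
Y = 1.5 * X + rng.normal(0.0, 3.0, N)      # study variable

# Population quantities and starred moment ratios
S2_y, S2_x = Y.var(ddof=1), X.var(ddof=1)
dy, dx = Y - Y.mean(), X - X.mean()
mu20, mu02 = (dy**2).mean(), (dx**2).mean()
l400 = (dy**4).mean() / mu20**2 - 1                 # lambda*_400
l040 = (dx**4).mean() / mu02**2 - 1                 # lambda*_040
l220 = (dy**2 * dx**2).mean() / (mu20 * mu02) - 1   # lambda*_220
lam = 1.0 / n - 1.0 / N

# One SRSWOR sample
idx = rng.choice(N, size=n, replace=False)
s2_y, s2_x = Y[idx].var(ddof=1), X[idx].var(ddof=1)

T1 = s2_y * S2_x / s2_x                               # Eq. (3), Isaki's ratio
k_opt = S2_y * l220 / (S2_x * l040)                   # optimal k
T2 = s2_y + k_opt * (S2_x - s2_x)                     # Eq. (5)
T4 = s2_y * np.exp((S2_x - s2_x) / (S2_x + s2_x))     # Eq. (9)
T5 = s2_y * np.exp((s2_x - S2_x) / (s2_x + S2_x))     # Eq. (10)

# Eq. (2) vs. Eq. (6): the optimal difference estimator never does worse
var_T0 = lam * S2_y**2 * l400
rho = l220 / np.sqrt(l400 * l040)   # correlation of the squared deviations
var_T2 = var_T0 * (1 - rho**2)
```

Since \(\rho\) is the correlation between \((y-\overline{Y})^{2}\) and \((x-\overline{X})^{2}\), \(|\rho |\le 1\), which is why Eq. (6) never exceeds Eq. (2); note also that the product of the two exponential estimators is identically \(s_{y}^{4}\).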
Proposed estimator
The use of auxiliary variables can improve the precision and efficiency of an estimator during both the design and estimation stages. Taking motivation from the work of Ahmad et al.26, we propose an enhanced estimator of the finite population variance using two auxiliary variables under simple random sampling. The suggested estimator is more efficient and flexible than all the existing estimators considered in this study; it is given by:
where \({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{5}\), \({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{6}\) and \({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{7}\) are unknown constants.
After expanding the above equation, we have
The bias and mean square error of \({\mathrm{T}}_{prop}\) are given by:
The optimal values of \({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{5}\), \({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{6}\), and \({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{7}\) are obtained by minimizing (17), and are given by:
and
The minimum MSE at the optimal values \({{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{5}}_{(Opt)}\), \({{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{6}}_{(Opt)}\) and \({{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{7}}_{(Opt)}\) is given by:
where
Efficiency assessment
In this section, we compare the proposed estimator with existing estimators in terms of mean square error.
-
(1)
$$\mathrm{Var}\left({\mathrm{T}}_{0}\right)-\mathrm{MSE}\left({\mathrm{T}}_{prop}\right)>0$$$${S}_{y}^{4}{\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}-\frac{{\reflectbox{$\lambda$}}{S}_{y}^{2}\left[64\left({\upgamma }^{*}+1\right)-{\reflectbox{$\lambda$}}\left(\frac{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*2}}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}\right)-16 {\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\left({\upgamma }^{*}+1\right)\right]}{64\left\{\frac{1}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}+{\reflectbox{$\lambda$}}\left({\upgamma }^{*}+1\right)\right\}}>0$$(19)$$\frac{{\reflectbox{$\lambda$}}{S}_{y}^{2}\left[64{S}_{y}^{2}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}\left\{{\Psi }_{1}\right\}+\left[16{\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\left({\upgamma }^{*}+1\right)-64{\upgamma }^{*}+{\reflectbox{$\lambda$}}\left(\frac{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*2}}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}\right)-64\right]\right]}{64{\Psi }_{1}}>0$$(20)
where \({\Psi }_{1}\)=\(\frac{1}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}+{\reflectbox{$\lambda$}}\left({\upgamma }^{*}+1\right),\)
$${\Psi }_{2}=\left[64{\upgamma }^{*}+64-{\reflectbox{$\lambda$}}\left(\frac{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*2}}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}\right)-16{\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\left({\upgamma }^{*}+1\right)\right],$$$${\Psi }_{3}=\frac{1}{4}{\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\left({\upgamma }^{*}+1\right)-{\upgamma }^{*}+\frac{1}{64}{\reflectbox{$\lambda$}}\left(\frac{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*2}}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}\right)-1$$ -
(2)
$$\mathrm{MSE}\left({\mathrm{T}}_{1}\right)-\mathrm{MSE}\left({\mathrm{T}}_{prop}\right)>0$$$${\reflectbox{$\lambda$}}{S}_{y}^{4} \left\{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}+{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-2{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}\right\}-\frac{{\reflectbox{$\lambda$}}{S}_{y}^{2}\left[64\left({\upgamma }^{*}+1\right)-{\reflectbox{$\lambda$}}\left(\frac{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*2}}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}\right)-16 {\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\left({\upgamma }^{*}+1\right)\right]}{64{\Psi }_{1}}>0$$(21)$$\frac{{\reflectbox{$\lambda$}}{S}_{y}^{2}\left({S}_{y}^{2}\left\{{\mathcal{Y}}\right\}{\Psi }_{1}+{\Psi }_{3}\right)}{{\Psi }_{1}}>0$$(22)
-
(3)
$$\mathrm{Var}\left({\mathrm{T}}_{2}\right)-\mathrm{MSE}\left({\mathrm{T}}_{prop}\right)>0$$$${S}_{y}^{4}{{\reflectbox{$\lambda$}}{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}\left(1-{\rho }^{2}\right)-\frac{{\reflectbox{$\lambda$}}{S}_{y}^{2}\left[64\left({\upgamma }^{*}+1\right)-{\reflectbox{$\lambda$}}\left(\frac{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*2}}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}\right)-16 {\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\left({\upgamma }^{*}+1\right)\right]}{64\left\{\frac{1}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}+{\reflectbox{$\lambda$}}\left({\upgamma }^{*}+1\right)\right\}}>0$$(23)$$\frac{{\reflectbox{$\lambda$}}{S}_{y}^{2}\left({S}_{y}^{2}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}\left(1-{\rho }^{2}\right){\Psi }_{1}+{\Psi }_{3}\right)}{{\Psi }_{1}}>0$$(24)
-
(4)
$$\mathrm{MSE}\left({\mathrm{T}}_{3}\right)-\mathrm{MSE}\left({\mathrm{T}}_{prop}\right)>0$$$$\frac{{\reflectbox{$\lambda$}}{S}_{y}^{4}\left[{\mathcal{Y}}\right]}{{\reflectbox{$\lambda$}}\left[{\mathcal{Y}}\right]+{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}}-\frac{{\reflectbox{$\lambda$}}{S}_{y}^{2}\left[64\left({\upgamma }^{*}+1\right)-{\reflectbox{$\lambda$}}\left(\frac{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*2}}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}\right)-16 {\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\left({\upgamma }^{*}+1\right)\right]}{64\left\{\frac{1}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}+{\reflectbox{$\lambda$}}\left({\upgamma }^{*}+1\right)\right\}}>0$$(25)$$\frac{\frac{1}{64}\left[{\reflectbox{$\lambda$}}{S}_{y}^{2}\left({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}{\Psi }_{2}\right)+{{\Psi }_{1}S}_{y}^{2}+{\Psi }_{2}\left({\mathcal{Y}}\right)\right]}{{\reflectbox{$\lambda$}}(\left[{\mathcal{Y}}\right]+{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}){\Psi }_{1}}>0.$$(26)
where \(\mathcal{Y}\) = \({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*2}\).
-
(5)
$$\mathrm{MSE}\left({\mathrm{T}}_{4}\right)-\mathrm{MSE}\left({\mathrm{T}}_{prop}\right)>0$$$${\reflectbox{$\lambda$}}{S}_{y}^{4}\left[{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}+\frac{1}{4}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}+\frac{1}{4} \right]-\frac{{\reflectbox{$\lambda$}}{S}_{y}^{2}\left[64\left({\upgamma }^{*}+1\right)-{\reflectbox{$\lambda$}}\left(\frac{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*2}}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}\right)-16 {\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\left({\upgamma }^{*}+1\right)\right]}{64\left\{\frac{1}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}+ {\reflectbox{$\lambda$}}\left({\upgamma }^{*}+1\right)\right\}}>0$$(27)$$\frac{{\reflectbox{$\lambda$}}{S}_{y}^{2}\left[{S}_{y}^{2}\left\{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}+\frac{1}{4}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}+\frac{1}{ 4} \right\}{\Psi }_{1}+{\Psi }_{3} \right]}{{\Psi }_{1}}>0$$(28)
-
(6)
$$\mathrm{MSE}\left({\mathrm{T}}_{5}\right)-\mathrm{MSE}\left({\mathrm{T}}_{prop}\right)>0$$$${\reflectbox{$\lambda$}}{S}_{y}^{4}\left[{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}+\frac{1}{4}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}+\frac{9}{4} \right]-\frac{{\reflectbox{$\lambda$}}{S}_{y}^{2}\left[64\left({\upgamma }^{*}+1\right)-{\reflectbox{$\lambda$}}\left(\frac{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*2}}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}\right)-16 {\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\left({\upgamma }^{*}+1\right)\right]}{64\left\{\frac{1}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}+{\reflectbox{$\lambda$}}\left({\upgamma }^{*}+1\right)\right\}}>0$$(29)$$\frac{{\reflectbox{$\lambda$}}{S}_{y}^{2}\left[{S}_{y}^{2}\left\{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}+\frac{1}{4}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}+\frac{9}{4} \right\}{\Psi }_{1}+{\Psi }_{3} \right]}{{\Psi }_{1}}>0$$(30)
-
(7)
$$\mathrm{MSE}\left({\mathrm{T}}_{6}\right)-\mathrm{MSE}\left({\mathrm{T}}_{prop}\right)>0$$$$\frac{{S}_{y}^{4}\left[64\left\{{\mathcal{Y}}\right\}-{\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*3}-16 {\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*} \left\{{\mathcal{Y}}\right\}\right]}{64\left\{{\reflectbox{$\lambda$}}\left({\mathcal{Y}}\right)+{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\right\}}-\frac{{\reflectbox{$\lambda$}}{S}_{y}^{4}\left[64\left({\upgamma }^{*}+1\right)-{\reflectbox{$\lambda$}}\left(\frac{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*2}}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}\right)-16 {\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\left({\upgamma }^{*}+1\right)\right]}{64\left\{{\Psi }_{1}\right\}}>0.$$(31)
Numerical study
In this section, we conduct a numerical study to compare the performance of our suggested estimator with the existing estimators. We consider four actual data sets, sampled using simple random sampling, whose descriptions are provided in Table 1. To compare the performance of the estimators, we use the percentage relative efficiency (PRE), a widely used measure in statistics.
To calculate the PRE, we use the following expression:
$$\mathrm{PRE}\left({\mathrm{T}}_{u}\right)=\frac{\mathrm{Var}\left({\mathrm{T}}_{0}\right)}{\mathrm{MSE}\left({\mathrm{T}}_{u}\right)}\times 100,$$
where u = 0, 1, 2, 3, 4, 5, 6.
The mean square errors (MSE) of the estimators for the four actual data sets are given in Table 2. The conditional values of the estimators are given in Table 3. The percentage relative efficiencies (PRE) for the four actual data sets are given in Table 4.
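Given a table of MSE values, the PRE computation is direct; the MSE figures below are hypothetical placeholders (not the values reported in Table 2), used only to show the calculation:

```python
# Hypothetical MSE values for estimators T0..T6 and Tprop (illustrative only)
mse = {"T0": 12.4, "T1": 9.8, "T2": 8.1, "T3": 8.0,
       "T4": 10.5, "T5": 21.7, "T6": 7.6, "Tprop": 6.9}

# PRE of each estimator relative to the usual estimator T0
pre = {name: 100.0 * mse["T0"] / m for name, m in mse.items()}
for name, p in sorted(pre.items(), key=lambda kv: -kv[1]):
    print(f"{name:5s} PRE = {p:7.2f}")
```

An estimator with PRE above 100 is more efficient than \({\mathrm{T}}_{0}\); the estimator with the smallest MSE necessarily has the largest PRE.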
Population 1: [Source: Murthy27]
Y: Output of the workshop,
\({X}_{1}\): Fixed capital,
\({X}_{2}\): Number of workers.
Population 2: [Source: Cochran28]
Y: Food cost,
\({X}_{1}\): Size of the family,
\({X}_{2}\): Income.
Population 3: [Source: Singh29]
Y: Estimated fish caught in 1995,
\({X}_{1}\): Estimated fish caught in 1994,
\({X}_{2}\): Estimated fish caught in 1993.
Population 4: [Source: Sukhathme30]
Y: Area under wheat in 1937,
\({X}_{1}\): Area under wheat in 1936,
\({X}_{2}\): Total cultivated area in 1931.
Simulation study
This section presents the simulation used to evaluate Tprop against the existing counterparts T0, T1, T2, T3, T4, T5, and T6. The algorithm of the simulation study, used to assess the performance of the various estimators, is as follows:
-
1.
Generate two independent random variables, X from N \(\left( {\mu ,\sigma^{2} } \right)\) and Z from N \(\left( {\mu_{1} ,\sigma_{1}^{2} } \right)\), using the Box-Muller method.
-
2.
Set \(Y=\rho X+\sqrt{1-{\rho }^{2}}\), where \(0<\rho =0.5, 0.6, 0.7, 0.8>1\).
-
3.
Return the pair (Y, X, Z).
-
4.
Let the population-I with the parameters \(\mu = 3,\) \(\sigma =2,\) \(\mu_{1} = 5\) and \(\sigma_{1} = 3\) and in step-1 and repeat steps 1–3 for 500 times. This population will have different variances for X and Z.
-
5.
Likewise, produce the population-II with the parameters \(\mu =2, \sigma =3,\) \({\mu }_{1}=4\) and \(\sigma_{1} = 3\) in step-1 and repeat steps 1–3 for 500 times.
-
6.
From the population of size N = 50, draw 500 SRSWOR (\({y}_{i}\),\({x}_{i}\)) (i = 1, 2,3,…,n) of size n = 10, 15 and 20.
-
7.
The Average MSE of the estimators is defined by:
$$Average MSE\left(T\right)=\frac{1}{500}\sum_{k=1}^{500}E{({T}_{k}-\mathrm{T}0)}^{2}$$ -
8.
The percentage relative efficiency of estimators as compared to usual estimator T0 is defined by:
$$PRE\left(T\right)=\frac{Var(\mathrm{T}0)\times 100}{MSE(T)}$$ -
9.
The Average PRE of the estimators is defined by
$$Average PRE\left(T\right)=\frac{1}{500}\sum_{k=1}^{500}PRE({T}_{k})$$
The results of the simulation study are presented in Tables 5 and 6.
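The simulation procedure above can be sketched in Python. Because the closed-form expression of the proposed estimator is given elsewhere in the paper, the sketch below substitutes Isaki's classical ratio-type variance estimator as the illustrative candidate; the population size per replication and the choice of candidate estimator are assumptions for illustration, not the authors' exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def box_muller(mu, sigma, size):
    """Step 1: N(mu, sigma^2) draws via the Box-Muller transform."""
    u1 = 1.0 - rng.random(size)          # values in (0, 1], avoids log(0)
    u2 = rng.random(size)
    z = np.sqrt(-2.0 * np.log(u1)) * np.cos(2.0 * np.pi * u2)
    return mu + sigma * z

def generate_population(mu, sigma, mu1, sigma1, rho, size=500):
    """Steps 1-3: return (Y, X, Z) with Y = rho*X + sqrt(1 - rho^2)*Z.
    Note: corr(X, Y) equals rho exactly only when sigma == sigma1."""
    x = box_muller(mu, sigma, size)
    z = box_muller(mu1, sigma1, size)
    y = rho * x + np.sqrt(1.0 - rho**2) * z
    return y, x, z

def simulate_pre(n=10, rho=0.7, reps=500):
    """Steps 6-9 for one (n, rho): empirical PRE of an illustrative
    ratio-type variance estimator (Isaki, 1983) against the usual T0 = s_y^2."""
    y, x, _ = generate_population(3, 2, 5, 3, rho)   # population-I parameters
    Sy2, Sx2 = np.var(y, ddof=1), np.var(x, ddof=1)  # population variances
    t0, t1 = np.empty(reps), np.empty(reps)
    for k in range(reps):
        idx = rng.choice(y.size, size=n, replace=False)  # one SRSWOR draw
        sy2, sx2 = np.var(y[idx], ddof=1), np.var(x[idx], ddof=1)
        t0[k], t1[k] = sy2, sy2 * Sx2 / sx2              # T0 and ratio estimator
    mse0 = np.mean((t0 - Sy2) ** 2)   # empirical MSE against the true S_y^2
    mse1 = np.mean((t1 - Sy2) ** 2)
    return 100.0 * mse0 / mse1        # PRE of the candidate relative to T0

print(round(simulate_pre(), 2))
```

Looping `simulate_pre` over \(\rho\) = 0.5, 0.6, 0.7, 0.8 and n = 10, 15, 20 for each population reproduces the layout of Tables 5 and 6.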
Discussion
As stated earlier, we used four real data sets and a simulation study to assess the performance of the proposed variance estimator based on simple random sampling with two auxiliary variables. The proposed estimator is compared with the existing estimators in terms of MSE and PRE. The summary statistics of the real data sets are presented in Table 1, the MSEs of the estimators are given in Table 2, and the conditional values of the estimators are given in Table 3. From the numerical results based on the real data, with the PREs reported in Table 4, it is observed that the proposed estimator is the most efficient. We also evaluated the efficiency of the proposed variance estimator under simple random sampling through a simulation study. From Tables 5 and 6, it can be concluded that for the correlation coefficients \(\rho\) = 0.5, 0.6, 0.7, 0.8 and the sample sizes n = 10, 15, 20, the proposed estimator Tprop is more efficient than the usual estimator and the other existing estimators. The simulation results clearly show that the average PRE of the proposed estimator Tprop is higher than the average PRE of every existing estimator for both simulated data sets I and II. Hence, the proposed estimator can be recommended for further use.
Conclusion
In this article, we have suggested a novel variance estimator using two auxiliary variables under simple random sampling. We derived the properties of the proposed estimator up to the first order of approximation, conducted a comprehensive comparison with several existing counterparts using four real data sets, and carried out a simulation study to test its robustness and generalizability. The results showed that the suggested estimator outperformed the existing estimators in terms of efficiency. Future research could focus on extending our proposed estimator to two-phase sampling designs, and to situations with non-response and measurement error, where information from auxiliary variables can be utilized to estimate population variance more accurately. Additionally, it would be interesting to investigate the performance of our proposed estimator in more complex survey settings, such as clustered and stratified sampling.
Change history
02 April 2024
A Correction to this paper has been published: https://doi.org/10.1038/s41598-024-57145-4
References
Garcia, M. R. & Cebrian, A. A. Repeated substitution method: The ratio estimator for the population variance. Metrika 43(1), 101–105 (1996).
Upadhyaya, L. N., Singh, H. P. & Singh, S. A class of estimators for estimating the variance of the ratio estimator. J. Japan Stat. Soc. 34(1), 47–63 (2004).
Chandra, P. & Singh, H. P. A family of estimators for population variance using knowledge of kurtosis of an auxiliary variable in sample survey. Stat. Transit. 7(1), 27–34 (2005).
Arcos, A., Rueda, M., Martínez, M. D., González, S. & Román, Y. Incorporating the auxiliary information available in variance estimation. Appl. Math. Comput. 160(2), 387–399 (2005).
Kadilar, C. & Cingi, H. Improvement in variance estimation in simple random sampling. Commun. Stat. Theory Methods 36(11), 2075–2081 (2007).
Grover, L. K. A correction note on improvement in variance estimation using auxiliary information. Commun. Stat. Theory Methods 39(5), 753–764 (2010).
Sharma, P. & Singh, R. A generalized class of estimators for finite population variance in presence of measurement errors. J. Mod. Appl. Stat. Methods 12(2), 13 (2013).
Singh, H. P. & Solanki, R. S. Improved estimation of finite population variance using auxiliary information. Commun. Stat. Theory Methods 42(15), 2718–2730 (2013).
Yadav, S. K. & Kadilar, C. A two parameter variance estimator using auxiliary information. Appl. Math. Comput. 226, 117–122 (2014).
Adichwal, N. K., Sharma, P., Verma, H. K. & Singh, R. Generalized class of estimators for population variance using auxiliary attribute. Int. J. Appl. Comput. Math. 2(4), 499–508 (2016).
Adichwal, N. K., Sharma, P. & Singh, R. Generalized class of estimators for population variance using information on two auxiliary variables. Int. J. Appl. Comput. Math. 3(2), 651–661 (2017).
Lone, H. A. & Tailor, R. Estimation of population variance in simple random sampling. J. Stat. Manag. Syst. 20(1), 17–38 (2017).
Singh, G. N. & Khalid, M. Effective estimation strategy of population variance in two-phase successive sampling under random non-response. J. Stat. Theor. Pract. 13(1), 1–28 (2019).
Audu, A. et al. Difference-cum-ratio estimators for estimating finite population coefficient of variation in simple random sampling. Asian J. Probab. Stat. 13(3), 13–29 (2021).
Shahzad, U., Ahmad, I., Almanjahie, I. M., Al-Noor, N. H. & Hanif, M. A novel family of variance estimators based on L-moments and calibration approach under stratified random sampling. Commun. Stat. Simul. Comput. 1, 1–14 (2021).
Singh, G. N. & Khalid, M. A composite class of estimators to deal with the issue of variance estimation under the situations of random non-response in two-occasion successive sampling. Commun. Stat. Simul. Comput. 51(4), 1454–1473 (2022).
Zaman, T. & Bulut, H. A new class of robust ratio estimators for finite population variance. Sci. Iran. 1, 1 (2022).
Shahzad, U., Ahmad, I., Al-Noor, N. H. & Benedict, T. J. Use of calibration constraints and linear moments for variance estimation under stratified adaptive cluster sampling. Soft Comput. 26(21), 11185–11196 (2022).
Ashutosh, A., Shahzad, U., Ahmad, I., Raza, M. A. & Benedict, T. J. Calibration approach for variance estimation of small domain. Math. Probl. Eng. 2022, 1–8 (2022).
Ahmad, S. et al. Improved estimation of finite population variance using dual supplementary information under stratified random sampling. Math. Probl. Eng. 2022(1), 1 (2022).
Ahmad, S. et al. A simulation study: Improved ratio-in-regression type variance estimator based on dual use of auxiliary variable under simple random sampling. Plos One 17(11), e0276540 (2022).
Ahmad, S. et al. A new improved generalized class of estimators for population distribution function using auxiliary variable under simple random sampling. Sci. Rep. 13(1), 5415 (2023).
Isaki, C. T. Variance estimation using auxiliary information. J. Am. Stat. Assoc. 78(381), 117–123 (1983).
Singh, R., Chauhan, P., Sawan, N. & Smarandache, F. Improvement in estimating the population mean using exponential estimator in simple random sampling. Int. J. Stat. Econ. 3(A09), 13–18 (2009).
Grover, L. K. & Kaur, A. Ratio type exponential estimators of population mean under linear transformation of auxiliary variable: Theory and methods. S. Afr. Stat. J. 45(2), 205–230 (2011).
Ahmad, S. et al. Dual use of helping information for estimating the finite population mean under the stratified random sampling scheme. J. Math. 1, 1 (2021).
Murthy, M. N. Sampling: Theory and Methods (Statistical Publishing Society, 1967).
Cochran, W. G. Sampling techniques (Wiley, 1977).
Singh, S. Advanced Sampling Theory with Applications: How Michael "Selected" Amy (Vol. 2) (Springer, 2003).
Sukhathme, P. V. Sampling Theory of Surveys with Applications (The Indian Society of Agricultural Statistics, 1970).
Acknowledgements
Researchers Supporting Project number (RSPD2023R458), King Saud University, Riyadh, Saudi Arabia.
Author information
Authors and Affiliations
Contributions
The entire manuscript was written by S.A. and N.K.A. M.A. and J.S. helped improve the language of the manuscript and supervised the overall manuscript. N.A., M.E. and H.A. helped revise the manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
The original online version of this Article was revised: The original version of this Article omitted affiliations for Hijaz Ahmad. The correct affiliations are listed in the correction notice.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Ahmad, S., Adichwal, N.K., Aamir, M. et al. An enhanced estimator of finite population variance using two auxiliary variables under simple random sampling. Sci Rep 13, 21444 (2023). https://doi.org/10.1038/s41598-023-44169-5