Introduction

The use of auxiliary information can increase the precision of estimates of population parameters in survey sampling, provided that the auxiliary variable is strongly associated with the study variable. Auxiliary information is therefore valuable both at the design stage and at the estimation stage when precise estimates of unknown population characteristics are required. Estimating the population variance is particularly important for populations that are likely to be skewed, and the accuracy of such estimates has been the focus of many research studies. By reducing the dispersion of the estimates obtained from different samples, we obtain a more accurate representation of the population. Variance is a basic characteristic of a statistical population, and considerable effort has been devoted to estimating it as accurately as possible. The goal is to reduce the scatter of the calculated values, and the appropriate use of auxiliary variables in survey sampling leads to a substantial reduction in the variance of the estimator of the unknown population parameter(s). In many situations it is necessary to estimate the population variance of the study variable. Estimates of the population variance are crucial in a number of disciplines, such as agriculture, health, biology, and business, where the populations encountered are often skewed. Variation is present everywhere in daily life, and it is widely accepted that no two things or persons are exactly alike. For instance, a farmer must be well aware of how the local climate varies over time in order to decide how to sow his land, and a clinician must have a thorough understanding of the variation in patients' blood pressure and fever in order to administer the appropriate care.

The estimation of the population variance has been of interest to many researchers over the years. Garcia and Cebrian1 suggested a ratio estimator for the population variance. Upadhyaya et al.2 recommended a broad class of estimators for the variance of the ratio estimator. Chandra and Singh3 discussed a family of estimators for the population variance. Arcos et al.4 suggested incorporating the available auxiliary information in variance estimation. Kadilar and Cingi5 discussed improvement in variance estimation in simple random sampling. Grover6 provided a correction note on improvement in variance estimation using auxiliary information. Sharma and Singh7 discussed a generalized class of estimators for the finite population variance. Singh and Solanki8 suggested improved estimation of the finite population variance using auxiliary information. Yadav and Kadilar9 recommended a two-parameter variance estimator using auxiliary information. Adichwal et al.10 proposed a generalized class of estimators for the population variance using an auxiliary attribute. Adichwal et al.11 recommended a generalized class of estimators for the population variance using information on two auxiliary variables. Lone and Tailor12 suggested estimation of the population variance in simple random sampling. Singh and Khalid13 discussed an effective estimation strategy for the population variance in two-phase successive sampling under random non-response. Audu et al.14 proposed difference-cum-ratio estimators for estimating the finite population coefficient of variation in simple random sampling. Shahzad et al.15 suggested new variance estimators for the calibration approach under stratified random sampling. Singh and Khalid16 suggested a composite class of estimators to deal with variance estimation under random non-response in two-occasion successive sampling. Zaman and Bulut17 suggested a new class of robust ratio estimators for the finite population variance. Shahzad et al.18 discussed the use of calibration constraints and linear moments for variance estimation under stratified adaptive cluster sampling. Ashutosh et al.19 suggested a calibration approach for variance estimation of a small domain. Ahmad20 discussed improved estimation of the population variance under stratified random sampling. Ahmad21 discussed an improved variance estimator using a dual auxiliary variable under simple random sampling. Ahmad et al.22 suggested an enhanced generalized class of estimators under simple random sampling.

Numerous authors have contributed to the estimation of the population variance, as mentioned above. In this article, we suggest an enhanced estimator that makes a valuable contribution and highlights the importance of auxiliary variables in estimating the population variance. Using actual data and a simulation study, we show that the suggested estimator has a smaller mean square error and a higher percentage relative efficiency than recent well-known existing estimators.

The primary goals of the current work are:

  1. To propose a new estimator of the finite population variance using two auxiliary variables under simple random sampling.

  2. To derive the properties, i.e. the bias and mean squared error, of the proposed estimator up to the first order of approximation.

  3. To highlight the application of the proposed estimator through actual data and a simulation study.

  4. To compare the proposed estimator with existing estimators in terms of minimum mean square error.

  5. To show that the proposed estimator provides a novel and valuable contribution to the field of population variance estimation and has the potential to be useful in a wide range of applications.

Notations and symbols

Consider a finite population \(\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}\) = \(\left({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{1},{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{2},\dots ,{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{\mathrm{N}}\right)\) of size N, from which a sample of size n is drawn by simple random sampling without replacement (SRSWOR). Let Y be the study variable, and let \({X}_{1}\) and \({X}_{2}\) denote the two auxiliary variables. The population variances of Y, \({X}_{1}\), and \({X}_{2}\) are denoted by \({S}_{y}^{2}\), \({S}_{x1}^{2}\), and \({S}_{x2}^{2}\). Let \({s}_{y}^{2}\) = \(\frac{{\sum_{i=1}^{n}\left({y}_{i}-\overline{y }\right)}^{2}}{n-1}\), \({s}_{x1}^{2}\) = \(\frac{{\sum_{i=1}^{n}\left({x}_{1i}-{\overline{x} }_{1}\right)}^{2}}{n-1}\), and \({s}_{x2}^{2}\) = \(\frac{{\sum_{i=1}^{n}\left({x}_{2i}-{\overline{x} }_{2}\right)}^{2}}{n-1}\) be the sample variances, and let \({S}_{y}^{2}\) = \(\frac{{\sum_{i=1}^{N}({y}_{i}-\overline{Y })}^{2}}{N-1}\), \({S}_{x1}^{2}\) = \(\frac{{\sum_{i=1}^{N}\left({x}_{1i}-{\overline{X} }_{1}\right)}^{2}}{N-1}\), and \({S}_{x2}^{2}\) = \(\frac{{\sum_{i=1}^{N}\left({x}_{2i}-{\overline{X} }_{2}\right)}^{2}}{N-1}\) be the corresponding population variances. Let \(\overline{y }\), \({\overline{x} }_{1}\), and \({\overline{x} }_{2}\) be the sample means.

To derive the bias and mean square error, we consider the following error terms:

$${\xi }_{o}=\frac{{s}_{y}^{2}-{S}_{y}^{2}}{{S}_{y}^{2}},{\xi }_{1}=\frac{{s}_{x1}^{2}-{S}_{x1}^{2}}{{S}_{x1}^{2}},{\xi }_{2}=\frac{{s}_{x2}^{2}-{S}_{x2}^{2}}{{S}_{x2}^{2}},\mathrm{E}\left({\xi }_{i}\right)=0,\mathrm{for }(i=0, 1, 2).$$
$$E\left({\xi }_{o}^{2}\right)={\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*},E\left({\xi }_{1}^{2}\right)={\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*},E\left({\xi }_{2}^{2}\right)={\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{004}^{*},E\left({\xi }_{o}{\xi }_{1}\right)={\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*},E\left({\xi }_{0}{\xi }_{2}\right)={\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{202}^{*},E\left({\xi }_{1}{\xi }_{2}\right)={\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{022}^{*},$$
$${{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{lmn}=\frac{{\upupsilon }_{lmn}}{{\upupsilon }_{200}^\frac{l}{2}{\upupsilon }_{020}^\frac{m}{2}{\upupsilon }_{002}^\frac{n}{2}},{\upupsilon }_{lmn}=\frac{\sum ({{y}_{i}-\overline{Y })}^{l}({{x}_{1i}-{\overline{X} }_{1})}^{m}{({x}_{2i}-{\overline{X} }_{2})}^{n}}{N-1}.$$

where \({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{lmn}^{*}\) = \(\left({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{lmn} - 1\right)\) and \({\reflectbox{$\lambda$}}\) = \(\left(\frac{1}{n}-\frac{1}{N}\right)\). Here l, m, and n are non-negative integers, \({\upupsilon }_{200}\), \({\upupsilon }_{020}\), and \({\upupsilon }_{002}\) are the second-order central moments of Y, \({X}_{1}\), and \({X}_{2}\), and \({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{lmn}\) is the corresponding moment ratio.
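
The quantities above can be computed directly from the population values. The following is a minimal sketch (in Python with NumPy, purely for illustration; the function names and array arguments are our own and not part of the paper) of how the moment ratios \({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{lmn}\), the terms \({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{lmn}^{*}\), and the design factor \({\reflectbox{$\lambda$}}\) might be obtained:

```python
import numpy as np

def central_moment(y, x1, x2, l, m, n):
    """upsilon_{lmn}: joint central moment of (Y, X1, X2), divided by N - 1."""
    dy, d1, d2 = y - y.mean(), x1 - x1.mean(), x2 - x2.mean()
    return np.sum(dy**l * d1**m * d2**n) / (len(y) - 1)

def moment_ratio(y, x1, x2, l, m, n):
    """lambda_{lmn} = upsilon_{lmn} / (upsilon_200^{l/2} upsilon_020^{m/2} upsilon_002^{n/2})."""
    u = lambda a, b, c: central_moment(y, x1, x2, a, b, c)
    return u(l, m, n) / (u(2, 0, 0)**(l / 2) * u(0, 2, 0)**(m / 2) * u(0, 0, 2)**(n / 2))

def lambda_star(y, x1, x2, l, m, n):
    """lambda*_{lmn} = lambda_{lmn} - 1."""
    return moment_ratio(y, x1, x2, l, m, n) - 1.0

def design_factor(n_sample, N):
    """The factor (1/n - 1/N) appearing in the bias and MSE expressions."""
    return 1.0 / n_sample - 1.0 / N
```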

The article has been structured as follows:

In Section "Introduction", we have provided a brief overview of the topic and highlighted the importance of estimating population variance. In Section “Notations and symbols”, we review some of the existing estimators for estimation of population variance. In Section "Existing estimators", we proposed a new estimator for population variance under simple random sampling. In Section "Proposed estimator", we will compare the proposed estimator with the existing estimators in terms of MSE. In Section "Efficiency assessment", we will present a numerical study to illustrate the performance of the proposed estimator and compare it with the existing estimators. In Section "Numerical study", we will perform simulations to validate the results of the numerical study and provide additional insights into the performance of the estimators. In Section "Simulation study", we will discuss the results of our study, the limitations of the proposed estimator, and the scope for future research in this field. In Section "Discussion", we will summarize the main findings of our study and provide concluding remarks on the estimation of population variance under simple random sampling.

Existing estimators

In this section, we review variance estimators under simple random sampling that are available in the literature; these estimators use at most one auxiliary variable, whose population and sample variances are denoted by \({S}_{x}^{2}\) and \({s}_{x}^{2}\). A small computational sketch of a few of these estimators is given at the end of this section.

  1. (i)

    The usual estimator \({\mathrm{T}}_{0}\), is given by:

    $${\mathrm{T}}_{0}={s}_{y}^{2}=\frac{{\sum_{i=1}^{n}\left({y}_{i}-\overline{y }\right)}^{2}}{n-1}$$
    (1)

    The variance of \({\mathrm{T}}_{0}\), is given by:

    $$\mathrm{Var}\left({\mathrm{T}}_{0}\right)={\reflectbox{$\lambda$}}{S}_{y}^{4}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}$$
    (2)
  2. (ii)

    Isaki23 suggested a ratio estimator, which is given by:

    $${\mathrm{T}}_{1}={s}_{y}^{2}\left(\frac{{S}_{x}^{2}}{{s}_{x}^{2}}\right)$$
    (3)

    The bias and mean square error of \({\mathrm{T}}_{1}\) are given as:

    $$\mathrm{Bias}\left({\mathrm{T}}_{1}\right)={\reflectbox{$\lambda$}}{S}_{y}^{2}\left\{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}\right\},$$

    and

    $$\mathrm{MSE}\left({\mathrm{T}}_{1}\right)\cong {\reflectbox{$\lambda$}}{S}_{y}^{4}\left\{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}+{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-2{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}\right\}.$$
    (4)
  3. (iii)

    The difference type estimator is given by:

    $${\mathrm{T}}_{2}={s}_{y}^{2}+\mathrm{k}\left({S}_{x}^{2}-{s}_{x}^{2}\right).$$
    (5)

    where \(\mathrm{k}\) is a constant whose optimum value is \(\mathrm{k}\)  = \(\left[\frac{{S}_{y}^{2}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}}{{S}_{x}^{2}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}}\right]\).

    The minimum variance at the optimal value of \(\mathrm{k}\) is given by:

    $$\mathrm{Var}\left({\mathrm{T}}_{2}\right)={S}_{y}^{4}{{\reflectbox{$\lambda$}}{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}\left(1-{\rho }^{2}\right),$$
    (6)

    where \(\rho\) = \(\frac{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}}{{\left({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}\right)}^\frac{1}{2}{\left({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\right)}^\frac{1}{2}}\) .

  4. (iv)

    The difference-type estimator is given by:

    $${\mathrm{T}}_{3}=\left\{{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{1}{s}_{y}^{2}+{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{2}\left({S}_{x}^{2}-{s}_{x}^{2}\right)\right\},$$
    (7)

    where \({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{1}\) and \({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{2}\) are unknown constants, whose optimum values are:

    $${{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{1}=\frac{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}}{{\reflectbox{$\lambda$}}\left\{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*2}\right\}+{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}},$$

    and

    $${{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{2}=\frac{{S}_{y}^{2} {{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}}{{S}_{x}^{2}\left\{{\reflectbox{$\lambda$}}\left({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*2}\right)+{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\right\}}.$$

    The bias and minimum MSE at the optimal values of \({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{1}\) and \({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{2}\) are given by:

    $$\mathrm{Bias}\left({\mathrm{T}}_{3}\right)={S}_{y}^{2}\left\{\frac{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}}{{\reflectbox{$\lambda$}}\left\{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*2}\right\}+{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}}-1\right\},$$

    and

    $$\mathrm{MSE}\left({\mathrm{T}}_{3}\right)=\frac{{\reflectbox{$\lambda$}}{S}_{y}^{4} \left[{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*2}\right]}{{\reflectbox{$\lambda$}}\left[{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*2}\right]+{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}}$$
    (8)
  5. (v)

    The exponential ratio- and product-type estimators suggested by Singh et al.24 are given by:

    $${{\mathrm{T}}_{4}=s}_{y}^{2}\mathrm{exp}\left\{\frac{{S}_{x}^{2}-{s}_{x}^{2}}{{S}_{x}^{2}+{s}_{x}^{2}}\right\}$$
    (9)
    $${{\mathrm{T}}_{5}=s}_{y}^{2}\mathrm{exp}\left\{\frac{{s}_{x}^{2}-{S}_{x}^{2}}{{s}_{x}^{2}+{S}_{x}^{2}}\right\}$$
    (10)

    The bias and MSE of \({\mathrm{T}}_{4}\) and \({\mathrm{T}}_{5}\) are given by:

    $$\mathrm{Bias}\left({\mathrm{T}}_{4}\right)={\reflectbox{$\lambda$}}{S}_{y}^{2}\left\{\frac{3}{8}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-\frac{1}{2}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}\right\},\mathrm{MSE}\left({\mathrm{T}}_{4}\right)={\reflectbox{$\lambda$}}{S}_{y}^{4}\left[{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}+\frac{1}{4}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}+\frac{1}{4}\right]$$
    (11)

    and

    $$\mathrm{Bias}\left({\mathrm{T}}_{5}\right)={\reflectbox{$\lambda$}}{S}_{y}^{2}\left\{\frac{1}{2}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}-\frac{1}{8}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\right\},\mathrm{MSE}\left({\mathrm{T}}_{5}\right)={\reflectbox{$\lambda$}}{S}_{y}^{4}\left[{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}+\frac{1}{4}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}+\frac{9}{4}\right]$$
    (12)
  6. (vi)

    Grover and Kaur25 suggested the following estimator:

    $${\mathrm{T}}_{6}=\left[{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{3}{s}_{y}^{2}+{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{4}\left({S}_{x}^{2}-{s}_{x}^{2}\right)\right]\mathrm{exp}\left(\frac{{S}_{x}^{2}-{s}_{x}^{2}}{{S}_{x}^{2}+{s}_{x}^{2}}\right)$$
    (13)

    The bias of \({\mathrm{T}}_{6}\), is given by:

    $$\mathrm{Bias}\left({\mathrm{T}}_{6}\right)=\left[-{S}_{y}^{2}+{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{3}{S}_{y}^{2}\left\{1+\frac{3}{8} {{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-\frac{1}{2}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}\right\}+\frac{1}{2}{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{4}{S}_{x}^{2}{\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\right],$$

    The optimum values of \({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{3}\) and \({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{4}\) are given by:

    $${{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{3}=\frac{\left\{8{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*2}\right\}}{8\left[{\reflectbox{$\lambda$}}\left({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*2}\right)+{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\right]},$$

    and

    $${{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{4}=\frac{{S}_{y}^{2}\left({\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*2}-{\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}+8{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}-4\left\{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{\reflectbox{$\lambda$}}\left({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*2}\right)\right\}\right)}{8{ S}_{x}^{2}\left[{\reflectbox{$\lambda$}}\left\{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*2}\right\}+{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\right]}.$$

    The minimum MSE at the optimum values of \({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{3}\) and \({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{4}\), are given by:

    $$\mathrm{MSE}\left({\mathrm{T}}_{6}\right)=\frac{{S}_{y}^{4}\left[64\left\{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*2}\right\}-{\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*3}-16{\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*} \left\{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*2}\right\}\right]}{64\left\{{\reflectbox{$\lambda$}}\left({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*2}\right)+{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\right\}}.$$
    (14)
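
For concreteness, the sketch below (Python/NumPy, written for this review and not taken from the cited papers) shows how the usual estimator \({\mathrm{T}}_{0}\), the ratio estimator \({\mathrm{T}}_{1}\), and the exponential ratio-type estimator \({\mathrm{T}}_{4}\) would be evaluated on a simple random sample when the population variance of the auxiliary variable is known:

```python
import numpy as np

def classical_variance_estimators(y_sample, x_sample, Sx_sq):
    """T0: usual sample variance; T1: ratio estimator, eq. (3);
    T4: exponential ratio-type estimator, eq. (9). Sx_sq is the known S_x^2."""
    sy2 = np.var(y_sample, ddof=1)   # s_y^2
    sx2 = np.var(x_sample, ddof=1)   # s_x^2
    T0 = sy2
    T1 = sy2 * (Sx_sq / sx2)
    T4 = sy2 * np.exp((Sx_sq - sx2) / (Sx_sq + sx2))
    return T0, T1, T4
```

The difference-type and combined estimators \({\mathrm{T}}_{2}\), \({\mathrm{T}}_{3}\), and \({\mathrm{T}}_{6}\) follow the same pattern once their optimum constants have been computed from the moment ratios.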

Proposed estimator

The use of auxiliary variables can improve the precision and efficiency of an estimator at both the design and estimation stages. Motivated by the work of Ahmad et al.26, we propose an enhanced estimator of the finite population variance that uses two auxiliary variables under simple random sampling. The suggested estimator is more flexible than the existing estimators considered in this study and, as shown below, is also more efficient. It is given by:

$${\mathrm{T}}_{prop }=\left[{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{5}{s}_{y}^{2}+{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{6}\left({S}_{x1}^{2}-{s}_{x1}^{2}\right)+{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{7}\left({S}_{x2}^{2}-{s}_{x2}^{2}\right)\right]\mathrm{exp}\left(\frac{{S}_{x1}^{2}-{s}_{x1}^{2}}{{S}_{x1}^{2}+{s}_{x1}^{2}}\right),$$
(15)

where \({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{5}\), \({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{6}\) and \({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{7}\) are unknown constants.

$${\mathrm{T}}_{prop}=\left[{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{5}{S}_{y}^{2}\left(1+{\xi }_{o}\right)-{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{6}{{S}_{x1}^{2}\xi }_{1}-{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{7}{S}_{x2}^{2} {\xi }_{2}\right]\left(1-\frac{{\xi }_{1} }{2}+\frac{3{\xi }_{1}^{2}}{8}+\dots \right).$$

After expanding the above equation, we have

$$\begin{aligned}\left({\mathrm{T}}_{prop}-{S}_{y}^{2}\right) & =-{S}_{y}^{2}+{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{5}{S}_{y}^{2}+{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{5}{S}_{y}^{2}{\xi }_{o}-\frac{1}{2}{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{5}{S}_{y}^{2}{\xi }_{1}-{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{6}{{S}_{x1}^{2}\xi }_{1}-{ {\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{7}{{S}_{x2}^{2}\xi }_{2}+\frac{1}{2}{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{5}{S}_{y}^{2} {\xi }_{o}{\xi }_{1}\\ &\quad+\frac{3}{8}{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{5}{S}_{y}^{2} {\xi }_{1}^{2}+\frac{1}{2}{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{6}{S}_{x1}^{2}{\xi }_{1}^{2}+\frac{1}{2}{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{7}{S}_{x2}^{2}{\xi }_{1}{\xi }_{2}\end{aligned}$$
(16)

The bias and mean square error of \({\mathrm{T}}_{prop}\) are given by:

$$\begin{aligned}\mathrm{Bias}\left({\mathrm{T}}_{prop}\right)&\cong {S}_{y}^{2}\left({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{5}-1\right)+\frac{3}{8}{S}_{y}^{2}{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{5}{\reflectbox{$\lambda$}}{ {\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}+\frac{1}{2}{S}_{x}^{2}{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{6}{\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{\frac{1}{2}S}_{y}^{2}{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{5}{\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}+\frac{1}{2}{S}_{x2}^{2}{\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{022}^{*}\nonumber\\ \mathrm{MSE}\left({\mathrm{T}}_{prop}\right)&\cong {S}_{y}^{4}{\left({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{5}-1\right)}^{2}+{S}_{y}^{4}{\reflectbox{$\lambda$}}{ {\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{5}^{2}+{S}_{x1}^{4}{\reflectbox{$\lambda$}}{ {\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{6}+{S}_{x2}^{2}{\reflectbox{$\lambda$}}{ {\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{004}^{*}{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{7} +{S}_{y}^{4}{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{5}^{2}{\reflectbox{$\lambda$}}{ {\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{S}_{y}^{2}{S}_{x1}^{2}{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{6}{\reflectbox{$\lambda$}}{ {\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\nonumber\\&\quad+ 2{S}_{y}^{2}{S}_{x1}^{2}{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{5}{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{6}{\reflectbox{$\lambda$}}{ {\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*} -\frac{3}{4}{S}_{y}^{4}{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{5}{\reflectbox{$\lambda$}}{ {\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}+{S}_{y}^{4}{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{5}{\reflectbox{$\lambda$}}{ {\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}-2{S}_{y}^{4}{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{5}^{2} 2{\reflectbox{$\lambda$}}{ {\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}-2{S}_{y}^{2}{S}_{x1}^{2}{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{5}{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{6}{\reflectbox{$\lambda$}}{ {\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}\nonumber\\&\quad-2{S}_{y}^{2}{S}_{x2}^{2}{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{5}{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{7}{\reflectbox{$\lambda$}}{ {\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{202}^{*}-{S}_{y}^{2}{S}_{x2}^{2}{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{7}{\reflectbox{$\lambda$}}{ {\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{022}^{*}+2{S}_{y}^{2}{S}_{x2}^{2}{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{5}{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{7}{\reflectbox{$\lambda$}}{ {\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{022}^{*}-2{S}_{y}^{2}{S}_{x2}^{2}{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{6}{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{7}{\reflectbox{$\lambda$}}{ {\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{022}^{*}\end{aligned}$$
(17)

The optimal values of \({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{5}\), \({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{6}\), and \({{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{7}\) are obtained by minimizing (17), and are given by:
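
That is, the optimum weights are obtained as the simultaneous solution of the normal equations found by setting the partial derivatives of (17) with respect to each weight equal to zero:

$$\frac{\partial \,\mathrm{MSE}\left({\mathrm{T}}_{prop}\right)}{\partial {{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{5}}=0,\quad \frac{\partial \,\mathrm{MSE}\left({\mathrm{T}}_{prop}\right)}{\partial {{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{6}}=0,\quad \frac{\partial \,\mathrm{MSE}\left({\mathrm{T}}_{prop}\right)}{\partial {{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{7}}=0.$$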

$${{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{5}}_{(Opt)}=\frac{8-{\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400 }^{*}\left[8\left\{\frac{1}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}+{\reflectbox{$\lambda$}}\left({\upgamma }^{*} +1\right)\right\}\right]},$$
$${{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{6}}_{(Opt)}=\frac{{S}_{y}^{2}\left[\begin{array}{c}{\reflectbox{$\lambda$}} {{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\left({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{004}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*2}\right)+\left({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{004}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{202}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{022}^{*}\right)\left(8-{\reflectbox{$\lambda$}}{ {\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\right)+4{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}\left({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{004}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{022}^{*2}\right)\\ \left\{\left(\frac{-1}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}\right)+{\reflectbox{$\lambda$}}\left({\upgamma }^{*} +1\right) \right\}\end{array}\right]}{8{S}_{x}^{2}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}\left({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{004}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*2}\right)\left\{\left(\frac{1}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}\right)+{\reflectbox{$\lambda$}}\left({\upgamma }^{*} +1\right) \right\}},$$

and

$${{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{7}}_{(Opt)}=\frac{{S}_{y}^{2}\left[8-{\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\right]\left\{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{022}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{202}^{*}\right\}}{8{S}_{x2}^{2}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}\left({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{004}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*2}\right)\left\{\left(\frac{1}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}\right)+{\reflectbox{$\lambda$}}\left({\upgamma }^{*} +1\right) \right\}}.$$

The minimal MSE at the optimum values of \({{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{5}}_{(Opt)}\), \({{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{6}}_{(Opt)}\) and \({{{\raisebox{2.1pt}{$\upomega$}\kern-2.3pt\rotatebox{80}{\tiny $\smile$}}}_{7}}_{(Opt)}\), are given by:

$$\mathrm{MSE}\left({\mathrm{T}}_{prop}\right)=\frac{{\reflectbox{$\lambda$}}{S}_{y}^{4}\left[64\left({\upgamma }^{*}+1\right)-{\reflectbox{$\lambda$}}\left(\frac{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*2}}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}\right)-16 {\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\left({\upgamma }^{*}+1\right)\right]}{64\left\{\frac{1}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}+{\reflectbox{$\lambda$}}\left({\upgamma }^{*}+1\right)\right\}},$$
(18)

where

$${\upgamma }^{*}=\frac{2{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{202}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{022}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{202}^{*2}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{004}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*2}}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}\left({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{004}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*2}\right)}.$$
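
To make the use of these expressions concrete, the sketch below (Python, our own illustrative code; `lam` denotes \({\reflectbox{$\lambda$}}\) and `l400`, `l040`, ... denote the \({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{lmn}^{*}\) terms) evaluates the proposed estimator (15) for given weights and the minimum MSE in (18):

```python
import numpy as np

def T_prop(y, x1, x2, Sx1_sq, Sx2_sq, w5, w6, w7):
    """Proposed estimator (15) for a sample (y, x1, x2) and known S_x1^2, S_x2^2."""
    sy2, sx12, sx22 = (np.var(v, ddof=1) for v in (y, x1, x2))
    diff = w5 * sy2 + w6 * (Sx1_sq - sx12) + w7 * (Sx2_sq - sx22)
    return diff * np.exp((Sx1_sq - sx12) / (Sx1_sq + sx12))

def min_mse_prop(Sy_sq, lam, l400, l040, l004, l220, l202, l022):
    """Minimum MSE of T_prop, eq. (18), at the optimum weights."""
    gamma = (2*l220*l202*l022 - l040*l202**2 - l004*l220**2) / (l400*(l040*l004 - l220**2))
    num = lam * Sy_sq**2 * (64*(gamma + 1) - lam*(l040**2 / l400) - 16*lam*l040*(gamma + 1))
    den = 64 * (1.0/l400 + lam*(gamma + 1))
    return num / den
```

In practice the \({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{lmn}^{*}\) terms would be computed from the population, estimated from the sample, or taken from a previous survey.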

Efficiency assessment

In this section, we compare the proposed estimator with the existing estimators in terms of mean square error; a numerical check of these conditions is sketched after the list of comparisons.

  1. (1)

    By taking (2) and (18)

    $$\mathrm{Var}\left({\mathrm{T}}_{0}\right)-\mathrm{MSE}\left({\mathrm{T}}_{prop}\right)>0$$
    $${S}_{y}^{4}{\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}-\frac{{\reflectbox{$\lambda$}}{S}_{y}^{4}\left[64\left({\upgamma }^{*}+1\right)-{\reflectbox{$\lambda$}}\left(\frac{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*2}}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}\right)-16 {\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\left({\upgamma }^{*}+1\right)\right]}{64\left\{\frac{1}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}+{\reflectbox{$\lambda$}}\left({\upgamma }^{*}+1\right)\right\}}>0$$
    (19)
    $$\frac{{\reflectbox{$\lambda$}}{S}_{y}^{2}\left[64{S}_{y}^{2}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}\left\{{\Psi }_{1}\right\}+\left[16{\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\left({\upgamma }^{*}+1\right)-64{\upgamma }^{*}+{\reflectbox{$\lambda$}}\left(\frac{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*2}}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}\right)-64\right]\right]}{64{\Psi }_{1}}>0$$
    (20)

    where \({\Psi }_{1}\)=\(\frac{1}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}+{\reflectbox{$\lambda$}}\left({\upgamma }^{*}+1\right),\)

    $${\Psi }_{2}=\left[64{\upgamma }^{*}+64-{\reflectbox{$\lambda$}}\left(\frac{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*2}}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}\right)-16{\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\left({\upgamma }^{*}+1\right)\right],$$
    $${\Psi }_{3}=\frac{1}{4}{\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\left({\upgamma }^{*}+1\right)-{\upgamma }^{*}+\frac{1}{64}{\reflectbox{$\lambda$}}\left(\frac{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*2}}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}\right)-1$$
  2. (2)

    By taking (4) and (18)

    $$\mathrm{MSE}\left({\mathrm{T}}_{1}\right)-\mathrm{MSE}\left({\mathrm{T}}_{prop}\right)>0$$
    $${\reflectbox{$\lambda$}}{S}_{y}^{4} \left\{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}+{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-2{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}\right\}-\frac{{\reflectbox{$\lambda$}}{S}_{y}^{4}\left[64\left({\upgamma }^{*}+1\right)-{\reflectbox{$\lambda$}}\left(\frac{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*2}}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}\right)-16 {\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\left({\upgamma }^{*}+1\right)\right]}{64{\Psi }_{1}}>0$$
    (21)
    $$\frac{{\reflectbox{$\lambda$}}{S}_{y}^{2}\left({S}_{y}^{2}\left\{{\mathcal{Y}}\right\}{\Psi }_{1}+{\Psi }_{3}\right)}{{\Psi }_{1}}>0$$
    (22)
  3. (3)

    By taking (6) and (18)

    $$\mathrm{Var}\left({\mathrm{T}}_{2}\right)-\mathrm{MSE}\left({\mathrm{T}}_{prop}\right)>0$$
    $${S}_{y}^{4}{{\reflectbox{$\lambda$}}{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}\left(1-{\rho }^{2}\right)-\frac{{\reflectbox{$\lambda$}}{S}_{y}^{4}\left[64\left({\upgamma }^{*}+1\right)-{\reflectbox{$\lambda$}}\left(\frac{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*2}}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}\right)-16 {\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\left({\upgamma }^{*}+1\right)\right]}{64\left\{\frac{1}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}+{\reflectbox{$\lambda$}}\left({\upgamma }^{*}+1\right)\right\}}>0$$
    (23)
    $$\frac{{\reflectbox{$\lambda$}}{S}_{y}^{2}\left({S}_{y}^{2}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}\left(1-{\rho }^{2}\right){\Psi }_{1}+{\Psi }_{3}\right)}{{\Psi }_{1}}>0$$
    (24)
  4. (4)

    By taking (8) and (18)

    $$\mathrm{MSE}\left({\mathrm{T}}_{3}\right)-\mathrm{MSE}\left({\mathrm{T}}_{prop}\right)>0$$
    $$\frac{{\reflectbox{$\lambda$}}{S}_{y}^{4}\left[{\mathcal{Y}}\right]}{{\reflectbox{$\lambda$}}\left[{\mathcal{Y}}\right]+{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}}-\frac{{\reflectbox{$\lambda$}}{S}_{y}^{4}\left[64\left({\upgamma }^{*}+1\right)-{\reflectbox{$\lambda$}}\left(\frac{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*2}}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}\right)-16 {\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\left({\upgamma }^{*}+1\right)\right]}{64\left\{\frac{1}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}+{\reflectbox{$\lambda$}}\left({\upgamma }^{*}+1\right)\right\}}>0$$
    (25)
    $$\frac{\frac{1}{64}\left[{\reflectbox{$\lambda$}}{S}_{y}^{2}\left({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}{\Psi }_{2}\right)+{{\Psi }_{1}S}_{y}^{2}+{\Psi }_{2}\left({\mathcal{Y}}\right)\right]}{{\reflectbox{$\lambda$}}(\left[{\mathcal{Y}}\right]+{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}){\Psi }_{1}}>0.$$
    (26)

    where \(\mathcal{Y}\) = \({{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*2}\).

  5. (5)

    By taking (11) and (18)

    $$\mathrm{MSE}\left({\mathrm{T}}_{4}\right)-\mathrm{MSE}\left({\mathrm{T}}_{prop}\right)>0$$
    $${\reflectbox{$\lambda$}}{S}_{y}^{4}\left[{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}+\frac{1}{4}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}+\frac{1}{4} \right]-\frac{{\reflectbox{$\lambda$}}{S}_{y}^{4}\left[64\left({\upgamma }^{*}+1\right)-{\reflectbox{$\lambda$}}\left(\frac{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*2}}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}\right)-16 {\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\left({\upgamma }^{*}+1\right)\right]}{64\left\{\frac{1}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}+ {\reflectbox{$\lambda$}}\left({\upgamma }^{*}+1\right)\right\}}>0$$
    (27)
    $$\frac{{\reflectbox{$\lambda$}}{S}_{y}^{2}\left[{S}_{y}^{2}\left\{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}+\frac{1}{4}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}+\frac{1}{ 4} \right\}{\Psi }_{1}+{\Psi }_{3} \right]}{{\Psi }_{1}}>0$$
    (28)
  6. (6)

    By taking (12) and (18)

    $$\mathrm{MSE}\left({\mathrm{T}}_{5}\right)-\mathrm{MSE}\left({\mathrm{T}}_{prop}\right)>0$$
    $${\reflectbox{$\lambda$}}{S}_{y}^{4}\left[{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}+\frac{1}{4}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}+\frac{9}{4} \right]-\frac{{\reflectbox{$\lambda$}}{S}_{y}^{4}\left[64\left({\upgamma }^{*}+1\right)-{\reflectbox{$\lambda$}}\left(\frac{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*2}}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}\right)-16 {\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\left({\upgamma }^{*}+1\right)\right]}{64\left\{\frac{1}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}+{\reflectbox{$\lambda$}}\left({\upgamma }^{*}+1\right)\right\}}>0$$
    (29)
    $$\frac{{\reflectbox{$\lambda$}}{S}_{y}^{2}\left[{S}_{y}^{2}\left\{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}+\frac{1}{4}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}-{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{220}^{*}+\frac{9}{4} \right\}{\Psi }_{1}+{\Psi }_{3} \right]}{{\Psi }_{1}}>0$$
    (30)
  7. (7)

    By taking (14) and (18)

    $$\mathrm{MSE}\left({\mathrm{T}}_{6}\right)-\mathrm{MSE}\left({\mathrm{T}}_{prop}\right)>0$$
    $$\frac{{S}_{y}^{4}\left[64\left\{{\mathcal{Y}}\right\}-{\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*3}-16 {\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*} \left\{{\mathcal{Y}}\right\}\right]}{64\left\{{\reflectbox{$\lambda$}}\left({\mathcal{Y}}\right)+{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\right\}}-\frac{{\reflectbox{$\lambda$}}{S}_{y}^{4}\left[64\left({\upgamma }^{*}+1\right)-{\reflectbox{$\lambda$}}\left(\frac{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*2}}{{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{400}^{*}}\right)-16 {\reflectbox{$\lambda$}}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}_{040}^{*}\left({\upgamma }^{*}+1\right)\right]}{64\left\{{\Psi }_{1}\right\}}>0.$$
    (31)
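
Conditions (19)–(31) can also be checked numerically for a particular population by plugging the moment ratios into the MSE expressions. A rough sketch (assuming the `min_mse_prop` helper defined in the previous section; the MSE expressions for the existing estimators follow Eqs. (2) and (4)) is:

```python
def mse_usual(Sy_sq, lam, l400):
    """Var(T0), eq. (2)."""
    return lam * Sy_sq**2 * l400

def mse_isaki(Sy_sq, lam, l400, l040, l220):
    """MSE(T1), eq. (4)."""
    return lam * Sy_sq**2 * (l400 + l040 - 2*l220)

def efficiency_conditions(Sy_sq, lam, l400, l040, l004, l220, l202, l022):
    """True for each existing estimator whose MSE exceeds the minimum MSE of T_prop."""
    prop = min_mse_prop(Sy_sq, lam, l400, l040, l004, l220, l202, l022)
    return {
        "T0": mse_usual(Sy_sq, lam, l400) - prop > 0,
        "T1": mse_isaki(Sy_sq, lam, l400, l040, l220) - prop > 0,
    }
```

The remaining comparisons, (23)–(31), are checked in exactly the same way by coding the corresponding MSE expressions for \({\mathrm{T}}_{2}\) through \({\mathrm{T}}_{6}\).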

Numerical study

In this section, we conduct a numerical study to compare the performance of our suggested estimator with that of the existing estimators. We consider four actual datasets that are sampled using simple random sampling; their descriptions are provided in Table 1. To compare the performance of the estimators, we use the percentage relative efficiency (PRE), a widely used measure in statistics.

Table 1 Summary statistics of populations I–IV.

To calculate the PRE, we use the following expression:

$$\mathrm{PRE}(.) = \frac{Var\left( {\mathrm{T}}_{u}\right)}{MSE\left({\mathrm{T}}_{prop}\right)} \times 100$$

where (u) = (0, 1, 2, 3, 4, 5, 6).
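
As a small illustration (our own code; the numerical values are placeholders, not results from the paper), the PRE computation amounts to a single ratio:

```python
def pre(mse_reference, mse_prop):
    """Percentage relative efficiency of the proposed estimator relative to T_u."""
    return 100.0 * mse_reference / mse_prop

# e.g. pre(15.2, 11.4) is roughly 133.3, i.e. the proposed estimator is about
# 33% more efficient than the reference estimator for these hypothetical values.
```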

The mean square errors (MSE) of the estimators for the four actual data sets are given in Table 2. The conditional values of the estimators are given in Table 3. The percentage relative efficiencies (PRE) for the four actual data sets are given in Table 4.

Table 2 Mean square error using populations I–IV.
Table 3 Conditional values using actual data sets.
Table 4 Percentage relative efficiency w.r.t \({\mathrm{T}}_{0}\) of populations I–IV.

Population 1: [Source: Murthy27]

Y: Output of the workshop,

\({X}_{1}\)=Fixed capital,

\({X}_{2}\)= Number of workers.

Population 2: [Source: Cochran28]

Y: Food cost,

\({X}_{1}\): size of the family,

and \({X}_{2}\)= Income.

Population 3: [Source: Singh29]

Y: Estimated fish caught in the year 1995,

\({X}_{1}\): Estimated fish caught in the year 1994,

\({X}_{2}\)= Estimated fish caught in the year 1993.

Population 4: [Source: Sukhathme30]

Y: Area under wheat in 1937,

\({X}_{1}\)= Area under wheat in 1936,

\({X}_{2}\)= Total cultivated area in 1931.

Simulation study

This section evaluates Tprop against the existing counterparts T0, T1, T2, T3, T4, T5, and T6 by simulation. The algorithm used to assess the performance of the various estimators relative to T0 is as follows (a computational sketch is given after the list):

  1. Generate two independent random variables, X from N\(\left( {\mu ,\sigma^{2} } \right)\) and Z from N\(\left( {\mu_{1} ,\sigma_{1}^{2} } \right)\), using the Box–Muller method.

  2. Set \(Y=\rho X+\sqrt{1-{\rho }^{2}}\,Z\), where \(\rho =0.5, 0.6, 0.7, 0.8\).

  3. Return the triplet (Y, X, Z).

  4. Generate population-I with the parameters \(\mu = 3,\) \(\sigma =2,\) \(\mu_{1} = 5\) and \(\sigma_{1} = 3\) in step 1 and repeat steps 1–3 500 times. This population has different variances for X and Z.

  5. Likewise, generate population-II with the parameters \(\mu =2, \sigma =3,\) \({\mu }_{1}=4\) and \(\sigma_{1} = 3\) in step 1 and repeat steps 1–3 500 times.

  6. From the population of size N = 50, draw 500 SRSWOR samples (\({y}_{i}\), \({x}_{i}\)), i = 1, 2, 3, …, n, of sizes n = 10, 15, and 20.

  7. The average MSE of the estimators is defined by:

     $$\mathrm{Average\; MSE}\left(T\right)=\frac{1}{500}\sum_{k=1}^{500}{\left({T}_{k}-{\mathrm{T}}_{0}\right)}^{2}$$

  8. The percentage relative efficiency of each estimator relative to the usual estimator T0 is defined by:

     $$\mathrm{PRE}\left(T\right)=\frac{\mathrm{Var}\left({\mathrm{T}}_{0}\right)}{\mathrm{MSE}\left(T\right)}\times 100$$

  9. The average PRE of the estimators is defined by:

     $$\mathrm{Average\; PRE}\left(T\right)=\frac{1}{500}\sum_{k=1}^{500}\mathrm{PRE}\left({T}_{k}\right)$$
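
A compact sketch of this procedure (Python/NumPy, written purely for illustration; the population size N is left as a parameter because the steps above state it in two different ways, and squared errors are taken against the true population variance \({S}_{y}^{2}\), the usual target) is given below. Only \({\mathrm{T}}_{0}\) and \({\mathrm{T}}_{1}\) are shown; the remaining estimators are added in the same way.

```python
import numpy as np

rng = np.random.default_rng(2023)

def simulate(mu, sigma, mu1, sigma1, rho, N=500, n=20, reps=500):
    """Generate (Y, X, Z), repeatedly draw SRSWOR samples, and
    accumulate MSEs and PREs of the estimators."""
    X = rng.normal(mu, sigma, N)                      # step 1
    Z = rng.normal(mu1, sigma1, N)
    Y = rho * X + np.sqrt(1.0 - rho**2) * Z           # step 2
    Sy2 = np.var(Y, ddof=1)                           # true population variance of Y
    Sx2 = np.var(X, ddof=1)                           # known auxiliary population variance
    sq_err = {"T0": 0.0, "T1": 0.0}
    for _ in range(reps):                             # 500 repeated samples (step 6)
        idx = rng.choice(N, size=n, replace=False)    # SRSWOR
        sy2 = np.var(Y[idx], ddof=1)
        sx2 = np.var(X[idx], ddof=1)
        est = {"T0": sy2, "T1": sy2 * Sx2 / sx2}      # add T2,...,T6 and T_prop analogously
        for name, value in est.items():
            sq_err[name] += (value - Sy2) ** 2
    mse = {name: total / reps for name, total in sq_err.items()}
    pre = {name: 100.0 * mse["T0"] / v for name, v in mse.items()}   # steps 7-9
    return mse, pre

# Population-I of step 4, with n = 20 and rho = 0.7:
# mse, pre = simulate(mu=3, sigma=2, mu1=5, sigma1=3, rho=0.7)
```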

The results of the simulation study are presented in Tables 5 and 6.

Table 5 Evaluation of Tprop with other existing counterparts for Population-I.
Table 6 Evaluation of Tprop with other existing counterparts for Population-II.

Discussion

As stated earlier, we use four actual data sets and a simulation study to examine the performance of the proposed enhanced variance estimator based on simple random sampling with two auxiliary variables. The proposed estimator is compared with the existing estimators in terms of MSE and PRE. The summary statistics of the actual data sets are presented in Table 1. The MSEs of the estimators are given in Table 2 and the conditional values of the estimators are given in Table 3. From the numerical results based on the actual data, with the PREs reported in Table 4, it is observed that the proposed estimator is the most efficient. We also evaluated the efficiency of the proposed variance estimator in simple random sampling using a simulation study. From Tables 5 and 6, it can be concluded that, for the different values of the correlation coefficient \(\rho\) = 0.5, 0.6, 0.7, 0.8 and the sample sizes n = 10, 15, 20, the proposed estimator Tprop is more efficient than the usual estimator and the other existing estimators. The results of the simulation study clearly show that the average PRE of the proposed estimator Tprop is higher than the average PRE of the existing estimators for both simulated data sets I and II. Hence its further use can be recommended.

Conclusion

In this article, we have suggested a novel variance estimator using two auxiliary variables under simple random sampling. We conducted a comprehensive comparison of the suggested estimator with several existing counterparts using four real data sets. We also derived the properties of the proposed estimator up to the first order of approximation and carried out a simulation study to assess its robustness and generalizability. The results showed that the suggested estimator outperformed the existing estimators in terms of efficiency. Future research could focus on extending the proposed estimator to two-phase sampling designs, and to situations with non-response and measurement error, where information from auxiliary variables can be utilized to estimate the population variance more accurately. Additionally, it would be interesting to investigate the performance of the proposed estimator in more complex survey settings, such as clustered and stratified sampling.