Introduction

The joint censoring method is highly advantageous and practical for comparative life tests of products from different units within the same facility. Suppose that two different lines within the same facility are producing products, and that two independent samples of sizes m and n are chosen at random from these two production lines and placed on a life-testing experiment at the same time. To save time and money, the experimenter uses a joint progressive type-II censoring strategy, and the life test is terminated when a prespecified number of failures occurs; see [1,2,3,4,5]. In the literature, many authors have studied the joint progressive type-II censoring scheme (JP-II-CS) and the associated inference methods. For example, [6] developed likelihood inference for two exponential distributions under the JP-II-CS, [7] investigated Bayes estimation under the JP-II-CS with the LINEX loss function, [8] provided likelihood inference for k exponential distributions under the JP-II-CS, and [9] introduced point and interval estimates of the Weibull parameters based on the JP-II-CS. The JP-II-CS of two populations was considered by [10], where the lifetime distributions of the experimental units in both populations follow two-parameter generalised exponential distributions, and [11] introduced statistical inference for the inverted exponentiated Rayleigh distribution under joint progressive type-II censoring. Also, [12] proposed the power Rayleigh distribution, which has been utilised for lifetime modelling in reliability analysis; [13] estimated the lifetime performance index of the power Rayleigh distribution under progressive first-failure censoring; [14] presented Bayesian and non-Bayesian methods for estimating the parameter of the Akshaya distribution; [15] introduced a new distribution, called the generalized power Akshaya distribution, and its applications; [16] discussed characteristics and applications of the extended cosine generalized family of distributions for reliability modeling; and [17] developed a novel, flexible modification of the log-logistic distribution to model the COVID-19 mortality rate. The power Rayleigh distribution has also been fitted to a wide range of observational data from a variety of fields, including meteorology, finance, and hydrology (see [18]). Moreover, [19] discussed an application of type-II half-logistic Weibull distribution inference to reliability analysis with bladder cancer data. The joint progressive censoring scheme is quite useful for comparing the lifetime distributions of products from different units manufactured by two different lines in the same facility. The joint progressive censoring (JPC) scheme introduced by Rasouli and Balakrishnan [6] can be briefly stated as follows. Two samples of products of sizes m and n, respectively, are selected from these two lines of operation (say Line 1 and Line 2), corresponding to two populations, Pop-1 and Pop-2, as shown in Figs. 1 and 2, and they are placed on a life-testing experiment simultaneously.

Figure 1

Case-I: kth failure comes from Pop-1.

Figure 2

Case-II: kth failure comes from Pop-2.

The power Rayleigh distribution accommodates extreme values, with application to flood frequency analysis; the JP-II-CS is described by [20]. The Rayleigh family has been extended in several directions: the generalized Rayleigh distribution was introduced by [12], [21] and [22] discussed the log Rayleigh distribution, [23] derived the beta generalized Rayleigh distribution, the Weibull Rayleigh distribution was introduced by [24], and [25] introduced the exponentiated Rayleigh distribution. Several authors have considered further extensions of the Rayleigh distribution, such as the inverse Rayleigh distribution by [26], the weighted inverse Rayleigh distribution by [27], and the transmuted Rayleigh distribution by [28]. The quality of the procedures used in statistical analysis depends heavily on the assumed probability model or distribution. Let \(X_{1},X_{2},\ldots ,X_{m}\) represent the lifetimes of m units of product A, assumed to be independent and identically distributed (iid) random variables from the power Rayleigh distribution with cumulative distribution function (cdf)

$$\begin{aligned} F(x; \alpha _{1}, \beta _{1})= 1- e^{\frac{-x^{2\beta _{1}}}{\alpha _{1}^{2}}}, \ \ \ \ \ \ \ \ \ \ x>0, \alpha _{1}, \beta _{1}>0, \end{aligned}$$
(1)

and probability density function (pdf) is

$$\begin{aligned} f(x; \alpha _{1}, \beta _{1})=\frac{\beta _{1}}{\alpha _{1}^{2}} x^{2\beta _{1}-1} e^{\frac{-x^{2\beta _{1}}}{\alpha _{1}^{2}}}, \ \ \ \ \ \ \ \ \ x>0, \alpha _{1}, \beta _{1}>0. \end{aligned}$$
(2)

Similarly, let \(Y_{1},Y_{2},\ldots ,Y_{n}\) denote the lifetimes of n units of product B, assumed to be iid random variables from the power Rayleigh distribution with cdf given by

$$\begin{aligned} G(y; \alpha _{2}, \beta _{2})= 1- e^{-\frac{y^{2\beta _{2}}}{\alpha _{2}^{2}}}, \ \ \ \ \ \ \ \ \ \ y>0, \alpha _{2}, \beta _{2}>0, \end{aligned}$$
(3)

and probability density function (pdf) is

$$\begin{aligned} g(y; \alpha _{2}, \beta _{2})=\frac{\beta _{2}}{\alpha _{2}^{2}} y^{2\beta _{2}-1} e^{-\frac{y^{2\beta _{2}}}{\alpha _{2}^{2}}}, \ \ \ \ \ \ \ \ \ \ y>0, \alpha _{2}, \beta _{2}>0, \end{aligned}$$
(4)

where \(\beta _{1}\) and \(\beta _{2}\) are the shape parameters and \(\alpha _{1}\) and \(\alpha _{2}\) are the scale parameters. Let \(K = m+n\) denote the total sample size and \(\lambda _{1}\le \lambda _{2} \le \ldots \le \lambda _{K}\) denote the order statistics of the K random variables \(\{X_{1},X_{2},\ldots ,X_{m},Y_{1},Y_{2},\ldots ,Y_{n}\}\). The JP-II-CS is applied as follows. At the time of the first failure, \(R_{1}\) units are randomly removed from the remaining \(K - 1\) surviving units. Similarly, at the time of the second failure, \(R_{2}\) units are randomly withdrawn from the remaining \(K - R_{1} - 2\) surviving units, and so on. Finally, at the time of the rth failure, all remaining \(R_{r} = K- r -\sum _{i=1}^{r-1} R_{i}\) surviving units are withdrawn from the life-testing experiment. The JP-II-CS \(R = (R_1, R_2,\ldots ,R_r)\) and the total number of failures r are fixed before the experiment. Write \(R_i = S_i + T_i\), \(i = 1,\ldots ,r\), where \(S_i\) and \(T_i\) denote the numbers of units withdrawn at the time of the ith failure that belong to the X and Y samples, respectively; these are unknown random variables. The observed data consist of \((H, \lambda , S)\), where \(H = (H_1, H_2,\ldots ,H_r)\) with \(H_i = 1\) or 0 according to whether \(\lambda _i\) is an X- or a Y-failure, \(\lambda = (\lambda _1, \lambda _2,\ldots ,\lambda _r)\) with \(r < K\), and \(S = (S_1, S_2,\ldots ,S_r)\).
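To make this sampling mechanism concrete, the following Python sketch (our own illustration, not code from the paper) generates one JP-II-CS sample \((H, \lambda , S, T)\) from two power Rayleigh populations. Lifetimes are drawn by inverting the cdf in Eq. (1), and each withdrawal is split between the two risk sets by a hypergeometric draw, which is equivalent to removing \(R_i\) survivors at random from the pooled risk set; all function and variable names are ours.

```python
import numpy as np

def rpower_rayleigh(size, alpha, beta, rng):
    """Inverse-cdf draws from F(x) = 1 - exp(-x^(2*beta) / alpha^2), Eq. (1)."""
    u = rng.uniform(size=size)
    return (-alpha**2 * np.log1p(-u)) ** (1.0 / (2.0 * beta))

def jpc_sample(m, n, R, alpha1, beta1, alpha2, beta2, rng):
    """One joint progressive type-II censored sample; assumes a valid scheme R."""
    x = list(rpower_rayleigh(m, alpha1, beta1, rng))  # Pop-1 risk set
    y = list(rpower_rayleigh(n, alpha2, beta2, rng))  # Pop-2 risk set
    H, lam, S, T = [], [], [], []
    for Ri in R:
        # observe the next failure from the pooled risk set
        if x and (not y or min(x) < min(y)):
            H.append(1); lam.append(min(x)); x.remove(min(x))
        else:
            H.append(0); lam.append(min(y)); y.remove(min(y))
        # withdraw Ri survivors at random; the split (s_i, t_i) between the
        # X- and Y-samples is then a hypergeometric draw
        s_i = int(rng.hypergeometric(len(x), len(y), Ri)) if Ri > 0 else 0
        t_i = Ri - s_i
        for _ in range(s_i):
            x.remove(rng.choice(x))
        for _ in range(t_i):
            y.remove(rng.choice(y))
        S.append(s_i); T.append(t_i)
    return np.array(H), np.array(lam), np.array(S), np.array(T)

rng = np.random.default_rng(2024)
H, lam, S, T = jpc_sample(20, 20, [0] * 9 + [30], 0.5, 2.5, 0.6, 2.69, rng)
```

A valid scheme must satisfy \(r+\sum _{i=1}^{r}R_{i}=m+n\), as required by the definition of \(R_r\) above; the example uses \(m=n=20\) and \(r=10\) with all removals at the last failure.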

In this research, the lifetime distributions of the experimental units in the two populations follow two-parameter power Rayleigh distributions with different scale and shape parameters. We investigate both likelihood and Bayesian inference for the unknown model parameters. The maximum likelihood estimators (MLEs) of the unknown parameters are obtained by solving a four-dimensional optimization problem, which can be tackled with the Newton-Raphson approach. In that case the Hessian matrix must be computed, which may not be available in a convenient form; furthermore, the traditional Newton-Raphson approach may not be suitable for small effective sample sizes.

The paper is organized as follows. The maximum likelihood estimators (MLEs) of the unknown parameters of the power Rayleigh distribution are derived in the “Maximum likelihood estimation” section, and approximate confidence intervals (ACIs) based on the MLEs are presented in the “Asymptotic confidence intervals” section. The Bayesian analysis is carried out in the “Bayesian method” section. In the “Application of real data” section, we examine a real data set to demonstrate the estimation methods presented in this paper. The simulation results are shown in the “Simulation” section, and the paper ends with a brief conclusion in the “Conclusion” section.

Maximum likelihood estimation

Assume that \(X_1, X_2, \ldots , X_m\) are independent and identically distributed (i.i.d.) power Rayleigh random variables representing the lifetimes of m units of product A. Similarly, \(Y_1, Y_2, \ldots\), and \(Y_n\) are assumed to be i.i.d. power Rayleigh random variables denoting the lifetimes of n units of product B. According to Rasouli and Balakrishnan [6], the likelihood function of \((S,H,\lambda )\) can be written as

$$\begin{aligned} L(\alpha _{1},\alpha _{2}, \beta _{1}, \beta _{2}; H, \lambda ,S)= & {} C\prod \limits _{i=1}^{r}\left[ \left( f(\lambda _{i})\right) ^{h_{i}} \left( g(\lambda _{i})\right) ^{1-h_{i}}\right] \left[ \left( \bar{F}(\lambda _{i})\right) ^{s_{i}} \left( \bar{G}(\lambda _{i})\right) ^{t_{i}}\right] , \end{aligned}$$
(5)

where \(\lambda _{1}\le \lambda _{2}\le \ldots \le \lambda _{r},\) \(\bar{F}=1-F,\) \(\bar{G}=1-G,\) \(m_{r}=\sum _{i=1}^{r} h_{i}\) and \(n_{r}=r-m_{r}\) denote the numbers of observed failures from the X- and Y-samples, \(\sum _{i=1}^{r} s_{i} = m-m_{r},\) \(\sum _{i=1}^{r} t_{i} = n-n_{r},\) \(\sum _{i=1}^{r} R_{i} = \sum _{i=1}^{r} s_{i} + \sum _{i=1}^{r} t_{i}\), and \(C=D_{1} D_{2}\) with

$$\begin{aligned} D_{1}= & {} \prod \limits _{j=1}^{r}\left[ \left( m-\sum _{i=1}^{j-1} h_{i}-\sum _{i=1}^{j-1} s_{i}\right) h_{j}+\left( n-\sum _{i=1}^{j-1} (1-h_{i})-\sum _{i=1}^{j-1} t_{i}\right) (1-h_{j})\right] ,\nonumber \\ D_{2}= & {} \prod \limits _{j=1}^{r}\left( \frac{\left( {\begin{array}{c}m-\sum _{i=1}^{j-1}h_{i}-\sum _{i=1}^{j-1}s_{i}\\ s_{j}\end{array}}\right) \left( {\begin{array}{c}n-\sum _{i=1}^{j-1}(1- h_{i})-\sum _{i=1}^{j-1}t_{i}\\ t_{j}\end{array}}\right) }{\left( {\begin{array}{c} m+n-j-\sum _{i=1}^{j-1} R_{i}\\ R_{j}\end{array}}\right) }\right) .\nonumber \\ L(\alpha _{1},\alpha _{2}, \beta _{1}, \beta _{2})= & {} C \left( \frac{\beta _{1}}{\alpha _{1}^2}\right) ^{m_{r}} \left( \frac{\beta _{2}}{\alpha _{2}^2}\right) ^{n_{r}} \prod \limits _{i=1}^{r}\lambda _{i}^{(2\beta _{1}-1)h_{i}}e^{\frac{-h_{i} \lambda _{i}^{2\beta _{1}}}{2\alpha _{1}^{2}}} \lambda _{i}^{(2\beta _{2}-1)(1-h_{i})}e^{\frac{-(1-h_{i})\lambda _{i}^{2\beta _{2}}}{2\alpha _{2}^{2}}} e^{\frac{-s_{i}\lambda _{i}^{2\beta _{1}}}{\alpha _{1}^{2}}}e^{\frac{-t_{i}\lambda _{i}^{2\beta _{2}}}{\alpha _{2}^{2}}}. \end{aligned}$$
(6)

As a result, the log-likelihood function can be written as:

$$\begin{aligned} \ell (\alpha _{1},\alpha _{2}, \beta _{1}, \beta _{2}; H, \lambda ,S)= & {} m_{r}\log \beta _{1}-2 m_{r}\log \alpha _{1}+ n_{r}\log \beta _{2}-2 n_{r}\log \alpha _{2} +\sum _{i=1}^{r}(2\beta _{1}-1)h_{i}\log \lambda _{i}\nonumber \\- & {} \sum _{i=1}^{r}\frac{h_{i}\lambda _{i}^{2\beta _{1}}}{2\alpha _{1}^{2}} +\sum \limits _{i=1}^{r}(2\beta _{2}-1)(1-h_{i})\log \lambda _{i}-\sum \limits _{i=1}^{r} \frac{(1-h_{i})\lambda _{i}^{2\beta _{2}}}{2\alpha _{2}^{2}}\nonumber \\- & {} \sum _{i=1}^{r}\frac{s_{i}\lambda _{i}^{2\beta _{1}}}{\alpha _{1}^{2}}-\sum _{i=1}^{r} \frac{t_{i}\lambda _{i}^{2\beta _{2}}}{\alpha _{2}^{2}}. \end{aligned}$$
(7)

To estimate the unknown parameters, we take the first partial derivatives of Eq. (7) with respect to \(\alpha _{1},\alpha _{2}, \beta _{1}\), and \(\beta _{2}\), which are given by

$$\begin{aligned} \frac{\partial \ell }{\partial \alpha _{1} }= & {} \frac{-2m_{r}}{\alpha _{1} }+\sum \limits _{i=1}^{r}\frac{h_{i}\lambda _{i}^{2\beta _{1}}}{ \alpha _{1}^{3}}+2\sum \limits _{i=1}^{r}\frac{s_{i}\lambda _{i}^{2\beta _{1}}}{ \alpha _{1}^{3}}, \end{aligned}$$
(8)
$$\begin{aligned} \frac{\partial \ell }{\partial \alpha _{2} }= & {} \frac{-2n_{r}}{\alpha _{2} }+\sum \limits _{i=1}^{r}\frac{(1-h_{i})\lambda _{i}^{2\beta _{2}}}{ \alpha _{2}^{3}}+2\sum \limits _{i=1}^{r}\frac{t_{i}\lambda _{i}^{2\beta _{2}}}{ \alpha _{2}^{3}}, \end{aligned}$$
(9)
$$\begin{aligned} \frac{\partial \ell }{\partial \beta _{1} }= & {} \frac{m_{r}}{\beta _{1} }+2\sum \limits _{i=1}^{r}h_{i}\log \lambda _{i} -\sum \limits _{i=1}^{r}\frac{h_{i}\lambda _{i}^{2\beta _{1}}}{ \alpha _{1}^{2}}\log \lambda _{i} -2 \sum \limits _{i=1}^{r}\frac{s_{i}\lambda _{i}^{2\beta _{1}}}{ \alpha _{1}^{2}}\log \lambda _{i}, \end{aligned}$$
(10)

and

$$\begin{aligned} \frac{\partial \ell }{\partial \beta _{2} }=\frac{n_{r}}{\beta _{2} }+2\sum \limits _{i=1}^{r}(1-h_{i})\log \lambda _{i} -\sum \limits _{i=1}^{r}\frac{(1-h_{i})\lambda _{i}^{2\beta _{2}}}{ \alpha _{2}^{2}}\log \lambda _{i} -2 \sum \limits _{i=1}^{r}\frac{t_{i}\lambda _{i}^{2\beta _{2}}}{ \alpha _{2}^{2}}\log \lambda _{i}. \end{aligned}$$
(11)

The system of normal equations \(\frac{\partial \ell }{\partial \alpha _{1} }=0\), \(\frac{\partial \ell }{\partial \alpha _{2} }=0\), \(\frac{\partial \ell }{\partial \beta _{1} }=0\), and \(\frac{\partial \ell }{\partial \beta _{2} }=0\) has no closed-form solution, so numerical techniques are used to estimate the unknown parameters \(\alpha _{1},\alpha _{2}, \beta _{1}\), and \(\beta _{2}\).
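As a concrete illustration, the following Python sketch maximizes the log-likelihood in Eq. (7) numerically with SciPy; the constant \(\log C\) is omitted since it does not involve the parameters, and Nelder-Mead is used so that no Hessian computation is required. This is a sketch under the stated model, not the authors' code; the function names and starting values are ours.

```python
import numpy as np
from scipy.optimize import minimize

def loglik(theta, H, lam, S, T):
    """Log-likelihood of Eq. (7), up to the additive constant log C."""
    a1, a2, b1, b2 = theta
    mr, nr = H.sum(), (1 - H).sum()          # numbers of X- and Y-failures
    return (mr * np.log(b1) - 2 * mr * np.log(a1)
            + nr * np.log(b2) - 2 * nr * np.log(a2)
            + ((2 * b1 - 1) * H * np.log(lam)).sum()
            - (H * lam ** (2 * b1)).sum() / (2 * a1 ** 2)
            + ((2 * b2 - 1) * (1 - H) * np.log(lam)).sum()
            - ((1 - H) * lam ** (2 * b2)).sum() / (2 * a2 ** 2)
            - (S * lam ** (2 * b1)).sum() / a1 ** 2
            - (T * lam ** (2 * b2)).sum() / a2 ** 2)

def mle(H, lam, S, T, start=(1.0, 1.0, 1.0, 1.0)):
    """MLEs of (alpha1, alpha2, beta1, beta2) by direct maximization."""
    res = minimize(lambda th: -loglik(th, H, lam, S, T), start,
                   method="Nelder-Mead", bounds=[(1e-6, None)] * 4)
    return res.x
```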

Asymptotic confidence intervals

The maximum likelihood estimators of the parameters cannot be obtained in analytic form, so their exact distributions cannot be derived. However, we can use the asymptotic distribution of the maximum likelihood estimators to construct confidence intervals for the unknown parameters \(\alpha _{1},\alpha _{2}, \beta _{1}\), and \(\beta _{2}\).

The \(100(1-\gamma )\%\) CIs for \(\alpha _{1},\alpha _{2},\beta _{1}\), and \(\beta _{2}\) can be calculated using the asymptotic normality of the maximum likelihood estimators together with Var(\(\hat{\alpha _{1}}_{ML}\)), Var(\(\hat{\alpha _{2}}_{ML}\)), Var(\(\hat{\beta _{1}}_{ML}\)), and Var(\(\hat{\beta _{2}}_{ML}\)). The second partial derivatives of the log-likelihood function in Eq. (7) with respect to \(\alpha _{1},\alpha _{2},\beta _{1}\), and \(\beta _{2}\) are

$$\begin{aligned} \frac{\partial ^{2}\ell }{\partial \alpha _{1}^{2}}&=\frac{2m_{r}}{\alpha _{1}^{2} }-3\sum \limits _{i=1}^{r}\frac{h_{i}\lambda _{i}^{2\beta _{1}}}{ \alpha _{1}^{4}}-6\sum \limits _{i=1}^{r}\frac{s_{i}\lambda _{i}^{2\beta _{1}}}{ \alpha _{1}^{4}}, \\ \frac{\partial ^{2}\ell }{\partial \alpha _{1}\partial \beta _{1}}&=\frac{\partial ^{2}\ell }{\partial \beta _{1}\partial \alpha _{1}}=2\sum \limits _{i=1}^{r}\frac{h_{i}\lambda _{i}^{2\beta _{1}} \log \lambda _{i}}{ \alpha _{1}^{3}}+4\sum \limits _{i=1}^{r}\frac{s_{i}\lambda _{i}^{2\beta _{1}}\log \lambda _{i}}{ \alpha _{1}^{3}}, \\ \frac{\partial ^{2}\ell }{\partial \alpha _{2}^{2}}&=\frac{2n_{r}}{\alpha _{2}^{2} }-3\sum \limits _{i=1}^{r}\frac{(1-h_{i})\lambda _{i}^{2\beta _{2}}}{ \alpha _{2}^{4}}-6\sum \limits _{i=1}^{r}\frac{t_{i}\lambda _{i}^{2\beta _{2}}}{ \alpha _{2}^{4}}, \\ \frac{\partial ^{2}\ell }{\partial \alpha _{2}\partial \beta _{2}}&=\frac{\partial ^{2}\ell }{\partial \beta _{2}\partial \alpha _{2}}=2\sum \limits _{i=1}^{r}\frac{(1-h_{i})\lambda _{i}^{2\beta _{2}} \log \lambda _{i}}{ \alpha _{2}^{3}}+4\sum \limits _{i=1}^{r}\frac{t_{i}\lambda _{i}^{2\beta _{2}}\log \lambda _{i}}{ \alpha _{2}^{3}},\\ \frac{\partial ^{2}\ell }{\partial \beta _{1}^{2}}&=\frac{-m_{r}}{\beta _{1}^{2} }-2\sum \limits _{i=1}^{r}\frac{h_{i}\lambda _{i}^{2\beta _{1}}(\log \lambda _{i})^{2}}{ \alpha _{1}^{2}}-4\sum \limits _{i=1}^{r}\frac{s_{i}\lambda _{i}^{2\beta _{1}}(\log \lambda _{i})^{2}}{ \alpha _{1}^{2}}, \\ \frac{\partial ^{2}\ell }{\partial \beta _{2}^{2}}&=\frac{-n_{r}}{\beta _{2}^{2} }-2\sum \limits _{i=1}^{r}\frac{(1-h_{i})\lambda _{i}^{2\beta _{2}}(\log \lambda _{i})^{2}}{ \alpha _{2}^{2}}-4\sum \limits _{i=1}^{r}\frac{t_{i}\lambda _{i}^{2\beta _{2}}(\log \lambda _{i})^{2}}{ \alpha _{2}^{2}}, \\ \frac{\partial ^{2}\ell }{\partial \alpha _{1}\partial \alpha _{2}}&=\frac{\partial ^{2}\ell }{\partial \alpha _{1}\partial \beta _{2}}=\frac{\partial ^{2}\ell }{\partial \alpha _{2}\partial \beta _{1}}=\frac{\partial ^{2}\ell }{\partial \beta _{1}\partial \beta _{2}}=0. \end{aligned}$$

The Fisher information matrix is \(I_{ij}=E\left[ -\partial ^{2}\ell /\partial \phi _{i}~\partial \phi _{j}\right]\), \(i,j=1,2,3,4\), where \(\phi =\left( \phi _{1},\phi _{2},\phi _{3}, \phi _{4}\right) =\left( \alpha _1,\alpha _2,\beta _{1}, \beta _{2}\right)\). In practice, the expectation is dropped and the second derivatives are evaluated at the MLEs, giving the observed Fisher information matrix \(\hat{I}\).

Hence, the observed information matrix is given by

$$\begin{aligned} \hat{I}\left( \alpha _{1},\alpha _{2},\beta _{1}, \beta _{2}\right) = \begin{pmatrix} -\frac{\partial ^{2}\ell }{\partial \alpha _{1}^{2}} &{} -\frac{\partial ^{2}\ell }{ \partial \alpha _{1}\partial \alpha _{2}} &{} -\frac{\partial ^{2}\ell }{\partial \alpha _{1}\partial \beta _{1}} &{} -\frac{\partial ^{2}\ell }{\partial \alpha _{1}\partial \beta _{2}} \\ -\frac{\partial ^{2}\ell }{\partial \alpha _{2}\partial \alpha _{1}} &{} -\frac{\partial ^{2}\ell }{\partial \alpha _{2}^{2}} &{} -\frac{\partial ^{2}\ell }{\partial \alpha _{2}\partial \beta _{1}} &{} -\frac{\partial ^{2}\ell }{\partial \alpha _{2}\partial \beta _{2}} \\ -\frac{\partial ^{2}\ell }{\partial \beta _{1}\partial \alpha _{1}} &{} -\frac{\partial ^{2}\ell }{\partial \beta _{1}\partial \alpha _{2}} &{} -\frac{\partial ^{2}\ell }{ \partial \beta _{1}^{2}}&{}-\frac{\partial ^{2}\ell }{\partial \beta _{1}\partial \beta _{2}}\\ -\frac{\partial ^{2}\ell }{\partial \beta _{2}\partial \alpha _{1}} &{} -\frac{\partial ^{2}\ell }{\partial \beta _{2}\partial \alpha _{2}} &{} -\frac{\partial ^{2}\ell }{ \partial \beta _{2}\partial \beta _{1}}&{}-\frac{\partial ^{2}\ell }{\partial \beta _{2}^{2}} \end{pmatrix}. \end{aligned}$$

Therefore, inverting the observed information matrix \(\hat{I}\left( \alpha _1,\alpha _2,\beta _{1}, \beta _{2}\right)\) yields the asymptotic variance-covariance matrix of the MLEs, where \(\hat{I}^{-1}\left( \alpha _1,\alpha _2,\beta _{1}, \beta _{2}\right)\) is given by

$$\begin{aligned} \hat{I}^{-1}\left( \alpha _1,\alpha _2,\beta _{1},\beta _{2}\right) =\left( \begin{array}{cccc} \widehat{var(\alpha _{1})} &{} \quad cov(\alpha _{1},\alpha _{2}) &{} \quad cov(\alpha _{1},\beta _{1}) &{} \quad cov(\alpha _{1},\beta _{2}) \\ cov(\alpha _{2},\alpha _{1}) &{} \quad \widehat{var(\alpha _{2})} &{} \quad cov(\alpha _{2},\beta _{1}) &{} \quad cov(\alpha _{2},\beta _{2}) \\ cov(\beta _{1},\alpha _{1}) &{} \quad cov(\beta _{1},\alpha _{2}) &{} \quad \widehat{var(\beta _{1})} &{} \quad cov(\beta _{1},\beta _{2})\\ cov(\beta _{2},\alpha _{1}) &{} \quad cov(\beta _{2},\alpha _{2}) &{} \quad cov(\beta _{2},\beta _{1}) &{} \quad \widehat{var(\beta _{2})} \end{array} \right) . \end{aligned}$$

Thus, the \(100(1-\gamma )\%\) normal approximate CIs for \(\left( \alpha _1,\alpha _2,\beta _{1}, \beta _{2}\right)\) are

$$\begin{aligned} \widehat{\alpha _{1}}\pm Z_{\frac{\gamma }{2}}\sqrt{\widehat{var(\alpha _{1})}},\ \widehat{\alpha _{2}}\pm Z_{\frac{\gamma }{2}}\sqrt{\widehat{var(\alpha _{2})}},\ \widehat{\beta _{1}} \ \pm Z_{\frac{\gamma }{2}}\sqrt{\widehat{var(\beta _{1} )}} \ \text { and }\widehat{\beta _{2}} \ \pm Z_{\frac{\gamma }{2}}\sqrt{\widehat{var(\beta _{2} )}}. \end{aligned}$$
(12)

where \(Z_{\frac{\gamma }{2}}\) is the percentile of the standard normal distribution with right-tail probability \(\frac{\gamma }{2}\).
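Computationally, the observed information matrix can also be approximated by finite differences of the log-likelihood instead of coding the second derivatives above; the sketch below builds the ACIs of Eq. (12) this way, reusing loglik from the earlier sketch. The step size and function names are illustrative, and negative lower bounds are truncated at zero since all four parameters are positive.

```python
import numpy as np
from scipy.stats import norm

def num_hessian(f, x, h=1e-4):
    """Central-difference Hessian of a scalar function f at the point x."""
    k = len(x)
    Hm = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            ei, ej = np.eye(k)[i] * h, np.eye(k)[j] * h
            Hm[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                        - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return Hm

def acis(theta_hat, H, lam, S, T, gamma=0.05):
    """Normal-approximation CIs of Eq. (12) from the observed information."""
    th = np.asarray(theta_hat, dtype=float)
    I_obs = -num_hessian(lambda t: loglik(t, H, lam, S, T), th)
    cov = np.linalg.inv(I_obs)            # asymptotic variance-covariance matrix
    se = np.sqrt(np.diag(cov))
    z = norm.ppf(1 - gamma / 2)           # Z_{gamma/2}
    return np.maximum(th - z * se, 0.0), th + z * se
```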

We now introduce another method to estimate the unknown parameters, namely the Bayesian technique. Bayesian analysis is a well-established tool for estimating unknown parameters and has several advantages over other inferential methods; in particular, it allows prior knowledge about the parameters to be incorporated into the analysis.

Bayesian method

This section contains the Bayesian estimates of the unknown parameters \(\alpha _{1},\ \alpha _{2}, \beta _{1}\), and \(\beta _{2}\) of the power Rayleigh distribution based on the JP-II-CS. Prior knowledge is incorporated through prior distributions; here we assume that the four parameters \(\alpha _{1}, \alpha _{2}, \beta _{1}\), and \(\beta _{2}\) are random variables having independent gamma priors:

$$\begin{aligned} \begin{array}{ll} \pi _{1}\left( \alpha _{1}\right) \propto \alpha _{1}^{a_{1}-1}e^{-b_{1}\alpha _{1}}, &{}\quad \quad \alpha _{1}>0, a_{1}, b_{1}>0, \\ \pi _{2}\left( \alpha _{2}\right) \propto \alpha _{2}^{a_{2}-1} e^{-b_{2}\alpha _{2}},&{}\quad \quad \alpha _{2}>0, a_{2}, b_{2}>0, \\ \pi _{3}\left( \beta _{1} \right) \propto \beta _{1}^{a_{3}-1}e^{-b_{3}\beta _{1} },&{}\quad \quad \beta _{1}>0, a_{3}, b_{3}>0, \\ \pi _{4}\left( \beta _{2} \right) \propto \beta _{2}^{a_{4}-1}e^{-b_{4}\beta _{2} },&{}\quad \quad \beta _{2}>0, a_{4}, b_{4}>0, \end{array} \end{aligned}$$

where \(a_i, b_i,\ i=1,2,3,4\) are assumed known and are chosen to reflect prior knowledge about the unknown parameters. As a result, the joint prior density is given by

$$\begin{aligned} \pi \left( \alpha _{1},\ \alpha _{2},\ \beta _{1}, \ \beta _{2} \right) \propto \alpha _{1}^{a_{1}-1}\alpha _{2}^{a_{2}-1}\beta _{1}^{a_{3}-1}\beta _{2}^{a_{4}-1}e^{-b_{1} \alpha _{1}-b_{2}\alpha _{2}-b_{3}\beta _{1}-b_{4}\beta _{2}}. \end{aligned}$$
(13)

The posterior distribution of the parameters \(\alpha _{1}, \alpha _{2}, \beta _{1}\), and \(\beta _{2}\), denoted by \(\pi ^{*}\left( \alpha _{1},\ \alpha _{2},\ \beta _{1}, \ \beta _{2} \mid H, \lambda , S\right)\), is obtained by combining the likelihood function in Eq. (6) with the joint prior via Bayes' theorem and can be written as

$$\begin{aligned} \pi ^{*}\left( \alpha _{1},\ \alpha _{2},\ \beta _{1}, \ \beta _{2} \mid H, \lambda , S\right) =\frac{ \pi \left( \alpha _{1},\ \alpha _{2},\ \beta _{1},\beta _{2} \right) ~L(\alpha _{1},\ \alpha _{2},\ \beta _{1}, \beta _{2} \mid H, \lambda , S)}{ \int \limits _{0}^{\infty }\int \limits _{0}^{\infty }\int \limits _{0}^{\infty }\int \limits _{0}^{\infty }\pi _{1}\left( \alpha _{1}\right) \pi _{2}\left( \alpha _{2}\right) \pi _{3}\left( \beta _{1} \right) \pi _{4}\left( \beta _{2} \right) L(\alpha _{1},\ \alpha _{2},\ \beta _{1}, \beta _{2} \mid H, \lambda , S)\,d\alpha _{1}\,d\alpha _{2}\,d\beta _{1}\,d\beta _{2}}. \end{aligned}$$
(14)

From Eq. (14), the joint posterior can be written, up to proportionality, as

$$\begin{aligned}{} & {} \pi ^{*}\left( \alpha _{1},\alpha _{2},\beta _{1}, \beta _{2} \mid H, \lambda , S\right) \propto \alpha _{1}^{-2m_{r}+a_{1}-1}\alpha _{2}^{-2n_{r}+a_{2}-1}\beta _{1}^{m_{r} +a_{3}-1}\beta _{2}^{n_{r}+a_{4}-1}e^{-b_{1} \alpha _{1}- b_{2}\alpha _{2}-b_{3}\beta _{1}-b_{4}\beta _{2}}\nonumber \\{} & {} \quad \prod \limits _{i=1}^{r}\lambda _{i}^{(2\beta _{1}-1)h_{i}}e^{-\sum _{i=1}^{r}\frac{h_{i}\lambda _{i}^{2\beta _{1}}}{2\alpha _{1}^{2}}} \prod \limits _{i=1}^{r}\lambda _{i}^{(2\beta _{2}-1)(1-h_{i})}e^{-\sum _{i=1}^{r}\frac{(1-h_{i})\lambda _{i}^{2\beta _{2}}}{2\alpha _{2}^{2}}}\nonumber \\{} & {} \quad e^{-\sum _{i=1}^{r}\frac{s_{i}\lambda _{i}^{2\beta _{1}}}{\alpha _{1}^{2}}} e^{-\sum _{i=1}^{r}\frac{t_{i} \lambda _{i}^{2\beta _{2}}}{\alpha _{2}^{2}}}. \end{aligned}$$
(15)

We emphasize that Eq. (15) cannot be handled analytically, because closed forms of the marginal posterior distributions of the individual parameters cannot be obtained. As a result, we propose using the Markov chain Monte Carlo (MCMC) technique [29] to generate samples from the posterior distribution, evaluate the Bayes estimators of the unknown parameters under the squared error (SE) and linear exponential (LINEX) loss functions, and construct the corresponding CRIs. Abushal et al. [30], EL-Sagheer and Hasaballah [31], Parsi and Bairamov [32], and Metropolis et al. [33] are just a few of the studies that worked with the MCMC technique. From Eq. (15), the conditional posterior density functions of \(\alpha _{1}, \alpha _{2}, \beta _{1}\), and \(\beta _{2}\) can be obtained up to proportionality; to simplify the notation, we write \(\pi ^{*}_{1}\left( \alpha _{1}\right) , \pi ^{*}_{2}\left( \alpha _{2}\right) , \pi ^{*}_{3}\left( \beta _{1} \right)\), and \(\pi ^{*}_{4}\left( \beta _{2} \right)\) instead of \(\pi ^{*}_{1}\left( \alpha _{1}\mid \alpha _{2},\beta _{1}, \beta _{2}, H, \lambda , S\right)\), \(\pi ^{*}_{2}\left( \alpha _{2}\mid \alpha _{1},\beta _{1}, \beta _{2}, H, \lambda , S\right)\), \(\pi ^{*}_{3}\left( \beta _{1}\mid \alpha _{1},\alpha _{2}, \beta _{2}, H, \lambda , S \right)\), and \(\pi ^{*}_{4}\left( \beta _{2}\mid \alpha _{1},\alpha _{2},\beta _{1}, H, \lambda , S \right)\), respectively:

$$\begin{aligned}{} & {} \pi _{1}^{*}\left( \alpha _{1}\right) \propto \alpha _{1}^{-2m_{r}+a_{1}-1}e^{-b_{1} \alpha _{1}} e^{-\sum _{i=1}^{r}\frac{h_{i}\lambda _{i}^{2\beta _{1}}}{2\alpha _{1}^{2}}}e^{-\sum _{i=1}^{r}\frac{s_{i}\lambda _{i}^{2\beta _{1}}}{\alpha _{1}^{2}}}, \end{aligned}$$
(16)
$$\begin{aligned}{} & {} \pi _{2}^{*}\left( \alpha _{2}\right) \propto \alpha _{2}^{-2n_{r}+a_{2}-1}e^{-b_{2} \alpha _{2}} e^{-\sum _{i=1}^{r}\frac{(1-h_{i})\lambda _{i}^{2\beta _{2}}}{2\alpha _{2}^{2}}}e^{-\sum _{i=1}^{r}\frac{t_{i}\lambda _{i}^{2\beta _{2}}}{\alpha _{2}^{2}}}, \end{aligned}$$
(17)
$$\begin{aligned}{} & {} \pi _{3}^{*}\left( \beta _{1}\right) \propto \beta _{1}^{m_{r}+a_{3}-1}e^{-b_{3} \beta _{1}} \prod \limits _{i=1}^{r}\lambda _{i}^{(2\beta _{1}-1)h_{i}} e^{-\sum _{i=1}^{r}\frac{h_{i}\lambda _{i}^{2\beta _{1}}}{2\alpha _{1}^{2}}} e^{-\sum _{i=1}^{r}\frac{s_{i} \lambda _{i}^{2\beta _{1}}}{\alpha _{1}^{2}}}, \end{aligned}$$
(18)

and

$$\begin{aligned} \pi _{4}^{*}\left( \beta _{2}\right) \propto \beta _{2}^{n_{r}+a_{4}-1}e^{-b_{4} \beta _{2}} \prod \limits _{i=1}^{r}\lambda _{i}^{(2\beta _{2}-1) (1-h_{i})}e^{-\sum _{i=1}^{r}\frac{(1-h_{i})\lambda _{i}^{2\beta _{2}}}{2\alpha _{2}^{2}}} e^{-\sum _{i=1}^{r} \frac{t_{i}\lambda _{i}^{2\beta _{2}}}{\alpha _{2}^{2}}}. \end{aligned}$$
(19)

The conditional posterior density functions of \(\alpha _{1}, \alpha _{2}, \beta _{1}\), and \(\beta _{2}\) in Eqs. (16)–(19) cannot be reduced analytically to well-known distributions. Consequently, it is difficult to sample from them directly by standard methods, but their plots in Figs. 3, 4, 5 and 6 show that they are similar to the normal distribution.

Figure 3

The conditional posterior density of MCMC results of the \(\alpha _{1}\) parameter.

Figure 4

The conditional posterior density of MCMC results of the \(\alpha _{2}\) parameter.

Figure 5

The conditional posterior density of MCMC results of the \(\beta _{1}\) parameter.

Figure 6

The conditional posterior density of MCMC results of the \(\beta _{2}\) parameter.

Gibbs sampling

To produce the Bayesian estimates of the unknown parameters and the related credible intervals, we now employ the Gibbs sampling method, which is a subclass of Markov chain Monte Carlo (MCMC) methods. This approach produces posterior samples using the conditional posterior density functions of the parameters \(\alpha _{1}, \alpha _{2}, \beta _{1}\), and \(\beta _{2}\). Eq. (15) identifies the joint posterior density function of the parameters and, as indicated by Eqs. (16)–(19), their conditional density functions cannot be reduced to well-known density functions. As a result, we use the Metropolis–Hastings (MH) algorithm [33] with a normal proposal distribution to generate random samples from the posterior densities of \(\alpha _{1}, \alpha _{2}, \beta _{1}\), and \(\beta _{2}\).

The steps of Gibbs sampling are described as follows:

  • Step 1. Start with the initial guess \((\alpha _{1}^{(0)},\alpha _{2}^{(0)},\beta _{1}^{(0)},\beta _{2}^{(0)}) = (\hat{\alpha _{1}},\hat{\alpha _{2}},\hat{\beta _{1}},\hat{\beta _{2}})\).

  • Step 2. Set \(t = 1\).

  • Step 3. Generate \((\alpha _{1}^{(t)},\alpha _{2}^{(t)},\beta _{1}^{(t)}, \beta _{2}^{(t)})\) from \(\pi _{1}^{*}(\alpha _{1}\mid H, \lambda , S)\), \(\pi _{2}^{*}(\alpha _{2}\mid H, \lambda , S)\), \(\pi _{3}^{*}(\beta _{1}\mid H, \lambda , S)\), and \(\pi _{4}^{*}(\beta _{2}\mid H, \lambda , S)\) using the MH algorithm with the normal proposal distributions \(N(\alpha _{1}^{(t-1)}, \sqrt{\widehat{var(\alpha _{1})}})\), \(N(\alpha _{2}^{(t-1)}, \sqrt{\widehat{var(\alpha _{2})}})\), \(N(\beta _{1}^{(t-1)}, \sqrt{\widehat{var(\beta _{1})}})\), and \(N(\beta _{2}^{(t-1)}, \sqrt{\widehat{var(\beta _{2})}})\), respectively:

    (i) Generate proposals \(\alpha _{1}^{*}\) from \(N(\alpha _{1}^{(t-1)}, \sqrt{\widehat{var(\alpha _{1})}})\), \(\alpha _{2}^{*}\) from \(N(\alpha _{2}^{(t-1)}, \sqrt{\widehat{var(\alpha _{2})}})\), \(\beta _{1}^{*}\) from \(N(\beta _{1}^{(t-1)}, \sqrt{\widehat{var(\beta _{1})}})\), and \(\beta _{2}^{*}\) from \(N(\beta _{2}^{(t-1)}, \sqrt{\widehat{var(\beta _{2})}})\).

    (ii) Compute the acceptance probabilities \(\eta _{\alpha _{1}}=\min \left( 1, \frac{\pi _{1}^{*}(\alpha _{1}^{*}\mid H, \lambda , S)}{\pi _{1}^{*}(\alpha _{1}^{(t-1)}\mid H, \lambda , S)}\right)\), \(\eta _{\alpha _{2}}=\min \left( 1, \frac{\pi _{2}^{*}(\alpha _{2}^{*}\mid H, \lambda , S)}{\pi _{2}^{*}(\alpha _{2}^{(t-1)}\mid H, \lambda , S)}\right)\), \(\eta _{\beta _{1}}=\min \left( 1, \frac{\pi _{3}^{*}(\beta _{1}^{*}\mid H, \lambda , S)}{\pi _{3}^{*}(\beta _{1}^{(t-1)}\mid H, \lambda , S)}\right)\), and \(\eta _{\beta _{2}}=\min \left( 1, \frac{\pi _{4}^{*}(\beta _{2}^{*}\mid H, \lambda , S)}{\pi _{4}^{*}(\beta _{2}^{(t-1)}\mid H, \lambda , S)}\right)\).

    (iii) Generate \(u_{1}\), \(u_{2}\), \(u_{3}\), and \(u_{4}\) from Uniform(0, 1).

    (iv) If \(u_{1} < \eta _{\alpha _{1}}\), accept the proposal and set \(\alpha _{1}^{(t)}=\alpha _{1}^{*}\); otherwise, set \(\alpha _{1}^{(t)}=\alpha _{1}^{(t-1)}\).

    (v) If \(u_{2} < \eta _{\alpha _{2}}\), accept the proposal and set \(\alpha _{2}^{(t)}=\alpha _{2}^{*}\); otherwise, set \(\alpha _{2}^{(t)}=\alpha _{2}^{(t-1)}\).

    (vi) If \(u_{3} < \eta _{\beta _{1}}\), accept the proposal and set \(\beta _{1}^{(t)}=\beta _{1}^{*}\); otherwise, set \(\beta _{1}^{(t)}=\beta _{1}^{(t-1)}\).

    (vii) If \(u_{4} < \eta _{\beta _{2}}\), accept the proposal and set \(\beta _{2}^{(t)}=\beta _{2}^{*}\); otherwise, set \(\beta _{2}^{(t)}=\beta _{2}^{(t-1)}\).

  • Step 4. Set \(t = t + 1\).

  • Step 5. Repeat Steps 3 and 4 N times to obtain a posterior sample for estimating the unknown parameters \(\alpha _{1}, \alpha _{2}, \beta _{1}\), and \(\beta _{2}\).
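A compact implementation of Steps 1–5 is sketched below. Rather than coding Eqs. (16)–(19) separately, it evaluates the joint log-posterior (the log-likelihood of Eq. (7) plus the log of the gamma priors), which yields the same acceptance ratios because the terms not involving the updated parameter cancel. loglik is reused from the earlier sketch, prop_sd holds the square roots of the asymptotic variances as in Step 3, and all names and defaults are illustrative.

```python
import numpy as np

def log_prior(theta, a=(0.02,) * 4, b=(2.0,) * 4):
    """Log of the independent gamma priors, up to additive constants."""
    th = np.asarray(theta)
    return ((np.asarray(a) - 1) * np.log(th) - np.asarray(b) * th).sum()

def mh_within_gibbs(H, lam, S, T, theta0, prop_sd, N=10_000, burn=1_000, seed=1):
    """Steps 1-5: coordinate-wise MH with normal proposals (Step 3)."""
    rng = np.random.default_rng(seed)
    log_post = lambda th: loglik(th, H, lam, S, T) + log_prior(th)
    theta = np.asarray(theta0, dtype=float)   # Step 1: start at the MLEs
    lp = log_post(theta)
    chain = np.empty((N, 4))
    for t in range(N):                        # Steps 2, 4, 5
        for j in range(4):                    # update each parameter in turn
            prop = theta.copy()
            prop[j] = rng.normal(theta[j], prop_sd[j])
            if prop[j] > 0:                   # the posterior is zero otherwise
                lp_prop = log_post(prop)
                # Steps (ii)-(vii): accept with probability eta
                if np.log(rng.uniform()) < lp_prop - lp:
                    theta, lp = prop, lp_prop
        chain[t] = theta
    return chain[burn:]                       # discard the burn-in draws
```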

Application of real data

In this section, we analyse a data set for illustrative purposes. Rasouli and Balakrishnan [6] also used these data, originally reported in [34]. The data consist of the time intervals (in hours) between failures of the air conditioning system of a fleet of 13 Boeing 720 jet planes; for illustration, we use the planes “7913” and “7914”. The data are as follows:

  • PLANE 7914: 3, 5, 5, 13, 14, 15, 22, 22, 23, 30, 36, 39, 44, 46, 50, 72, 79, 88, 97, 102, 139, 188, 197, 210.

  • PLANE 7913: 1, 4, 11, 16, 18, 18, 18, 24, 31, 39, 46, 51, 54, 63, 68, 77, 80, 82, 97, 106, 111, 141, 142, 163,  191, 206, 216.

For each sample, we fit the power Rayleigh distribution and report the results in Table 1. The Kolmogorov–Smirnov (K–S) test statistic values and the corresponding p values indicate that the data fit the power Rayleigh distribution with the parameters presented in Table 1.

Table 1 MLEs and Kolmogorov–Smirnov test results for data.
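In outline, the fit reported in Table 1 can be checked as follows, assuming SciPy; pr_cdf implements Eq. (1), the per-sample MLEs are obtained from the complete-sample log-likelihood of Eq. (2), and kstest then returns the K–S statistic and p value. The starting values are arbitrary and the names are ours, so the output should match Table 1 only up to the optimizer's accuracy.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import kstest

plane_7914 = np.array([3, 5, 5, 13, 14, 15, 22, 22, 23, 30, 36, 39, 44, 46,
                       50, 72, 79, 88, 97, 102, 139, 188, 197, 210.0])

def pr_cdf(x, alpha, beta):
    """Power Rayleigh cdf of Eq. (1)."""
    return 1.0 - np.exp(-x ** (2 * beta) / alpha ** 2)

def neg_ll(theta, x):
    """Negative complete-sample log-likelihood from the pdf in Eq. (2)."""
    a, b = theta
    return -(len(x) * (np.log(b) - 2 * np.log(a))
             + ((2 * b - 1) * np.log(x)).sum()
             - (x ** (2 * b)).sum() / a ** 2)

alpha_hat, beta_hat = minimize(neg_ll, (10.0, 0.5), args=(plane_7914,),
                               method="Nelder-Mead",
                               bounds=[(1e-6, None)] * 2).x
stat, pval = kstest(plane_7914, lambda x: pr_cdf(x, alpha_hat, beta_hat))
```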

Thus, the power Rayleigh distribution fits both samples very well; the empirical and fitted distributions are plotted in Fig. 7 for the first sample and Fig. 8 for the second sample. It is evident that the power Rayleigh distribution is a suitable model for these data. From the above data sets, we generated a JP-II-CS sample with the following censoring scheme: \(m = 24\) for the first sample and \(n = 27\) for the second sample, where \(K = m + n\) denotes the total sample size, with \(r = 10\), \(S = ( 5, 0, 0, 0, 5, 0, 0, 0, 0, 8)\), \(T = (5, 0, 0, 0, 5, 0, 0, 0, 0, 10)\), and \(R = (10, 0, 0, 0, 10, 0, 0, 0, 0, 18)\). The generated data are provided below:

$$\begin{aligned} \lambda = (2.2, 3.3, 3.4, 3.4, 3.5, 3.6, 3.7, 3.8, 3.8, 3.8), \end{aligned}$$

and

$$\begin{aligned} H = (1, 0, 1, 1, 0, 1, 0, 1, 1, 1). \end{aligned}$$

Based on the above JP-II-CS sample, we compute the point estimates based on the MLEs and the corresponding \(95\%\) ACIs for \(\alpha _{1},\alpha _{2},\beta _{1}\), and \(\beta _{2}\); the results are shown in Tables 2 and 3. For the Bayesian estimation, we used the MCMC method based on 10,000 MCMC samples, discarding the first 1000 values as burn-in. We used informative gamma priors with hyperparameters \(a_{i}=0.02\) and \(b_{i}=2\). Table 2 shows the Bayesian estimates of \(\alpha _1, \alpha _2, \beta _1\), and \(\beta _2\) under the SE and LINEX loss functions. For both samples, the power Rayleigh distribution fits the data very well; the empirical and fitted survival functions S(t) are plotted in Fig. 9 for the first sample and Fig. 10 for the second sample, and it is evident that the power Rayleigh distribution is a good model for these data. Moreover, the \(95\%\) CRIs for \(\alpha _{1},\alpha _{2},\beta _{1}\), and \(\beta _{2}\) are reported in Table 3. As we can see, the variances of \(\alpha _{1},\alpha _{2},\) and \(\beta _{1}\) are very large compared with their values, which leads to negative lower bounds of the asymptotic confidence intervals of \(\alpha _{1},\alpha _{2},\) and \(\beta _{1}\). Since \(\alpha _{1},\alpha _{2},\) and \(\beta _{1}\) cannot be negative, we truncate the lower limits at zero; this is one of the disadvantages of the maximum likelihood method.

Table 2 Different point estimates for the parameters \(\alpha _{1},\alpha _{2},\beta _{1}\text { and }\beta _{2}\).
Table 3 \(95\%\) CIs of \(\alpha _{1},\alpha _{2},\beta _{1}\), and \(\beta _{2}\).
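Given an MCMC chain such as the output of mh_within_gibbs above, the point estimates and CRIs reported in Tables 2 and 3 can be computed as sketched below: under SE loss the Bayes estimate is the posterior mean, under LINEX loss with constant \(c\) it is \(-\frac{1}{c}\log E\left( e^{-c\theta }\mid \text {data}\right)\), and the \(95\%\) CRIs are taken here as equal-tail quantiles. This is a standard construction assumed for illustration, not code from the paper.

```python
import numpy as np

def bayes_summaries(chain, c=2.0, gamma=0.05):
    """SE/LINEX point estimates and equal-tail CRIs, one column per parameter."""
    se_est = chain.mean(axis=0)                               # posterior means
    linex_est = -np.log(np.exp(-c * chain).mean(axis=0)) / c  # LINEX estimates
    lower = np.quantile(chain, gamma / 2, axis=0)
    upper = np.quantile(chain, 1 - gamma / 2, axis=0)
    return se_est, linex_est, lower, upper
```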

Table 4 shows the comparison between the approximate expected number of failures from the first production line computed before the test (A.E.B) and the mean of the exact number of failures observed after the test (M.E.A) for \(r=15, 20, 25, 30\), and 35. The posterior density functions and the trace plots of the unknown parameters \(\alpha _{1}\), \(\alpha _{2}\), \(\beta _{1}\), and \(\beta _{2}\) obtained with the MCMC method are shown in Figs. 11, 12, 13 and 14.

Table 4 The comparison between A.E.B and M.E.A for real data.
Figure 7

The empirical density and cumulative distribution for the first sample of real data.

Figure 8

The empirical density and cumulative distribution for the second sample of real data.

Figure 9

The empirical and fitted survival functions for the first sample of real data.

Figure 10

The empirical and fitted survival functions for the second sample of real data.

Figure 11

The posterior density function and the trace plots for the parameter \(\alpha _{1}\).

Figure 12

The posterior density function and the trace plots for the parameter \(\beta _{1}\).

Figure 13

The posterior density function and the trace plots for the parameter \(\alpha _{2}\).

Figure 14

The posterior density function and the trace plots for the parameter \(\beta _{2}\).

Simulation

A simulation study was performed to compare the performance of the different methods discussed in this paper. We considered various sample sizes for the two populations, \(m, n = 10, 20, 30\), and various values of \(r = 5, 10, 15, 20, 30, 40\). The true parameter values \((\alpha _{1},\alpha _{2},\beta _{1}, \beta _{2})\) were set to \((0.5, 0.6, 2.5, 2.69)\) and \((0.69, 0.8, 1.57, 1.8)\). The MSEs, the lengths, and the coverage probabilities (CP) of the \(95\%\) intervals for the parameters \(\alpha _{1},\alpha _{2},\beta _{1}\), and \(\beta _{2}\) were evaluated using the MLEs and MCMC with 10,000 observations under the SE and LINEX loss functions. This process was repeated 1000 times, and the mean values of the MSEs, lengths, and CPs are displayed in Tables 5, 6, 7, and 8. Moreover, a simulation study was conducted to compute the expected number of failures from the first production line (S.E.Mr) and the approximated expected number of failures (A.E.Mr). We assumed various sample sizes for the two populations, \(m, n = 5, 10, 15, 20, 25, 30, 40, 50\), and various choices of the JP-II-CS with \(r = 5, 10, 20, 30, 40\); samples from the two PRD populations were generated under the same true parameter values, and the results are presented in Table 9. The calculations in Table 9 are computed under the following assumptions: \(p = P(X_1 < X_2)\), where \(X_1\) and \(X_2\) are the lifetimes of units from the first and second production lines, respectively, with \(X_1\) selected from PRD(0.5, 2.5) and \(X_2\) from PRD(0.6, 2.69) in the first case, and \(X_1\) from PRD(0.8, 1.8) and \(X_2\) from PRD(0.69, 1.57) in the second. The A.E.Mr is calculated following Parsi and Bairamov [32].

Table 5 MSE, length and coverage probability (CP) of estimates for the parameter \(\alpha _{1}\).
Table 6 MSE, length and coverage probability (CP) of estimates for the parameter \(\alpha _{2}\).
Table 7 MSE, length and coverage probability (CP) of estimates for the parameter \(\beta _{1}\).
Table 8 MSE, length and coverage probability (CP) of estimates for the parameter \(\beta _{2}\).
Table 9 The comparison between A.E.B and M.E.A.
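For reference, a minimal sketch of the Monte Carlo loop behind Tables 5, 6, 7, and 8, reusing jpc_sample and mle from the earlier sketches; it reports the MSE of the MLEs over 1000 replications, and the analogous loops for the MCMC estimates, the interval lengths, and the coverage probabilities follow the same pattern. The configuration shown is illustrative.

```python
import numpy as np

def simulate_mse(m, n, R, truth, reps=1000, seed=7):
    """MSE of the MLEs of (alpha1, alpha2, beta1, beta2) over repeated samples."""
    rng = np.random.default_rng(seed)
    a1, a2, b1, b2 = truth
    est = np.empty((reps, 4))
    for k in range(reps):
        H, lam, S, T = jpc_sample(m, n, R, a1, b1, a2, b2, rng)
        est[k] = mle(H, lam, S, T, start=truth)  # start at truth for stability
    return ((est - np.asarray(truth)) ** 2).mean(axis=0)

# e.g. m = n = 20, r = 10, all removals at the last observed failure:
mse = simulate_mse(20, 20, [0] * 9 + [30], truth=(0.5, 0.6, 2.5, 2.69))
```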

Conclusion

In this study, a joint progressive type-II censoring scheme was used to investigate two samples following power Rayleigh distributions, where the scale and shape parameters of the two populations were allowed to differ. The MLEs were obtained using the maximum likelihood method, and the performance of the MLEs and the Bayesian estimates was compared under informative and non-informative priors. The MCMC technique (MH within Gibbs sampling) was used to compute the Bayesian estimates, and the estimates under the squared error and LINEX loss functions were compared. Bayesian inference under informative priors turned out to be the best method for point estimation, and a number of censoring scheme structures were examined. The developed techniques were applied to a real data set, and a simulation study was used to compare the performance of the proposed methods for different sample sizes (m, n). From the results, we observe the following:

  1. From Table 2, it can be seen that when \(c=2\), the Bayes estimates under the SE loss function are similar to those under the LINEX loss function.

  2. From Table 3, the MCMC CRIs are better than the MLE-based ACIs in the sense of having the smallest lengths.

  3. From Table 4, the values of A.E.B. are smaller than the values of M.E.A. in all schemes.

  4. From Table 9, the values of A.E.B. are relatively close to the values of M.E.A. in all schemes.

  5. From Tables 5, 6, 7, and 8, the MSEs of the MCMC estimates are smaller than those of the MLEs; hence, the Bayes estimates of the parameters \(\alpha _{1}, \alpha _{2}, \beta _{1}\), and \(\beta _{2}\) perform better than the MLEs.

  6. From Tables 5, 6, 7, and 8, the Bayes estimates under LINEX with \(c=2\) provide better estimates in the sense of having smaller MSEs.

  7. From Tables 5, 6, 7, and 8, as m, n, and r increase, the MSEs and the lengths decrease.

  8. From Tables 5, 6, 7, and 8, the MCMC CRIs give more accurate results than the ACIs, since the lengths of the MCMC CRIs are less than the lengths of the ACIs for various sample sizes.