Introduction

The inverse Weibull (IW) distribution, also known as the Fréchet or type-II extreme value distribution, was introduced by Keller and Kanath1 to model the degeneration phenomena of mechanical components, such as the dynamic components (pistons, crankshafts, etc.) of diesel engines; see also Nelson2 in the context of modeling the breakdown of insulating fluid. It has since been used in analysing data from many areas of science, e.g. actuarial science, agriculture, energy, hydrology, medicine, and so on.

The probability density function (pdf) of the IW distribution is

$$\begin{aligned} f_{X}(x; \beta , \sigma )=\beta \sigma x^{-(\beta +1)}\exp \big ({-\sigma x^{-\beta }}\big ); \quad x>0, \quad \beta , \sigma >0, \end{aligned}$$
(1)

and its cumulative distribution function (cdf) is

$$\begin{aligned} F_{X}(x; \beta , \sigma )=\exp \big ({-\sigma x^{-\beta }}\big ); \quad x>0, \quad \beta , \sigma >0, \end{aligned}$$
(2)

where \(\beta\) and \(\sigma\) are the shape and scale parameters, respectively. Hereinafter, a random variable having the pdf in (1) is denoted by \(X \sim IW(\beta , \sigma )\). As stated in Helu3, the IW distribution has a longer right tail than other well-known distributions, and its hazard function resembles those of the log-normal and inverse Gaussian distributions.

The IW distribution and its sub-models, the inverse Rayleigh and inverse exponential distributions, are widely used in reliability engineering. However, in some cases, the IW distribution and its sub-models cannot model reliability data adequately. Therefore, generalized/extended versions of the IW distribution have been proposed to improve its modeling capability, i.e. to model reliability data more accurately; see, for example, Chakrabarty and Chowdhury4, Fayomi5, Hanagal and Bhalerao6, Jahanshahi et al.7, Saboori et al.8, Afify et al.9, Hussein et al.10 and the references therein. Since the IW distribution is also called the Fréchet distribution, we also recommend Afify et al.11, Hussein et al.10 and the references therein in the context of extensions/generalizations of the Fréchet distribution.

There are quite a variety of methods for extending/generating distributions; see Lee et al.12 for an overview. In this context, the slash distribution introduced by Andrews et al.13 has been popular, and various slash distributions have since been introduced; see, e.g., Oliveres-Pacheco et al.14, Iriarte et al.15, Korkmaz16, Gómez et al.17, and the references therein for univariate slash distributions.

Recently, Jones18 considered the distribution of \(X \times Y^{\frac{1}{\alpha }}\), which is formally different from the slash distribution, and called its density an \(\alpha\)-monotone density. Here, X and Y are independent random variables, with X following a distribution on \(\mathbb {R^{+}}\) and Y following the uniform distribution on (0, 1), i.e., U(0,1). See Jones18 for the theoretical viewpoint on the \(\alpha\)-monotone distribution and Arslan19,20,21 for its applications. Arslan20,21 shows that the \(\alpha\)-monotone concept is easily applied to a baseline distribution and has a significant effect on its modeling capability.

The IW distribution is a well-known candidate for modeling lifetime data. However, since it has only one shape parameter, its modeling performance may be inadequate and needs to be improved in such cases. For example, when the lifetime of a component under stress is to be modeled and the effect of the stress level is to be estimated, a new parameter, called a stress parameter or shape parameter, may be included in the density of the IW distribution. The lifetime of a component under stress is expected to be lower than in the routine case. Division is usually used to reduce the value of a random variable arbitrarily; however, multiplying it by a random variable taking values between 0 and 1 can also be used. Here, multiplication is preferred over division because a very useful variable following U(0,1) is available. Therefore, a random variable T defined as \(T=X \times Y^{\frac{1}{\alpha }}\) is very important in lifetime analysis: T can represent the lifetime of X under stress.

Given the inherent structure of the \(\alpha\)-monotone distribution, a random variable having an \(\alpha\)-monotone distribution precisely represents the lifetime of a component under stress. In this regard, the \(\alpha\)-monotone distribution can be an alternative to the IW distribution. However, to the best of the authors' knowledge, there are only a limited number of studies in the literature concerning the \(\alpha\)-monotone distribution, specifically in modeling lifetime data. The motivation of this study is to fill this gap in the literature on \(\alpha\)-monotone distributions in lifetime data modeling. Therefore, in this study, a practical model for the lifetime of a component under stress is proposed. The pdf of the proposed model meets the condition for being an \(\alpha\)-monotone density; therefore it is called the \(\alpha\)-monotone IW (\(\alpha\)IW) distribution. The cdf and hazard rate function (hrf) of the \(\alpha\)IW distribution are obtained; the \(\alpha\)IW distribution is then characterized by its hrf, and the characterizing conditions are provided as well. The r-th moment of the \(\alpha\)IW distribution is also formulated. Furthermore, it is shown that the \(\alpha\)IW distribution can be expressed as a scale-mixture between the IW and U(0, 1) distributions. A data generation process is also developed by using the stochastic representation of the random variable having the \(\alpha\)IW distribution. The maximum likelihood, maximum product of spacing, and least squares estimation methods are used to estimate the parameters of the \(\alpha\)IW distribution. The \(\alpha\)IW distribution includes the \(\alpha\)-monotone inverse Rayleigh, \(\alpha\)-monotone inverse exponential, IW, inverse Rayleigh, and inverse exponential distributions for different parameter settings and limiting cases. Thus, it can be considered a general class containing the IW distribution, obtained by adding a new shape parameter that makes the distribution more flexible than the IW distribution. In light of this, the \(\alpha\)IW distribution can be an alternative to the IW distribution and its rivals in lifetime data analysis.

The rest of the paper is organized as follows. The \(\alpha\)IW distribution, its characterization, and its properties are given; then the sub-models of the \(\alpha\)IW distribution are obtained and a data generation process for the \(\alpha\)IW distribution is provided. In the following sections, parameter estimation for the \(\alpha\)IW distribution is handled, and real-life data sets are used to show the modeling capability of the \(\alpha\)IW distribution and compare it with its rivals. The paper is finalized with some concluding remarks.

The \(\alpha\)IW distribution

Proposition 1

A random variable T defined by \(T=X \times Y^{1/\alpha }\), where X \(\sim\) IW(\(\beta , \sigma\)) and Y \(\sim\) U(0, 1) are independent random variables, follows the \(\alpha\)IW distribution having the pdf

$$\begin{aligned} f_{T}(t;\alpha , \beta , \sigma )=\frac{\alpha ^2}{\beta \sigma ^{\alpha /\beta }} \Gamma \left( \dfrac{\alpha }{\beta }\right) t^{\alpha -1} G\left( t^{-\beta }; \dfrac{\alpha }{\beta }+1, \sigma \right) ; \quad t>0, \quad \alpha , \beta , \sigma >0. \end{aligned}$$
(3)

Here, \(\Gamma (a)=\displaystyle \int _{0}^{\infty }u^{a-1}e^{-u}du\) and \(G(t;a,b)=\frac{b^a}{\Gamma (a)}\displaystyle \int _{0}^{t}u^{a-1} \exp (-b u) du\) are the gamma function and the cdf of the gamma distribution with shape parameter a and rate parameter b, respectively.

Proof

The proof is completed by using the Jacobian transformation, where J is the Jacobian,

$$\begin{aligned} \left. \begin{array}{rl} T= & \displaystyle \frac{X}{Y^{-\frac{1}{\alpha }}} \\ W= & Y \\ \end{array} \right\} \Rightarrow \left. \begin{array}{rl} X= & T W^{-\frac{1}{\alpha }}\\ Y= & W \\ \end{array} \right\} \Rightarrow J =\left| \begin{array}{cc} \displaystyle \frac{\partial X}{\partial T} & \displaystyle \frac{\partial X}{\partial W} \\ \displaystyle \frac{\partial Y}{\partial T} & \displaystyle \frac{\partial Y}{\partial W}\\ \end{array} \right| =\left| \begin{array}{cc} w^{-\frac{1}{\alpha }} & -\dfrac{t}{\alpha }w^{-\frac{1}{\alpha }-1} \\ 0 & 1 \\ \end{array} \right| = w^{-\frac{1}{\alpha }} . \end{aligned}$$

Then, the joint pdf of T and W is

$$\begin{aligned} f_{T,W}(t,w)=\beta \sigma t^{-(\beta +1)} w^{\frac{\beta }{\alpha }}\exp (-\sigma t^{-\beta } w^{\frac{\beta }{\alpha }}). \end{aligned}$$

The marginal pdf of the random variable \(T\) is obtained by integrating with respect to \(w\), using the transformation \(t^{-\beta } w^{\frac{\beta }{\alpha }}=u\). Hereinafter, a random variable having the pdf in (3) is denoted by \(T \sim \alpha \text {IW}(\alpha , \beta , \sigma )\). \(\square\)
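For readers who wish to evaluate (3) numerically, the following is a minimal sketch in Python (an illustration only; the paper's own computations use MATLAB). It relies on the fact that \(G(x; a, \sigma )\), the cdf of the gamma distribution with shape a and rate \(\sigma\), equals the regularized lower incomplete gamma function, available as gammainc in SciPy. The helper name aiw_pdf is our own.

```python
import numpy as np
from scipy.special import gamma as gamma_fn, gammainc

def aiw_pdf(t, alpha, beta, sigma):
    """pdf of the alpha-monotone inverse Weibull distribution, Eq. (3)."""
    t = np.asarray(t, dtype=float)
    a = alpha / beta
    # G(t^{-beta}; a + 1, sigma) = regularized lower incomplete gamma P(a + 1, sigma * t^{-beta})
    G = gammainc(a + 1.0, sigma * t ** (-beta))
    return (alpha ** 2 / (beta * sigma ** a)) * gamma_fn(a) * t ** (alpha - 1.0) * G
```

For large \(\alpha /\beta\), \(\Gamma (\alpha /\beta )\) may overflow in double precision; in that case working on the log scale (e.g. with gammaln) is preferable.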

Proposition 2

The cdf and hrf of the \(\alpha\)IW distribution are

$$\begin{aligned} F_{T}(t; \alpha , \beta , \sigma )=&F_{X}(t; \beta , \sigma )+\frac{t}{\alpha }f_{T}(t; \alpha , \beta , \sigma )\\ =&\exp \left( -\sigma t^{-\beta }\right) + \dfrac{\alpha }{\beta \sigma ^{\alpha /\beta }} \Gamma \left( \dfrac{\alpha }{\beta }\right) t^{\alpha } G\left( t^{-\beta }; \dfrac{\alpha }{\beta }+1, \sigma \right) \end{aligned}$$

and

$$\begin{aligned} h_{T}(t; \alpha , \beta , \sigma )=\dfrac{\dfrac{\alpha ^2}{\beta \sigma ^{\alpha /\beta }} \Gamma \left( \dfrac{\alpha }{\beta }\right) t^{\alpha -1} G\left( t^{-\beta }; \dfrac{\alpha }{\beta }+1, \sigma \right) }{1-\left[ \exp \left( -\sigma t^{-\beta }\right) + \dfrac{\alpha }{\beta \sigma ^{\alpha /\beta }} \Gamma \left( \dfrac{\alpha }{\beta }\right) t^{\alpha } G\left( t^{-\beta }; \dfrac{\alpha }{\beta }+1, \sigma \right) \right] }, \end{aligned}$$

respectively.

Proof

The results follow from the definitions of the \(\alpha\)-monotone distribution and hrf; see Jones18. \(\square\)
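Building on the aiw_pdf sketch above, the cdf and hrf of Proposition 2 can be evaluated as follows; this is again a hedged illustration rather than code from the paper, and plots such as those in Fig. 1 can be reproduced by evaluating these functions on a grid of t values.

```python
import numpy as np
# assumes aiw_pdf from the previous sketch is in scope

def aiw_cdf(t, alpha, beta, sigma):
    """cdf of the alphaIW distribution via F_T(t) = F_X(t) + (t/alpha) f_T(t)."""
    t = np.asarray(t, dtype=float)
    return np.exp(-sigma * t ** (-beta)) + (t / alpha) * aiw_pdf(t, alpha, beta, sigma)

def aiw_hrf(t, alpha, beta, sigma):
    """hazard rate function h(t) = f_T(t) / (1 - F_T(t))."""
    return aiw_pdf(t, alpha, beta, sigma) / (1.0 - aiw_cdf(t, alpha, beta, sigma))
```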

The plots for the pdf and hrf of the \(\alpha\)IW distribution for certain values of the parameters are shown in Fig. 1a,b and c, respectively.

Figure 1

Shapes of the pdf and hrf of the \(\alpha\)IW distribution for different parameter settings.

It should be noted that the pdf of the \(\alpha\)IW distribution can be skewed to the right or left and also takes various shapes for different parameter settings; see Fig. 1a and b, respectively. Besides, the hrf of the \(\alpha\)IW distribution, shown in Fig. 1c, can be monotonically decreasing, decreasing-increasing (bathtub-shaped), or increasing-decreasing. These properties make the \(\alpha\)IW distribution desirable for modeling reliability data.

Characterization

In this section, the proposed model is characterized by its hrf and characterizing conditions are provided as well.

The pdf of the \(\alpha\)IW distribution can be expressed as

$$\begin{aligned} f(t)&=\alpha \sigma ^{-\frac{\alpha }{\beta }} t^{\alpha -1}\displaystyle \int _{0}^{\sigma t^{-\beta }} u^{\frac{\alpha +\beta }{\beta }-1} \exp \left( -u\right) du\\&=\alpha \sigma ^{-\frac{\alpha }{\beta }} t^{\alpha -1}\left[ \Gamma \left( \dfrac{\alpha +\beta }{\beta }\right) - \Gamma \left( \dfrac{\alpha +\beta }{\beta }, \sigma t^{-\beta }\right) \right] , \end{aligned}$$
(4)

where \(\Gamma (a, z)=\displaystyle \int _{z}^{\infty }u^{a-1} \exp (-u) du\) is the upper incomplete gamma function. Therefore, the cdf, survival function, and hrf of the \(\alpha\)IW distribution can also be expressed as

$$\begin{aligned} F(t)= & {} \exp \left( -\sigma t^{-\beta }\right) + \sigma ^{-\frac{\alpha }{\beta }} t^{\alpha } \left[ \Gamma \left( \frac{\alpha +\beta }{\beta }\right) -\Gamma \left( \frac{\alpha +\beta }{\beta }, \sigma t^{-\beta } \right) \right] , \\ S(t)= & {} {\overline{F}}(t)=1- \exp \left( -\sigma t^{-\beta }\right) - \sigma ^{-\frac{\alpha }{\beta }} t^{\alpha } \left[ \Gamma \left( \frac{\alpha +\beta }{\beta }\right) -\Gamma \left( \frac{\alpha +\beta }{\beta }, \sigma t^{-\beta } \right) \right] , \end{aligned}$$

and

$$\begin{aligned} h(t)=\dfrac{f(t)}{{\overline{F}}(t)}=\dfrac{\dfrac{\alpha }{\sigma ^{\alpha /\beta }}t^{\alpha -1}\left[ \Gamma \left( \dfrac{\alpha +\beta }{\beta }\right) - \Gamma \left( \dfrac{\alpha +\beta }{\beta }, \sigma t^{-\beta }\right) \right] }{1- \exp \left( -\sigma t^{-\beta }\right) - t^{\alpha } \sigma ^{-\frac{\alpha }{\beta }}\left[ \Gamma \left( \frac{\alpha +\beta }{\beta }\right) -\Gamma \left( \frac{\alpha +\beta }{\beta }, \sigma t^{-\beta } \right) \right] }, \end{aligned}$$
(5)

respectively. Note that the conditions \(F(0)=0\) and \(F(\infty )=1\) are satisfied, which implies the function F(t) is a cdf.

Proposition 3

The random variable \(T:\Omega \longrightarrow \left( 0,+\infty \right)\) has a continuous pdf f(t) if and only if its hrf h(t) satisfies the following equation:

$$\begin{aligned} \frac{f'(t)}{f(t)}=\frac{h'(t)}{h(t)}-h(t). \end{aligned}$$
(6)

Proof

According to the definition of the hrf given in (5), it follows that:

$$\begin{aligned} \frac{h'(t)}{h(t)}=\frac{f'(t)\overline{{F}}(t)+f^2(t)}{\overline{{F}}^2(t)}\cdot \frac{\overline{{F}}(t)}{f(t)}=\frac{f'(t)}{f(t)}+h(t). \end{aligned}$$

Thus, the statement of proposition immediately follows. \(\square\)

Proposition 4

The random variable \(T:\Omega \longrightarrow \left( 0,+\infty \right)\) follows the \(\alpha\)IW\((\alpha , \beta , \sigma )\) distribution if and only if its hrf h(t), defined by (5), satisfies the following equation:

$$\begin{aligned} \dfrac{h'(t)}{\left( h(t)\right) ^{2}}=&\dfrac{\left( 1-\dfrac{ \beta \sigma ^{\frac{\alpha }{\beta }+1} t^{-\alpha -\beta } \exp \left( -\sigma t^{-\beta }\right) }{(\alpha -1) \left[ \Gamma \left( \dfrac{\alpha +\beta }{\beta }\right) -\Gamma \left( \dfrac{\alpha +\beta }{\beta }, \sigma t^{-\beta } \right) \right] }\right) }{ \alpha \sigma ^{-\frac{\alpha }{\beta }} t^{\alpha } \left[ \Gamma \left( \dfrac{\alpha +\beta }{\beta }\right) -\Gamma \left( \dfrac{\alpha +\beta }{\beta },t^{-\beta } \sigma \right) \right] }\\&\times \left\{ (\alpha -1) \left[ 1-\exp \left( -\sigma t^{-\beta }\right) - \sigma ^{-\frac{\alpha }{\beta }} t^{\alpha } \left[ \Gamma \left( \dfrac{\alpha +\beta }{\beta }\right) -\Gamma \left( \dfrac{\alpha +\beta }{\beta }, \sigma t^{-\beta } \right) \right] \right] \right\} +1 \end{aligned}$$
(7)

Proof

Necessity: Assume that \(T\sim \alpha\)IW(\(\alpha , \beta , \sigma\)) with the pdf f(t) defined by (4). Then, the natural logarithm of this pdf can be expressed as:

$$\begin{aligned} \ln \left( f(t)\right) =\ln \alpha - \frac{\alpha }{\beta } \ln \sigma + (\alpha -1) \ln t+ \ln \left[ \Gamma \left( \frac{\alpha +\beta }{\beta }\right) -\Gamma \left( \frac{\alpha +\beta }{\beta }, \sigma t^{-\beta }\right) \right] . \end{aligned}$$

Differentiating both sides of this equality with respect to t, we get:

$$\begin{aligned} \frac{f'(t)}{f(t)}&=\frac{\alpha -1}{t}\left( 1-\frac{\beta \sigma ^{\frac{\alpha +\beta }{\beta }} t ^{-(\alpha +\beta )} \exp \left( -\sigma t^{-\beta }\right) }{(\alpha -1)\left[ \Gamma \left( \frac{\alpha +\beta }{\beta }\right) -\Gamma \left( \frac{\alpha +\beta }{\beta }, \sigma t^{-\beta }\right) \right] }\right) . \end{aligned}$$
(8)

Thus, according to (5), (6), and (8), it follows:

$$\begin{aligned} \frac{h'(t)}{h(t)}&= \frac{\alpha -1}{t}\left( 1-\frac{\beta \sigma ^{\frac{\alpha +\beta }{\beta }} t ^{-(\alpha +\beta )} \exp \left( -\sigma t^{-\beta }\right) }{(\alpha -1)\left[ \Gamma \left( \frac{\alpha +\beta }{\beta }\right) -\Gamma \left( \frac{\alpha +\beta }{\beta }, \sigma t^{-\beta }\right) \right] }\right) \\&\quad +\dfrac{\alpha \sigma ^{-\frac{\alpha }{\beta }}t^{\alpha -1}\left[ \Gamma \left( \dfrac{\alpha +\beta }{\beta }\right) - \Gamma \left( \dfrac{\alpha +\beta }{\beta }, \sigma t^{-\beta }\right) \right] }{1- \exp \left( -\sigma t^{-\beta }\right) - t^{\alpha } \sigma ^{-\frac{\alpha }{\beta }}\left[ \Gamma \left( \frac{\alpha +\beta }{\beta }\right) -\Gamma \left( \frac{\alpha +\beta }{\beta }, \sigma t^{-\beta } \right) \right] }, \end{aligned}$$

which after certain simplification yields (7).

Sufficiency: Suppose that (7) holds. After integration, it can be rewritten as follows:

$$\begin{aligned} \int \frac{h'(t)}{\left( h(t)\right) ^{2}} dt=&\int \frac{1}{ \alpha \sigma ^{-\frac{\alpha }{\beta }} t^{\alpha } \left[ \Gamma \left( \frac{\alpha +\beta }{\beta }\right) -\Gamma \left( \frac{\alpha +\beta }{\beta },t^{-\beta } \sigma \right) \right] }\\&\times \Biggl \{ \left[ 1 - \frac{\beta \sigma ^{\frac{\alpha +\beta }{\beta }} t ^{-(\alpha +\beta )} \exp \left( -\sigma t^{-\beta }\right) }{(\alpha -1)\left[ \Gamma \left( \frac{\alpha +\beta }{\beta }\right) -\Gamma \left( \frac{\alpha +\beta }{\beta }, \sigma t^{-\beta }\right) \right] }\right] \\&\times \left[ 1- \exp \left( -\sigma t^{-\beta }\right) - t^{\alpha } \sigma ^{-\frac{\alpha }{\beta }}\left[ \Gamma \left( \frac{\alpha +\beta }{\beta }\right) -\Gamma \left( \frac{\alpha +\beta }{\beta }, \sigma t^{-\beta } \right) \right] \right] (\alpha -1)\\&+ \alpha \sigma ^{-\frac{\alpha }{\beta }}t^{\alpha }\left[ \Gamma \left( \frac{\alpha +\beta }{\beta }\right) - \Gamma \left( \frac{\alpha +\beta }{\beta }, \sigma t^{-\beta }\right) \right] \Biggr \} dt, \end{aligned}$$

From the above differential equation, we obtain

$$\begin{aligned} h(u) =\frac{\alpha \sigma ^{-\frac{\alpha }{\beta }} u^{\alpha -1} \left[ \Gamma \left( \frac{\alpha +\beta }{\beta }\right) - \Gamma \left( \frac{\alpha +\beta }{\beta }, \sigma u^{-\beta }\right) \right] }{1-\exp \left( -\sigma u^{-\beta }\right) - \sigma ^{-\frac{\alpha }{\beta }} u^{\alpha } \left[ \Gamma \left( \frac{\alpha +\beta }{\beta }\right) - \Gamma \left( \frac{\alpha +\beta }{\beta }, \sigma u^{-\beta }\right) \right] }. \end{aligned}$$
(9)

Integrating (9) from 0 to t, we obtain:

$$\begin{aligned} -\ln (1-F(t))=-\ln \left[ 1- \exp \left( -\sigma t^{-\beta }\right) -\sigma ^{-\frac{\alpha }{\beta }} t^{\alpha } \left( \Gamma \left( \frac{\alpha +\beta }{\beta }\right) - \Gamma \left( \frac{\alpha +\beta }{\beta }, \sigma t^{-\beta }\right) \right) \right] , \end{aligned}$$

which after simplification yields

$$\begin{aligned} F(t)= \exp \left( -\sigma t^{-\beta }\right) + \sigma ^{-\frac{\alpha }{\beta }} t^{\alpha } \left[ \Gamma \left( \frac{\alpha +\beta }{\beta }\right) -\Gamma \left( \frac{\alpha +\beta }{\beta }, \sigma t^{-\beta } \right) \right] \end{aligned}$$

which satisfies the conditions \(F(0)=0\) and \(F(\infty )=1\). Thus, the function F(t) is indeed the cdf of the \(\alpha \text {IW}(\alpha ,\beta ,\sigma )\) distribution, which completes the proof. \(\square\)

Proposition 5

The r-th moment of the \(\alpha\)IW distribution is formulated as follows

$$\begin{aligned} {\textbf{E}}\left[ T^{r}\right]= & {} {\textbf{E}}\left[ X^{r}Y^{\frac{r}{\alpha }}\right] ={\textbf{E}}\left[ X^{r}\right] {\textbf{E}}\left[ Y^{\frac{r}{\alpha }}\right] \\= & {} \sigma ^{r/\beta }\frac{\alpha }{\alpha +r}\Gamma \left( 1-\frac{r}{\beta }\right) ;\quad \frac{r}{\beta }<1. \end{aligned}$$

Proof

The random variable \(T\sim \alpha \text {IW}(\alpha , \beta , \sigma )\) can be expressed by using the stochastic representation \(T=X \times Y^{\frac{1}{\alpha }}\). Then,

$$\begin{aligned} {\textbf{E}}\left[ T^{r}\right] ={\textbf{E}}\left[ X^{r} Y^{\frac{r}{\alpha }}\right] ={\textbf{E}}\left[ X^r \right] {\textbf{E}}\left[ Y^{\frac{r}{\alpha }}\right] , \end{aligned}$$

Substituting \({\textbf{E}}\left[ X^{r}\right] =\sigma ^{r/\beta }\Gamma \left( 1-\frac{r}{\beta }\right)\), which exists for \(r<\beta\), and \({\textbf{E}}\left[ Y^{\frac{r}{\alpha }}\right] =\frac{\alpha }{\alpha +r}\) completes the proof. \(\square\)
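As an informal numerical check of Proposition 5 (not part of the paper), the closed-form r-th moment can be compared with a Monte-Carlo average obtained from the stochastic representation \(T=X \times Y^{\frac{1}{\alpha }}\); the parameter values below are arbitrary choices satisfying \(r<\beta\).

```python
import numpy as np
from scipy.special import gamma as gamma_fn

rng = np.random.default_rng(0)
alpha, beta, sigma, r = 2.0, 3.0, 1.5, 1          # arbitrary values with r < beta

# closed form: E[T^r] = sigma^(r/beta) * alpha/(alpha + r) * Gamma(1 - r/beta)
theory = sigma ** (r / beta) * alpha / (alpha + r) * gamma_fn(1.0 - r / beta)

# Monte-Carlo estimate via T = X * Y^(1/alpha), X ~ IW(beta, sigma), Y ~ U(0, 1)
x = (-np.log(rng.uniform(size=10**6)) / sigma) ** (-1.0 / beta)
y = rng.uniform(size=10**6)
print(theory, np.mean((x * y ** (1.0 / alpha)) ** r))   # the two values should agree closely
```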

Proposition 6

The random variable T, having pdf given in (3), has an \(\alpha\)-monotone density since its pdf satisfies the condition

$$\begin{aligned} \frac{d}{dt}\log f_{T}(t)\le \frac{\alpha -1}{t}, \quad \text {for all }t>0. \end{aligned}$$

Proof

From Proposition 1,

$$\begin{aligned} f_{T}(t;\alpha ,\beta ,\sigma )=\displaystyle \int _{0}^{1} \beta \sigma t^{-(\beta +1)} w^{\frac{\beta }{\alpha }}\exp (-\sigma t^{-\beta } w^{\frac{\beta }{\alpha }}) dw. \end{aligned}$$

By using the variable transformation \(tw^{-1/\alpha }=u\), \(f_{T}(t)\) is expressed as

$$\begin{aligned} f_{T}(t;\alpha ,\beta ,\sigma )&=\int _{t}^{\infty }\alpha t^{\alpha -1} \beta \sigma u^{-(\alpha +\beta +1)}\exp \left( -\sigma u^{-\beta }\right) du\\&=\alpha t^{\alpha -1}\int _{t}^{\infty }\frac{1}{u^\alpha }f_{X}(u;\beta ,\sigma )du.\\ \end{aligned}$$

It is seen that \(f_{T}(t)\) can be expressed as

$$\begin{aligned} f_{T}(t)=\alpha t^{\alpha -1}\int _{t}^{\infty }\frac{1}{x^\alpha }f_{X}(x)dx. \end{aligned}$$

Then,

$$\begin{aligned} f^{\prime }_{T}(t)&=\alpha (\alpha -1)t^{\alpha -2}\int _{t}^{\infty }\frac{1}{x^\alpha }f_{X}(x)dx - \left( \alpha t^{\alpha -1}\right) \frac{1}{t^{\alpha }}f_{X}(t)\\&=(\alpha -1)t^{-1}f_{T}(t)-\alpha t^{-1}f_{X}(t)\\ \alpha f_{X}(t)&=\left( \alpha -1\right) f_{T}(t) - t f^{\prime }_{T}(t). \end{aligned}$$

From this identity, the proof is completed by the lines given below.

$$\begin{aligned} (\alpha -1)f_{T}(t)- t f^{\prime }_{T}(t)&\ge 0 \quad \text {since} \quad f_{X}(t)\ge 0,\\ \dfrac{\alpha -1}{t}&\ge \dfrac{ f^{\prime }_{T}(t)}{f_{T}(t)},\\ \dfrac{\alpha -1}{t}&\ge \frac{d}{dt} \log ( f_{T}(t) ),\\ \frac{d}{dt} \log ( f_{T}(t) )&\le \dfrac{\alpha -1}{t}. \end{aligned}$$

\(\square\)

The \(\alpha\)IW distribution is obtained as a scale-mixture between the IW and U(0,1) distributions as shown in Proposition 7.

Proposition 7

Let \(T|Y=y\sim \text {IW}(\beta , \sigma y^{\frac{\beta }{\alpha }})\) and \(Y\sim \text {U}(0, 1)\), then \(T\sim \alpha \text {IW}(\alpha ,\beta ,\sigma )\). Therefore, the \(\alpha \text {IW}(\alpha ,\beta , \sigma )\) distribution is a scale-mixture between the \(\text {IW}(\beta , \sigma y^{\frac{\beta }{\alpha }})\) and U(0, 1) distributions.

Proof

 

$$\begin{aligned} f_{T}(t;\alpha , \beta , \sigma )&=\int _{0}^{1}f_{T|Y}(t; \beta , \sigma y^{\frac{\beta }{\alpha }})f_{Y}(y)dy\\&=\int _{0}^{1}\beta \sigma y^{\beta /\alpha } t^{-(\beta +1)} \exp \left( -\sigma y^{\beta /\alpha }t^{-\beta }\right) dy. \end{aligned}$$

The proof is completed by applying the transformation \(y^{\beta /\alpha }t^{-\beta }=u\). \(\square\)

It can be seen from the propositions given above that the \(\alpha\)-monotone concept is easy to apply and adds useful properties to the baseline distribution, i.e., the IW distribution. For example, the cdf of the \(\alpha\)IW distribution is expressed in terms of the cdf of the IW distribution and the pdf of the \(\alpha\)IW distribution. In addition, the moments of the \(\alpha\)IW distribution can be easily obtained with the help of the moments of the IW and uniform distributions. Furthermore, the pdf of the \(\alpha\)IW distribution can be written as a scale-mixture between the IW and uniform distributions, and this property may make it attractive in applications.

Data generation

The steps given below should be followed to obtain random variates from the \(\alpha \text {IW}(\alpha , \beta , \sigma )\) distribution:

  1. Step 1

    Generate a random variate x from the IW(\(\beta , \sigma\)) distribution via the equation

    $$\begin{aligned} x=\left[ -\sigma ^{-1}\ln (p)\right] ^{-1\big /\beta }; \quad 0<p<1,\quad i.e. \quad p\sim \text {U}(0,1). \end{aligned}$$
  2. Step 2

    Generate random variate y from the U(0,1) distribution.

  3. Step 3

    Obtain the random variate from the \(\alpha \text {IW}(\alpha , \beta , \sigma )\) distribution via the equation

    $$\begin{aligned} t=x \times y^{1/\alpha }. \end{aligned}$$

Note that the steps given above come from the definition of the random variable that follows an \(\alpha\)IW distribution; see Proposition 1.
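A minimal Python sketch of Steps 1-3 is given below as an illustration (the study's own computations were carried out in MATLAB); the helper name aiw_rvs and the use of NumPy's default random generator are our own assumptions.

```python
import numpy as np

def aiw_rvs(alpha, beta, sigma, size, rng=None):
    """Draw random variates from alphaIW(alpha, beta, sigma) via T = X * Y**(1/alpha)."""
    rng = np.random.default_rng(rng)
    p = rng.uniform(size=size)                    # Step 1: p ~ U(0, 1)
    x = (-np.log(p) / sigma) ** (-1.0 / beta)     # Step 1: inverse-cdf draw from IW(beta, sigma)
    y = rng.uniform(size=size)                    # Step 2: y ~ U(0, 1)
    return x * y ** (1.0 / alpha)                 # Step 3: t = x * y**(1/alpha)
```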

Related distributions

The \(\alpha\)IW distribution includes some distributions as sub-models, converges to some other well-known distributions as limiting cases, and becomes the slash Weibull distribution under a variable transformation. In this section, these related distributions are given briefly.

Sub-models

Let \(T \sim \alpha \text {IW}(\alpha , \beta , \sigma )\); then we have the following sub-models.

  1. i.

    If \(\beta =1\), then T has an \(\alpha\)-monotone inverse exponential density given below

    $$\begin{aligned} g(t; \alpha , \sigma )=\dfrac{\alpha ^2}{\sigma ^{\alpha }} \Gamma \left( \alpha \right) t^{\alpha -1} G\left( t^{-1}; \alpha +1, \sigma \right) ;\quad t>0 \quad \alpha , \sigma >0. \end{aligned}$$
  2. ii.

    If \(\beta =2\), then T has an \(\alpha\)-monotone inverse Rayleigh density as follows:

    $$\begin{aligned} g(t; \alpha , \sigma )=\dfrac{\alpha ^2}{2 \sigma ^{\alpha /2}} \Gamma \left( \dfrac{\alpha }{2}\right) t^{\alpha -1} G\left( t^{-2}; \dfrac{\alpha }{2}+1, \sigma \right) ;\quad t>0 \quad \alpha , \sigma >0. \end{aligned}$$

Limiting distributions

Let \(T \sim \alpha \text {IW}(\alpha , \beta , \sigma )\).

  1. i.

    The stochastic representation of the random variable T having an \(\alpha\)-monotone distribution is \(T=X \times Y^{\frac{1}{\alpha }}\). Therefore, it is clear that as \(\alpha\) goes to infinity, the random variable T converges to X. As a result, if \(\alpha \rightarrow \infty\), then \(\alpha \text {IW}(\alpha , \beta , \sigma )\) converges to IW(\(\beta , \sigma\)).

  2. ii.

    If \(\alpha \rightarrow \infty\) and \(\beta =1\), then \(\alpha \text {IW}(\alpha , \beta , \sigma )\) converges to the inverse exponential distribution

    $$\begin{aligned} g(t; \sigma )=\sigma t^{-2}\exp \big ({-\sigma t^{-1}}\big ); \quad t>0, \quad \sigma >0. \end{aligned}$$
  3. iii.

    If \(\alpha \rightarrow \infty\) and \(\beta =2\), then \(\alpha \text {IW}(\alpha , \beta , \sigma )\) converges to the inverse Rayleigh distribution

    $$\begin{aligned} g(t; \sigma )=2 \sigma t^{-3}\exp \big ({-\sigma t^{-2}}\big ); \quad t>0, \quad \sigma >0. \end{aligned}$$

Under variable transformation

Let \(T \sim \alpha \text {IW}(\alpha , \beta , \sigma )\). Then, the random variable Z defined by \(Z=T^{-1}\) has the pdf

$$\begin{aligned} f_{Z}(z; \alpha , \beta , \sigma ) =\dfrac{\alpha ^2}{\beta \sigma ^{\alpha /\beta }} \Gamma \left( \dfrac{\alpha }{\beta }\right) z^{-(\alpha +1)} G\left( z^{\beta }; \dfrac{\alpha }{\beta }+1, \sigma \right) ; \quad z>0, \quad \alpha , \beta , \sigma >0 \end{aligned}$$

and follows the slash Weibull distribution with a certain reparametrization. Note that slash exponential and slash Rayleigh distributions are sub-models of the slash Weibull distribution.

Estimation

Let \(\underset{\sim }{t}=(t_1, t_2, \cdots , t_n)\) be the observed values of a random sample from the \(\alpha \text {IW}(\alpha , \beta , \sigma )\) distribution. Then, estimation methods can be used to obtain the estimators of the parameters \(\alpha\), \(\beta\), and \(\sigma\), say \({\hat{\alpha }}, {\hat{\beta }}\), and \({\hat{\sigma }}\), of the \(\alpha\)IW distribution. In this study, the well-known maximum likelihood (ML), maximum product of spacing (MPS), and least squares (LS) estimation methods are considered to obtain estimators of the parameters of the \(\alpha\)IW distribution. Also, the efficiencies of the ML, MPS, and LS estimation methods are compared by conducting a Monte-Carlo simulation study. Note that the optimization tools “fminsearch” and “fminunc”, which are available in MATLAB2015b, can be used to find the ML, MPS, and LS estimates of the parameters \(\alpha , \beta\), and \(\sigma\).

ML estimation

The idea of the ML method is to maximize the log-likelihood (\(\ln L\)) function

$$\begin{aligned} \ln L\left( \alpha , \beta , \sigma ; \underset{\sim }{t}\right) =2 n\ln \alpha - n \ln \beta - n\left( \frac{\alpha }{\beta }\right) \ln \sigma + n \ln \Gamma \left( \dfrac{\alpha }{\beta }\right) +(\alpha -1)\sum _{i=1}^{n}\ln t_{i} + \sum _{i=1}^{n}\ln \left[ G\left( t_{i}^{-\beta };\frac{\alpha }{\beta }+1,\sigma \right) \right] . \end{aligned}$$
(10)

After taking the partial derivatives of the \(\ln L\) function given in (10) with respect to the parameters of interest and setting them equal to 0, the following likelihood equations

$$\begin{aligned} \dfrac{\partial \ln L}{\partial \alpha }= & {} \frac{2 n}{\alpha }-\frac{n}{\beta }\ln \sigma + \frac{n}{\beta }\Psi \left( \frac{\alpha }{\beta }\right) +\sum _{i=1}^{n} \ln t_{i}+\sum _{i=1}^{n}\dfrac{\frac{d}{d\alpha }G\left( t_{i}^{-\beta };\frac{\alpha }{\beta }+1,\sigma \right) }{G\left( t_{i}^{-\beta };\frac{\alpha }{\beta }+1,\sigma \right) }=0 \quad , \end{aligned}$$
(11)
$$\begin{aligned} \dfrac{\partial \ln L}{\partial \beta }= & {} n\frac{\alpha }{\beta ^2}\ln \sigma - \frac{n}{\beta }-n\frac{\alpha }{\beta ^2}\Psi \left( \frac{\alpha }{\beta }\right) +\sum _{i=1}^{n}\dfrac{\frac{d}{d\beta }G\left( t_{i}^{-\beta };\frac{\alpha }{\beta }+1,\sigma \right) }{G\left( t_{i}^{-\beta };\frac{\alpha }{\beta }+1,\sigma \right) }=0 \quad , \end{aligned}$$
(12)

and

$$\begin{aligned} \dfrac{\partial \ln L}{\partial \sigma }=-n\frac{\alpha }{\beta \sigma } +\sum _{i=1}^{n}\dfrac{\frac{d}{d\sigma }G\left( t_{i}^{-\beta };\frac{\alpha }{\beta }+1,\sigma \right) }{G\left( t_{i}^{-\beta };\frac{\alpha }{\beta }+1,\sigma \right) }=0 \end{aligned}$$
(13)

are obtained. Here, \(\Psi {(\cdot )}\) represents the digamma function. Since the likelihood equations (11)–(13) include nonlinear functions of the parameters \(\alpha\), \(\beta\), and \(\sigma\), they cannot be solved explicitly. Therefore, the ML estimates of the parameters \(\alpha\), \(\beta\), and \(\sigma\), say \({\hat{\alpha }}_{ML}\), \({\hat{\beta }}_{ML}\), and \({\hat{\sigma }}_{ML}\), are obtained by solving the likelihood equations (11)–(13) simultaneously. Note that the ML estimators have approximately a \(N_{3}(\Theta , \varvec{I^{-1}(\Theta )})\) distribution, where \(\varvec{I(\Theta )}\) is the expected information matrix. However, the matrix \(\varvec{J(\Theta )}=-\varvec{H}(\Theta )\), where \(\varvec{H}\) denotes the Hessian matrix, evaluated at \({\hat{\Theta }}\), can be used if the matrix \(\varvec{I(\Theta )}\) cannot be obtained explicitly. Therefore, asymptotic confidence intervals for the parameters \(\alpha , \beta\), and \(\sigma\) are defined by using the matrix \(\varvec{J}(\Theta )\), where \(\Theta =(\alpha , \beta , \sigma )^{\top }\). The entries of \(\varvec{H}\) are given in the Supplementary Material of the paper as an appendix. Also, ML estimation of the parameters \(\alpha\), \(\beta\), and \(\sigma\) under a progressively Type-II censored sample is considered in the Supplementary Material of the paper as an appendix.
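As an illustration (not the authors' MATLAB code), the log-likelihood (10) can be maximized numerically, e.g. with SciPy's Nelder-Mead routine, which plays a role analogous to MATLAB's “fminsearch”. The log-parametrization below is our own device to keep the parameters positive, and the helper names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, gammainc

def aiw_negloglik(theta, t):
    """Negative of the log-likelihood in Eq. (10); theta = (log alpha, log beta, log sigma)."""
    alpha, beta, sigma = np.exp(theta)
    a = alpha / beta
    G = gammainc(a + 1.0, sigma * t ** (-beta))
    n = t.size
    ll = (2 * n * np.log(alpha) - n * np.log(beta) - n * a * np.log(sigma)
          + n * gammaln(a) + (alpha - 1.0) * np.sum(np.log(t))
          + np.sum(np.log(G)))
    return -ll

# hypothetical usage for a data vector t (a NumPy array of positive observations):
# res = minimize(aiw_negloglik, x0=np.log([1.0, 1.0, 1.0]), args=(t,), method="Nelder-Mead")
# alpha_hat, beta_hat, sigma_hat = np.exp(res.x)
```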

MPS estimation

The MPS estimates of the parameters \(\alpha\), \(\beta\), and \(\sigma\), say \({\hat{\alpha }}_{MPS}\), \({\hat{\beta }}_{MPS}\), and \({\hat{\sigma }}_{MPS}\), of the \(\alpha\)IW distribution are the points in which the objective function

$$\begin{aligned}MPS\left( \alpha , \beta , \sigma ; \underset{\sim }{t}\right)&= \left( \dfrac{1}{n+1}\right) \sum _{i=0}^{n} \ln \bigg [ \exp \left( -\sigma t_{(i+1)}^{-\beta }\right) + \dfrac{\alpha }{\beta \sigma ^{\alpha /\beta }} \Gamma \left( \dfrac{\alpha }{\beta }\right) t_{(i+1)}^{\alpha } G\left( t_{(i+1)}^{-\beta }; \dfrac{\alpha }{\beta }+1, \sigma \right) \\&\quad -\exp \left( -\sigma t_{(i)}^{-\beta }\right) - \dfrac{\alpha }{\beta \sigma ^{\alpha /\beta }} \Gamma \left( \dfrac{\alpha }{\beta }\right) t_{(i)}^{\alpha } G\left( t_{(i)}^{-\beta }; \dfrac{\alpha }{\beta }+1, \sigma \right) \bigg ] \end{aligned}$$

attains its maximum. Here, \(t_{(\cdot )}\) denotes an ordered observation, i.e., \(t_{(1)} \le t_{(2)} \le \cdots \le t_{(n-1)} \le t_{(n)}\). Note that \(t_{(0)}\) and \(t_{(n+1)}\) are the values for which \(F_{T}(t_{(0)}; \alpha , \beta , \sigma )\equiv 0\) and \(F_{T}(t_{(n+1)}; \alpha , \beta , \sigma )\equiv 1\); see Bagci et al.22 and the references therein.
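A sketch of the (negative) MPS objective, reusing the hypothetical aiw_cdf helper sketched earlier and assuming no ties in the data, could look as follows.

```python
import numpy as np
# assumes aiw_cdf from the earlier sketch is in scope

def aiw_neg_mps(theta, t):
    """Negative maximum-product-of-spacings objective (to be minimized)."""
    alpha, beta, sigma = np.exp(theta)            # log-parametrization, as in the ML sketch
    F = aiw_cdf(np.sort(t), alpha, beta, sigma)
    F = np.concatenate(([0.0], F, [1.0]))         # F(t_(0)) = 0 and F(t_(n+1)) = 1
    spacings = np.clip(np.diff(F), 1e-300, None)  # guard against numerically zero spacings
    return -np.mean(np.log(spacings))             # (1/(n+1)) * sum of log-spacings, negated
```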

LS estimation

In the LS method, for the case of the \(\alpha\)IW distribution, the aim is to minimize the objective function

$$\begin{aligned} LS\left( \alpha , \beta , \sigma ; \underset{\sim }{t}\right) =&\dfrac{1}{n}\sum _{i=1}^{n}\left[ F_{T}(t_{(i)}; \alpha , \beta , \sigma )- \dfrac{i}{n+1}\right] ^2\\ =&\dfrac{1}{n}\sum _{i=1}^{n}\left[ \left( \exp \left( -\sigma t_{(i)}^{-\beta }\right) + \dfrac{\alpha }{\beta \sigma ^{\alpha /\beta }} \Gamma \left( \dfrac{\alpha }{\beta }\right) t_{(i)}^{\alpha } G\left( t_{(i)}^{-\beta }; \dfrac{\alpha }{\beta }+1, \sigma \right) \right) -\left( \frac{i}{n+1}\right) \right] ^2 \end{aligned}$$

with respect to the parameters of interest (\(\alpha\), \(\beta\), and \(\sigma\)); the resulting values are called the LS estimates of the parameters \(\alpha\), \(\beta\), and \(\sigma\), i.e., \({\hat{\alpha }}_{LS}\), \({\hat{\beta }}_{LS}\), and \({\hat{\sigma }}_{LS}\); see Acitas and Arslan23 and the references therein for further information.
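Likewise, a minimal sketch of the LS objective, based on the plotting positions i/(n+1) and the hypothetical aiw_cdf helper above, is given below.

```python
import numpy as np
# assumes aiw_cdf from the earlier sketch is in scope

def aiw_ls(theta, t):
    """Least-squares objective: mean squared distance between the fitted cdf and i/(n+1)."""
    alpha, beta, sigma = np.exp(theta)
    ts = np.sort(t)
    pp = np.arange(1, ts.size + 1) / (ts.size + 1.0)   # plotting positions i/(n+1)
    return np.mean((aiw_cdf(ts, alpha, beta, sigma) - pp) ** 2)
```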

Monte-Carlo simulation

In this subsection, a Monte-Carlo simulation study is conducted to compare the efficiencies of the ML, MPS, and LS estimation methods. Random samples are generated from the \(\alpha\)IW distribution, as presented in the data generation subsection, for the sample sizes 100, 200, and 300 and different parameter settings; see Table 1.

Table 1 Parameter settings considered in the simulation.

Note that the scale parameter \(\sigma\) is taken to be 1 throughout all simulation scenarios without loss of generality. All the simulations are conducted with \(\lfloor 100,000/n \rfloor\) Monte-Carlo runs, where \(\lfloor \cdot \rfloor\) denotes the floor function, via MATLAB2015b. The ML, MPS, and LS estimates of the parameters are obtained by using the optimization tool “fminunc”, which is available in MATLAB2015b. The efficiencies of the ML, MPS, and LS methods are compared by using the bias, variance, and mean squared error (MSE) criteria. The simulation results are given in Table 2 and summarized as follows; a toy sketch of such a simulation loop is given after Table 2.

  • Scenario 1: The ML method gives the smallest bias values for \(\alpha\) for all sample sizes. However, in terms of the MSE criterion, the MPS method yields the smallest values for \(\alpha\) for all sample sizes. Concerning \(\beta\) and \(\sigma\), the ML, MPS, and LS methods have negligible biases and small variances for all sample sizes n.

  • Scenario 2: The ML, MPS, and LS methods have negligible bias values and small variances for \(\alpha\) and \(\beta\) for all sample sizes; however, the LS method gives the largest MSE values for \(\sigma\). When \(n=300\), the ML, MPS, and LS methods show more or less the same performance.

  • Scenario 3: The MPS method has the largest bias values and the LS method gives the largest MSE values for \(\alpha\) for all sample sizes. However, the ML and MPS methods give more or less the same bias and MSE values for \(\beta\) and \(\sigma\) for all sample sizes.

  • Scenario 4: The ML, MPS, and LS methods yield negligible bias and small MSE values for \(\alpha\), \(\beta\), and \(\sigma\) when the sample size is \(n=200\) or \(n=300\). However, the ML, MPS, and LS methods do not perform well for \(\sigma\) when the sample size is \(n=100\).

  • Scenario 5: The LS method gives the largest MSE values for \(\alpha\), \(\beta\), and \(\sigma\) for all sample sizes. The ML, MPS, and LS methods have small bias values and small variances for \(\alpha\), \(\beta\), and \(\sigma\) for the sample sizes \(n=200\) and \(n=300\).

  • Scenario 6: The MPS method produces the smallest bias and MSE values for \(\alpha\), \(\beta\), and \(\sigma\) when the sample size is \(n=100\). The ML, MPS, and LS methods show more or less the same performance for \(\alpha\), \(\beta\), and \(\sigma\) when the sample size is \(n=300\).

  • Scenario 7: The ML, MPS, and LS methods have negligible bias values for \(\alpha\) and \(\beta\); however, the LS method gives the largest MSE values for \(\alpha\) and \(\sigma\), except for the sample size \(n=300\). The MPS and ML methods have similar efficiencies for \(\alpha\), \(\beta\), and \(\sigma\) for all sample sizes n.

  • Scenario 8: The LS method shows the worst performance for \(\alpha\) when the sample size is \(n=100\). Concerning \(\beta\) and \(\sigma\), the ML, MPS, and LS methods produce more or less the same results for the sample sizes \(n=200\) and \(n=300\).

To sum up, the ML and MPS methods are very close to each other and stand one step ahead of the LS method. It should also be noted that the simulated MSE values of the ML, MPS, and LS estimates for each parameter decrease as the sample size increases, as expected.

Table 2 The simulated bias, variance, and MSE values of the ML, MPS, and LS estimates.
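For completeness, a toy version of such a Monte-Carlo loop for the ML estimates is sketched below. It is an illustration only, reusing the hypothetical aiw_rvs and aiw_negloglik helpers sketched earlier and, unlike the study, considering only the ML method and starting the optimizer at the true parameter values.

```python
import numpy as np
from scipy.optimize import minimize
# assumes aiw_rvs and aiw_negloglik from the earlier sketches are in scope

def mc_ml(alpha, beta, sigma, n, n_runs, seed=0):
    """Simulated bias, variance, and MSE of the ML estimates over n_runs replications."""
    rng = np.random.default_rng(seed)
    est = np.empty((n_runs, 3))
    for k in range(n_runs):
        t = aiw_rvs(alpha, beta, sigma, size=n, rng=rng)
        res = minimize(aiw_negloglik, x0=np.log([alpha, beta, sigma]),
                       args=(t,), method="Nelder-Mead")
        est[k] = np.exp(res.x)
    bias = est.mean(axis=0) - np.array([alpha, beta, sigma])
    var = est.var(axis=0)
    return bias, var, bias ** 2 + var              # MSE = bias^2 + variance
```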

Applications

In this section, the \(\alpha\)IW distribution is used to model two popular data sets, called Kevlar 49/epoxy and Kevlar 373/epoxy, from the related literature.

There exist various extended/generalized versions of the IW distribution in the literature. To the best of the authors' knowledge, the Kumaraswamy inverse Weibull (KIW)24, alpha power inverse Weibull (APIW)25, Marshall-Olkin extended inverse Weibull (MOEIW)26, and exponentiated exponential inverse Weibull (EEIW)27 distributions, which can be strong alternatives to the \(\alpha\)IW distribution, have not been used in modeling the Kevlar 49/epoxy and Kevlar 373/epoxy data sets. Therefore, the modeling performance of the \(\alpha\)IW distribution is compared with that of the KIW, APIW, MOEIW, and EEIW distributions. In the comparisons, the well-known information criteria, namely the corrected Akaike Information Criterion (AICc) and the Bayesian Information Criterion (BIC), and the following goodness-of-fit statistics are considered: Kolmogorov-Smirnov (KS), Anderson-Darling (AD), and Cramér-von Mises (CvM). In the estimation, the ML, MPS, and LS methods are used to obtain the estimates of the parameters of the \(\alpha\)IW distribution, while the ML method is considered for the KIW, APIW, MOEIW, and EEIW distributions. The optimization tools “fminunc” and “fminsearch”, available in MATLAB2015b, are utilised to obtain the corresponding estimates of the parameters of the \(\alpha\)IW, IW, KIW, APIW, MOEIW, and EEIW distributions.
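For illustration, the sketch below shows how the AICc, BIC, and KS criteria could be computed for a fitted \(\alpha\)IW model using the hypothetical helpers defined earlier; the AD and CvM statistics are omitted for brevity, and the formulas are the standard definitions rather than code from the paper.

```python
import numpy as np
from scipy.stats import kstest
# assumes aiw_cdf and aiw_negloglik from the earlier sketches are in scope

def aiw_fit_criteria(theta_hat, t, k=3):
    """AICc, BIC, and KS statistic for a fitted alphaIW model with k parameters."""
    n = t.size
    nll = aiw_negloglik(theta_hat, t)                       # negative maximized log-likelihood
    aicc = 2 * nll + 2 * k + 2 * k * (k + 1) / (n - k - 1)  # corrected AIC
    bic = 2 * nll + k * np.log(n)
    ks = kstest(t, lambda x: aiw_cdf(x, *np.exp(theta_hat))).statistic
    return aicc, bic, ks
```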

Application-I

The Kevlar 49/epoxy data set, given in Table 3, involves the stress-rupture life of Kevlar 49/epoxy strands subjected to constant sustained pressure at the 90% stress level until all had failed; see Andrews and Herzberg28.

Table 3 Kevlar 49/epoxy data (\(n=101\)).

Many authors have proposed different distributions to model the Kevlar 49/epoxy data. For example, Paranaiba et al.29 derived the Kumaraswamy Burr XII distribution, Olmos et al.30 derived the slash generalized half-normal distribution, and Sen et al.31 introduced the quasi Xgamma-Poisson (QXGP) distribution to model the Kevlar 49/epoxy data. The QXGP distribution modeled the Kevlar 49/epoxy data better than the distributions formerly used and those recently proposed by others. Therefore, in modeling the Kevlar 49/epoxy data, the QXGP distribution is also considered alongside the KIW, APIW, MOEIW, and EEIW distributions for the sake of completeness of the comparisons. The estimates of the parameters of the \(\alpha\)IW, IW, QXGP, KIW, APIW, MOEIW, and EEIW distributions and the corresponding fitting results are given in Table 4.

Table 4 Fitting results for the Kevlar 49/epoxy data.

The results given in Table 4 show that the \(\alpha\)IW distribution provides better modeling performance than the QXGP, IW, KIW, APIW, MOEIW, and EEIW distributions when the AICc, BIC, KS, AD, and CvM criteria are taken into account. The fitting performance of the \(\alpha\)IW distribution is also illustrated graphically in Fig. 2.

Figure 2

The pdf and cdf plots of the \(\alpha\)IW with histogram and empirical cdf of the Kevlar 49/epoxy data.

It is seen from Fig. 2 that the pdf and cdf of the \(\alpha\)IW distribution precisely fit the histogram and empirical cdf of the Kevlar 49/epoxy data, respectively. This conclusion is also supported by the values of the information criteria and goodness-of-fit statistics given in Table 4. Moreover, the \(\alpha\)IW distribution reaches this conclusion with a considerable margin over the other competing distributions, as per Raftery32, who states that a difference of more than 2 in the BIC values of the models is significant. Also, note that the ML, MPS, and LS estimation methods give more or less the same fitting performance; see the corresponding goodness-of-fit statistics given in Table 4.

Application-II

The Kevlar 373/epoxy data set involves the fatigue fracture life of Kevlar 373/epoxy subjected to constant pressure at the 90% stress level until all had failed; therefore, it includes the exact times of failure. This data set was provided in Glaser33; see also Table 5.

Table 5 Kevlar 373/epoxy data (\(n=76\)).

In the literature, there exist various distributions used in modeling the Kevlar 373/epoxy data. For example, Merovci et al.34 proposed a generalized transmuted exponential distribution, Alizadeh et al.35 proposed another generalized transmuted exponential distribution, Dey et al.36 introduced the alpha power transformed Weibull distribution, and Jamal and Chesneau37 proposed a transformation of the Weibull distribution using sine and cosine functions (TSCW) to model the Kevlar 373/epoxy data. Note that the TSCW distribution introduced by Jamal and Chesneau37 is preferable among the others, since it has smaller values of the goodness-of-fit statistics. Therefore, besides the KIW, APIW, MOEIW, and EEIW distributions, the TSCW distribution is also taken into account to make the comparisons complete. The ML estimates of the parameters of the \(\alpha\)IW, IW, TSCW, KIW, APIW, MOEIW, and EEIW distributions and the corresponding fitting results are given in Table 6. Also, the MPS and LS estimates of the parameters of the \(\alpha\)IW distribution, along with the corresponding goodness-of-fit statistics, are given in Table 6.

Table 6 Fitting results for the Kevlar 373/epoxy data.

When the KS, AD, and CvM values are taken into account, it can be seen from Table 6 that the \(\alpha\)IW distribution is preferable to the IW, KIW, APIW, MOEIW, and EEIW distributions. The \(\alpha\)IW and TSCW distributions show more or less the same modeling performance based on the information criteria; however, the \(\alpha\)IW distribution stands one step ahead of the TSCW distribution when the goodness-of-fit statistics are taken into account. The fitting performance of the \(\alpha\)IW distribution is also illustrated in Fig. 3.

Figure 3

The pdf and cdf plots of the \(\alpha\)IW with histogram and empirical cdf of the Kevlar 373/epoxy data.

Note that the results given in Table 6 are also supported by Fig. 3, i.e., the pdf and cdf of the \(\alpha\)IW distribution precisely fit the histogram and empirical cdf of the Kevlar 373/epoxy data, respectively.

Conclusion

In this study, the \(\alpha\)IW distribution is derived as a scale-mixture between the IW and U(0, 1) distributions. Some statistical properties of the \(\alpha\)IW distribution are provided. It should be noted that the \(\alpha\)-monotone inverse Rayleigh and \(\alpha\)-monotone inverse exponential distributions are sub-models of the \(\alpha\)IW distribution, and that the IW, inverse Rayleigh, and inverse exponential distributions are limiting distributions of the \(\alpha\)IW distribution for different parameter settings.

The \(\alpha\)IW distribution is used to model two popular data sets from the reliability area, i.e. the Kevlar 49/epoxy and Kevlar 373/epoxy data sets. The literature review shows that the QXGP distribution proposed by Sen et al.31 and the TSCW distribution introduced by Jamal and Chesneau37 modeled the Kevlar 49/epoxy and Kevlar 373/epoxy data, respectively, better than the formerly used distributions. Therefore, the modeling capability of the \(\alpha\)IW distribution is compared with that of the QXGP and TSCW distributions for the corresponding data sets. It should be noted that the IW, KIW, APIW, MOEIW, and EEIW distributions are also included in the comparisons to make the study complete. In the comparisons, the AICc, BIC, KS, AD, and CvM criteria are used. The results show that the \(\alpha\)IW distribution provides a better fit than the QXGP, TSCW, KIW, APIW, MOEIW, and EEIW distributions, as well as the other distributions formerly used in modeling these data sets.

The results in Tables 4 and 6 show that the \(\alpha\)-monotone concept contributes significantly to increasing the modeling performance of the baseline distribution, i.e., the IW distribution. Thus, obtaining the \(\alpha\)IW distribution by using the \(\alpha\)-monotone concept is cost effective. In other words, the new shape parameter added to the distribution by the \(\alpha\)-monotone concept significantly increases the modeling capability of the IW distribution. As a result of this study, it is shown that the \(\alpha\)IW distribution can be an alternative to the well-known and recently introduced distributions for modeling purposes.

It is known that censored samples may occur in lifetime analysis, and there are many other estimation methods, such as the method of moments, probability weighted moments, L-moments, and so on. An expectation-maximization algorithm can also be utilised to find the maximum of the likelihood function, since the \(\alpha\)IW distribution has a scale-mixture representation. However, in this study, the parameters of the \(\alpha\)IW distribution are estimated by using the ML, MPS, and LS methods for the complete sample case. Furthermore, only the ML estimation method is considered for the progressively Type-II censored sample case; see the appendix in the Supplementary Material. Therefore, estimation of the parameters of the \(\alpha\)IW distribution by using different estimation methods for the complete and censored sample cases can be considered as future work.