Introduction

The Poisson regression model (PRM) is often adopted for modelling count data. The PRM models the relationship between a response variable and one or more regressors, where the response variable is a count, i.e. takes non-negative integer values, such as the number of defects in a unit of manufactured product, the number of errors or bugs in software, the number of road accidents, the number of times a machine fails in a month, the occurrences of a virus disease, or the count of particulate matter or other pollutants in the environment. The regression coefficients of the PRM are estimated using the maximum likelihood estimator (MLE).

In the linear regression model (LRM), estimator performance suffers from high instability when the regressors are correlated, i.e. under multicollinearity (see, for example,1,2). The effects of multicollinearity include inflated variances and covariances of the estimated regression coefficients, wider confidence intervals, insignificant t-ratios and a high R-square. Multicollinearity also negatively influences the performance of the MLE in the PRM3,4. Alternative estimators to the MLE in the LRM include the ridge regression estimator by Hoerl and Kennard5, the Liu estimator by Liu6, the Liu-type estimator by Liu7, the two-parameter estimator by Özkale and Kaçıranlar8, the k-d class estimator by Sakallıoğlu and Kaçıranlar9, a two-parameter estimator by Yang and Chang10, the modified two-parameter estimator by Dorugade11 and, more recently, the modified ridge-type estimator by Lukman et al.12, the modified new two-parameter estimator by Lukman et al.13, the modified new two-parameter estimator by Ahmad and Aslam14 and the K–L estimator by Kibria and Lukman15.

Researchers have applied some of these estimators to the Poisson regression model. Månsson and Shukur3 developed the Poisson ridge regression estimator (PRRE), and Månsson et al.16 developed the Poisson Liu estimator (PLE), to mitigate the problem of multicollinearity in the PRM. Batah et al.17 proposed the modified jackknifed ridge regression estimator (MJRE) for the LRM, and Türkan and Özel18 adapted the MJRE to the Poisson regression model as a remedy for multicollinearity. Özkale and Kaçıranlar8 combined the Liu estimator and the ridge regression estimator to form the two-parameter estimator in the LRM, and Asar and Genc19 adapted this two-parameter estimator to the Poisson regression model. Rashad and Algamal20 developed a new ridge estimator for the Poisson regression model by modifying the Poisson modified jackknifed ridge regression estimator. Qasim et al.4 suggested some new shrinkage estimators for the PLE. These estimators can be classified into Poisson regression estimators with a single shrinkage parameter and those with two parameters. Recently, Kibria and Lukman15 proposed another ridge-type estimator with a single shrinkage parameter, called the K–L estimator.

This study aims to propose an estimator that can handle multicollinearity in the Poisson regression model. We adapt the K–L estimator to the PRM and suggest some shrinkage-parameter estimators for it. We also compare the proposed estimator's performance with those of the MLE, PRRE and PLE in terms of the matrix mean square error (MSEM) and the mean square error (MSE). The small-sample properties are investigated using a simulation experiment. Finally, the new method's benefit is illustrated with an example using the aircraft damage data initially analyzed by Myers et al.21.

The rest of this paper is structured as follows: the Poisson regression model, some estimators and the MSEM and MSE properties of the estimators are discussed in Sect. 2; a Monte Carlo simulation experiment is conducted in Sect. 3; the aircraft damage data are analyzed in Sect. 4 to illustrate the findings; and some concluding remarks are presented in Sect. 5.

Statistical methodology

Poisson regression model and maximum likelihood estimator

Suppose that the response variable \(y_{i}\) takes the form of non-negative integers (count data); then its probability mass function is given as follows

$$f(y_{i} ) = \frac{{\exp \left( { - \mu_{i} } \right)\mu_{i}^{y_{i} } }}{{y_{i} !}},\;y_{i} = 0,1,2, \ldots$$
(2.1)

where \(\mu_{i} > 0\). The mean and variance of the Poisson distribution in Eq. (2.1) are equal (i.e. \(E(y) = Var\left( y \right) = \mu\)). The model is written in terms of the mean of the response. According to Myers et al.21, we assume that there exists a function g that relates the mean of the response to a linear predictor such that

$$g\left( {\mu_{i} } \right) = \eta_{i} = \beta_{0} + \beta_{1} x_{i1} + \cdots + \beta_{p} x_{ip} = x^{\prime}_{i} \beta ,$$
(2.2)

where \(g(.)\) is a monotone differentiable link function. The log link, \(g\left( {\mu_{i} } \right) = \ln \left( {\mu_{i} } \right) = x^{\prime}_{i} \beta\), so that \(\mu_{i} = \exp \left( {x^{\prime}_{i} \beta } \right)\), is generally adopted for the Poisson regression model because it ensures that all the fitted values of the response variable are positive. The maximum likelihood method is popularly used to estimate the coefficients of the PRM, where the likelihood function is defined as:

$$l\left( \beta \right) = \prod\limits_{i = 1}^{n} {\frac{{\exp \left( { - \mu_{i} } \right)\mu_{i}^{y_{i} } }}{{y_{i} !}}} = \frac{{\left( {\prod\limits_{i = 1}^{n} {\mu_{i}^{y_{i} } } } \right)\exp \left( { - \sum\limits_{i = 1}^{n} {\mu_{i} } } \right)}}{{\prod\limits_{i = 1}^{n} {y_{i} !} }}$$
(2.3)

where \(\mu_{i} = g^{ - 1} \left( {x^{\prime}_{i} \beta } \right).\) The log-likelihood function is used to estimate the parameter vector \(\beta\)

$$\ln l\left( \beta \right) = \sum\limits_{i = 1}^{n} {y_{i} } \ln \left( {\mu_{i} } \right) - \sum\limits_{i = 1}^{n} {\mu_{i} } - \sum\limits_{i = 1}^{n} {\ln \left( {y_{i} !} \right)}$$
(2.4)

Since Eq. (2.4) is nonlinear in \(\beta\), the solution is obtained using an iterative method. A commonly used procedure is the Fisher scoring method, defined as:

$$\beta^{t + 1} = \beta^{t} + I^{ - 1} \left( {\beta^{t} } \right)S\left( {\beta^{t} } \right),$$
(2.5)

where \(S(\beta ) = \frac{{\partial \ln l\left( \beta \right)}}{{\partial \beta }}\) is the score vector and \(I^{ - 1} \left( \beta \right) = \left( { - E\left( {\partial^{2} \ln l(\beta )/\partial \beta \partial \beta^{\prime} } \right)} \right)^{ - 1}\) is the inverse of the Fisher information matrix. At convergence, the estimated coefficients correspond to:

$$\hat{\beta }^{PMLE} = (X^{\prime}\hat{W}X)^{ - 1} X^{\prime}\hat{W}\hat{z}$$
(2.6)

where \(\hat{W} = diag(\hat{\mu }_{i} )\) and \(\hat{z}\) is the adjusted response variable with elements \(\hat{z}_{i} = x^{\prime}_{i} \hat{\beta }^{PMLE} + \frac{{y_{i} - \hat{\mu }_{i} }}{{\hat{\mu }_{i} }}.\) Both \(\hat{W}\) and \(\hat{z}\) are obtained at the final step of the Fisher scoring iterative procedure (see Hardin and Hilbe22). The asymptotic covariance matrix and the mean square error are given, respectively, as follows:

$$Cov\left( {\hat{\beta }^{PMLE} } \right) = \left( {X^{\prime}\hat{W}X} \right)^{ - 1}$$
(2.7)

and

$$MSE\left( {\hat{\beta }^{PMLE} } \right) = \sum\limits_{i = 1}^{p} {\frac{1}{{\lambda_{i} }}}$$
(2.8)

where \(\lambda_{i}\) is the ith eigenvalue of the matrix \(X^{\prime}\hat{W}X\).
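
As an illustration, the following R sketch implements the Fisher scoring (IRLS) procedure of Eqs. (2.5) and (2.6). The function name pois_mle, the starting values and the convergence tolerance are our own choices, and X is assumed to include a leading column of ones for the intercept.

```r
# Minimal IRLS / Fisher scoring sketch for the Poisson regression MLE (Eqs. 2.5-2.6).
pois_mle <- function(X, y, tol = 1e-8, maxit = 50) {
  beta <- rep(0, ncol(X))                 # starting values (our choice)
  for (t in seq_len(maxit)) {
    eta <- drop(X %*% beta)
    mu  <- exp(eta)                       # inverse log link
    W   <- diag(mu)                       # working weights, W = diag(mu_i)
    z   <- eta + (y - mu) / mu            # adjusted response
    beta_new <- solve(t(X) %*% W %*% X, t(X) %*% W %*% z)
    if (max(abs(beta_new - beta)) < tol) { beta <- beta_new; break }
    beta <- beta_new
  }
  drop(beta)
}

# The result can be checked against R's built-in fit:
# coef(glm(y ~ X - 1, family = poisson))
```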

Poisson K–L estimator

Månsson and Shukur3 developed the Poisson ridge regression estimator (PRRE) to mitigate the problem of multicollinearity, which is defined as follows:

$$\hat{\beta }^{PRRE} = \left( {X^{\prime}\hat{W}X + kI} \right)^{ - 1} X^{\prime}\hat{W}X\hat{\beta }^{PMLE} ,$$
(2.9)

where \(k > 0\) is the biasing parameter and \(I\) is the \(p \times p\) identity matrix. The optimal value of k is estimated as:

$$k = \frac{1}{{\hat{\alpha }_{\max }^{2} }}$$
(2.10)

where \(\hat{\alpha }_{\max }^{2} = \max \left( {\hat{\alpha }_{i}^{2} } \right)\), \(\hat{\alpha }_{i}\) is the ith component of \(\hat{\alpha } = Q^{\prime}\hat{\beta }^{PMLE}\) and Q is the matrix whose columns are the eigenvectors of \(X^{\prime}\hat{W}X.\)

Månsson et al.16 introduced the Poisson Liu estimator (PLE) as follows:

$$\hat{\beta }^{PLE} = \left( {X^{\prime}\hat{W}X + I} \right)^{ - 1} \left( {X^{\prime}\hat{W}X + dI} \right)\hat{\beta }^{PMLE} ,\;0 < d < 1,$$
(2.11)

where, according to Månsson et al.16, \(d\) may be estimated by the following formula, in which the minimum is taken over \(i = 1, \ldots ,p\):

$$d = \max \left( {0,\min \left( {\frac{{\alpha_{i}^{2} - 1}}{{\frac{1}{{\lambda_{i} }} + \alpha_{i}^{2} }}} \right)} \right)$$
(2.12)
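
A small R sketch of how these biasing parameters may be computed from Eqs. (2.10) and (2.12), reusing the hypothetical pois_mle() function sketched earlier; the object names are our own.

```r
# Biasing parameters for PRRE (Eq. 2.10) and PLE (Eq. 2.12),
# computed from the eigendecomposition of X'WX.
beta_pmle <- pois_mle(X, y)               # PMLE from the earlier sketch
mu_hat    <- exp(drop(X %*% beta_pmle))
W         <- diag(mu_hat)                 # estimated weight matrix W_hat

eig    <- eigen(t(X) %*% W %*% X)
lambda <- eig$values                      # eigenvalues lambda_i
Q      <- eig$vectors                     # eigenvectors
alpha  <- drop(t(Q) %*% beta_pmle)        # alpha_hat = Q' beta_hat

k_prre <- 1 / max(alpha^2)                                     # Eq. (2.10)
d_ple  <- max(0, min((alpha^2 - 1) / (1 / lambda + alpha^2)))  # Eq. (2.12)
```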

Kibria and Lukman15 proposed a new single-parameter ridge-type estimator for the linear regression model, defined as follows:

$$\hat{\beta }^{KLE} = (X^{\prime}X + kI_{p} )^{ - 1} (X^{\prime}X - kI_{p} )\hat{\beta }^{MLE}$$
(2.13)

Following Kibria and Lukman15, we propose the following new estimator, the Poisson K–L estimator (PKLE), for the Poisson regression model:

$$\hat{\beta }^{PKLE} = (X^{\prime}\hat{W}X + kI_{p} )^{ - 1} (X^{\prime}\hat{W}X - kI_{p} )\hat{\beta }^{PMLE}$$
(2.14)
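
A minimal R sketch of Eq. (2.14), assuming X, W and the PMLE beta_pmle are available from the sketches above; for a chosen value of k it returns the proposed estimate.

```r
# Poisson K-L estimator (Eq. 2.14) for a given biasing parameter k.
pkle <- function(X, W, beta_pmle, k) {
  XtWX <- t(X) %*% W %*% X
  Ip   <- diag(ncol(X))
  drop(solve(XtWX + k * Ip) %*% (XtWX - k * Ip) %*% beta_pmle)
}
```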

Suppose \(\alpha = Q^{\prime}\beta\) and \(Q^{\prime}X^{\prime}\hat{W}XQ = \Lambda = diag\left( {\lambda_{1} , \ldots ,\lambda_{p} } \right)\), where \(\lambda_{1} \ge \lambda_{2} \ge \ldots \ge \lambda_{p} > 0\) are the eigenvalues of \(X^{\prime}\hat{W}X\) and Q is the matrix whose columns are the corresponding eigenvectors. Throughout, let \(\Lambda^{k} = \left( {\Lambda + kI_{p} } \right)^{ - 1}\). The matrix mean square error (MSEM) and the mean square error (MSE) of the estimators PMLE, PRRE, PLE and PKLE are provided in Eqs. (2.15) to (2.22), respectively, as follows:

$$MSEM\left( {\hat{\alpha }^{PMLE} } \right) = Q\Lambda^{ - 1} Q^{T}$$
(2.15)
$$MSE\left( {\hat{\alpha }^{PMLE} } \right) = \sum\limits_{i = 1}^{p} {\frac{1}{{\lambda_{i} }}}$$
(2.16)
$$MSEM\left( {\hat{\alpha }^{PRRE} } \right) = Q\Lambda^{k} \Lambda \Lambda^{k} Q^{T} + k^{2} Q\Lambda^{k} \alpha \alpha^{T} \Lambda^{k} Q^{T}$$
(2.17)
$$MSE\left( {\hat{\alpha }^{PRRE} } \right) = \sum\limits_{i = 1}^{p} {\left( {\frac{{\lambda_{i} }}{{\left( {\lambda_{i} + k} \right)^{2} }}} \right)} + k^{2} \sum\limits_{i = 1}^{p} {\left( {\frac{{\alpha_{i}^{2} }}{{\left( {\lambda_{i} + k} \right)^{2} }}} \right)}$$
(2.18)
$$MSEM\left( {\hat{\alpha }^{PLE} } \right) = Q\Lambda_{d} \Lambda^{ - 1} \Lambda_{d}^{T} Q^{T} + Q\left( {\Lambda_{d} - I} \right)\alpha \alpha^{T} \left( {\Lambda_{d} - I} \right)^{T} Q^{T}$$
(2.19)

where \(\Lambda_{d} = \left( {\Lambda + I} \right)^{ - 1} \left( {\Lambda + dI} \right).\)

$$MSE\left( {\hat{\alpha }^{PLE} } \right) = \sum\limits_{i = 1}^{p} {\left( {\frac{{\left( {\lambda_{i} + d} \right)^{2} }}{{\lambda_{i} \left( {\lambda_{i} + 1} \right)^{2} }} + \frac{{\left( {d - 1} \right)^{2} \alpha_{i}^{2} }}{{\left( {\lambda_{i} + 1} \right)^{2} }}} \right)}$$
(2.20)
$$MSEM(\hat{\alpha }^{PKLE} ) = Q\Lambda^{k} (\Lambda - kI_{p} )\Lambda^{ - 1} \Lambda^{k} (\Lambda - kI_{p} )Q^{\prime} + 4k^{2} Q\Lambda^{k} \alpha \alpha^{\prime}\Lambda^{k} Q^{\prime}$$
(2.21)
$$MSE(\hat{\alpha }^{PKLE} ) = \sum\limits_{i = 1}^{p} {\left( {\frac{{\left( {\lambda_{i} - k} \right)^{2} }}{{\lambda_{i} \left( {\lambda_{i} + k} \right)^{2} }}} \right)} + 4k^{2} \sum\limits_{i = 1}^{p} {\left( {\frac{{\alpha_{i}^{2} }}{{\left( {\lambda_{i} + k} \right)^{2} }}} \right)}$$
(2.22)

where \(\lambda_{i}\) is the ith eigenvalue of \(X^{\prime}\hat{W}X\) and \(\alpha_{i}\) is the ith element of \(\alpha .\) For the purpose of theoretical comparisons, we adopt the following lemmas.
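
For later reference, the scalar MSE functions in Eqs. (2.16), (2.18), (2.20) and (2.22) translate directly into R; the following sketch (with our own function names) takes the eigenvalues \(\lambda_{i}\) and the components \(\alpha_{i}\) as vectors.

```r
# Theoretical scalar MSEs of Eqs. (2.16), (2.18), (2.20) and (2.22).
mse_pmle <- function(lambda) sum(1 / lambda)
mse_prre <- function(lambda, alpha, k)
  sum(lambda / (lambda + k)^2) + k^2 * sum(alpha^2 / (lambda + k)^2)
mse_ple  <- function(lambda, alpha, d)
  sum((lambda + d)^2 / (lambda * (lambda + 1)^2) +
      (d - 1)^2 * alpha^2 / (lambda + 1)^2)
mse_pkle <- function(lambda, alpha, k)
  sum((lambda - k)^2 / (lambda * (lambda + k)^2)) +
  4 * k^2 * sum(alpha^2 / (lambda + k)^2)
```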

Lemma 2.1

Let A be a positive definite (pd) matrix, that is, \(A > 0\), and let \(a\) be a vector. Then \(A - aa^{\prime} \ge 0\) if and only if (iff) \(a^{\prime}A^{ - 1} a \le 1\)23.

Lemma 2.2

Let \(\hat{\beta }_{1}\) and \(\hat{\beta }_{2}\) be two estimators of \(\beta\) with bias vectors \(b_{1}\) and \(b_{2}\), and let \(D = Cov\left( {\hat{\beta }_{1} } \right) - Cov\left( {\hat{\beta }_{2} } \right) > 0\). Then \(MSEM(\hat{\beta }_{1} ) - MSEM(\hat{\beta }_{2} ) = D + b_{1} b_{1}^{T} - b_{2} b_{2}^{T} > 0\) iff \(b_{2}^{T} \left[ {D + b_{1} b_{1}^{T} } \right]^{ - 1} b_{2} < 1\), where \(MSEM\left( {\hat{\beta }_{j} } \right) = Cov\left( {\hat{\beta }_{j} } \right) + b_{j} b_{j}^{T}\)24.

Theorem 2.1

\(\hat{\alpha }^{PKLE}\) is superior to \(\hat{\alpha }^{PMLE}\), i.e. \(MSEM\;[\hat{\alpha }^{PMLE} ] - MSEM\;[\hat{\alpha }^{PKLE} ] > 0\), provided \(k > 0\).

Proof.

$$\begin{aligned} MSEM\left( {\hat{\alpha }^{PMLE} } \right) - MSEM\left( {\hat{\alpha }^{PKLE} } \right) & = Q\left[ {\Lambda^{ - 1} - \Lambda^{k} (\Lambda - kI_{p} )\Lambda^{ - 1} \Lambda^{k} (\Lambda - kI_{p} )} \right]Q^{T} - 4k^{2} Q\Lambda^{k} \alpha \alpha^{T} \Lambda^{k} Q^{T} \\ & = Qdiag\left\{ {\frac{1}{{\lambda_{i} }} - \frac{{\left( {\lambda_{i} - k} \right)^{2} }}{{\lambda_{i} \left( {\lambda_{i} + k} \right)^{2} }}} \right\}_{i = 1}^{p} Q^{T} - 4k^{2} Q\Lambda^{k} \alpha \alpha^{T} \Lambda^{k} Q^{T} \\ \end{aligned}$$
(2.23)

The matrix \(\Lambda^{ - 1} - \Lambda^{k} (\Lambda - kI_{p} )\Lambda^{ - 1} \Lambda^{k} (\Lambda - kI_{p} )\) is pd since \(\lambda_{i} \left( {\lambda_{i} + k} \right)^{2} - \lambda_{i} \left( {\lambda_{i} - k} \right)^{2} = 4k\lambda_{i}^{2} > 0\) for \(k > 0\). The result then follows from Lemma 2.2.

Theorem 2.2

\(\hat{\alpha }^{PKLE}\) is superior to \(\hat{\alpha }^{PRRE}\), i.e. \(MSEM\;[\hat{\alpha }^{PRRE} ] - MSEM\;[\hat{\alpha }^{PKLE} ] > 0\), provided \(0 < k < 2\lambda_{i}\) for all \(i\).

Proof.

Considering the difference between the covariance parts, \(D( \cdot )\), of the two MSEM expressions,

$$\begin{aligned} D\left( {\hat{\alpha }^{PRRE} } \right) - D\left( {\hat{\alpha }^{PKLE} } \right) & = Q\left[ {\Lambda^{k} \Lambda \Lambda^{k} - \Lambda^{k} (\Lambda - kI_{p} )\Lambda^{ - 1} \Lambda^{k} (\Lambda - kI_{p} )} \right]Q^{T} \\ & = Qdiag\left\{ {\frac{{\lambda_{i} }}{{\left( {\lambda_{i} + k} \right)^{2} }} - \frac{{\left( {\lambda_{i} - k} \right)^{2} }}{{\lambda_{i} \left( {\lambda_{i} + k} \right)^{2} }}} \right\}_{i = 1}^{p} Q^{T} \\ \end{aligned}$$
(2.24)

The matrix \(\Lambda^{k} \Lambda \Lambda^{k} - \Lambda^{k} (\Lambda - kI_{p} )\Lambda^{ - 1} \Lambda^{k} (\Lambda - kI_{p} )\) is pd since \(\lambda_{i}^{2} \left( {\lambda_{i} + k} \right)^{2} - \left( {\lambda_{i} + k} \right)^{2} \left( {\lambda_{i} - k} \right)^{2} > 0\) whenever \(2\lambda_{i} - k > 0.\) The result then follows from Lemma 2.2.

Theorem 2.3

\(\hat{\alpha }^{PKLE}\) is superior to \(\hat{\alpha }^{PLE}\), i.e. \(MSEM\;[\hat{\alpha }^{PLE} ] - MSEM\;[\hat{\alpha }^{PKLE} ] > 0\), provided \(k > 0\).

Proof.

Considering again the difference between the covariance parts,

$$\begin{aligned} D\left( {\hat{\alpha }^{PLE} } \right) - D\left( {\hat{\alpha }^{PKLE} } \right) & = Q\left[ {\Lambda_{d} \Lambda^{ - 1} \Lambda_{d}^{T} - \Lambda^{k} (\Lambda - kI_{p} )\Lambda^{ - 1} \Lambda^{k} (\Lambda - kI_{p} )} \right]Q^{T} \\ & = Qdiag\left\{ {\frac{{\left( {\lambda_{i} + d} \right)^{2} }}{{\lambda_{i} \left( {\lambda_{i} + 1} \right)^{2} }} - \frac{{\left( {\lambda_{i} - k} \right)^{2} }}{{\lambda_{i} \left( {\lambda_{i} + k} \right)^{2} }}} \right\}_{i = 1}^{p} Q^{T} \\ \end{aligned}$$
(2.25)

The matrix \(\Lambda_{d} \Lambda^{ - 1} \Lambda_{d}^{T} - \Lambda^{k} (\Lambda - kI_{p} )\Lambda^{ - 1} \Lambda^{k} (\Lambda - kI_{p} )\) in the above equation is pd since \(\left( {\lambda_{i} + k} \right)^{2} \left( {\lambda_{i} + d} \right)^{2} - \left( {\lambda_{i} + 1} \right)^{2} \left( {\lambda_{i} - k} \right)^{2} > 0.\) The result then follows from Lemma 2.2.
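
The following R sketch illustrates Theorems 2.1 to 2.3 numerically, using the MSE functions sketched earlier; the values of \(\lambda_{i}\), \(\alpha_{i}\), k and d are arbitrary choices of ours that satisfy the stated conditions.

```r
# Numerical illustration of Theorems 2.1-2.3 with arbitrary example values.
lambda <- c(0.05, 5, 50)
alpha  <- c(0.6, 0.5, 0.4)
k <- 0.02                    # k < 2 * min(lambda), as Theorem 2.2 requires
d <- 0.5

mse_pmle(lambda)           - mse_pkle(lambda, alpha, k)  # positive (Thm 2.1)
mse_prre(lambda, alpha, k) - mse_pkle(lambda, alpha, k)  # positive (Thm 2.2)
mse_ple(lambda, alpha, d)  - mse_pkle(lambda, alpha, k)  # positive (Thm 2.3)
```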

Selection of biasing parameter

The biasing parameter is estimated by taking the first derivative of the MSE function of \(\hat{\alpha }^{PKLE}\) in Eq. (2.22) with respect to k, setting it to zero and solving for k. This yields the following value of k for each i:

$$k_{i} = \frac{{\lambda_{i} }}{{1 + 2\lambda_{i} \alpha_{i}^{2} }}$$
(2.26)

Following Månsson et al.16 and Lukman and Ayinde25, we propose the following estimators of the shrinkage parameter based on Eq. (2.26), where the minimum is taken over \(i = 1,2, \ldots ,p\):

$$\hat{k}_{1} = \max \left( {0,\min \left( {\frac{{\lambda_{i} }}{{1 + 2\lambda_{i} \alpha_{i}^{2} }}} \right)} \right)$$
(2.27)
$$\hat{k}_{2} = \sqrt {\max \left( {0,\min \left( {\frac{{\lambda_{i} }}{{1 + 2\lambda_{i} \alpha_{i}^{2} }}} \right)} \right)}$$
(2.28)
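
In R, both estimates follow directly from the eigenvalues and the transformed coefficients computed earlier (a sketch with our own object names):

```r
# Shrinkage-parameter estimates of Eqs. (2.27) and (2.28);
# the minimum is taken over i = 1, ..., p.
k_cand <- lambda / (1 + 2 * lambda * alpha^2)   # Eq. (2.26), one value per i
k1_hat <- max(0, min(k_cand))                   # Eq. (2.27)
k2_hat <- sqrt(k1_hat)                          # Eq. (2.28)
```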

Simulation experiment

Simulation design

Since a theoretical comparison among the estimators is not sufficient, a simulation experiment is carried out in this section. We generate the response variable of the PRM from the Poisson distribution \(P_{0} \left( {\mu_{i} } \right)\), where \(\mu_{i} = \exp \left( {x^{\prime}_{i} \beta } \right),\;i = 1,2, \ldots ,n\), \(\beta = \left( {\beta_{0} ,\beta_{1} ,\beta_{2} , \ldots ,\beta_{p} } \right)^{\prime }\) and \(x_{i}\) is the ith row of the design matrix X. Following Kibria1, we generate the X matrix as follows:

$$x_{ij} = \left( {1 - \rho^{2} } \right)^{1/2} w_{ij} + \rho w_{i,p + 1} ,\;i = 1,2, \ldots ,n;\;j = 1,2, \ldots ,p,$$
(3.1)

where \(\rho^{2}\) is the correlation between any two explanatory variables and the \(w_{ij}\) are independent standard normal pseudo-random numbers. The values of \(\rho\) are chosen to be 0.85, 0.9, 0.95 and 0.99. The mean function is obtained for p = 4 and 7 regressors, respectively. Following Kibria et al.26, the intercept values are chosen to be − 1, 0 and 1 to vary the average intensity of the Poisson process. The slope coefficients are chosen so that \(\sum\nolimits_{j = 1}^{p} {\beta_{j}^{2} = 1}\) and \(\beta_{1} = \beta_{2} = \cdots = \beta_{p}\), for sample sizes 50, 75, 100 and 200. The simulation experiment is conducted in the R programming language27. The estimated MSE is calculated as

$$MSE\left( {\hat{\beta }} \right) = \frac{1}{1000}\sum\limits_{j = 1}^{1000} {\left( {\hat{\beta }_{ij} - \beta_{i} } \right)^{\prime } } \left( {\hat{\beta }_{ij} - \beta_{i} } \right)$$
(3.2)

where \(\hat{\beta }_{ij}\) denotes the estimate of the ith parameter in the jth replication and \(\beta_{i}\) is the corresponding true parameter value. The experiment is replicated 1000 times. The simulated MSE values of the estimators for p = 4 with intercepts − 1, 0 and 1 are presented in Tables 1, 2 and 3, and for p = 7 with intercepts − 1, 0 and 1 in Tables 4, 5 and 6, respectively.
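
For concreteness, the following R sketch runs one cell of the design for the PMLE; the ridge, Liu and K–L estimators are evaluated analogously by transforming the PMLE as in Sect. 2. The standard normal \(w_{ij}\) and the use of R's built-in glm() are our reading of the design described above.

```r
# Sketch of one simulation cell following Eqs. (3.1)-(3.2).
set.seed(1)
n <- 50; p <- 4; rho <- 0.95; reps <- 1000
beta <- c(-1, rep(sqrt(1 / p), p))          # intercept -1; sum(beta_j^2) = 1

mse_sum <- 0
for (r in seq_len(reps)) {
  W0 <- matrix(rnorm(n * (p + 1)), n, p + 1)
  X  <- sqrt(1 - rho^2) * W0[, 1:p] + rho * W0[, p + 1]  # Eq. (3.1)
  X  <- cbind(1, X)                                      # add intercept column
  y  <- rpois(n, exp(drop(X %*% beta)))
  b  <- coef(glm(y ~ X - 1, family = poisson))           # PMLE for this cell
  mse_sum <- mse_sum + sum((b - beta)^2)
}
mse_sum / reps                                           # estimated MSE, Eq. (3.2)
```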

Table 1 Simulated MSE when p = 4 and intercept = − 1.
Table 2 Simulated MSE when p = 4 and intercept = 0.
Table 3 Simulated MSE when p = 4 and intercept = 1.
Table 4 Simulated MSE when p = 7 and intercept = − 1.
Table 5 Simulated MSE when p = 7 and intercept = 0.
Table 6 Simulated MSE when p = 7 and intercept = 1.

Simulation results discussion

The simulation results in Tables 1, 2, 3, 4, 5, 6 show that the following factors affect the estimators’ performance: the degree of correlation, the number of explanatory variables, the sample size and the value of the intercept. Increasing the sample size decreases the MSE values of all the estimators, as expected of any consistent estimator. The proposed estimator PKLE2 consistently possessed the minimum MSE. Increasing the degree of correlation increases the simulated MSE values for each of the estimators. The Poisson ridge regression estimator (PRRE) and the Poisson Liu estimator (PLE) compete favorably with the proposed estimator; for instance, their MSE values are very similar to those of the proposed estimator when multicollinearity is moderate (ρ = 0.85–0.95). The performance of PMLE is the worst among the estimators, especially when the correlation among the regressors is 0.90 or higher. Increasing the number of explanatory variables from 4 to 7 raises the MSE of every estimator. The MSE of all the estimators decreases as the intercept value changes from − 1 to + 1. Consistently, the proposed estimator PKLE2 outperforms all the other estimators considered in this study. We also plotted the MSE against the sample size for different values of ρ and the intercept in Figs. 1, 2, 3, 4 and 5. These figures show that PKLE2 consistently attains the minimum MSE at every sample size n, followed by PKLE1, while PMLE performs worst. They also reveal that the estimators’ performances become similar for large n (200) or for the smallest correlation considered; even then, the proposed estimator PKLE2 performs best.

Figure 1 Intercept = − 1; ρ = 0.95; p = 4.

Figure 2 Intercept = 0; ρ = 0.99; p = 4.

Figure 3 Intercept = 0; ρ = 0.999; p = 7.

Figure 4 Intercept = 1; n = 200; p = 4.

Figure 5 Intercept = 1; n = 200; p = 7.

Real-life application

In this section, we examine the effectiveness of the new estimator using real-life data. We adopt the aircraft damage data to evaluate the performance of the proposed estimator and the other estimators considered in this study. The dataset was first analyzed by Myers et al.21 and more recently by Asar and Genc19, among others. It provides information on two types of aircraft, the McDonnell Douglas A-4 Skyhawk and the Grumman A-6 Intruder, and describes 30 strike missions flown by these two aircraft. The explanatory variables are as follows: x1 is a binary variable representing the aircraft type (A-4 coded as 0 and A-6 coded as 1), while x2 and x3 denote the bomb load in tons and the total months of aircrew experience, respectively. The response variable y represents the number of locations with damage on the aircraft and follows a Poisson distribution19,21. Amin et al.28 examined whether the model follows a Poisson regression model using the Pearson chi-square goodness-of-fit test; the test confirms that the response variable is well fitted by the Poisson distribution, with a test statistic (p-value) of 6.89812 (0.07521).

According to Myers et al.21, there is evidence of a multicollinearity problem in the data. The eigenvalues of the \(X^{\prime}\hat{W}X\) matrix are 0.0433, 374.8961 and 2085.2251. The condition number, \(CN = \sqrt {\frac{{\max \left( {eigenvalue} \right)}}{{\min \left( {eigenvalue} \right)}}} = 219.365\), also indicates multicollinearity in the dataset2,12. The estimators’ performances are assessed through the mean squared error (MSE). The MSEs of the estimators are computed using Eqs. (2.16), (2.18), (2.20) and (2.22), respectively, and the biasing parameters are determined using Eqs. (2.10), (2.12), (2.27) and (2.28), respectively. The regression coefficients and the MSE values are provided in Table 7. From Table 7, we observe that all the coefficients have the same sign. PMLE has the highest mean squared error due to the presence of multicollinearity, while the proposed estimator (PKLE2) has the lowest MSE, which establishes its superiority. The ridge and Liu estimators also perform well in the presence of multicollinearity. We note that the performance of the proposed estimator depends on the biasing parameter k.
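
A sketch of this multicollinearity diagnostic in R, assuming the aircraft damage data (tabulated in Myers et al.21) are available as a data frame named aircraft with columns y, x1, x2 and x3:

```r
# Condition number of X'WX for the aircraft damage data.
fit <- glm(y ~ x1 + x2 + x3, family = poisson, data = aircraft)
X   <- model.matrix(fit)
W   <- diag(fitted(fit))                  # W_hat = diag(mu_hat_i)
lam <- eigen(t(X) %*% W %*% X)$values     # eigenvalues of X'WX
sqrt(max(lam) / min(lam))                 # condition number, CN
```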

Table 7 Regression coefficients and MSE.

Some concluding remarks

The K–L estimator is an estimator with a single biasing parameter, which avoids the computational burden of selecting two biasing parameters, as required by some of the two-parameter estimators. It belongs to the class of ridge and Liu estimators designed to mitigate multicollinearity in the linear regression model. According to Kibria and Lukman15, the K–L estimator outperforms the ordinary least squares estimator, the ridge estimator and the Liu estimator in the linear regression model. As stated earlier, multicollinearity degrades the performance of the maximum likelihood estimator (MLE) in both the linear regression model and the Poisson regression model (PRM). The ridge regression and Liu estimators were, at different times, adapted to the PRM to handle multicollinearity. In this study, we developed a new estimator for the PRM, established its statistical properties and carried out theoretical comparisons with the estimators mentioned above. Furthermore, we conducted a simulation experiment and analyzed a real-life application to demonstrate the proposed estimator's effectiveness. Both the simulation and the application results show that the proposed estimator outperforms the existing estimators, while PMLE performs worst.