Introduction

Bronchopulmonary dysplasia (BPD), one of the most common and complex neonatal conditions,1 continues to increase in incidence and affects approximately 28,000 babies annually in Europe2 and 18,000 in the US.3 Preterm infants with BPD have significant long-term respiratory and neurodevelopmental complications that extend into adulthood,4 including abnormal lung function5 and poor school performance.4

Numerous trials, summarised in at least 24 Cochrane reviews, have examined interventions to prevent BPD, including postnatal corticosteroids. However, the benefit of such interventions in preventing BPD may not outweigh their significant side effects, which include gastrointestinal perforation and neurodevelopmental impairment.6,7 This illustrates the complexity of BPD management: the risk of significant long-term morbidity from BPD must be balanced against that of exposure to potentially harmful treatments.8

BPD prediction models aim to provide a personalised risk approach to identifying high-risk very preterm infants for timely preventative treatment. Despite numerous models having been developed, none are used routinely in clinical practice. This review aims, first, to provide a comprehensive assessment of all BPD prediction models developed, addressing the clinical uncertainty over which predictive models are sufficiently valid and generalisable for clinical and research use, and, second, to validate eligible models in a large national contemporaneous cohort of very preterm infants.

Material and methods

Systematic review

There was no deviation from the protocol published in PROSPERO.9 Standard Cochrane Neonatal and Prognosis Methods Group methodologies were used.

Inclusion criteria

Cohort studies, case–control studies, and randomised controlled trials used to develop or validate prediction models were included. Very preterm infants born before 32 weeks of gestational age (GA) and less than 2 weeks old at the time of BPD prediction were included, ensuring the clinical applicability and timeliness of the prediction models to support clinical decision making on preventative treatments. Studies that used non-universally accessible predictors, such as pulmonary function tests, ultrasonography and biomarkers, were excluded. BPD was defined as a respiratory support requirement at either 28 days of age or 36 weeks of corrected gestational age (CGA).10 The composite outcome of BPD and death before discharge was included as a secondary outcome.

Search methods

Standard Cochrane Neonatal11 and prognostic study search filters12 were used. “Bronchopulmonary dysplasia OR BPD OR chronic lung disease OR CLD” search terms were used to search the CENTRAL, Ovid MEDLINE, CINAHL, EMBASE and Scopus databases until 13/08/2021 (Appendix 1).

Data collection

Two reviewers (T.C.K., N.B. or K.L.L.) independently screened titles and abstracts as well as full-text reports for inclusion before independently extracting data and assessing the risk of bias using the PROBAST tool13,14 (Appendix 2). These steps were performed using the web-based tool CADIMA.15 Any disagreement was resolved by discussion.

Prediction model performance measure

Discrimination (C-statistics), calibration (Observed:Expected ratio (O:E ratio)) and classification (net benefit analysis) measures were extracted alongside their uncertainties.

Missing data

Study authors were contacted to obtain any missing data. Where this was unsuccessful, missing performance measures were approximated using the methodology proposed by Debray et al.16 and the R statistical package "metamisc".17
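As an illustration of this approach, the sketch below (not the authors' code) shows how missing standard errors and confidence intervals for a reported C-statistic might be restored from the sample size and event count using the ccalc() helper in "metamisc"; the data frame and its column names are hypothetical.

```r
# Hypothetical extracted validation results: C-statistics reported, but
# standard errors / confidence intervals only partially reported.
library(metamisc)

validation <- data.frame(
  cstat    = c(0.79, 0.83, 0.76),   # reported C-statistics
  ci_lower = c(0.74, NA,   NA),     # reported 95% CI limits (NA if not reported)
  ci_upper = c(0.84, NA,   NA),
  n        = c(420, 1250, 310),     # validation sample sizes
  n_events = c(130, 390,  95)       # number of infants with BPD
)

# ccalc() returns each C-statistic with a standard error, approximating the
# missing uncertainty from N and the number of events (Debray et al.)
restored <- ccalc(
  cstat      = cstat,
  cstat.cilb = ci_lower,
  cstat.ciub = ci_upper,
  N          = n,
  O          = n_events,
  data       = validation
)
restored
```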

Meta-analysis

Meta-analysis of the performance measures was performed for externally validated models using a random-effects approach and the R statistical package "metafor".18 Sensitivity analysis was performed by excluding studies with an overall high risk of bias. We pre-specified that we would assess the sources of heterogeneity16 and reporting deficiencies19 if more than ten studies were included.
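A minimal sketch of the pooling step is given below, assuming the C-statistics and their standard errors have already been restored as above; pooling is performed on the logit scale, and the values shown are illustrative rather than taken from the review.

```r
library(metafor)

cstat    <- c(0.79, 0.83, 0.76)      # C-statistics from external validations (illustrative)
cstat_se <- c(0.025, 0.018, 0.030)   # their standard errors (illustrative)

logit_c    <- qlogis(cstat)                       # logit transform
logit_c_se <- cstat_se / (cstat * (1 - cstat))    # delta-method SE on the logit scale

fit <- rma(yi = logit_c, sei = logit_c_se, method = "REML")  # random-effects meta-analysis

# Back-transform the pooled estimate and its 95% CI to the C-statistic scale
plogis(c(pooled = fit$b, ci.lb = fit$ci.lb, ci.ub = fit$ci.ub))
```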

Conclusions

The adapted Grades of Recommendation, Assessment, Development and Evaluation (GRADE) framework20 was used to assess the certainty of the evidence.

External validation of eligible models

Study design

A population-based retrospective cohort study using the UK National Neonatal Research Database (NNRD)21 was conducted to externally validate the BPD prediction models identified in the systematic review. We included all very preterm infants admitted to 185 neonatal units in England and Wales from January 2010 to December 2017. The NNRD covered over 90% of English neonatal units in 2010 and achieved full coverage of England in 2012 and of Wales in 2014. Infants with a birthweight z score below –4 or above 4 were excluded as these values were likely erroneous entries. Further details of the data items extracted are found in the National Neonatal Dataset21 and Appendix 3. Ethical approval was granted by the Sheffield Research Ethics Committee (REC reference 19/YH/0115).

Statistical analysis

Data extraction and statistical analysis were performed using STATA/SE version 16 (StataCorp) and R version 4 (R Core Team). Summary statistics (median, interquartile range and percentages) were used to describe the data. Missing data were imputed five times using Multivariate Imputation by Chained Equations.22 Model performance was assessed in three domains: discrimination (C-statistics), calibration (calibration plot and O:E ratio) and clinical utility (decision curve analysis).
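The sketch below outlines these steps under stated assumptions: a data frame cohort with a binary outcome bpd, and a hypothetical published model whose coefficients and predictor names (gest_age, fio2) are purely illustrative.

```r
library(mice)
library(pROC)

imp <- mice(cohort, m = 5, seed = 2021)   # five imputations by chained equations

performance <- sapply(seq_len(5), function(i) {
  d <- complete(imp, i)
  # Hypothetical published model: illustrative intercept and coefficients
  lp   <- -2.1 + 0.9 * (d$gest_age < 28) + 0.03 * d$fio2
  pred <- plogis(lp)                                   # predicted BPD risk
  c(
    cstat    = as.numeric(pROC::auc(d$bpd, pred, quiet = TRUE)),  # discrimination
    oe_ratio = sum(d$bpd) / sum(pred)                             # calibration: observed/expected
  )
})

rowMeans(performance)   # simple average of performance across the five imputations
```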

Results

Systematic review

Literature search

Of the 7628 potentially eligible records identified, 194 full-text articles were screened and 122 were excluded because they identified risk factors rather than developing prediction models (48%), used predictors only available after 2 weeks of age (24%), included infants born at or above 32 weeks GA (17%), used non-universally accessible predictors (10%) or reported the wrong outcome measure (2%). Data were extracted from the remaining 72 full-text articles (Appendix 4), encompassing 64 studies and 53 BPD prediction models (Fig. 1).

Fig. 1: Flow diagram of the systematic review.

Flow diagram of literature search and included studies.

Description of included studies

Of the 64 included studies, 31 were BPD prediction model development studies, 20 were validation studies, and 13 were combined development and external validation studies. Fifty-five studies were cohort studies, five were randomised controlled trials, two used a combination of randomised controlled trials and cohort studies, one was a case–control study, and one did not report its study design. Twenty-six studies were performed in North America, 14 in Europe, 13 in Asia, five each in South America and Australia/New Zealand, and one study was carried out worldwide. Twenty studies developed and validated BPD prediction models using infants born before 2000, with a further 15 studies using infants born between 2000 and 2010. The 64 included studies recruited 274,407 infants (range 32–156,587), with the majority (50 studies) recruiting fewer than 1000 infants. A total of 39 (61%) studies were conducted in a single centre. Forty-seven studies used BPD as their outcome, 14 studies used a BPD/death composite outcome, and 3 further studies reported both BPD and the BPD/death composite outcome. Thirty-one studies defined BPD at 36 weeks CGA, 22 studies used the timepoint of 28 days of age, and six studies used both timepoints. Five studies did not report how BPD was defined (Table 1).

Table 1 Characteristics of 64 included studies.

Of the 44 derivation studies, 70% used logistic regression to develop the BPD prediction tool; 11% used univariable analysis; 5% each used clinical consensus or a combination of logistic regression and classification and regression tree (CART) analysis; and 2% each used CART, gradient boosting, a Bayesian network, or a combination of logistic regression and a support vector machine. Complete case analysis was used in 41% of the derivation studies, while the handling of missing data was not reported in the remaining 59%. Internal and external validation were performed in 25% and 30% of the studies, respectively; validation was not performed in the remaining 45%. A total of 75% of the studies assessed discrimination using C-statistics. In contrast, only 16% of the studies evaluated calibration, using goodness-of-fit tests (5 studies), a calibration plot (1 study) or the O:E ratio (1 study). Of the 44 models, ten (23%), eight (18%) and four (9%) provided a formula, score chart and nomogram, respectively. Only two (5%) models provided an online calculator (Table 2).

Table 2 Methodology used by the derivation studies.

Of the 53 BPD prediction models identified, 19 used predictors available within 24 hours of age, while 20 and 6 models relied on predictors available between 2 and 7 days and beyond 7 days of age, respectively. Seven models used predictors available at various timepoints, and the timepoint was not available for one model. The BPD prediction models considered a median of 14 predictors before retaining a median of five predictors in the final models. The five most used predictors were GA, birthweight, fraction of inspired oxygen (FiO2), sex and invasive ventilation requirement, each used in 33–69% of models (Appendix 5).

Risk of bias

The majority of studies were assessed as having a low risk of bias in the three domains of participants (84%), predictors (92%) and outcome (89%). A total of 60 (94%) studies were assessed as having a high risk of bias in the analysis domain based on the PROBAST tool,13 for reasons including calibration not being assessed (55 studies, 86%); small sample size (37 studies, 58%); inappropriate handling of missing data (21 studies, 33%); lack of internal/external validation (9 studies, 14%); inappropriate predictor selection approach (6 studies, 9%); and inappropriate handling of continuous predictors (2 studies, 3%).

Twenty-one studies (33%) raised high applicability concerns in the participants domain as they targeted a specific group of very preterm infants, usually those at higher risk of BPD (for example, ventilated infants only in 17 studies (27%)). Although universally accessible, the predictors used in ten studies (16%) may not be routinely collected. Eight studies (13%) used BPD definitions that deviated from the current consensus10 (Fig. 2 and Appendix 6).

Fig. 2: Risk of bias assessment.

Summary of risk of bias assessments for included studies based on the PROBAST tool.13

Discrimination

The C-statistics of the included prediction models ranged from 0.52 to 0.95 in the external validation studies, with better performance in models using predictors beyond 7 days of age. Meta-analysis could only be performed on 22 (50%) and 11 (35%) models for the BPD and BPD/death composite outcomes, respectively, as the remaining 22 and 20 models were validated in only one study each. The C-statistic confidence intervals (CI) were wide due to the small number of studies in each meta-analysis. The five models whose C-statistic CIs lay entirely above 0.5 for BPD were CRIB I,24 CRIB II25 and the Valenzuela-Stutman 201926 birth, day 3 and day 14 models. Similarly, for the BPD/death composite, the five models with CIs above 0.5 were the Laughon 201127 day 1, 3, 7 and 14 models and the Valenzuela-Stutman 201926 day 14 model (Appendix 7). Meta-analysis of the Valenzuela-Stutman 2019 models26 could only be performed after including validation findings from our cohort study.

Calibration

The O:E ratio was reported in four external validation studies28,29,30,31 evaluating six prediction models (Rozycki 1996,28 Parker 199229 and Laughon 201127 (day 1, 3, 7 and 14 models)) with considerable variation in the O:E ratio among the included models. Meta-analysis of the O:E ratio could only be done on one model (Laughon 201127 (day 1)) with an O:E ratio of 0.96 (95% CI 0.85–0.99) (Appendix 8).

The calibration plot was reported in three studies,30,31,32 assessing six models (Palta 1990,33 Sinkin 1990,34 Ryan 1996,35 Kim 200536 and Laughon 201127 (day 1 and 3 models)) (Appendix 9).

Classification

No studies reported net benefit or decision curve analyses.

Heterogeneity and reporting deficiencies

The wide confidence and prediction intervals demonstrated heterogeneity among the external validation studies. As there were fewer than ten validation studies in each meta-analysis, subgroup analyses and funnel plots were not used to explore the sources of heterogeneity. Sensitivity analysis was not performed as all studies except two30,37 had an overall high risk of bias.

Summary of findings

Due to the lack of validation studies, a conclusion could only be made for one model, Laughon 2011.27 There was low-quality evidence for the discrimination and calibration performance of the Laughon 201127 model in predicting the BPD/death composite outcome using predictors at day 1 of age, with a C-statistic of 0.76 (95% CI 0.70–0.81) and an O:E ratio of 0.96 (95% CI 0.85–0.99). The evidence was downgraded by two levels due to study limitations (variation in the BPD definition used and some studies recruiting only high-risk infants, such as invasively ventilated infants) and inconsistency (heterogeneity and a small number of external validation studies).

External validation

Patient cohort

After exclusions (Appendix 10), 62,864 very preterm infants were included (Appendix 11). A total of 17,775 (31%) infants developed BPD while 5718 (9%) infants died before discharge from the neonatal unit.

Model performance

We were able to externally validate six prediction models (Henderson-Smart 2006,38 Valenzuela-Stutman 201926 (day 1, 3 and 14 models), Shim 202139 and Ushida 202140) in our retrospective cohort; the variables required by the remaining models were not available in our cohort. The discrimination (C-statistics) and calibration (O:E ratio and calibration plot) performances (Fig. 3) varied among the models. Although the models displayed fair to good discrimination, with C-statistics of 0.70–0.90, they were poorly calibrated, as indicated by the calibration plots and O:E ratios between 0.39 and 2.31. The Valenzuela-Stutman 2019 models26 appeared to overestimate the predicted risk, whereas the remaining three models (Henderson-Smart 2006,38 Shim 202139 and Ushida 202140) tended to underestimate it. Of the six externally validated models, four (Henderson-Smart 2006,38 the Valenzuela-Stutman 201926 day 14 model, Shim 202139 and Ushida 202140) showed superior net benefit in the decision curve analysis across a reasonable range of threshold probabilities (30–60%) for deciding on postnatal corticosteroid treatment (Appendix 12). The threshold probabilities were identified in a meta-regression of 20 randomised controlled trials.8

Fig. 3: Model performance of the externally validated prediction models.

Discrimination (C-statistics) and calibration (O:E ratio and calibration plots) of prediction models externally validated using a retrospective cohort for (a) bronchopulmonary dysplasia (BPD) (n = 57,572) and (b) the composite of BPD and death (n = 62,864).

Discussion

Our study updates the systematic review carried out nearly a decade ago,32 with a further 27 prediction models identified since the last review. Our systematic review identified 64 studies that developed and/or validated 53 BPD prediction models, with meta-analysis carried out on 22 models. Due to the lack of external validation studies, we could not identify a prediction model suitable for routine clinical use. Further external validation, including assessment of both discrimination and calibration performance in a population similar to that in which the model will be used, is needed before any model can be adopted in clinical practice. However, the most promising prediction model based on our meta-analysis was Laughon 2011,27 predicting the BPD/death composite outcome using predictors at day 1 of age. Re-calibration of the model to the local population of interest, with re-assessment of its performance in subsequent external validation studies if re-calibrated, may be needed before use in clinical practice.

We also externally validated six prediction models26,38,39,40 in our retrospective population-based cohort study, as the variables required by the remaining models were not available in our cohort. Although these models showed fair to good discrimination, they were not well calibrated in our cohort. To be useful, prediction tools need to be generalisable to current datasets, highlighting the importance of external validation.

Implications for clinical practice and research

The implementation of BPD prediction models in clinical practice is limited by the lack of external validation of the published models. Less than a third of the identified prediction models were externally validated, and half of those were validated in only one study. This potentially limits the generalisability of model performance to other infant populations and adoption into clinical practice. There is also a need for continual assessment of model performance over time to determine whether updates are needed as clinical practice changes.

Sample size

Most external validation studies had small sample sizes or were restricted to specific high-risk infant populations (for example, ventilated infants only). Furthermore, 61% of studies were single centre, potentially limiting the generalisability of the models. Prediction model development studies should have a sample size that provides sufficient infants with the outcome of interest for the number of candidate predictors used, based on the recommendations of Riley et al. 2019,41 while validation studies should include at least 100 infants with the outcome.13
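For illustration only (not part of the review), the Riley et al. criteria can be applied with the pmsampsize R package; the anticipated R-squared, number of candidate parameters and BPD prevalence below are assumed values.

```r
library(pmsampsize)

# Minimum sample size for developing a binary-outcome (BPD) prediction model
pmsampsize(
  type       = "b",    # binary outcome
  rsquared   = 0.25,   # anticipated Cox-Snell R-squared (assumed)
  parameters = 10,     # number of candidate predictor parameters (assumed)
  prevalence = 0.30    # anticipated BPD prevalence in very preterm infants (assumed)
)
```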

Missing values

The majority of studies either did not report how missing data were handled or excluded infants with missing data. A clear description of the handling of missing data should be provided, and complete case analysis should be avoided where possible.13

Variation in prediction timepoint and outcome definition

Nearly three-quarters of the included prediction models predicted BPD, with the remainder predicting the BPD/death composite outcome. As death and BPD are semi-competing risks, infants who died before 36 weeks CGA may have been at higher risk of developing BPD had they survived to 36 weeks CGA. Hence, the potential predictive information carried by death should be accounted for in BPD prediction modelling. The included models also made predictions at a variety of timepoints, which made meta-analysis difficult and may limit the clinical settings in which a model can be used. It may be sensible for future prediction models to be externally validated for both BPD and the BPD/death composite outcome at three prediction timepoints of 1, 7 and 14 days of age. These timepoints would allow timely preventative treatment or research recruitment to be targeted at high-risk infants.

Predictors

The predictors used in a model should be easy to assess routinely in daily clinical practice and should not depend on local clinical practice (for example, weight loss and fluid intake). Future prediction models should also be dynamic, accounting for the infant's changing status and clinical trajectory over time.

Predictor selection based on the traditional stepwise approach or univariable analysis should be avoided, especially in small datasets. Instead, predictor selection based on a priori knowledge, or on a statistical approach that does not rely on prior statistical tests of the predictor–outcome association (e.g., principal component analysis), may be preferable.14
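A brief sketch of such an outcome-blind approach is given below: principal components are derived from the candidate predictors alone, without reference to the BPD outcome, and the retained components then enter the model. The objects predictors and bpd, and the choice of three components, are hypothetical.

```r
# Outcome-blind predictor reduction by principal component analysis
pca <- prcomp(predictors, center = TRUE, scale. = TRUE)
summary(pca)                            # variance explained per component

scores <- as.data.frame(pca$x[, 1:3])   # retain, say, the first three components

# The components (not the raw predictors) are then used as model inputs
fit <- glm(bpd ~ PC1 + PC2 + PC3, family = binomial,
           data = cbind(scores, bpd = bpd))
```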

Model performance

Both the discrimination (C-statistics) and calibration (calibration plot or O:E ratio) performance of prediction models need to be assessed during external validation; a model with fair to good discrimination may still be poorly calibrated.32 The Hosmer-Lemeshow goodness-of-fit test alone, without other calibration measures, is not a suitable method for assessing calibration because it is sensitive to sample size:13 the test is often non-significant (suggesting good calibration) in small datasets and usually significant (suggesting poor calibration) in large datasets. Since the recommendation to assess calibration in the last review nearly a decade ago,32 only two further studies30,31 have assessed calibration using calibration plots or O:E ratios.
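A minimal sketch of both calibration measures is shown below, given a vector of observed binary outcomes obs and model-predicted probabilities pred (both hypothetical).

```r
# Observed:expected (O:E) ratio; values near 1 indicate good overall calibration
oe_ratio <- sum(obs) / sum(pred)

# Calibration plot: observed event rate versus mean predicted risk by risk decile
deciles <- cut(pred, quantile(pred, probs = seq(0, 1, 0.1)), include.lowest = TRUE)
cal <- data.frame(
  expected = tapply(pred, deciles, mean),
  observed = tapply(obs,  deciles, mean)
)
plot(cal$expected, cal$observed,
     xlab = "Predicted risk", ylab = "Observed proportion",
     xlim = c(0, 1), ylim = c(0, 1))
abline(0, 1, lty = 2)   # line of perfect calibration
```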

An impact analysis was not carried out for any of the identified prediction models to evaluate whether the model improved patient outcomes. Decision curve analysis42 may be used as an initial screening method to assess the net benefit of a prediction model before carrying out a further impact analysis, and it can be applied to the external validation dataset without further data collection.
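The net benefit calculation underlying a decision curve is simple; the sketch below uses the standard formula NB = TP/n − (FP/n) × pt/(1 − pt) with the same hypothetical obs and pred as above.

```r
net_benefit <- function(obs, pred, pt) {
  treat <- pred >= pt                 # infants flagged as high risk at threshold pt
  tp <- sum(treat & obs == 1)         # true positives
  fp <- sum(treat & obs == 0)         # false positives
  n  <- length(obs)
  tp / n - fp / n * pt / (1 - pt)
}

thresholds   <- seq(0.05, 0.60, by = 0.05)
nb_model     <- sapply(thresholds, net_benefit, obs = obs, pred = pred)
nb_treat_all <- mean(obs) - (1 - mean(obs)) * thresholds / (1 - thresholds)

plot(thresholds, nb_model, type = "l",
     xlab = "Threshold probability", ylab = "Net benefit")
lines(thresholds, nb_treat_all, lty = 2)   # treat-all strategy; treat-none sits at zero
```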

Practicality of model

Prediction models should be practical and easy to use at the bedside. Only two published models27,40 provided online calculators to allow easily accessible risk assessment.

Changes in clinical practice and rising BPD rates potentially make previously published models outdated, affecting their predictive ability. Over half of the published models used data from babies born more than a decade ago. Hence, new models should consider a built-in feature that allows them to learn from future babies and adapt their performance to new practices.

Strengths

The systematic review was carried out using standard Cochrane methodologies as well as recent recommendations for the meta-analysis of prediction models16 and risk of bias assessment.13 There were no language or date restrictions. The review is anticipated to guide clinicians and researchers not only in developing and/or validating BPD prediction models in very preterm infants based on the recommendations of this review, but also in identifying the most promising prediction model to externally validate in their local population.

The use of recent routinely collected clinical information in our external validation study, coupled with its large population coverage, provides an accurate representation of the current neonatal practice in England and Wales. This large cohort of nearly 63,000 very preterm infants, including infants receiving both invasive and non-invasive ventilation, forms an ideal cohort to externally validate and assess BPD prediction models.

Limitation

Only 6 of the 53 identified prediction models could be validated in our cohort; hence, the performance of the remaining models in our cohort remains unclear. However, it is crucial that future models use only predictors that are easily assessed in clinical practice to ensure successful clinical implementation.

Conclusion

As preterm infant survival increases, more survivors are diagnosed with BPD and its long-term respiratory and neurological consequences. Despite an almost doubling of the number of BPD prediction models published over the last decade, most models identified in our systematic review are not used in routine clinical practice. This is due to a lack of good-quality external validation studies assessing their performance in the local population of interest; furthermore, calibration is often not appropriately evaluated. Models should be externally validated, with a subsequent impact analysis, before being adopted in clinical practice. Decision curve analysis may be a good screening tool to assess the net benefit of a model prior to impact analysis.

Our systematic review has also made recommendations for future BPD prediction models, including consideration of additional predictors, more dynamic models that account for changes in the infant's condition and trajectory over time, and the ability to adapt performance to evolving clinical practice. A good-quality, well-validated BPD prediction tool is needed to provide personalised preventative treatment and allow targeted trial recruitment, thereby reducing the long-term impact on this vulnerable and expanding population.