Systematic differences in effect estimates between observational studies and randomized controlled trials in meta-analyses in nephrology

The limited availability of randomized controlled trials (RCTs) in nephrology undermines causal inferences in meta-analyses. Systematic reviews of observational studies have grown more common under such circumstances. We conducted systematic reviews of all comparative observational studies in nephrology from 2006 to 2016 to assess the trends in the past decade. We then focused on the meta-analyses combining observational studies and RCTs to evaluate the systematic differences in effect estimates between study designs using two statistical methods: by estimating the ratio of odds ratios (ROR) of the pooled OR obtained from observational studies versus those from RCTs and by examining the discrepancies in their statistical significance. The number of systematic reviews of observational studies in nephrology increased 11.7-fold over the past decade. Among 56 records combining observational studies and RCTs, the ROR suggested that the estimates between study designs agreed well (ROR 1.05, 95% confidence interval 0.90–1.23). However, almost half of the reviews led to discrepant interpretations in terms of statistical significance. In conclusion, the findings based on the ROR might encourage researchers to justify the inclusion of observational studies in meta-analyses. However, caution is needed, as the interpretations based on statistical significance were less concordant than those based on the ROR.

www.nature.com/scientificreports/

Therefore, in the present study, we aimed to (1) assess the trends and characteristics of systematic reviews of observational studies in nephrology in the past decade; and (2) quantify systematic differences in effect estimates between observational studies and RCTs in meta-analyses using two statistical methods: the ROR and discrepancies in statistical significance between the two study designs, among meta-analyses combining observational studies and RCTs.

Methods
Literature search and selection of studies. The literature searches were conducted in January 2017 using EMBASE and MEDLINE. We searched studies published from January 2006 to December 2016 with no language limitation. The search strategy was developed with the assistance of a medical information specialist and included key words related to 'observational study', 'systematic review', and 'kidney disease' (see Supplement Table 1). Search terms relevant to this review were collected through expert opinion, literature review, controlled vocabulary (including Medical Subject Headings [MeSH] and the Excerpta Medica Tree), and a review of the primary search results. The titles and abstracts were screened independently by two authors (M.K., K.K.), and records were excluded during screening if they were irrelevant to our research question or duplicated. Studies suspected of including relevant information were retained for full-text assessment using the inclusion and exclusion criteria. If more than one publication of a study existed, we grouped them together and adopted the publication with the most complete data. The present study was conducted according to a protocol prospectively registered at PROSPERO (CRD42016052244).
Evaluation of the characteristics of the systematic reviews of observational studies. We included systematic reviews of all comparative observational studies in nephrology to assess the trends and characteristics of such reviews in the past decade. We included systematic reviews published from 2006 to 2016 to assess the influence of reporting assessment tools, including PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) 23. We selected studies of kidney disease based on the following two criteria: 1. We included studies on participants with kidney diseases. Kidney diseases were defined as diseases occurring in the renal parenchyma, such as acute or chronic kidney injury, kidney neoplasms, and nephrolithiasis, based on the MeSH search builder of the term 'Kidney Diseases'. Studies were excluded if their participants had extra-renal diseases, including ureteral, urethral, and urinary bladder diseases. 2. We included studies with primary outcomes related to kidney diseases, using the same definition of kidney diseases as above. We excluded studies in which kidney diseases were treated as part of a composite outcome (e.g. a composite outcome of kidney, pancreas, and liver cancers).
We described the characteristics of systematic reviews of observational studies as follows: 1.  26, and QUOROM (The Quality of Reporting of Meta-analyses) 27. STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) 28, CONSORT (Consolidated Standards of Reporting Trials) 29, and others were excluded.

Comparison of effect estimates between observational studies and RCTs in meta-analyses combining both types of study.
To compare the effect estimates between study designs, we focused on meta-analyses which combined observational studies and RCTs and compared two specific interventions. We included non-randomized studies, such as cohort, case-control, and cross-sectional studies, as well as controlled trials that used inappropriate strategies of allocating interventions (sometimes called quasi-randomized studies), as observational studies 30. We included all studies related to the above-mentioned kidney diseases and did not focus on specific comparative studies. We compared the effect estimates obtained from observational studies (treated as the exposure) with those from RCTs (treated as the reference) in meta-analyses combining both types of studies. We expressed the quantitative differences in effect estimates for primary efficacy outcomes between study designs by taking the ROR 31. Further, we assessed discrepancies in statistical significance between study designs. The absence of discrepancies, which represents agreement between efficacy and effectiveness, was defined as follows: (1) both study types were significant with the same direction of point estimates, or (2) both study types were not significant. In contrast, the presence of discrepancies was defined as follows: (1) one study type was significant while the other type was not significant, or (2) both study types were significant, although the point estimates had opposite directions 24. The assessment of the methodological quality of these meta-analyses combining both types of studies was performed using the AMSTAR (assessment of multiple systematic reviews) 2 appraisal tool 32.
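The agreement/discrepancy definitions above amount to a simple decision rule. The sketch below is purely illustrative (the function name and the boolean/sign encoding are our own, not taken from the study's code):

```python
def classify_discrepancy(obs_significant, rct_significant, obs_direction, rct_direction):
    """Classify agreement between study designs in a meta-analysis.

    obs_significant / rct_significant: whether the pooled estimate from
    observational studies / RCTs is statistically significant.
    obs_direction / rct_direction: sign of the point estimate (+1 or -1).
    """
    if obs_significant and rct_significant:
        # Both significant: agreement only if the point estimates agree in direction.
        return "no discrepancy" if obs_direction == rct_direction else "discrepancy"
    if not obs_significant and not rct_significant:
        # Neither significant: counted as agreement.
        return "no discrepancy"
    # One significant, the other not: counted as a discrepancy.
    return "discrepancy"
```

For instance, a review in which the observational pooled estimate is significant while the RCT estimate is not would be classified as a discrepancy, regardless of direction.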
Data extraction. Two authors (M.K., K.K.) independently performed full screening to capture the trends and characteristics of systematic reviews of observational studies in the past decade. Three authors (M.K., A.O., A.T.) independently extracted the relevant data, such as the number of events or non-events, to compare the effect estimates between observational studies and RCTs in meta-analyses combining both types of studies. In addition, two authors (M.K., A.O.) independently graded each review for overall confidence as high, moderate, low, or critically low using the AMSTAR 2 tool.

Statistical analyses. We described the baseline characteristics of systematic reviews of observational studies using means (standard deviation [SD]) for continuous data with a normal distribution, medians (interquartile range [IQR]) for continuous variables with skewed data, and proportions for categorical data.
For the comparison of effect estimates between observational studies and RCTs in meta-analyses combining both types of studies, we estimated the ROR of the pooled OR obtained from observational studies versus those from RCTs. If an OR was not reported in a review, we recalculated the OR by extracting the number of events and non-events in both the intervention and control groups from the review or the primary study itself. If the number of events or non-events was 0, we added 0.5 to all cells of that result 30. If we could not find the number of events or non-events in a review or the primary articles to calculate the OR, we substituted the original outcome measures, such as relative risks or risk ratios (RR) and hazard ratios (HR), for the OR 21,31. In addition, standardized mean differences (SMD) and mean differences (MD) were converted to ORs based on a previous study 33. The standard errors (SEs) and 95% CIs were calculated in accordance with previous studies 22,31. Further, if a review did not report effect sizes separately for the two designs, we synthesized the results obtained from the primary articles. If positive outcomes such as survival were adopted, the OR comparing the intervention with the control was inverted. In addition, if ordinary or older interventions were included in the numerator of the OR, those ORs were also inverted. If several outcomes were reported, we used the first outcome described in the paper.
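As a hedged illustration of the recalculation step (not the authors' code; the function name is hypothetical), the sketch below computes an OR with its 95% CI from a 2×2 table, applying the 0.5 continuity correction to all cells when any cell is zero and using the standard large-sample SE of the log OR:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI from a 2x2 table.

    a = events, b = non-events in the intervention group;
    c = events, d = non-events in the control group.
    If any cell is 0, 0.5 is added to every cell (continuity correction),
    as described in the Methods.
    """
    if 0 in (a, b, c, d):
        a, b, c, d = (x + 0.5 for x in (a, b, c, d))
    or_ = (a * d) / (b * c)
    # SE of log(OR): sqrt of the sum of reciprocal cell counts.
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper
```

For example, a table with 10/90 events in the intervention arm and 5/95 in the control arm yields an OR of about 2.11; with a zero cell, all cells are first shifted by 0.5 before the OR is computed.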
We estimated the differences in the primary efficacy outcomes between study designs by calculating the pooled ROR with the 95% CI using a two-step approach 34. First, the ROR was estimated from the ORs obtained from observational studies and RCTs in each review using random-effects meta-regression. Second, we estimated the pooled ROR with the 95% CI across reviews with a random-effects model. Further, we performed a sensitivity analysis using a fixed-effect model. An ROR of more than 1.0 indicates that the ORs from observational studies were larger than those from RCTs 22,31. Heterogeneity was estimated using the I2 statistic 30; I2 values of 25%, 50%, and 75% represent low, medium, and high levels of heterogeneity, respectively.
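A simplified sketch of this two-step pooling is given below, under the assumption that the per-review log-ROR variance can be approximated by the sum of the two log-OR variances (the paper itself used random-effects meta-regression for step 1, so this is an approximation, not the authors' implementation). Step 2 uses DerSimonian-Laird estimation, with I2 computed from Cochran's Q; all names are illustrative:

```python
import math

def pooled_ror(reviews, z=1.96):
    """Two-step pooled ratio of odds ratios (ROR) across reviews.

    reviews: list of tuples (log_or_obs, se_obs, log_or_rct, se_rct).
    Step 1: per-review log-ROR = log OR(obs) - log OR(RCT), variance
            approximated as se_obs**2 + se_rct**2.
    Step 2: DerSimonian-Laird random-effects pooling across reviews.
    Returns (ROR, (ci_lower, ci_upper), I2 in percent).
    """
    y = [o - r for o, _, r, _ in reviews]          # per-review log-ROR
    v = [so**2 + sr**2 for _, so, _, sr in reviews]
    w = [1 / vi for vi in v]                        # fixed-effect weights
    fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - fe)**2 for wi, yi in zip(w, y))  # Cochran's Q
    k = len(y)
    if k > 1:
        # DerSimonian-Laird between-review variance, truncated at 0.
        tau2 = max(0.0, (q - (k - 1)) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))
    else:
        tau2 = 0.0
    wr = [1 / (vi + tau2) for vi in v]              # random-effects weights
    re = sum(wi * yi for wi, yi in zip(wr, y)) / sum(wr)
    se = math.sqrt(1 / sum(wr))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    ror = math.exp(re)
    ci = (math.exp(re - z * se), math.exp(re + z * se))
    return ror, ci, i2
```

When every review yields the same log-ROR, Q is zero, I2 is 0%, and the pooled ROR reduces to the common value; an ROR above 1.0 means the observational ORs exceed the RCT ORs, as in the text.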
Further, we examined the association between discrepancies in the statistical significance of each design (in accordance with the above-mentioned definitions) and candidate risk factors using a multiple logistic regression model adjusted for differences in the number of primary articles between study designs, publication year, country of the first author, pharmacological intervention, adjustment for confounding factors, and the methodological quality of the systematic reviews based on the overall confidence rating of the AMSTAR 2 tool.
All statistical analyses were performed using Stata 16.0 (StataCorp LLC, College Station, TX, USA).

Results
Study flow diagram. The PRISMA flow diagram (see Fig. 1) shows the study selection process. Of 5,547 records identified through database searching, we screened the titles and abstracts of the 3,994 records remaining after removing duplicates and retained 613 records. After a full-text review, we included a total of 477 records for the description of characteristics of systematic reviews of observational studies. Further, of the 114 records that combined both observational studies and RCTs, 56 were eligible for the evaluation of quantitative systematic differences in effect estimates of meta-analyses between observational studies and RCTs (see Supplement Table 2).
Trends over the past decade and description of study characteristics. We summarized the baseline characteristics of 477 nephrology systematic reviews of all comparative observational studies (see Table 1). The number of systematic reviews of observational studies in nephrology increased 11.7-fold between 2006 and 2016. In particular, the number of publications from China, the United States of America (USA), and European countries increased (see Supplement Table 3). As shown in Table 1, most of the reviews dealt with topics related to therapies for patients with acute kidney injury, malignancy, end-stage renal disease, and renal transplantation, aside from basic research. As for the eligible designs of observational studies, 67.1% of records included cohort studies and 33.8% included case-control studies. Of the 82 reviews related to basic research, 75 (91.5%) included case-control studies. Case series and before-after studies without comparisons were excluded in many studies. The Newcastle-Ottawa Scale (NOS) was the most frequently used tool for assessing the risk of bias; ACROBAT-NRSI was used in only 0.8% of records.

Comparison of quantitative systematic differences in effect estimates between observational studies and RCTs in meta-analyses combining both types of studies. Fifty-six meta-analyses combining both observational studies and RCTs were eligible for the analyses. A total of 418 observational studies and 204 RCTs were included, and the median number (interquartile range) per meta-analysis was 7 (2.5 to 10) observational studies and 3 (2 to 5) RCTs. Almost all reviews indicated a critically low quality (see Supplement Table 4). We compared the effect estimates of primary outcomes between study designs using the ROR with 95% CI. No significant differences were noted in the effect estimates by study design (ROR 1.05, 95% CI 0.90 to 1.23) (see Fig. 2), with moderate heterogeneity (I2 = 47.5%). Additionally, the result obtained using the fixed-effect model was closely similar to that obtained using the random-effects model (ROR 0.98, 95% CI 0.89 to 1.07). Of the 56 studies, 2 reviews showed that observational studies had significantly larger effects than RCTs (ROR > 1.0), while 6 showed that observational studies had significantly smaller effects than RCTs (ROR < 1.0). The remaining 48 reviews indicated no significant differences between the study designs.
Of the 56 studies, 29 reviews showed no discrepancy in terms of statistical significance (14 reviews were significant with the same direction of point estimates; 15 reviews were significant in neither design), while 27 reviews showed a discrepancy (in all 27, one study type was significant and the other was not). No review showed statistical significance in the opposite direction of the point estimates. Table 2 compares baseline characteristics between reviews with and without discrepancies. In addition, we explored the factors associated with discrepancies (see Table 3) but found no significant association for any covariate; in particular, the difference in the number of papers between observational studies and RCTs was not significantly associated with discrepancies (OR 1.10, 95% CI 0.99 to 1.23).
Further, when comparing the ROR results with the distribution of discrepancies in statistical significance, we found that of the 48 records (85.7%) with a non-significant ROR, 20 (35.7%) showed discrepancies in statistical significance (see Table 4).

Discussion
Our findings indicate that the number of systematic reviews of observational studies in nephrology has increased dramatically in the past decade, especially from China and the USA. Around 60% of reviews assessed the risk of bias, mostly using the NOS. A comparison of effect estimates between observational studies and RCTs in meta-analyses combining both types of studies revealed that the effect estimates from observational studies were largely consistent with those from RCTs. However, when interpreted in terms of statistical significance, almost half of the reviews led to discrepant interpretations.
Observational studies generally have larger sample sizes and better represent real-world populations than RCTs. Nevertheless, confounding factors, especially confounding by indication, often disturb the precise assessment of causal inference and the establishment of high levels of evidence [35][36][37][38]. The quality of evidence based on observational studies might depend on how confounding factors are controlled. Adjustment using appropriate techniques, including propensity score matching and instrumental variables, is likely to be useful, although these methods cannot completely deal with unmeasured variables 39,40. However, most of the reviews included in the present study did not describe the implementation of such adjustment in detail.
Recently, several risk of bias appraisal tools for evaluating the quality of systematic reviews of observational studies in multiple domains have been developed, including ACROBAT-NRSI 25,41,42. However, the present study showed that these tools are not yet widely implemented. Most of the studies reported the risk of bias using the NOS, although this tool has shown uncertain reliability and validity in previous studies 24,43.
In the present study, we compared the effect estimates between observational studies and RCTs in meta-analyses combining both types of studies using two analytical methods: the ROR and discrepancies in statistical significance between the study designs. The ROR with a 95% CI revealed that effect estimates were, on average, consistent between the two study designs. These findings would encourage researchers to justify the inclusion of observational studies in meta-analyses. Combining different types of designs in meta-analyses based on the ROR may be reasonable, as the improvement in statistical power leads to a more definite assessment when a sufficient number of RCTs cannot be obtained. Further, the degree of guideline recommendations in nephrology is almost always low because evidence from high-quality RCTs is lacking. The increase in evidence derived from the finding that the effect estimates of observational studies are similar to those of RCTs might lead to an improvement in the quality of guidelines in nephrology. However, with regard to the interpretation of the findings, almost half of the records showed discrepancies in statistical significance between the study designs. Further, 35.7% of records indicated disagreement in judgement between the two analytical methods. Therefore, the findings should be interpreted with care, as inconsistent findings due to the modification of analytical methods might reflect poor internal validity between the study designs. In addition, the present study failed to identify systematic review-level factors associated with discrepancies in statistical significance, including differences in the number of primary articles between study designs and the implementation of adjustment for confounding factors. Future studies should explore risk factors at the primary study level.
Several limitations of our study should be mentioned. First, it is possible that we failed to include several gray-area studies or smaller studies, although we performed a comprehensive search. Second, we included similar research questions that were published by different authors, which might have led to overestimation. Third, to compare effect estimates between study designs, we substituted original outcome measures, such as the RR or HR, for the OR if the number of events could not be determined from primary articles, similarly to previous studies 21,44. However, results using the RR and HR are not necessarily consistent with those using the OR, particularly when the number of events is large.
Fourth, we were unable to estimate the ROR adjusted for the methodological quality of systematic reviews based on the AMSTAR 2 tool, as almost all reviews were judged to be of low quality. Fifth, we performed a literature search using only two databases, EMBASE and MEDLINE, although these are recommended by AMSTAR 2 and are the most universally used in the medical field. Sixth, we were unable to adjust for several potential risk factors that may have influenced the results in each primary study, such as the sample size, details concerning the techniques used to adjust for confounding factors, the presence of selection bias, the degree of risk of bias, and funding sources. In particular, differences in the sample size of each primary study might have influenced the results, but we only adjusted for differences in the number of primary articles between study designs at the systematic review level. Future studies should explore those risk factors at the primary study level. In addition, meta-analyses of observational studies are likely to have increased dramatically in number over the past few years, so we must continue to update our research. Finally, because we sampled meta-analyses which included both observational studies and RCTs, it is conceivable that extreme results, either from observational studies or from RCTs, could have been excluded when the original meta-analysis was conducted, leading to spuriously greater concordance between the two study designs. Without a pre-specified protocol, we cannot assess the extent of such practices.

Table 3. Predictors of discrepancies in results between observational studies and randomized controlled trials. Adjusted for differences in the number of primary articles between observational studies and RCTs, publication year, country of first author, and pharmacological intervention. OR odds ratio, CI confidence interval, USA United States of America.
Table 4. Comparison of ROR and discrepancies defined by statistical significance. ROR ratio of odds ratios; number (%).

Conclusion
This study indicates that evidence synthesis based on observational studies has been increasing in nephrology. When we examined ROR, we found no systematic differences in effect estimates between observational studies and RCTs when meta-analyses included both study design types. These findings might encourage researchers to justify the inclusion of observational studies in meta-analyses. This approach can increase statistical power and allow stronger causal inference. However, caution is needed when interpreting the findings from both observational studies and RCTs because the interpretations based on statistical significance were shown to be less concordant than those based on ROR. Further studies are necessary to explore the causes of these contradictions.

Data availability
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.