
Evaluating efficiency and accuracy of deep-learning-based approaches on study selection for psychiatry systematic reviews

Abstract

Scientific publications in mental health are growing rapidly in number and complexity, making the curation of abstracts for systematic reviews increasingly time consuming and challenging. Systematic reviews on broad topics are further complicated by variation and a lack of objectivity among multiple human reviewers. Resolving these discrepancies can be time consuming and can limit the accuracy and breadth of systematic reviews. To address these challenges, we propose and evaluate multiple machine-learning-based approaches that capture inclusion and exclusion criteria and automate the abstract selection process. We fine-tuned or trained models on psychiatry abstracts from four systematic-review topic areas and then applied them to abstracts derived from an independently curated oncology literature database. Transformer-based machine-learning models outperformed trained human reviewers in abstract screening for three of the four topic areas, with accuracy differences ranging from −4% to 17.7%. Such approaches may facilitate the sharing and synthesis of research expertise across disciplines.


Fig. 1: Process flow for individual systematic-review topic area.
Fig. 2: Paper selection methods performance.
Fig. 3: Human/AI augmentation performance.


Data availability

Source data are provided with this paper. All papers used as source data for abstracts are available at Embase, Web of Science, PsycInfo, CINAHL, PubMed, and NCI. All datasets, including abstracts, used for machine learning and evaluation of results (human reviewers and automated) are clearly identified and publicly available at https://github.com/MetaAnalysisPipeline/MetaAnalysisPipeline.

Code availability

The code used for text preprocessing and the machine-learning pipelines for SciBERT, BERT, and Naïve Bayes is available at https://github.com/MetaAnalysisPipeline/MetaAnalysisPipeline.
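The Naïve Bayes screening baseline named above can be sketched as a standard text-classification pipeline. The following is a minimal illustration using scikit-learn with invented toy abstracts and labels; it is not the repository's actual implementation, whose preprocessing and feature choices may differ:

```python
# Hypothetical sketch of a Naive Bayes abstract-screening baseline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy abstracts labeled for inclusion (1) or exclusion (0) against
# hypothetical review criteria; real training sets are far larger.
train_abstracts = [
    "Randomized trial of an antidepressant in adolescents with major depression.",
    "Meta-analysis of cytokine levels in adults with major depressive disorder.",
    "Case report of a rare dermatological condition unrelated to psychiatry.",
    "Survey of crop yields under varying irrigation schedules.",
]
train_labels = [1, 1, 0, 0]

# TF-IDF unigram/bigram features feed a multinomial Naive Bayes classifier.
screener = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
screener.fit(train_abstracts, train_labels)

new_abstracts = [
    "Systematic review of biomarkers for depression in young adults.",
    "Soil chemistry analysis for agricultural planning.",
]
# One include/exclude label per unseen abstract.
predictions = screener.predict(new_abstracts)
```

The transformer pipelines (BERT, SciBERT) replace the TF-IDF features and Naïve Bayes classifier with a fine-tuned contextual language model, but the overall fit-then-screen workflow is the same.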


Acknowledgments

We thank the members of our Pediatric Emotion and Resilience Lab (PEARL) for their contributions to this work. We have no funding to disclose related to this work.

Author information


Contributions

A.J.G., M.G.G., K.K.R., and M.K.S. conceptualized and executed the study protocol. A.J.G., M.G.G., K.K.R., and M.K.S. contributed to the analyses. A.J.G., M.G.G., M.K.S., K.K.R., V.P., and A.N. contributed to writing of the manuscript. A.J.G. and M.G.G. contributed to visualizations. K.K.R., A.N., A.F.N., V.P., S.R.K., T.P., M.L., S.S., and M.K.S. contributed to the manual tagging of abstracts.

Corresponding author

Correspondence to Manpreet K. Singh.

Ethics declarations

Competing interests

M.K.S. has received research support from Stanford’s Maternal Child Health Research Institute and Stanford’s Department of Psychiatry and Behavioral Sciences, National Institute of Mental Health, National Institute of Aging, Patient Centered Outcomes Research Institute, Johnson and Johnson, and the Brain and Behavior Research Foundation. She is on the advisory board for Sunovion and Skyland Trail and is a consultant for Johnson and Johnson, Alkermes, and Neumora. She has previously consulted for X, moonshot factory, Alphabet Inc., and Limbix Health. She receives honoraria from the American Academy of Child and Adolescent Psychiatry and royalties from American Psychiatric Association Publishing and Thrive Global. K.K.R. receives support from The Permanente Medical Group’s Physician Researcher Program. No other authors report any biomedical financial interests or potential conflicts of interest.

Peer review

Peer review information

Nature Mental Health thanks Federica Colombo and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information

Supplementary Figs. 1–10, Tables 1–14, and Methods.

Reporting Summary

Supplementary Data

Source data for Supplementary Figs. 1–5, 7, and 8 and Supplementary Tables 11–13.

Source data

Source Data Fig. 2

Source data for Fig. 2.

Source Data Fig. 3

Source data for Fig. 3.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Gorelik, A.J., Gorelik, M.G., Ridout, K.K. et al. Evaluating efficiency and accuracy of deep-learning-based approaches on study selection for psychiatry systematic reviews. Nat. Mental Health 1, 623–632 (2023). https://doi.org/10.1038/s44220-023-00109-w

