A growing number of artificial intelligence (AI)-based clinical decision support systems are showing promising performance in preclinical, in silico evaluation, but few have yet demonstrated real benefit to patient care. Early-stage clinical evaluation is important to assess an AI system’s actual clinical performance at small scale, ensure its safety, evaluate the human factors surrounding its use and pave the way to further large-scale trials. However, the reporting of these early studies remains inadequate. The present statement provides a multi-stakeholder, consensus-based reporting guideline for the Developmental and Exploratory Clinical Investigations of DEcision support systems driven by Artificial Intelligence (DECIDE-AI). We conducted a two-round, modified Delphi process to collect and analyze expert opinion on the reporting of early clinical evaluation of AI systems. Experts were recruited from 20 pre-defined stakeholder categories. The final composition and wording of the guideline was determined at a virtual consensus meeting. The checklist and the Explanation & Elaboration (E&E) sections were refined based on feedback from a qualitative evaluation process. In total, 123 experts participated in the first round of Delphi, 138 in the second round, 16 in the consensus meeting and 16 in the qualitative evaluation. The DECIDE-AI reporting guideline comprises 17 AI-specific reporting items (made of 28 subitems) and ten generic reporting items, with an E&E paragraph provided for each. Through consultation and consensus with a range of stakeholders, we developed a guideline comprising key items that should be reported in early-stage clinical studies of AI-based decision support systems in healthcare. By providing an actionable checklist of minimal reporting items, the DECIDE-AI guideline will facilitate the appraisal of these studies and replicability of their findings.
The prospect of improved clinical outcomes and more efficient health systems has fueled a rapid rise in the development and evaluation of AI systems over the last decade. Because most AI systems within healthcare are complex interventions designed as clinical decision support systems, rather than autonomous agents, the interactions among the AI systems, their users and the implementation environments are defining components of the AI interventions’ overall potential effectiveness. Therefore, bringing AI systems from mathematical performance to clinical utility requires an adapted, stepwise implementation and evaluation pathway, addressing the complexity of this collaboration between two independent forms of intelligence, beyond measures of effectiveness alone1. Despite indications that some AI-based algorithms now match the accuracy of human experts within preclinical in silico studies2, there is little high-quality evidence for improved clinician performance or patient outcomes in clinical studies3,4. Reasons proposed for this so-called AI chasm5 include a lack of the expertise needed to translate a tool into practice, a lack of funding for translation, a general underappreciation of clinical research as a translation mechanism6 and, more specifically, a disregard for the potential value of the early stages of clinical evaluation and the analysis of human factors7.
The challenges of early-stage clinical AI evaluation (Box 1) are similar to those of complex interventions, as described in the Medical Research Council’s dedicated guidance1, and surgical innovation, as described by the IDEAL Framework8,9. For example, in all three cases, the evaluation needs to consider the potential for iterative modification of the interventions and the characteristics of the operators (or users) performing them. In this regard, the IDEAL framework offers readily implementable and stage-specific recommendations for the evaluation of surgical innovations under development. IDEAL stages 2a and 2b, for example, are described as development and exploratory stages, during which the intervention is refined, operators’ learning curves are analyzed and the influence of patient and operator variability on effectiveness is explored prospectively, before large-scale efficacy testing.
Early-stage clinical evaluation of AI systems should also place a strong emphasis on validation of performance and safety, in a similar manner to phase 1 and phase 2 pharmaceutical trials, before efficacy evaluation at scale in phase 3. For example, small changes in the distribution of the underlying data between the algorithm training and clinical evaluation populations (so-called dataset shift) can lead to substantial variation in clinical performance and expose patients to potential unexpected harm10,11.
Human factors (or ergonomics) evaluations are commonly conducted in safety-critical fields such as the aviation, military and energy sectors12,13,14. These assessments evaluate the effect of a device or procedure on its users’ physical and cognitive performance, and vice versa. Human factors assessments, such as usability evaluation, are an integral part of the regulatory process for new medical devices15,16, and their application to AI-specific challenges is attracting growing attention in the medical literature17,18,19,20. However, few clinical AI studies have reported on the evaluation of human factors3, and usability evaluation of related digital health technology is often performed with inconsistent methodology and reporting21.
Other areas of suboptimal reporting of clinical AI studies have also recently been highlighted3,22, such as implementation environment, user characteristics and selection process, training provided, underlying algorithm identification and disclosure of funding sources. Transparent reporting is necessary to enable informed study appraisal and to facilitate reproducibility of study results. In a relatively new and dynamic field such as clinical AI, comprehensive reporting is also key to constructing a common and comparable knowledge base to build upon.
Guidelines already exist, or are under development, for the reporting of preclinical, in silico studies of AI systems, their offline validation and their evaluation in large comparative studies23,24,25,26; but there is an important stage of research between these, namely studies focusing on the initial clinical use of AI systems, for which no such guidance currently exists (Fig. 1 and Table 1). This early clinical evaluation provides a crucial scoping evaluation of clinical utility, safety and human factors challenges in live clinical settings. By investigating the potential obstacles to clinical evaluation at scale and informing protocol design, these studies are also important stepping stones toward definitive comparative trials.
To address this gap, we convened an international, multi-stakeholder group of experts in a Delphi exercise to produce the DECIDE-AI reporting guideline. Focusing on AI systems supporting, rather than replacing, human intelligence, DECIDE-AI aims to improve the reporting of studies describing the evaluation of AI-based decision support systems during their early, small-scale implementation in live clinical settings (that is, the supported decisions have an actual effect on patient care). Whereas TRIPOD-AI, STARD-AI, SPIRIT-AI and CONSORT-AI are specific to particular study designs, DECIDE-AI is focused on the evaluation stage and does not prescribe a fixed study design.
Reporting item checklist
The DECIDE-AI guideline should be used for the reporting of studies describing the early-stage live clinical evaluation of AI-based decision support systems, independently of the study design chosen (Fig. 1 and Table 1). Depending on the chosen study design, and if available, authors may also want to complete the reporting according to study-type-specific guidelines (for example, STROBE for cohort studies)27. Table 2 presents the DECIDE-AI checklist, comprising the 17 AI-specific reporting items and ten generic reporting items selected by the Consensus Group. Each item comes with an E&E paragraph explaining why and how reporting is recommended (Supplementary Appendix 1). A downloadable version of the checklist, designed to help researchers and reviewers check compliance when preparing or reviewing a manuscript, is available as Supplementary Appendix 2. Reporting guidelines are a set of minimum reporting recommendations and are not intended to guide research conduct. Although familiarity with DECIDE-AI might be useful to inform some aspects of the design and conduct of studies within the guideline’s scope28, adherence to the guideline alone should not be interpreted as an indication of methodological quality (which is the realm of methodological guidelines and risk of bias assessment tools). With increasingly complex AI interventions and evaluations, it might become challenging to report all the required information within a single primary manuscript, in which case references to the study protocol, open science repositories, related publications and supplementary materials are encouraged.
The DECIDE-AI guideline is the result of an international consensus process involving a diverse group of experts spanning a wide range of professional backgrounds and experience. The level of interest across stakeholder groups and the high response rate among the invited experts speaks to the perceived need for more guidance in the reporting of studies presenting the development and evaluation of clinical AI systems and to the growing value placed on comprehensive clinical evaluation to guide implementation. The emphasis placed on the role of human-in-the-loop decision-making was guided by the Steering Group’s belief that AI will, at least in the foreseeable future, augment, rather than replace, human intelligence in clinical settings. In this context, thorough evaluation of the human–computer interaction and the roles played by the human users will be key to realizing the full potential of AI.
The DECIDE-AI guideline is the first stage-specific AI reporting guideline to be developed. This stage-specific approach echoes recognized development pathways for complex interventions1,8,9,29 and aligns conceptually with proposed frameworks for clinical AI6,30,31,32, although no commonly agreed nomenclature or definition has so far been published for the stages of evaluation in this field. Given the current state of clinical AI evaluation, and the apparent deficit in reporting guidance for the early clinical stage, the DECIDE-AI Steering Group considered it important to crystallize current expert opinion into a consensus, to help improve reporting of these studies. Beside this primary objective, the DECIDE-AI guideline will hopefully also support authors during study design, protocol drafting and study registration, by providing them with clear criteria around which to plan their work. As with other reporting guidelines, the overall effect on the standard of reporting will need to be assessed in due course, once the wider community has had a chance to use the checklist and explanatory documents. This real-world use is likely to prompt modification and fine-tuning of the DECIDE-AI guideline. Although the outcome of this process cannot be pre-judged, there is evidence that the adoption of consensus-based reporting guidelines (such as CONSORT) does, indeed, improve the standard of reporting33.
The Steering Group paid special attention to the integration of DECIDE-AI within the broader scheme of AI guidelines (for example, TRIPOD-AI, STARD-AI, SPIRIT-AI and CONSORT-AI). It also focused on DECIDE-AI being applicable to all types of decision support modalities (that is, detection, diagnostic, prognostic and therapeutic). The final checklist should be considered a minimum scientific reporting standard; it does not preclude reporting additional information, nor is it a substitute for other regulatory reporting or approval requirements. The overlap between scientific evaluation and regulatory processes was a core consideration during the development of the DECIDE-AI guideline. Early-stage scientific studies can be used to inform regulatory decisions (for example, based on the stated intended use within the study) and are part of the clinical evidence generation process (for example, clinical investigations). The initial item list was aligned with information commonly required by regulatory agencies, and regulatory considerations are introduced in the E&E paragraphs. However, given the somewhat different focuses of scientific evaluation and regulatory assessment34, as well as differences between regulatory jurisdictions, it was decided to make no reference to specific regulatory processes in the guideline, nor to define the scope of DECIDE-AI within any particular regulatory framework. The primary focus of DECIDE-AI is scientific evaluation and reporting, for which regulatory documents often provide little guidance.
Several topics led to more intense discussion than others, both during the Delphi process and the Consensus Group discussion. Regardless of whether the corresponding items were included, these represent important issues that the AI and healthcare communities should consider and continue to debate. First, we discussed at length whether users (see glossary of terms) should be considered as study participants. The consensus reached was that users are a key study population, about whom data will be collected (for example, reasons for variation from the AI system recommendation and user satisfaction), and who might logically be consented as study participants and, therefore, should be considered as such. Because user characteristics (for example, experience) can affect intervention efficacy, both patient and user variability should be considered when evaluating AI systems and reported adequately.
Second, the relevance of comparator groups in early-stage clinical evaluation was considered. Most studies retrieved in the literature search described a comparator group (commonly the same group of clinicians without AI support). Such comparators can provide useful information for the design of future large-scale trials (for example, information on the potential effect size). However, comparator groups are often unnecessary at this early stage of clinical evaluation, when the focus is on issues other than comparative efficacy. Small-scale clinical investigations are also usually underpowered to draw statistically robust conclusions about efficacy, accounting for both patient and user variability. Moreover, the additional information gained from comparator groups in this context can often be inferred from other sources, such as previous data on unassisted standard of care in the case of the expected effect size. Comparison groups are, therefore, mentioned in item VII but considered optional.
Third, output interpretability is often described as important to increase user and patient trust in the AI system, to contextualize the system’s outputs within the broader clinical information environment19 and potentially for regulatory purposes35. However, some experts argued that an output’s clinical value may be independent of its interpretability and that the practical relevance of evaluating interpretability is still debatable36,37. Furthermore, there is currently no generally accepted way of quantifying or evaluating interpretability. For this reason, the Consensus Group decided not to include an item on interpretability at the current time.
Fourth, the notion of users’ trust in the AI system and its evolution with time were discussed. As users accumulate experience with, and receive feedback from, the real-world use of AI systems, they will adapt their level of trust in its recommendations. Whether appropriate or not, this level of trust will influence how much effect the systems have on final decision-making, as recently demonstrated by McIntosh et al.38, and, therefore, the overall clinical performance of the AI system. Understanding how trust evolves is essential for planning user training and determining the optimal timepoints at which to start data collection in comparative trials. However, as for interpretability, there is currently no commonly accepted way to measure trust in the context of clinical AI. For this reason, the item about user trust in the AI system was not included in the final guideline. The fact that interpretability and trust were not included highlights the tendency of consensus-based guideline development toward conservatism, because only widely agreed-upon concepts reach the level of consensus needed for inclusion. However, changes of focus in the field, as well as new methodological developments, can be integrated into subsequent guideline iterations. From this perspective, the issues of interpretability and trust are far from irrelevant to future AI evaluations, and their exclusion from the current guideline reflects less a lack of interest than a need for further research into how best to operationalize these metrics for the purposes of evaluating AI systems.
Fifth, the notion of modifying the AI system (the intervention) during the evaluation received mixed opinions. During comparative trials, changes made to the intervention during data collection are questionable unless the changes are part of the study protocol; some authors even consider them impermissible, on the basis that they would make valid interpretation of study results difficult or impossible. However, the objectives of early clinical evaluation are often not to make definitive conclusions on effectiveness. Iterative design–evaluation cycles, if performed safely and reported transparently, offer opportunities to tailor an intervention to its users and beneficiaries and increase the chances of adoption of an optimized, fixed version during later summative evaluation8,9,39,40.
Sixth, several experts noted the benefit of conducting human factors evaluation before clinical implementation and considered that, therefore, human factors should be reported separately. However, even robust preclinical human factors evaluation will not reliably characterize all the potential human factors issues that might arise during the use of an AI system in a live clinical environment, warranting a continued human factors evaluation at the early stage of clinical implementation. The Consensus Group agreed that human factors play a fundamental role in AI system adoption in clinical settings at scale and that the full appraisal of an AI system’s clinical utility can happen only in the context of its clinical human factors evaluation.
Finally, several experts raised concerns that the DECIDE-AI guideline prescribes an evaluation that is too exhaustive to be reported within a single manuscript. The Consensus Group acknowledged the breadth of topics covered and the practical implications. However, reporting guidelines aim to promote transparent reporting of studies rather than mandating that every aspect covered by an item must have been evaluated within the studies. For example, if a learning curves evaluation has not been performed, then fulfilment of item 14b would be to simply state that this was not done, with an accompanying rationale. The Consensus Group agreed that appropriate AI evaluation is a complex endeavour necessitating the interpretation of a wide range of data, which should be presented together as far as possible. It was also felt that thorough evaluation of AI systems should not be limited by a word count and that publications reporting on such systems might benefit from special formatting requirements in the future. The information required by several items might already be reported in previous studies or in the study protocol, which could be cited rather than described in full again. The use of references, online supplementary materials and open-access repositories (for example, Open Science Framework (OSF)) is recommended to allow the sharing and connecting of all required information within one main published evaluation report.
Our work has several limitations that should be considered. First, the issue of potential biases, which apply to any consensus process, must be considered. These include anchoring or participant selection biases41. The research team tried to mitigate bias through the survey design, using open-ended questions analyzed through a thematic analysis, and by adapting the expert recruitment process, but it is unlikely that it was eliminated entirely. Despite an aim for geographical diversity and several actions taken to foster it, representation was skewed toward Europe and, more specifically, the United Kingdom. This could be explained, in part, by the following factors: a likely selection bias in the Steering Group’s expert recommendations; a higher interest in our open invitation to contribute coming from European/United Kingdom scientists (25 of 30 experts approaching us, 83%); and a lack of control over the response rate and self-reported geographical location of participating experts. Considerable attention was also paid to diversity and balance among stakeholder groups, even though clinicians and engineers were the most represented, partly due to the profile of researchers who contacted us spontaneously after the public announcement of the project. Stakeholder group analyses were performed to identify any marked disagreements from underrepresented groups. Finally, as also noted by the authors of the SPIRIT-AI and CONSORT-AI guidelines25,26, few examples of studies reporting on the early-stage clinical evaluation of AI tools were available at the time that we started developing the DECIDE-AI guideline. This might have affected the exhaustiveness of the initial item list created from the literature review. However, the wide range of stakeholders involved and the design of the first round of Delphi allowed identification of several additional candidate items, which were added in the second iteration of the item list.
The introduction of AI into healthcare needs to be supported by sound, robust and comprehensive evidence generation and reporting. This is essential both to ensure the safety and efficacy of AI systems and to gain the trust of patients, practitioners and purchasers, so that this technology can realize its full potential to improve patient care. The DECIDE-AI guideline aims to improve the reporting of early-stage live clinical evaluation of AI systems, which lays the foundations for both larger clinical studies and later widespread adoption.
The DECIDE-AI guideline was developed through an international expert consensus process and in accordance with the EQUATOR Network’s recommendations for guideline development42. A Steering Group was convened to oversee the guideline development process. Its members were selected to cover a broad range of expertise and ensure a seamless integration with other existing guidelines. We conducted a modified Delphi process43, with two rounds of feedback from participating experts and one virtual consensus meeting. The project was reviewed by the University of Oxford Central University Research Ethics Committee (approval R73712/RE003) and registered with the EQUATOR Network. Informed consent was obtained from all participants in the Delphi process and consensus meeting.
Initial item list generation
An initial list of candidate items was developed based on expert opinion informed by (1) a systematic literature review focusing on the evaluation of AI-based diagnostic decision support systems3; (2) an additional literature search about existing guidance for AI evaluation in clinical settings (search strategy available on the OSF44); (3) literature recommended by Steering Group members19,22,45,46,47,48,49; and (4) institutional documents50,51,52,53.
Experts were recruited through five different channels: (1) invitation to experts recommended by the Steering Group; (2) invitation to authors of the publications identified through the initial literature searches; (3) call to contribute published in a commentary article in a medical journal7; (4) consideration of any expert contacting the Steering Group on their own initiative; and (5) invitation to experts recommended by the Delphi participants (snowballing). Before starting the recruitment process, 20 target stakeholder groups were defined, namely: administrators/hospital management, allied health professionals, clinicians, engineers/computer scientists, entrepreneurs, epidemiologists, ethicists, funders, human factors specialists, implementation scientists, journal editors, methodologists, patient representatives, payers/commissioners, policymakers/official institution representatives, private sector representatives, psychologists, regulators, statisticians and trialists.
One hundred thirty-eight experts agreed to participate in the first round of Delphi, of whom 123 (89%) completed the questionnaire (83 identified from Steering Group recommendations, 12 from their publications, 21 from contacting the Steering Group on own initiative and seven through snowballing). One hundred sixty-two experts were invited to take part in the second round of Delphi, of whom 138 completed the questionnaire (85%). One hundred ten had also completed the first round (continuity rate of 89%)54, and 28 were new participants. The participating experts represented 18 countries and spanned all 20 of the defined stakeholder groups (Supplementary Note 1 and Supplementary Tables 1 and 2).
The Delphi surveys were designed and distributed via the REDCap web application55,56. The first round consisted of four open-ended questions about the aspects that participants considered necessary to report during early-stage clinical evaluation. The participating experts were then asked to rate, on a 1–9 scale, the importance of items in the initial list proposed by the research team. Ratings of 1–3 on the scale were defined as ‘not important’, 4–6 as ‘important but not critical’ and 7–9 as ‘important and critical’. Participants were also invited to comment on existing items and to suggest new items. An inductive thematic analysis of the narrative answers was performed independently by two reviewers (B.V. and M.N.), and conflict was resolved by consensus57. The themes identified were used to correct any omissions in the initial list and to complement the background information about proposed items. Summary statistics of the item scores were produced for each stakeholder group by calculating the median score, the interquartile range (IQR) and the percentage of participants scoring an item 7 or higher, as well as 3 or lower, which were the pre-specified inclusion and exclusion cutoffs, respectively. A revised item list was developed based on the results of the first round.
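The per-item summary statistics described above (median, IQR and the proportion of experts scoring at or beyond the pre-specified cutoffs) amount to a simple calculation. The following sketch illustrates it in Python, the language used for the study’s score analyses; the function name and example ratings are illustrative and do not reproduce the study data.

```python
from statistics import median, quantiles

def summarize_item(scores):
    """Summarize 1-9 Delphi ratings for one item: median score, IQR and the
    percentage of experts rating it 7 or higher (inclusion cutoff) or
    3 or lower (exclusion cutoff)."""
    q1, _, q3 = quantiles(scores, n=4, method="inclusive")
    return {
        "median": median(scores),
        "iqr": (q1, q3),
        "pct_critical": 100 * sum(s >= 7 for s in scores) / len(scores),
        "pct_not_important": 100 * sum(s <= 3 for s in scores) / len(scores),
    }

# Hypothetical ratings from ten experts for one candidate item
print(summarize_item([7, 8, 9, 6, 7, 8, 5, 9, 7, 3]))
```

Repeating the computation per stakeholder group, as done in the study, is then a matter of grouping the score lists before calling the same function.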
In the second round, the participants were shown the results of the first round and invited to rate and comment on the items in the revised list. The detailed survey questions of the two rounds of Delphi can be found on the OSF44. All analyses of item scores and comments were performed independently by two members of the research team (B.V. and M.N.) using NVivo (QSR International Pty Ltd., version 1.0) and Python (Python Software Foundation, version 3.8.5). Conflicts were resolved by consensus.
The initial item list contained 54 items. One hundred twenty sets of responses were included in the analysis of the first round of Delphi (one set of responses was excluded due to a reasonable suspicion of scale inversion, two due to completion after the deadline). The first round yielded 43,986 words of free text answers to the four initial open-ended questions, 6,419 item scores, 228 comments and 64 proposals for new items. The thematic analysis identified 109 themes. In the revised list, nine items remained unchanged, 22 were reworded/completed, 21 were reorganized (merged/split, becoming 13 items), two items were dropped and nine new items were added, for a total of 53 items. The two items dropped were related to health economic assessment. They were the only two items with a median score below 7 (median: 6, IQR: 2–9 for both) and received many comments describing them as an entirely separate aspect of evaluation. The revised list was reorganized into items and subitems. One hundred thirty-six sets of answers were included in the analysis of the second round of Delphi (one set of answers was excluded due to lack of consideration for the questions, one due to completion after the deadline). The second round yielded 7,101 item scores and 923 comments. The results of the thematic analysis and the initial and revised item lists, as well as per-item narrative and graphical summaries of the feedback received in both rounds, can be found on the OSF44.
A virtual consensus meeting was held on three occasions between 14 and 16 June 2021 to debate and agree to the content and wording of the DECIDE-AI reporting guideline. The 16 members of the Consensus Group (Supplementary Note 1 and Supplementary Table 2a,b) were selected to ensure a balanced representation of the key stakeholder groups as well as geographic diversity. All items from the second round of Delphi were discussed and voted on during the consensus meeting. For each item, the results of the Delphi process were presented to the Consensus Group members, and a vote was carried out anonymously using the Vevox online application (https://www.vevox.com). A pre-specified cutoff of 80% of the Consensus Group members (excluding blank votes and abstentions) was necessary for an item to be included. To highlight the new, AI-specific reporting items, the Consensus Group divided the guidelines into two item lists: an AI-specific items list, which represents the main novelty of the DECIDE-AI guideline, and a second list of generic reporting items, which achieved high consensus but are not AI specific and could apply to most types of studies. The Consensus Group selected 17 items (made of 28 subitems in total) for inclusion in the AI-specific list and ten items for inclusion in the generic reporting item list. A summary of the Consensus Group votes can be found in Supplementary Table 3.
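The 80% inclusion cutoff described above, applied after excluding blank votes and abstentions from the denominator, reduces to a simple proportion check. The sketch below expresses it in Python; the vote labels and function name are hypothetical, not taken from the Vevox export format.

```python
def item_included(votes, cutoff=0.8):
    """Decide whether an item reaches the pre-specified consensus cutoff.
    votes: list of 'yes', 'no', 'abstain' or '' (blank ballot).
    Blank votes and abstentions are excluded from the denominator."""
    valid = [v for v in votes if v in ("yes", "no")]
    if not valid:  # no valid ballots cast
        return False
    return valid.count("yes") / len(valid) >= cutoff

# Example: 13 yes, 2 no, 1 abstention -> 13/15 ≈ 86.7% of valid votes
print(item_included(["yes"] * 13 + ["no"] * 2 + ["abstain"]))  # True
```

Note that excluding abstentions raises the effective yes-share relative to a count over all ballots, which is why the denominator rule needs to be pre-specified.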
The drafts of the guideline and of the E&E sections were sent for qualitative evaluation to a group of 16 selected experts with experience in AI system implementation or in the peer-reviewing of literature related to AI system evaluation (Supplementary Note 1), all of whom were independent of the Consensus Group. These 16 experts were asked to comment on the clarity and applicability of each AI-specific item, using a custom form (available on the OSF44). Item wording amendments and modifications to the E&E sections were conducted based on the feedback from the qualitative evaluation, which was independently analyzed by two reviewers (B.V. and M.N.) and with conflicts resolved by consensus. A glossary of terms (Box 2) was produced to clarify key concepts used in the guideline. The Consensus Group approved the final item lists, including any changes made during the qualitative evaluation. Supplementary Figs. 1 and 2 provide graphical representations of the two item lists’ (AI-specific and generic) evolution.
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
All data generated during this study (pseudonymized where necessary) are available upon justified request to the research team and for a duration of 3 years after publication of this manuscript. Translation of these guidelines into different languages is welcomed and encouraged, as long as the authors of the original publication are included in the process and resulting publication.
All codes produced for data analysis during this study are available upon justified request to the research team and for a duration of 3 years after publication of this manuscript.
Skivington, K. et al. A new framework for developing and evaluating complex interventions: update of Medical Research Council guidance. Br. Med. J. 374, n2061 (2021).
Liu, X. et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digit. Health 1, e271–e297 (2019).
Vasey, B. et al. Association of clinician diagnostic performance with machine learning-based decision support systems: a systematic review. JAMA Netw. Open 4, e211276 (2021).
Freeman, K. et al. Use of artificial intelligence for image analysis in breast cancer screening programmes: systematic review of test accuracy. Br. Med. J. 374, n1872 (2021).
Keane, P. A. & Topol, E. J. With an eye to AI and autonomous diagnosis. NPJ Digit. Med. 1, 40 (2018).
McCradden, M. D., Stephenson, E. A. & Anderson, J. A. Clinical research underlies ethical integration of healthcare artificial intelligence. Nat. Med. 26, 1325–1326 (2020).
Vasey, B. et al. DECIDE-AI: new reporting guidelines to bridge the development-to-implementation gap in clinical artificial intelligence. Nat. Med. 27, 186–187 (2021).
McCulloch, P. et al. No surgical innovation without evaluation: the IDEAL recommendations. Lancet 374, 1105–1112 (2009).
Hirst, A. et al. No surgical innovation without evaluation: evolution and further development of the ideal framework and recommendations. Ann. Surg. 269, 211–220 (2019).
Finlayson, S. G. et al. The clinician and dataset shift in artificial intelligence. N. Engl. J. Med. 385, 283–286 (2021).
Subbaswamy, A. & Saria, S. From development to deployment: dataset shift, causality, and shift-stable models in health AI. Biostatistics 21, 345–352 (2020).
Kapur, N., Parand, A., Soukup, T., Reader, T. & Sevdalis, N. Aviation and healthcare: a comparative review with implications for patient safety. JRSM Open 7, 2054270415616548 (2015).
Corbridge, C., Anthony, M., McNeish, D. & Shaw, G. A new UK defence standard for human factors integration (HFI). Proc. Hum. Factors Ergon. Soc. Annu. Meet. 60, 1736–1740 (2016).
Stanton, N. A., Salmon, P., Jenkins, D. & Walker, G. Human Factors in the Design and Evaluation of Central Control Room Operations (CRC Press, 2009).
US Food and Drug Administration (FDA). Applying human factors and usability engineering to medical device: guidance for industry and Food and Drug Administration staff. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/applying-human-factors-and-usability-engineering-medical-devices (2016).
Medicines & Healthcare products Regulatory Agency (MHRA). Guidance on applying human factors and usability engineering to medical devices including drug-device combination products in Great Britain. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/970563/Human-Factors_Medical-Devices_v2.0.pdf (2021).
Asan, O. & Choudhury, A. Research trends in artificial intelligence applications in human factors health care: mapping review. JMIR Hum. Factors 8, e28236 (2021).
Felmingham, C. M. et al. The importance of incorporating human factors in the design and implementation of artificial intelligence for skin cancer diagnosis in the real world. Am. J. Clin. Dermatol. 22, 233–242 (2021).
Sujan, M. et al. Human factors challenges for the safe use of artificial intelligence in patient care. BMJ Health Care Inform. 26, e100081 (2019).
Sujan, M., Baber, C., Salmon, P., Pool, R. & Chozos, N. Human factors and ergonomics in healthcare AI. https://www.researchgate.net/publication/354728442_Human_Factors_and_Ergonomics_in_Healthcare_AI (2021).
Wronikowska, M. W. et al. Systematic review of applied usability metrics within usability evaluation methods for hospital electronic healthcare record systems. J. Eval. Clin. Pract. 27, 1403–1416 (2021).
Nagendran, M. et al. Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies. Br. Med. J. 368, m689 (2020).
Collins, G. S. & Moons, K. G. M. Reporting of artificial intelligence prediction models. Lancet 393, 1577–1579 (2019).
Sounderajah, V. et al. Developing specific reporting guidelines for diagnostic accuracy studies assessing AI interventions: the STARD-AI Steering Group. Nat. Med. 26, 807–808 (2020).
Cruz Rivera, S. et al. Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension. Nat. Med. 26, 1351–1363 (2020).
Liu, X. et al. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. Nat. Med. 26, 1364–1374 (2020).
von Elm, E. et al. Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Br. Med. J. 335, 806–808 (2007).
Page, M. J. et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Br. Med. J. 372, n71 (2021).
Sedrakyan, A. et al. IDEAL-D: a rational framework for evaluating and regulating the use of medical devices. Br. Med. J. 353, i2372 (2016).
Park, Y. et al. Evaluating artificial intelligence in medicine: phases of clinical research. JAMIA Open 3, 326–331 (2020).
Higgins, D. & Madai, V. I. From bit to bedside: a practical framework for artificial intelligence product development in healthcare. Adv. Intell. Syst. 2, 2000052 (2020).
Sendak, M. P. et al. A path for translation of machine learning products into healthcare delivery. Eur. Med. J. https://www.emjreviews.com/innovations/article/a-path-for-translation-of-machine-learning-products-into-healthcare-delivery/ (2020).
Moher, D., Jones, A., Lepage, L. & CONSORT Group. Use of the CONSORT statement and quality of reports of randomized trials: a comparative before-and-after evaluation. J. Am. Med. Assoc. 285, 1992–1995 (2001).
Park, S. H. Regulatory approval versus clinical validation of artificial intelligence diagnostic tools. Radiology 288, 910–911 (2018).
US Food and Drug Administration (FDA). Clinical decision support software: draft guidance for industry and Food and Drug Administration staff. https://www.fda.gov/media/109618/download (2019).
Lipton, Z. C. The mythos of model interpretability. Commun. ACM 61, 36–43 (2018).
Ghassemi, M., Oakden-Rayner, L. & Beam, A. L. The false hope of current approaches to explainable artificial intelligence in health care. Lancet Digit. Health 3, e745–e750 (2021).
McIntosh, C. et al. Clinical integration of machine learning for curative-intent radiation treatment of patients with prostate cancer. Nat. Med. 27, 999–1005 (2021).
International Organization for Standardization. Ergonomics of human–system interaction—part 210: human-centred design for interactive systems. https://www.iso.org/standard/77520.html (2019).
Norman, D. A. User Centered System Design (CRC Press, 1986).
Winkler, J. & Moser, R. Biases in future-oriented Delphi studies: a cognitive perspective. Technol. Forecast. Soc. Change 105, 63–76 (2016).
Moher, D., Schulz, K. F., Simera, I. & Altman, D. G. Guidance for developers of health research reporting guidelines. PLoS Med. 7, e1000217 (2010).
Dalkey, N. & Helmer, O. An experimental application of the DELPHI method to the use of experts. Manage. Sci. 9, 458–467 (1963).
Vasey, B., Nagendran, M. & McCulloch, P. DECIDE-AI 2022. https://doi.org/10.17605/OSF.IO/TP9QV (2022).
Vollmer, S. et al. Machine learning and artificial intelligence research for patient benefit: 20 critical questions on transparency, replicability, ethics, and effectiveness. Br. Med. J. 368, l6927 (2020).
Bilbro, N. A. et al. The IDEAL reporting guidelines: a Delphi consensus statement stage specific recommendations for reporting the evaluation of surgical innovation. Ann. Surg. 273, 82–85 (2021).
Morley, J., Floridi, L., Kinsey, L. & Elhalal, A. From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Sci. Eng. Ethics 26, 2141–2168 (2019).
Xie, Y. et al. Health economic and safety considerations for artificial intelligence applications in diabetic retinopathy screening. Transl. Vis. Sci. Technol. 9, 22 (2020).
Norgeot, B. et al. Minimum information about clinical artificial intelligence modeling: the MI-CLAIM checklist. Nat. Med. 26, 1320–1324 (2020).
IMDRF Medical Device Clinical Evaluation Working Group. Clinical Evaluation. https://www.imdrf.org/sites/default/files/docs/imdrf/final/technical/imdrf-tech-191010-mdce-n56.pdf (2019).
IMDRF Software as Medical Device (SaMD) Working Group. ‘Software as a medical device’: possible framework for risk categorization and corresponding considerations. https://www.imdrf.org/sites/default/files/docs/imdrf/final/technical/imdrf-tech-140918-samd-framework-risk-categorization-141013.pdf (2014).
National Institute for Health and Care Excellence (NICE). Evidence standards framework for digital health technologies. https://www.nice.org.uk/about/what-we-do/our-programmes/evidence-standards-framework-for-digital-health-technologies (2019).
High-Level Independent Group on Artificial Intelligence (AI HLEG). Ethics guidelines for trustworthy AI. European Commission. Vol. 32. https://ec.europa.eu/digital (2019).
Boel, A., Navarro-Compán, V., Landewé, R. & van der Heijde, D. Two different invitation approaches for consecutive rounds of a Delphi survey led to comparable final outcome. J. Clin. Epidemiol. 129, 31–39 (2021).
Harris, P. A. et al. Research electronic data capture (REDCap)—a metadata-driven methodology and workflow process for providing translational research informatics support. J. Biomed. Inform. 42, 377–381 (2009).
Harris, P. A. et al. The REDCap consortium: building an international community of software platform partners. J. Biomed. Inform. 95, 103208 (2019).
Nowell, L. S., Norris, J. M., White, D. E. & Moules, N. J. Thematic analysis: striving to meet the trustworthiness criteria. Int. J. Qual. Methods 16, 1609406917733847 (2017).
International Organization for Standardization. Information technology—artificial intelligence (AI)—bias in AI systems and AI aided decision making. https://www.iso.org/standard/77607.html (2021).
IMDRF Medical Device Clinical Evaluation Working Group. Clinical Investigation. https://www.imdrf.org/sites/default/files/docs/imdrf/final/technical/imdrf-tech-191010-mdce-n57.pdf (2019).
Hopper, A. N., Jamison, M. H. & Lewis, W. G. Learning curves in surgical practice. Postgrad. Med. J. 83, 777–779 (2007).
International Organization for Standardization. Ergonomics of human–system interaction—part 11: usability: definitions and concepts. https://www.iso.org/standard/63500.html (2018).
The authors would like to thank all Delphi participants and experts who participated in the guideline qualitative evaluation. B.V. would also like to thank B. Beddoe (Sheffield Teaching Hospital), N. Bilbro (Maimonides Medical Center), N. Marlow (Oxford University Hospitals), E. Taylor (Nuffield Department of Surgical Sciences, University of Oxford) and S. Ursprung (Department for Radiology, Tübingen University Hospital) for their support in the initial stage of the project. This work was supported by the IDEAL Collaboration. B.V. is funded by a Berrow Foundation Lord Florey scholarship. M.N. is supported by the UKRI CDT in AI for Healthcare (http://ai4health.io; grant P/S023283/1). D.C. receives funding from the Wellcome Trust, AstraZeneca, RCUK and GlaxoSmithKline. G.S.C. is supported by the NIHR Biomedical Research Centre, Oxford, and Cancer Research UK (program grant C49297/A27294). M.I. is supported by a Maimonides Medical Center Research fellowship. X.L. receives funding from the Wellcome Trust, the National Institute of Health Research/NHSX/Health Foundation, the Alan Turing Institute, the MHRA and NICE. B.A.M. is a fellow of the Alan Turing Institute, supported by EPSRC grant EP/N510129/, and holds a Wellcome Trust-funded honorary post at University College London for the purposes of carrying out independent research. M.M. receives funding from the Dalla Lana School of Public Health and the Leong Centre for Healthy Children. J.O. is employed by the Medicines and Healthcare products Regulatory Agency, which is the competent authority responsible for regulating medical devices and medicines in the United Kingdom. Elements of the work relating to the regulation of AI as a medical device are funded by grants from NHSX and the Regulators’ Pioneer Fund (Department for Business, Energy and Industrial Strategy). S.S. receives grants from the National Science Foundation, the American Heart Association, the National Institutes of Health and the Sloan Foundation. 
D.S.W.T. is supported by the National Medical Research Council, Singapore (NMRC/HSRG/0087/2018;MOH-000655-00), the National Health Innovation Centre, Singapore (NHIC-COV19-2005017), the SingHealth Fund Limited Foundation (SHF/HSR113/2017), the Duke-NUS Medical School (Duke-NUS/RSF/2021/0018;05/FY2020/EX/15-A58) and the Agency for Science, Technology and Research (A20H4g2141; H20C6a0032). P. Watkinson is supported by the NIHR Biomedical Research Centre, Oxford, and holds grants from the NIHR and Wellcome. P. McCulloch receives grants from Medtronic (unrestricted educational grant to Oxford University for the IDEAL Collaboration) and the Oxford Biomedical Research Centre. The views expressed in this guideline are those of the authors, Delphi participants and experts who participated in the qualitative evaluation of the guideline. These views do not necessarily reflect those of their institutions or funders.
M.N. consults for Cera Care, a technology-enabled homecare provider. B.C. was a Non-Executive Director of the UK Medicines and Healthcare products Regulatory Agency (MHRA) from September 2015 until 31 August 2021. D.C. receives consulting fees from Oxford University Innovation, Biobeats and Sensyne Health and has an advisory role with Bristol Myers Squibb. B.G. has received consultancy and research grants from Philips NV and Edwards Lifesciences LLC and is owner and board member of Healthplus.ai BV and its subsidiaries. X.L. has advisory roles with the National Screening Committee UK, the WHO/ITU focus group for AI in health and the AI in Health and Care Award Evaluation Advisory Group (NHSX, AAC). P. Mathur is the co-founder of BrainX LLC and BrainX Community LLC. M.M. reports consulting fees from AMS Healthcare and honoraria from the Osgoode Law School and the Toronto Pain Institute. L.M. is director and owner of Morgan Human Systems. J.O. holds an honorary post as an Associate of Hughes Hall, University of Cambridge. C.R. is an employee of HeartFlow Inc., including salary and equity. S.S. has received honoraria from several universities and pharmaceutical companies for talks on digital health and AI. S.S. has advisory roles in Child Health Imprints, Duality Tech, Halcyon Health and Bayesian Health. S.S. is on the board of Bayesian Health. This arrangement has been reviewed and approved by Johns Hopkins in accordance with its conflict of interest policies. D.S.W.T. holds patents linked to AI-driven technologies and is a co-founder and equity holder of EyRIS Pte Ltd. P. Watkinson declares grants, consulting fees and stocks from Sensyne Health and holds patents linked to AI-driven technologies. P. McCulloch has an advisory role for WEISS International and the technology incubator PhD program at University College London. B.V., G.S.C., A.K.D., L.F., M.I., B.A.M., S.D., P. Wheatstone and W.W. have no further conflicts of interest to declare.
Peer review information
Nature Medicine thanks Alejandro Berlin, Rahul Deo, Isabelle Boutron and Leo Anthony Celi for their contribution to the peer review of this work. Javier Carmona was the primary editor on this article and managed its editorial process and peer review in collaboration with the rest of the editorial team.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Vasey, B., Nagendran, M., Campbell, B. et al. Reporting guideline for the early-stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI. Nat Med 28, 924–933 (2022). https://doi.org/10.1038/s41591-022-01772-9