Introduction

With emerging innovations in artificial intelligence (AI) poised to substantially impact medical practice, interest in training current and future physicians about AI is growing1. Alongside this interest comes the question of what, precisely, medical students should learn2. While competencies for the clinical usage of AI are broadly similar to those for any other novel technology in medicine, there are critically important qualitative differences concerning explainability, health equity, and data security3,4,5. We advocate for a dual-focused approach: robust, learner-centered AI content in the baseline curriculum for all students, combined with extracurricular programs that cultivate leadership in this space.

What do physicians need to understand about AI in the clinical context?

Most directly, physicians need to understand AI in the same way that they need to understand any technology impacting clinical decision-making. A physician utilizing MRI, for example, does not need to understand the nuclear spin physics that differentiates T1- and T2-weighted scans, but they do need to be able to:

(i) Use it—identify when the technology is appropriate for a given clinical context, and what inputs are required to receive meaningful results.

(ii) Interpret it—understand and interpret the results with a reasonable degree of accuracy, including awareness of sources of error, bias, or clinical inapplicability.

(iii) Explain it—be able to communicate the results and the processes underlying them in a way that others (e.g. allied health professionals and patients) can understand.

These skills take on particular nuances in the context of AI. For (i) and (ii), it is critical for physicians to appreciate the highly context-specific nature of AI, and the fact that performance demonstrated in one restricted context may not transfer to another. It is also important to be aware of factors that may degrade an algorithm’s performance for specific patient groups3.
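To make the point about subgroup performance concrete, the minimal sketch below shows how a decision-support model’s discrimination might be checked within patient subgroups rather than only in aggregate; the data, subgroup labels, and model scores are hypothetical placeholders rather than any specific clinical tool.

```python
# Minimal, illustrative sketch (hypothetical data): evaluating a risk model's
# discrimination overall and within patient subgroups, since aggregate
# performance can mask poorer performance for specific groups.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical validation set: true outcomes, model risk scores, and a
# subgroup label (e.g. the site or population each patient comes from).
n = 2000
y_true = rng.integers(0, 2, size=n)
subgroup = rng.choice(["group_A", "group_B"], size=n)
y_score = np.clip(0.6 * y_true + rng.normal(0.2, 0.25, size=n), 0, 1)

# Simulate noisier scores for one subgroup to mimic the kind of
# context-specific performance drop described above.
mask_b = subgroup == "group_B"
y_score[mask_b] = np.clip(
    y_score[mask_b] + rng.normal(0.0, 0.5, size=mask_b.sum()), 0, 1
)

# Aggregate performance can look acceptable while one subgroup fares worse.
print("Overall AUROC:", round(roc_auc_score(y_true, y_score), 3))
for g in ("group_A", "group_B"):
    m = subgroup == g
    print(f"{g} AUROC:", round(roc_auc_score(y_true[m], y_score[m]), 3))
```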

AI has been commonly criticized for the “black box” effect—that is, the mechanism by which a model arrives at a decision may be indecipherable1. This lack of technical “explainability”, however, does not discharge the obligations of (iii). To satisfy requirements of informed consent and clinical collaboration, a physician may be called upon to communicate their understanding of the origin, nature, and justification of an algorithm’s results to patients, families, and colleagues.

What do physicians need to understand about AI in the broader professional context?

The professional obligations of physicians extend beyond the clinical role into leadership and health advocacy. The disruptive potential of AI in healthcare raises significant ethical and operational challenges that physicians must collectively be prepared to engage with in order to safeguard patient welfare.

Substantial concerns exist regarding the impact of algorithmic clinical decision support on health equity, owing to factors such as the use of datasets lacking representation from minority populations3 and the potential for algorithms to learn from and perpetuate existing biases4. Risks around data security and privacy are also rapidly becoming apparent5. At the same time, AI itself has the potential to alleviate some of medicine’s existing problems with bias and unfairness6. Physicians should be aware of both possibilities and be equipped to advocate for the development and deployment of ethical and equitable systems. Finally, physicians must act as responsible stewards of patient data to ensure that the foundational trust between provider and patient is not violated.

How might medical students learn what they need to learn?

Concerted efforts should be made to cultivate physician-leaders who are fluent in both AI and medicine. Such dual competence is important, as it is no simple task to select targets for AI in medicine that are both clinically relevant and computationally feasible. A siloed approach risks leaving clear clinical targets unnoticed and exacerbating the production of technical “solutions in search of problems”7. A multidisciplinary, integrated approach to learning will facilitate this goal.

When approaching such a complex topic, it is critical to distinguish between what all physicians must know for everyday practice and what some physicians should know to drive innovation. Curricular components should address the former, while robust extracurricular programs can target the latter. Both components promote discussion of how the convergence of AI and medicine is shaping, and will continue to shape, the physician’s identity. This aligns with the concept of the “reimagined medical school”, which establishes a framework of core knowledge while supporting students who seek deep dives into specific subject areas8.

This approach has been piloted at the University of Toronto (UofT) Faculty of Medicine and has been embraced by the administration as an important part of the Faculty’s strategic plan8. Lectures in the preclinical curriculum introduce all students to these concepts, while the two-year “Computing for Medicine” certificate program provides particularly interested students with practical programming skills and immersion in clinical data science projects9. Additionally, an “AI in Medicine” student interest group hosts extracurricular seminars on the subject and helps to connect medical students with the city’s broader AI ecosystem in academia and industry (see Supplementary Table 1 for a list of AI in Medicine offerings over the last two years).

Harvard Medical School has taken a similar approach, offering clinical informatics training as an elective for medical students10. During this elective, students are paired with faculty mentors in their area of interest and engage in a mix of didactic and hands-on learning to explore how informatics is embedded in health systems. The School has also collaborated with the MIT Critical Data group to offer a project-based course on data science in medicine11. Outside the formal curriculum, the MIT Critical Data group has worked to spur interest in AI through “datathons” (brief competitions in which computer scientists and clinicians work together to use data to solve clinical problems)12. These initiatives are emblematic of the possibilities for collaboration with non-medical faculties to enrich the education of medical students.

With insight from these experiences, we identify a series of important opportunities in both the curricular and extracurricular realms (outlined in Table 1). We wish to emphasize the importance of aligning learning objectives with their delivery, and of maintaining a learner-centered ethos focused on student engagement rather than passive knowledge transfer. These concepts should be integrated with other aspects of the curriculum wherever appropriate (for example, by including an AI case study in a workshop on ethical clinical decision-making), as the competencies required to work effectively with AI often overlap with those required to fulfill other core aspects of the physician role, such as advocacy, leadership, and communication. Medical schools have a critical role to play not only in helping their students learn but also in nurturing their academic interests and sowing the seeds of future leadership. These recommendations can and should be tailored to the context and strengths of each medical school, its partnerships, and its student body.

Table 1 Potential curricular and extracurricular learning opportunities for artificial intelligence in medicine.

What about after medical school?

While a detailed discussion of postgraduate medical education (PGME) and continuing medical education (CME) is outside the scope of this work, it is important to recognize that medical education is a lifelong pursuit and that attention must also be paid to learners at later career stages13. Competencies around AI could be integrated into PGME curricula through existing research or quality improvement (QI) blocks. Research training for medical or surgical trainees could focus on technical areas such as data science or biomedical engineering, but also on ethics, health services research, and medical education. QI training would focus on translating proven innovations into care and evaluating their impact. CME offerings, delivered through online or in-person workshops, can not only help clinicians refresh their competencies over the course of their careers but also equip established practitioners with the skills and knowledge to keep pace with this field14. The various curricular elements in Table 1 can be modified to suit learners at different stages of their careers.

Conclusion

Ultimately, medical schools are tasked with training physicians for a future in which artificial intelligence is poised to play a significant role. To succeed at this task, students will need curricular and extracurricular learning opportunities around the clinical usage, technical limitations, and ethical implications of the tools at their disposal. Given the importance and potential impact of this technology, we must act both to ensure a base of artificial intelligence literacy among physicians at large and to nurture the skills and interests of the future leaders who will drive innovation in this space.