With emerging innovations in artificial intelligence (AI) poised to substantially impact medical practice, interest in training current and future physicians about the technology is growing. Alongside this interest comes the question of what, precisely, medical students should be taught. While competencies for the clinical usage of AI are broadly similar to those for any other novel technology, there are qualitative differences of critical importance to concerns regarding explainability, health equity, and data security. Drawing on experiences at the University of Toronto Faculty of Medicine and MIT Critical Data’s “datathons”, the authors advocate for a dual-focused approach: combining robust data science-focused additions to baseline health research curricula with extracurricular programs to cultivate leadership in this space.
With emerging innovations in artificial intelligence (AI) poised to substantially impact medical practice, interest in training current and future physicians about AI is growing [1]. Alongside this interest comes the question of what, precisely, medical students should learn [2]. While competencies for the clinical usage of AI are broadly similar to those for any other novel technology in medicine, there are qualitative differences of critical importance to concerns regarding explainability, health equity, and data security [3,4,5]. We advocate for a dual-focused approach: combining robust, learner-centered AI additions to baseline curricula with extracurricular programs to cultivate leadership in this space.
What do physicians need to understand about AI in the clinical context?
Most directly, physicians need to understand AI in the same way that they need to understand any technology impacting clinical decision-making. A physician utilizing MRI, for example, does not need to understand the particle spin physics differentiating T1- and T2-weighted scans, but they do need to be able to:
(i) Use it—identify when the technology is appropriate for a given clinical context, and what inputs are required to receive meaningful results.
(ii) Interpret it—understand and interpret the results with a reasonable degree of accuracy, including awareness of sources of error, bias, or clinical inapplicability.
(iii) Explain it—communicate the results and the processes underlying them in a way that others (e.g., allied health professionals and patients) can understand.
These skills take on particular nuances in the context of AI. For (i) and (ii), it is critical for physicians to appreciate the highly context-specific nature of AI, and the fact that performance in a single restricted context may not always be transferable. It is also important to be aware of factors which may decrease the performance of algorithms for specific patient groups [3].
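The risk described above can be made concrete with a small, hypothetical sketch: an aggregate performance metric can hide a complete failure on a minority subgroup. The `accuracy_by_group` helper and all labels, predictions, and group assignments below are invented purely for illustration.

```python
# Hypothetical illustration: a model that looks accurate overall can
# still underperform badly for a specific patient subgroup.

def accuracy_by_group(y_true, y_pred, groups):
    """Return overall accuracy and a dict of per-subgroup accuracy."""
    overall = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    per_group = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        per_group[g] = sum(y_true[i] == y_pred[i] for i in idx) / len(idx)
    return overall, per_group

# Invented data: 8 patients from group A, 2 from group B.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]  # both group-B patients misclassified
groups = ["A"] * 8 + ["B"] * 2

overall, per_group = accuracy_by_group(y_true, y_pred, groups)
print(overall)          # 0.8 — looks acceptable in aggregate
print(per_group["A"])   # 1.0
print(per_group["B"])   # 0.0 — the minority subgroup is failed entirely
```

Stratified evaluation of this kind is one reason physicians should ask how, and on whom, an algorithm was validated before trusting its output for a given patient.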
AI has commonly been criticized for the “black box” effect—that is, the mechanism by which a model arrives at a decision may be indecipherable [1]. This lack of technical “explainability”, however, does not discharge the obligations of (iii). To satisfy the requirements of informed consent and clinical collaboration, a physician may be called upon to communicate their understanding of the origin, nature, and justification of an algorithm’s results to patients, families, and colleagues.
What do physicians need to understand about AI in the broader professional context?
The professional obligations of physicians extend beyond the clinical role into leadership and health advocacy. The disruptive prospects of AI in healthcare raise significant ethical and operational challenges, and physicians must collectively be prepared to engage with them to ensure patient welfare.
Substantial concerns exist regarding the impact of algorithmic clinical decision support on health equity, due to factors such as the use of datasets lacking representation from minority populations [3], and the possibility for algorithms to learn from and perpetuate existing biases [4]. Risks around data security and privacy are also rapidly becoming apparent [5]. There is also, however, the potential for AI itself to alleviate some of medicine’s existing problems with bias and unfairness [6]. Physicians should be aware of both possibilities and be equipped to advocate for the development and deployment of ethical and equitable systems. Finally, physicians must act as responsible stewards of patient data to ensure that the foundational trust between provider and patient is not violated.
How might medical students learn what they need to learn?
Concerted efforts should be made to cultivate physician-leaders who are fluent in both AI and medicine. Such dual competence is important, as it is no simple task to select clinically relevant and computationally feasible targets for AI in medicine. A siloed approach may leave clear clinical targets unnoticed and exacerbate the production of technical “solutions in search of problems” [7]. A multidisciplinary, integrated approach to learning will serve to facilitate this goal.
When approaching such a complex topic, it is critical to distinguish between what all physicians must know for everyday practice and what some physicians should know to drive innovation. Curricular components should be targeted to address the former, while robust extracurricular programs can be targeted toward the latter. Both components serve to promote discussions on how the convergence between AI and medicine is currently impacting and will continue to impact the physician’s identity. This aligns with the concept of the “reimagined medical school”, which establishes a framework of core knowledge while supporting students who seek deep dives into specific subject areas [8].
This approach has been piloted at the University of Toronto (UofT) Faculty of Medicine and has been embraced by the administration as an important part of the Faculty’s strategic plan [8]. Lectures in the preclinical curriculum introduce all students to these concepts, and the 2-year “Computing for Medicine” certificate program provides particularly interested students with practical programming skills and immersion in clinical data science projects [9]. Additionally, an “AI in Medicine” student interest group hosts extracurricular seminars on the subject and helps to facilitate connections between medical students and the city’s broader AI ecosystem, in academia and industry (see Supplementary Table 1 for a list of AI in Medicine offerings in the last two years).
Harvard Medical School has taken a similar approach, offering clinical informatics training as an elective for medical students [10]. During this elective, students are paired with faculty mentors in their area of interest and engage in a mix of didactic and hands-on learning to explore how informatics is embedded in health systems. The School has also collaborated with the MIT Critical Data group to offer a project-based course on data science in medicine [11]. Outside the formal curriculum, MIT Critical Data has worked to spur interest in AI through “datathons”: brief competitions in which computer scientists and clinicians work together to use data to solve clinical problems [12]. These initiatives are emblematic of the possibilities for collaboration with non-medical faculties to enrich the education of medical students.
With insight from these experiences, we identify a series of important opportunities in both the curricular and extracurricular realms (outlined in Table 1). We wish to emphasize the importance of finding synergy between the learning objectives and their delivery, and of maintaining a learner-centered ethos with a focus on student engagement rather than passive knowledge transfer. These concepts should be integrated with other aspects of the curriculum wherever appropriate (such as the inclusion of an AI case study in a workshop about ethical clinical decision-making), as the competencies required to work effectively with AI will often overlap with those required to fulfil other core aspects of the physician role, such as advocacy, leadership, and communication. Medical schools have a critical role to play not only in helping their students learn but also in nurturing their academic interests and sowing the seeds of future leadership. These recommendations can and should be tailored to the context and strengths of each medical school, its partnerships, and its student body.
What about after medical school?
While a detailed discussion of postgraduate medical education (PGME) and continuing medical education (CME) is outside the scope of this work, it is important to remember that medical education is a life-long pursuit, and attention must also be paid to learners at later career stages [13]. Competencies around AI could be integrated into PGME curricula within existing research or quality improvement (QI) blocks. Research training for medical or surgical trainees could be in technical areas such as data science or biomedical engineering, but also in ethics, health services research, and medical education. QI training would focus on translating proven innovations into care and evaluating them. CME offerings, through online or in-person workshops, can not only allow clinicians to refresh their competencies over the course of their career but also empower established practitioners with the skills and knowledge to keep pace with this field [14]. The various curricular aspects in Table 1 can be modified to suit learners at different career stages.
Ultimately, medical schools are tasked with training physicians for a future in which artificial intelligence is poised to play a significant role. To succeed at this task, students will need curricular and extracurricular learning opportunities around the clinical usage, technical limitations, and ethical implications of the tools at their disposal. Given the importance and potential impact of this technology, we must act both to ensure a base of artificial intelligence literacy among physicians at large and to nurture the skills and interests of the future leaders who will drive innovation in this space.
1. Topol, E. J. High-performance medicine: the convergence of human and artificial intelligence. Nat. Med. 25, 44–56 (2019).
2. Wartman, S. A. The empirical challenge of 21st-century medical education. Acad. Med. 94, 1412–1415 (2019).
3. Adamson, A. S. & Smith, A. Machine learning and health care disparities in dermatology. JAMA Dermatol. 154, 1247–1248 (2018).
4. Parikh, R. B., Teeple, S. & Navathe, A. S. Addressing bias in artificial intelligence in health care. JAMA http://jamanetwork.com/journals/jama/fullarticle/2756196 (2019).
5. Price, W. N. & Cohen, I. G. Privacy in the age of medical big data. Nat. Med. 25, 37–43 (2019).
6. Chen, I. Y., Joshi, S. & Ghassemi, M. Treating health disparities with artificial intelligence. Nat. Med. 26, 16–17 (2020).
7. Wiens, J. et al. Do no harm: a roadmap for responsible machine learning for health care. Nat. Med. 25, 1337–1340 (2019).
8. Prober, C. G. & Khan, S. Medical education reimagined: a call to action. Acad. Med. 88, 1407–1410 (2013).
9. Law, M., Veinot, P., Campbell, J., Craig, M. & Mylopoulos, M. Computing for medicine: can we prepare medical students for the future? Acad. Med. 94, 353 (2019).
10. Harvard Medical School Course Catalogue. PD530.7 Clinical Informatics. http://www.medcatalog.harvard.edu/coursedetails.aspx?cid=PD530.7&did=260&yid=2020 (2020).
11. MIT Critical Data. 2019.HST.953: Collaborative Data Science in Medicine. https://criticaldata.mit.edu/blog/2019/08/06/hst-953-2019/ (2020).
12. Aboab, J. et al. A “datathon” model to support cross-disciplinary collaboration. Sci. Transl. Med. 8, 333ps8 (2016).
13. Aschenbrener, C. A., Ast, C. & Kirch, D. G. Graduate medical education: its role in achieving a true medical education continuum. Acad. Med. 90, 1203–1209 (2015).
14. McMahon, G. T. The leadership case for investing in continuing professional development. Acad. Med. 92, 1075–1077 (2017).
15. Floridi, L. et al. AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds Mach. 28, 689–707 (2018).
We would like to acknowledge the Faculty of Medicine, MD Program, and Medical Society at the University of Toronto for their support and commitment to AI in Medicine Students’ Society and other initiatives driven by students in service of our profession and its changing needs. L.A.C. is funded by the National Institute of Health through the NIBIB R01 grant EB017205.
The authors declare no competing interests.
McCoy, L.G., Nagaraj, S., Morgado, F. et al. What do medical students actually need to know about artificial intelligence? npj Digit. Med. 3, 86 (2020). https://doi.org/10.1038/s41746-020-0294-7