How will artificial intelligence change medical training?

Artificial intelligence is changing medicine, and it will relieve physicians of the burden of rote knowledge. Here, I discuss how this shift might affect medical training, drawing on the example of how automation in aviation redefined the role of the pilot.

The FDA's Precertification Program is also likely to speed the transition of software as a medical device to the bedside [7]. In addition, tomorrow's doctors may have access to a wider portfolio of assistive devices. Speech recognition is already automating clinical interview transcription. A smart electronic medical record may prompt a doctor to ask specific questions based on symptoms and might even suggest tests and diagnoses. While current prognostic calculators use only 5-10 variables, AI-based calculators could incorporate substantially more, improving accuracy. Given more accurate disease assessment, smart tools could then recommend a menu of treatments that accounts for a patient's allergies, current medications, and medical comorbidities. Dosage guidance could automatically adjust for patient weight, gender, and drug metabolism and excretion as relevant. Potentially, AI could help refine best practices by optimizing the scope and setting of newer treatment modalities, such as cancer immunotherapies. As AI becomes more widely used, knowledge and data may no longer differentiate the skill levels of physicians. As AI joins the team and extends physicians' skill sets, medical schools will need to emphasize a new set of competencies to keep pace with how medical practice will evolve in the post-knowledge age.
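To make the dosage-guidance idea concrete, the following is a minimal sketch of the kind of rule-based adjustment an AI-assisted tool might automate. All names, thresholds, and scaling factors here are illustrative placeholders invented for this sketch, not clinical guidance or any real system's logic.

```python
# Hypothetical sketch of automated dose adjustment: scale a per-kg dose by
# patient weight, then reduce it when renal excretion is impaired.
# Thresholds and factors are illustrative placeholders only.
from dataclasses import dataclass

@dataclass
class Patient:
    weight_kg: float
    creatinine_clearance: float  # mL/min, used here as a proxy for excretion

def suggest_dose(patient: Patient, mg_per_kg: float) -> float:
    """Weight-based dose, reduced for impaired clearance (placeholder cutoffs)."""
    dose = patient.weight_kg * mg_per_kg
    if patient.creatinine_clearance < 30:      # severe impairment
        dose *= 0.5
    elif patient.creatinine_clearance < 60:    # moderate impairment
        dose *= 0.75
    return round(dose, 1)

print(suggest_dose(Patient(weight_kg=70, creatinine_clearance=85), mg_per_kg=2.0))  # 140.0
print(suggest_dose(Patient(weight_kg=70, creatinine_clearance=45), mg_per_kg=2.0))  # 105.0
```

A deployed system would of course draw such factors from pharmacokinetic models and the patient's full record rather than two hard-coded cutoffs; the point is only that this class of calculation is mechanical and automatable.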
In some ways, AI could alter the physician's role much as general automation altered the pilot's role several decades ago. Initially, an aircraft was controlled by a stick and rudder, and the pilot's intuition about how best to handle them was essential. Pilots learned the basic psychomotor skills of keeping an aircraft suspended in flight, gauging the effect of multiple forces on the plane by gestalt [8], so-called stick-and-rudder flying. However, as flying grew more complex, with new components to control, automation enabled aircraft to continuously adjust parameters such as fuel efficiency and to manage navigation and communication with air traffic controllers in real time through interacting automated systems [8]. Today, being a pilot entails seamlessly switching between stick-and-rudder skills and flight deck management informed by system feedback. An effective mission relies on the interaction between the human and the automated parts, not one or the other alone. Such automation brought clear advantages: simpler interfaces for humans, standardization, enhanced operational efficiency, and increased safety, as nearly 80% of accidents had been attributable to human error [8]. However, both too little and too much system feedback proved dangerous: too much could confuse the pilot and cause a crash, while too little could leave the pilot unable to act on a system error. For example, the recent crashes of two Boeing 737 MAX aircraft just 4 months apart were traced to a single malfunctioning sensor. Normally, planes rely on redundant systems to eliminate single points of failure, but the 737 MAX's software relied on input from just one sensor despite two being installed. Pilots struggled to maneuver the plane in the absence of meaningful feedback, resulting in the crashes [9]. This example should serve as a warning: the risks of using AI to make healthcare decisions must be factored in when building such systems.
Application of AI in healthcare comes with new risks. Overreliance on AI may erode physicians' situational awareness and create a significant risk of being blindsided. Another risk of depending on AI is that, if it ceases to operate or can no longer deliver the required services, there must be a safety valve, which means human experts will still be needed. Furthermore, AI systems add a dimension beyond conventional automation because they continuously learn from data. While AI programs could offer one of the biggest advantages by curating the best information through data-driven practice, they can also make errors at scale. Therefore, AI must always be a feedback-driven system, with users able to flag incorrect AI-driven decisions so that the model can learn from its mistakes in subsequent iterations of training. There are also ethical considerations in the use of AI in healthcare. Physicians and patients often make trade-offs when deciding on treatments; one example is the trade-off between quality of life and length of life. As a result, there is no one-size-fits-all approach to patient treatment. It is important for AI systems to capture the complexity of multiple-choice scenarios, and when a medical decision necessitates a trade-off, that decision must still rest with the stakeholders.
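The feedback loop described above can be sketched in a few lines. This is a hypothetical illustration, assuming a simple queue that collects clinician disagreements as labeled examples for the next training run; the class and method names are invented for this sketch and do not come from any real system.

```python
# Minimal sketch of a feedback-driven loop: clinicians flag incorrect AI
# suggestions, and flagged cases become labeled examples for retraining.
class FeedbackQueue:
    def __init__(self):
        self.corrections = []

    def flag(self, features, ai_suggestion, clinician_decision):
        """Record a disagreement as a training example for the next iteration."""
        if ai_suggestion != clinician_decision:
            self.corrections.append((features, clinician_decision))

    def retraining_batch(self):
        """Return the accumulated corrections and clear the queue."""
        batch, self.corrections = self.corrections, []
        return batch

queue = FeedbackQueue()
queue.flag({"age": 54, "bp": 150}, ai_suggestion="drug_A", clinician_decision="drug_B")
queue.flag({"age": 61, "bp": 120}, ai_suggestion="drug_A", clinician_decision="drug_A")  # agreement, not queued
print(len(queue.retraining_batch()))  # 1
```

The design point is that the clinician's override, not the model's original output, becomes the ground-truth label, which is how the system "learns from its mistakes" across training iterations.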
Medical students will play a pivotal role as they train with a plethora of new devices. Keeping patients at the center of the mission, doctors-in-training could learn to manage patient data much as pilots manage the instruments on a flight deck, exploring the impact of multiple influences on patient health, such as social determinants, clinical diagnosis and care, timely decisions, and teamwork with other health professionals [8]. In post-knowledge medicine, the focus of training might shift from biology to psychology and sociology, emphasizing empathy and a greater understanding of socioeconomic structures. Knowledge-centered or even therapeutic interaction has a smaller impact on wellbeing than is commonly assumed: according to some studies, symptom- and knowledge-centered treatments account for just 10-15% of health outcomes [10], while a combination of social determinants makes the biggest difference in results [11]. Only when physicians are better equipped to consider the various components of a patient's health data and to resolve hidden barriers to health, such as lack of access to medicine, transportation, or adequate nutrition, and undiagnosed complex diseases, will they be able to concentrate on long-term patient well-being. This is possible only if a physician can work interactively with AI to develop a treatment plan tailored specifically to a patient's needs. Further, medical education must teach students how to scrutinize and cross-check knowledge and data, how to recognize when they must fall back on stick-and-rudder medical skills, where to find the experts, and when to seek collaboration with other members of the healthcare team to address the hidden barriers to health.
Finally, some could argue that, as the physician's role shifts to that of a supervisor of patient healing, doctors will become less happy in their jobs. In aviation, pilots sometimes became bored and disengaged while serving as flight deck supervisors [8]. "Human-centered automation," which compensates for human operators' shortcomings via visual assistance and alerts [8], provided a solution that effectively engaged pilots and enriched their skill set. Remembering the human at the center is a keystone opportunity to return medicine to its original ethos. Today, intern physicians strike a frenzied balance between administrative work, education, charting, and other activities, leaving as little as 12% of their time for direct patient care [12]. Yet each year, thousands of newly minted medical students recite the Hippocratic Oath, one of the oldest texts in history. The hallowed bond between the ill and the healing practitioner has remained steadfast, defining the heart of the profession. When physicians give time to the patient-doctor relationship, outcomes improve [13]. If AI can relieve physicians of duties that compete for their time, then post-knowledge medicine will return physicians to the bedside, where the sacred relationship between an ill patient and a compassionate physician began.