Advances in medical imaging and the proliferation of diagnostic and screening tests have generated mountains of data on patient health. Digital information technology has seemed poised to revolutionize health care in the United States since 2009, when the Obama administration made the technology part of plans to revive a sinking economy. The US government has now spent tens of billions of dollars on putting patient information at doctors’ fingertips.
Yet many physicians have come to hate their computers. Overwhelmed by administrative work, they now spend more time on data entry than they do interacting with patients. So far, electronic health records have not been the panacea for efficiency and safety that many expected. But problems are being identified, and as such systems mature, there is still hope that they will live up to their potential.
Forty years ago, when personal computers were in their infancy, a person’s medical records comprised a few sheets of paper in a folder. Two decades later, these folders were bulging with photocopies, printouts and faxes of test results, but the medical profession was slow to adopt a digital remedy.
Since the United States began its big push in 2009, the digitalization of US medical records has soared. Data from the US Department of Health and Human Services show that in 2017, 96% of hospitals and 86% of physicians’ offices in the United States had access to electronic health records.
Many patients recognize the impact that electronic health records have made. A 2019 poll by the Henry J. Kaiser Family Foundation, a non-profit health-care advocacy organization in San Francisco, California, found that 45% of US citizens think that electronic health records have improved the quality of care, with only 6% reporting a decline. Yet US primary-care physicians are discontented. In a 2018 survey by Stanford Medicine in California, 59% said they felt that the systems needed a complete overhaul. Health-care managers and developers of electronic health records are looking for fixes.
When the medical imaging techniques computed tomography and magnetic resonance imaging became widespread in the 1980s, the resulting data were stored on magnetic tapes or disks. The amount of imaging data being accumulated grew massively as such images proved their medical value, their resolution increased and their costs dropped. For example, neurological imaging expanded by a factor of 25,000 — from about 200 gigabytes of data a year worldwide in the late 1980s to 5 petabytes a year in the early 2010s1.
The US National Academy of Medicine began to recommend the computerization of other types of health data in the 1990s. At the same time, artificial-intelligence researchers proposed using machine learning to seek patterns and correlations that physicians might not recognize in compilations of medical records. “People like me said that we could analyse all this data if we had it in digital form,” recalls Peter Szolovits, a computer scientist at the Massachusetts Institute of Technology in Cambridge, who studies the use of artificial intelligence in medical decision-making. Private companies and some hospitals soon began to develop electronic health-record systems.
In 2004, then-US president George W. Bush created the Office of the National Coordinator for Health Information Technology to develop and promote the use of advanced information technology in health care. However, the agency received little funding from the Bush administration, and the use of electronic health records grew slowly. In 2008, only 9% of hospitals in the United States had electronic health records that met minimal standards.
A search in 2009 by the incoming administration of President Barack Obama for shovel-ready projects to quickly boost the ailing US economy kicked the development of electronic health records into high gear. The technology was in the right place at the right time; the systems existed, but their uptake was slow.
The US government commissioned researchers to find the best way to invest US$30 billion of stimulus funding to improve and promote the use of electronic health records. John Halamka, executive director of the Health Technology Exploration Center at Beth Israel Lahey Health in Boston, Massachusetts, was one of those involved. He says that, from 2009 to 2016, a working group he led on standards for health information technology held hundreds of meetings with doctors, administrators, health-insurance companies, legislators and other stakeholders. One outcome was a list of 140 data elements that should be collected from every patient on each visit to a physician. Feedback from Halamka and others led the US government to introduce three waves of fresh regulations. After each wave, the developers of electronic health-record systems had 18 months in which to incorporate the new regulations into their products to obtain a share of the stimulus money. It was a tight schedule, and those involved did not have time to consider the overall user experience, says Halamka. “Our trajectory of getting from 20% adoption to 90% adoption was very good,” he says, but the usability of the resulting systems was “not so wonderful”.
An awkward adolescent
Before adopting digitized medical-record systems, “We were unable to truly follow patients in time and space,” says Gregg Meyer, chief clinical officer at Partners Healthcare, which operates several hospitals in Boston, including Massachusetts General Hospital and Brigham and Women’s Hospital. “We had many siloed systems which may or may not have effectively talked to each other. At best, we could put together a patchwork quilt. It was wasting money and clinicians’ time, and not providing the best care.”
The allure of the new systems was the potential to better mobilize a hospital’s resources to improve outcomes. Another, more subtle goal was to foster “cultural innovation”, says Meyer. His hope was that by sharing information through such systems, doctors would be encouraged to discuss which procedures and drugs work best.
Yet the first generation of electronic health records is now at an “awkward adolescent stage of growth”, says Alistair Erskine, chief digital health officer at Partners. The system that Partners launched in 2015 was a long way from maturity. “We have to improve the usability and reduce the burden of using the system,” says Halamka.
One such problem arose from the fact that the various vendors had separately developed systems that formatted data in different ways, which made it hard to share records between hospitals, physicians and external testing laboratories. It also made it trickier to incorporate data collected by patient monitoring devices. Medical imaging had faced a similar challenge in the 1980s: images captured using one make and model of equipment could not necessarily be analysed by another. This led the American College of Radiology in Reston, Virginia, and the US National Electrical Manufacturers Association in Rosslyn, Virginia, to develop a standard for storing and transmitting medical images. Called Digital Imaging and Communications in Medicine (DICOM), it enables images to be displayed on various systems in different hospitals. The Fast Healthcare Interoperability Resources (FHIR) draft standard is trying to achieve the same thing for other forms of medical information, and it has now been accepted by most vendors of electronic health-record systems. Regulations proposed by the US government health-insurance plan Medicare in February might soon make using FHIR in electronic health records a requirement.
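To see what that interoperability looks like in practice, consider how a FHIR-conformant system exposes records. FHIR defines standard resources, such as Patient, that any conformant server delivers as identically structured JSON over a REST interface. The following is a minimal sketch in Python; the server address and patient identifier are hypothetical, and a real deployment would also require authentication.

```python
# Minimal sketch: retrieving a patient record over FHIR's REST interface.
# The server URL and patient ID below are hypothetical; real deployments
# use institution-specific endpoints and OAuth2 access tokens.
import requests

FHIR_BASE = "https://fhir.example-hospital.org/r4"  # hypothetical endpoint

def fetch_patient(patient_id: str) -> dict:
    """Fetch a Patient resource as standards-defined JSON."""
    response = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
    )
    response.raise_for_status()
    return response.json()

record = fetch_patient("12345")
# Because every FHIR-conformant system uses the same resource structure,
# these field names are the same regardless of which vendor produced the record.
print(record.get("name"), record.get("birthDate"))
```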
Troubleshooter and troublemaker
A big hope for electronic health records was that they would reduce mistakes and oversights. The notoriously illegible handwriting of many physicians has been blamed for countless errors. Repeated photocopying or faxing can render even neatly printed documents unintelligible. And paper medical records can be mislaid, or might simply not be where they are needed.
“Overall, computers have made safety better. The frequency of medication errors has gone down significantly as we computerized,” says Robert Wachter, who leads the department of medicine at the University of California, San Francisco (UCSF).
Yet digitalization has also introduced extra opportunities for error. Wachter recalls the case of a 16-year-old patient who, in 2013, experienced a massive drug overdose at the UCSF Medical Center after a doctor entered the dosage in milligrams, as he would for an adult, without realizing that the computer expected the dosage to be given in milligrams per kilogram, as would be done for a child. The computer warned that the dose was excessive, but the doctor had received so many false-positive warnings that he shrugged off the alert. The pharmacist did the same. A robot then dutifully packaged the erroneously prescribed 38 and a half tablets. The nurse who administered the dose knew that it was a gross overdose, but the computer assured her that it had been signed off by both the doctor and the pharmacist, and she went ahead.
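The arithmetic behind such a unit mix-up is simple but unforgiving: a value meant as a total dose gets multiplied by the patient's body weight. The sketch below uses figures chosen to be consistent with the 38 and a half tablets described above (a 160-mg intended dose and 160-mg tablets); they are illustrative assumptions, not values from the case records.

```python
# Hypothetical illustration of the mg versus mg/kg confusion.
# All figures are assumptions chosen for the example.
weight_kg = 38.5          # patient's body weight
intended_dose_mg = 160.0  # total dose the doctor meant to prescribe
tablet_strength_mg = 160.0

# The doctor types 160, but the field is interpreted as mg per kilogram:
entered_value = 160.0
administered_mg = entered_value * weight_kg   # 6,160 mg in total

print(f"Overdose factor: {administered_mg / intended_dose_mg:.1f}x")   # 38.5x
print(f"Tablets dispensed: {administered_mg / tablet_strength_mg}")    # 38.5
```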
Wachter thinks that this error could never have happened with paper records — “nobody trusted paper,” he says. Tracking by the Pennsylvania Patient Safety Authority in Harrisburg found that from January 2016 to December 2017, electronic health-record systems were responsible for 775 problems during laboratory testing in the state2, with human–computer interactions responsible for 54.7% of events and the remaining 45.3% caused by a computer. Three of the errors physically harmed patients, and the agency warned that “every event in this analysis had the potential to affect patients” by causing inconvenience, errors or delays in diagnosis.
Electronic health-record systems are designed to prevent errors by alerting clinicians to possible mistakes. But as the UCSF incident in 2013 shows, they are not foolproof. A draft US government report issued in 2018 (see go.nature.com/2lu2to9) found that clinicians are inundated with alerts that range from minor issues about drug interactions to errors that pose considerable risks. This can lead to ‘alert fatigue’, a phenomenon in which system users, when faced with many lower-level alerts, ignore all levels of alert, and thereby miss crucial ones that can affect the health and safety of patients.
After investigating the 2013 incident, Wachter says that UCSF identified some non-essential warnings in its electronic health-record system that staff members routinely ignored. To reduce the risk of alert fatigue leading to similar mistakes in the future, it decided to switch off those most frequently ignored alerts.
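Conceptually, finding those candidates for deactivation is a matter of measuring override rates in the system's alert logs. The following Python sketch shows the idea; the log format and the 95% threshold are assumptions for illustration, not details of UCSF's actual process.

```python
# Sketch: flag alerts that clinicians almost always override, as
# candidates for deactivation. The log format here is hypothetical;
# real electronic health-record systems vary by vendor.
from collections import Counter

# Each log entry: (alert_id, was_overridden)
alert_log = [
    ("drug-interaction-minor", True),
    ("drug-interaction-minor", True),
    ("dose-exceeds-weight-based-max", False),
    ("drug-interaction-minor", True),
    # ... thousands more entries in practice
]

fired = Counter(alert_id for alert_id, _ in alert_log)
overridden = Counter(alert_id for alert_id, was_overridden in alert_log
                     if was_overridden)

# Flag alerts overridden more than, say, 95% of the time.
for alert_id, n_fired in fired.items():
    rate = overridden[alert_id] / n_fired
    if rate > 0.95:
        print(f"{alert_id}: overridden {rate:.0%} of {n_fired} firings "
              "- review for removal")
```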
In 2017, 11 chief executives of medical centres in the United States penned a joint open letter to their peers (see go.nature.com/2kjgms4). Although they acknowledged the potential of electronic health records for improving patient safety, the group warned that such systems had “radically altered and disrupted established workflows and patient interactions” for physicians, and had become a main contributor to the growing problem of physician burnout.
Wachter blames this on user interfaces that look like they belong to the mid-1990s, with crucial clinical information sometimes requiring dozens of clicks to access.
Even with those clumsy interfaces, such systems enable doctors to retrieve information more efficiently than is possible with paper medical records. However, the systems’ continual demands for data have become a huge burden. “We spend enormous amounts of time entering data into the machines, but precious little time getting useful information out of them,” says Wachter. During the day, doctors at UCSF Medical Center spend much more time on their computers than they do with patients, he says, and they still need to spend a further two to three hours in the evening catching up on data entry. To add insult to injury, they then often find that little of the laboriously entered information tells them something useful.
Wachter’s workplace is not the exception. A study of 142 general practitioners in Wisconsin found that, on average, their working day lasted 11.4 hours, which included 5.9 hours using an electronic health-record system3. Of the time spent on the computer, 44.2% involved clerical work and 23.7% was devoted to managing inboxes. It’s no wonder that a 2015 study found that more than half of US physicians showed one or more signs of burnout4.
Part of the data-entry burden comes from the 140 data elements that Halamka’s working group proposed should be collected from every patient on each visit. Halamka stands by the recommendation as being reasonable, but says that when combined with other changes, including the enactment in 2010 of the US Affordable Care Act, extended patient privacy requirements and an updated version of the International Statistical Classification of Diseases and Related Health Problems, physicians have become overloaded. Wachter notes, however, that some of the data elements were not intended to be used by doctors. Rather, their inclusion was requested by private health-care companies, which use them to reward hospitals that document good health practices in patients, such as stopping smoking.
Meyer says that this radical change in the physician’s workflow was an “ugly, obligate step” that chained him to the computer screen. But he also understands that it was an essential move to standardize data, to share them freely, and to get all physicians to work from the same type of medical record before implementing a further set of tools to improve performance.
The perils of progress
One way to free doctors from their keyboards might be to take advantage of improvements in the ability of machines to process the spoken word. Some physicians already use speech-recognition systems to dictate letters, just as they used tape recorders and medical secretaries 30 years ago. But the clarity of the output will be dependent on the doctor’s way with words: one specialist might produce a clear and concise report, whereas another might hand over something that is incomprehensible to a patient without a medical dictionary. Building a system that is capable of finessing physicians’ words into clear medical documents is an “interesting and challenging problem”, says Szolovits, and is an issue that one of his postgraduate students is investigating.
Meyer hopes that natural language processing will “re-humanize medicine”, so that he will be able to spend more time with patients. Halamka imagines a system that could go beyond transcription to search for structured information in existing records. For instance, if a person remembers having a vaccination for influenza, the system could search its files to identify the date it was administered, the supplier, the lot number and the expiration date. In July, the UK National Health Service (NHS) announced that it had taken a step in this direction by agreeing that Amazon could provide medical advice through its digital assistant Alexa.
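A system of the kind Halamka imagines would need to turn free-text notes into structured fields. The toy Python sketch below does this with a regular expression for a single, rigidly phrased sentence; the note text is invented, and a production system would instead rely on trained clinical language models that tolerate the variability of real notes.

```python
# Toy sketch: pulling structured vaccination facts out of a free-text
# note. The note and phrasing are invented; real clinical text is far
# messier and would require trained NLP models, not a regex.
import re

note = "Pt reports influenza vaccination administered 2018-10-04, lot A1B2C3."

match = re.search(
    r"influenza vaccination administered (\d{4}-\d{2}-\d{2}), lot (\S+?)\.",
    note,
)
if match:
    vaccination = {
        "vaccine": "influenza",
        "date": match.group(1),        # '2018-10-04'
        "lot_number": match.group(2),  # 'A1B2C3'
    }
    print(vaccination)
```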
But questions remain, particularly with regard to patient privacy. Both Europe and the United States have strong medical-privacy rules that focus on the secure encryption of data, especially during their transmission between computers. Yet breaches of electronic health-record systems, particularly at health-insurance companies, have exposed the data of more than 100 million people in the United States. Such stolen medical data can be used to fraudulently invoice insurers for care that has never been provided, and on the dark net (a network that hosts anonymous and often illegal online activity), the information sells for more than do credit-card data, says Erskine.
The growing power of computers and the deep-learning algorithms that are used in speech recognition also threaten people’s privacy. “If you have the patient’s voice, identity leakage is inevitable”, because such technology can distinguish between individuals’ voices, says Lee Tien, a lawyer at the Electronic Frontier Foundation in San Francisco, California. Speech-recognition systems process much voice data in the cloud, which could take those data out of the secure realm of the medical provider. Although the NHS says that it is not sharing patient data with Amazon, Alexa runs on computers that comprise Amazon’s cloud, and saves some data for further speech processing. The privacy group Big Brother Watch, a non-profit organization based in London, called the deal “a data-protection disaster waiting to happen”.
However, to aid physicians with medical diagnoses, electronic health records could also draw on machine-learning techniques that were developed for recommending films or consumer products. Wachter predicts systems that could search through large volumes of clinical records and insurance-reimbursement data to recommend the cheapest drug that would be effective for a patient. “It’s a much harder nut to crack than recommending movies,” he says, but the underlying aim of aiding decisions is similar. The digital medical assistant might be more capable than the movie-selector algorithm, but in the end it is humans who should call the shots.
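In skeletal form, the decision support Wachter envisages is a filter-and-rank problem: keep the drugs that the data show to be effective for a condition, then sort by cost. The Python sketch below illustrates that shape with invented numbers; a real system would have to estimate effectiveness from millions of clinical records and reimbursement claims rather than a hand-written table.

```python
# Hedged sketch of filter-and-rank drug recommendation. The drug names,
# effectiveness fractions and costs are invented for illustration.
drugs_for_condition = [
    {"name": "drug_a", "effective_fraction": 0.72, "monthly_cost_usd": 12.0},
    {"name": "drug_b", "effective_fraction": 0.74, "monthly_cost_usd": 96.0},
    {"name": "drug_c", "effective_fraction": 0.41, "monthly_cost_usd": 8.0},
]

def recommend(drugs, min_efficacy=0.7):
    """Return the cheapest drug meeting the efficacy threshold, or None."""
    candidates = [d for d in drugs if d["effective_fraction"] >= min_efficacy]
    if not candidates:
        return None
    return min(candidates, key=lambda d: d["monthly_cost_usd"])

print(recommend(drugs_for_condition))  # drug_a: effective enough, and cheapest
```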