OUTLOOK

How artificial intelligence is helping to prevent blindness

Machine learning is being used to automate the detection of eye diseases.
Sandeep Ravindran is a freelance science writer in New York City, New York.


Artistic image showing a person wearing glasses surrounded by robotic arms

Credit: Taj Francis

When people with diabetes visit their general practitioner, they’re often referred to an ophthalmologist, who can check their eyes for signs of diabetic retinopathy. The disease damages the light-sensitive layer of tissue at the back of the eye known as the retina and is a leading cause of blindness, resulting in up to 24,000 cases each year in adults in the United States. But when diagnosed before symptoms appear, the disease can usually be managed and the worst outcome avoided. “We know so well how to treat it, but we don’t catch it early enough,” says Michael Abràmoff, a retinal specialist and computer scientist at the University of Iowa in Iowa City.

Regular screening is therefore crucial to managing diabetic retinopathy. But assessing the 30 million or so people affected by diabetes in the United States, and more than 400 million people worldwide, seems an insurmountable challenge. Only about half of people with diabetes get their eyes examined every year, as recommended.

That’s partly because of a shortage of ophthalmologists. Such specialist physicians require extensive training and particular equipment, and their scarcity in many regions of the world often forces people to travel long distances for an eye examination. The problem is particularly acute in low- and middle-income countries, but even richer countries are expected to experience a shortfall as elderly, high-risk populations grow faster than the pool of ophthalmologists that is needed to treat them. Telemedicine — in which ophthalmologists assess photos of the retina remotely — could help to improve access for patients but has yet to gain widespread acceptance.

Abràmoff had long wondered whether a computer program could be used to screen people for eye disease. Over several decades, he developed IDx-DR, an artificial intelligence (AI) system that can tell in minutes whether a person has a more-than-mild case of diabetic retinopathy. Only about 10% of people with diabetes have such a case, so under this AI system, ophthalmologists would have many fewer people to examine.

IDx-DR is the first device to be approved by the US Food and Drug Administration (FDA) to provide a screening decision without the need for a clinician. But it is not the only AI-based tool that is poised to transform the field of ophthalmology. Advances in computing and the availability of large data sets of retinal images have spurred the development of AI systems for detecting not only diabetic retinopathy, which is relatively easy to spot, but also other common eye diseases such as age-related macular degeneration (AMD) and glaucoma. These AI systems could improve the speed and accuracy of large-scale screening programmes, as well as improve access to eye examinations in underserved areas by enabling their provision at medical centres that could not otherwise offer eye care.

Using AI in the clinic will inevitably raise concerns about missed diagnoses and misdiagnosis, says Tien Yin Wong, an ophthalmologist at Singapore National Eye Centre. The legal and ethical issues that result might ultimately determine how common the technology becomes, he says.

Yet those in the field are optimistic that AI-assisted diagnosis is ready to take off. Pearse Keane, an ophthalmologist at Moorfields Eye Hospital in London, is also a consultant at DeepMind Technologies, a London-based AI research company owned by Google’s parent company, Alphabet, that is developing a system to diagnose eye diseases. “I still remember one of the first times where I saw the algorithm in action,” he says. “I was just stunned and felt that I had seen something transformative for the whole field of ophthalmology.”

A 30-year vision

Abràmoff began to look into automating the detection of eye diseases around 30 years ago. Ophthalmologists typically diagnose such conditions by studying either a colour photograph of the back of the eye or a cross-section of the retina captured using an imaging technique called optical coherence tomography (OCT). But Abràmoff was uncertain whether a computer program could stand in for a highly trained specialist, at least at the outset.

Machine learning, which uses data and custom-built algorithms to train machines to perform tasks, had shown promise for use in image analysis since the 1950s. But the hardware wasn’t powerful enough to make machine learning practical for analysing real-world medical images, even by the time that Abràmoff started his research 40 years later.

Nevertheless, Abràmoff painstakingly devised mathematical equations to describe various lesions in the retina, and then wrote algorithms to detect them. By the early 2000s, he had published numerous papers on the topic, and as the decade progressed he obtained relevant patents in the hope that a pharmaceutical or biotechnology company would license them. But the idea did not take off. “Nothing happened,” he says.

The use of AI systems in medical imaging received a huge boost in the late 2000s, thanks to the video-game industry. The push for realistic graphics led to the development of more powerful graphics cards that were ideal for the kind of parallel processing that is required by AI systems. These graphics cards made it easier to implement computationally intensive systems known as artificial neural networks, which are inspired by the way that neurons interconnect in the brain. Such networks consist of layers of connected nodes that process different characteristics of an image. Each attribute is given a certain weight, which the system then combines to generate an output such as a decision on whether an eye has been affected by diabetic retinopathy.
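The weighting described above can be illustrated with a single artificial node. This is a toy sketch, not the code of any clinical system: the input attributes, weights and threshold are all hypothetical. Each attribute gets a learned weight, and the weighted sum is squashed into a score between 0 and 1 that drives a decision.

```python
import math

def sigmoid(x):
    # Squash a weighted sum into a score between 0 and 1
    return 1.0 / (1.0 + math.exp(-x))

def node(inputs, weights, bias):
    # One node: weight each input attribute, sum, then activate
    total = sum(w * x for w, x in zip(weights, inputs))
    return sigmoid(total + bias)

# Hypothetical image attributes (e.g. lesion-like features),
# already extracted by earlier layers of the network
features = [0.8, 0.1, 0.6]
weights = [1.5, -2.0, 0.7]   # values of this kind are learned during training
score = node(features, weights, bias=-0.5)
decision = "refer" if score > 0.5 else "no referral"
```

A real network stacks many such nodes into layers, and training adjusts the weights so that the final score tracks the presence of disease.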

By combining artificial neural networks with considerable processing power and massive image data sets, researchers were able to create deep-learning networks that can perform sophisticated tasks beyond the reach of conventionally programmed software, including beating some of the world’s best players of the ancient board game Go. “There’s sort of this quantum leap forward that’s occurred, where all these things that used to be a pie in the sky are now technically feasible to do,” says Aaron Lee, an ophthalmologist at the University of Washington in Seattle.

A successful trial

Amid these technological advances, Abràmoff founded AI research company IDx Technologies in Coralville, Iowa, in 2010. After a lengthy discussion with the FDA, he set up a clinical trial to show that IDx-DR could work in a real-world setting. The trial opened for enrolment in January 2017 and included 900 people with diabetes from 10 locations in the United States.

A patient undergoing an eye examination

Retinal specialist Michael Abràmoff is developing artificial intelligence programs that could improve access to screening for eye diseases in regions with limited ophthalmology services. Credit: Brice Critser/Dept Ophthalmology/UIHC

The results showed that Abràmoff’s decades of work had paid off. IDx-DR correctly identified the presence of more-than-mild diabetic retinopathy around 87% of the time, and correctly identified people who did not have the condition almost 90% of the time1. The AI system’s accuracy met FDA requirements and, in April 2018, IDx-DR became the first autonomous diagnostic system to be approved for detecting diabetic retinopathy in the United States. “It was a very good day,” says Abràmoff.
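The two figures quoted are the trial’s sensitivity (diseased eyes correctly flagged) and specificity (healthy eyes correctly cleared). With hypothetical counts chosen only to illustrate the arithmetic, not the trial’s actual tallies, they are computed like this:

```python
# Illustrative confusion-matrix counts (hypothetical)
true_pos, false_neg = 173, 26    # people with more-than-mild disease
true_neg, false_pos = 556, 63    # people without it

sensitivity = true_pos / (true_pos + false_neg)  # disease correctly flagged
specificity = true_neg / (true_neg + false_pos)  # healthy correctly cleared
```

A screening tool needs high sensitivity so that disease is rarely missed, while high specificity keeps healthy people from being referred unnecessarily.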

The system uses a camera to photograph the back of the eye. An AI algorithm then analyses the resulting images to detect early signs of diabetic retinopathy such as haemorrhaging. Another algorithm helps the operator to take high-quality images of the retina, which means that after receiving only four hours of training, anyone with a secondary school education could operate IDx-DR.

In June, University of Iowa Health Care became the first organization to implement IDx-DR in the clinic. Competing AI systems might not be too far behind. “With what IDx has done, there is now a precedent for other companies in the deep-learning space,” says Lee. So far, most have also focused on detecting diabetic retinopathy because the condition is relatively easy to spot in an image. “That is a fairly simple problem from a computer-vision standpoint,” says Lee.

Do as humans do

AI systems will eventually have to do more than detect a single eye disease. “When a physician assesses someone’s eye, they pick up many common conditions,” says Wong. “You can’t just say, ‘I’m only interested in picking up whether or not you have diabetic retinopathy’.” That’s why Wong and others, including Abràmoff, are developing AI systems that are capable of detecting several eye diseases at the same time.

Rather than teaching AI algorithms which disease features to look for (as Abràmoff did for IDx-DR), some researchers train their programs by instructing them to sift through numerous images that originate from healthy or diseased eyes. The AI systems must then work out by themselves how to differentiate between them. In 2017, Wong and his team used retinal images collected from several studies, including the Singapore National Diabetic Retinopathy Screening Program, to train an AI system2. They tested its effectiveness in 11 multi-ethnic cohorts of people with diabetes, and showed that their AI program could use differences in retinal images to detect not only diabetic retinopathy, but also glaucoma and AMD. The system’s screening prowess matched that of a human specialist for diabetic retinopathy roughly 90% of the time.

Researchers at DeepMind and Moorfields Eye Hospital have gone even further. They built an AI algorithm that taught itself to make referral decisions for 50 common eye conditions3. The system identifies signs of eye disease in an OCT retinal scan and then decides the urgency with which a person should see a specialist. DeepMind’s AI system could ease the patient load for ophthalmologists considerably. “People don’t realize the sheer volume of cases we deal with,” says Keane. The National Health Service in England scheduled 8.25 million ophthalmology outpatient appointments last year.

Training an AI algorithm typically requires large amounts of data and prepares the system to perform only narrow tasks; an algorithm trained to play Go by instructing it to play itself 30 million times would be no good at chess, for instance. But a method known as transfer learning could help to train AI programs using fewer task-specific data, enabling them to learn to perform similar tasks more quickly.

A team led by Kang Zhang, an ophthalmologist at the University of California, San Diego, in La Jolla, took an AI algorithm that had been pretrained on tens of millions of images of everyday objects from the public repository ImageNet, and then applied it to a set of around 100,000 OCT retinal images4. Despite the low number of retina-specific images used to train the system, the pretraining enabled the team’s AI program to accurately diagnose two common causes of vision loss — diabetic macular oedema and choroidal neovascularization (often a consequence of advanced AMD) — and to decide who needed an urgent referral to a specialist. Reducing the number of OCT retinal images used in the training to about 4,000 doubled the algorithm’s error rate, but its performance was still broadly comparable with that of human experts.
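The idea behind transfer learning can be sketched in miniature. Everything here is a toy stand-in, not the team’s actual pipeline: the “pretrained” extractor is a fixed function playing the role of layers trained on everyday photographs, and the four-pixel “scans” and their labels are invented. The point is the split: the general-purpose feature layers are frozen, and only a small task-specific head is trained on the scarce medical images.

```python
def extract_features(image):
    # Frozen "pretrained" layer: fixed combinations of raw pixels,
    # standing in for features learned on everyday photographs
    a, b, c, d = image
    return [a + c, b + d, a - b]

def train_head(images, labels, epochs=300, lr=0.05):
    # Only this small task-specific head is trained on the
    # scarce medical images; the extractor above stays fixed
    head, bias = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for img, y in zip(images, labels):
            feats = extract_features(img)
            err = sum(h * f for h, f in zip(head, feats)) + bias - y
            head = [h - lr * err * f for h, f in zip(head, feats)]
            bias -= lr * err
    return head, bias

# Toy 4-pixel "scans" with hypothetical labels (1 = diseased)
images = [[1, 0, 1, 0], [0, 1, 0, 1], [1, 1, 1, 0], [0, 0, 0, 1]]
labels = [1, 0, 1, 0]
head, bias = train_head(images, labels)
```

Because the frozen features already capture useful structure, the head has few parameters to fit, which is why far fewer task-specific images are needed than when training a network from scratch.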

Zhang, Keane and Wong are planning to conduct clinical trials in the next two years to confirm whether their AI systems are as effective at diagnosis as are ophthalmologists — a necessary precursor to receiving regulatory approval. But further work will still be required to produce a commercial product that is ready for use in a variety of settings. “Scientists need to make it as usable as an iPhone,” says Wong.

Not just a question of technology

The abilities of these AI systems might, in some cases, exceed those of humans. For instance, Bernhard Weber, a geneticist at the University of Regensburg in Germany, and his colleagues have developed a deep-learning algorithm for classifying the progression of AMD5, a leading cause of vision loss in people aged 50 and older. Although late-stage AMD is easy to detect, Weber found that his team’s AI program could also identify early stages of the disease. “That’s tough stuff,” he says — challenging even for an ophthalmologist.

Although the accuracy of such AI systems helps to obtain regulatory approval, that green light might not be enough to win the trust of clinicians and patients. “As a society, are we ready to implement these things?” asks Lee.

One obstacle to gaining users’ confidence is the closed nature of many AI systems, which operate as black boxes — it’s not always clear how such programs reach a decision. “With a black-box algorithm, you have no idea why the algorithm chose to make that diagnosis,” says Lee (see ‘Opening the black box’).

Opening the black box

The complex artificial neural networks that make artificial intelligence (AI) systems so powerful also make it difficult to understand how such systems reach the decisions that they do — an issue known as the black-box problem.

This opacity is particularly vexing in the clinic, where the reasoning behind an AI system’s diagnosis could be crucial to getting regulatory approval. “Explainability became a big issue with the US Food and Drug Administration,” says Michael Abràmoff, a retinal specialist and computer scientist at the University of Iowa in Iowa City. “You need to be able to explain what your algorithm does if you want it to be autonomous.”

Researchers are discovering how to peer into the black box. AI research companies IDx Technologies in Coralville, Iowa, and DeepMind Technologies in London use a two-pronged approach to interrogate their AI systems’ decision-making when diagnosing eye conditions. One algorithm detects disease features in an image of a person’s retina. Another algorithm then uses those features to make a decision about whether that person needs to consult an ophthalmologist and, if so, how urgently. By dividing up those steps, clinicians can determine how a deep-learning network interprets an image before it makes a referral suggestion, says Olaf Ronneberger, a computer scientist at DeepMind.
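The architectural split can be sketched as two chained functions. In the real systems both stages are deep networks; here the feature names, thresholds and scan values are all hypothetical, and the sketch shows only why the split aids explanation: the intermediate findings are inspectable before any referral decision is made.

```python
# Stage 1 (hypothetical stand-in): detect named disease features in a scan
def detect_features(scan):
    # Returns interpretable findings a clinician can inspect
    return {
        "haemorrhage": scan.get("haemorrhage_area", 0.0) > 0.02,
        "fluid": scan.get("fluid_volume", 0.0) > 0.1,
    }

# Stage 2: map the inspectable findings to a referral decision
def referral_decision(features):
    if features["fluid"]:
        return "urgent"
    if features["haemorrhage"]:
        return "routine"
    return "no referral"

scan = {"haemorrhage_area": 0.05, "fluid_volume": 0.0}
findings = detect_features(scan)          # clinician can see *what* was found
decision = referral_decision(findings)    # ...and *why* it led to this decision
```

A single end-to-end network would map scan to decision directly, leaving no intermediate findings to examine.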

Another way to untangle what’s going on involves the use of a different sort of black box. Kang Zhang, an ophthalmologist at the University of California, San Diego, in La Jolla, and Bernhard Weber, a geneticist at the University of Regensburg in Germany, used black masks to shield parts of retinal images from their AI algorithm, and observed how such masking affected the system’s diagnoses. This enabled Weber to determine where in the retina the AI algorithm was looking to make its decision5. “What you see is that it’s exactly where a human ophthalmologist would look,” he says. S.R.
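The masking approach can be sketched in a few lines. A toy scoring function stands in for the trained network, and the 4 × 4 “retina” is invented: mask each region in turn, and the regions whose masking most reduces the score are the ones the model relies on.

```python
def model_score(image):
    # Toy stand-in for a trained network: responds to bright
    # pixels in the lower-right corner of a 4x4 "retina"
    return sum(image[r][c] for r in range(2, 4) for c in range(2, 4)) / 4.0

def occlusion_map(image, score_fn):
    # Black out each pixel in turn and record how much the
    # model's score drops -- big drops mark decisive regions
    base = score_fn(image)
    drops = []
    for r in range(len(image)):
        row = []
        for c in range(len(image[r])):
            masked = [list(rw) for rw in image]
            masked[r][c] = 0.0          # the "black mask"
            row.append(base - score_fn(masked))
        drops.append(row)
    return drops

image = [[0.1, 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.1, 0.1],
         [0.1, 0.1, 0.9, 0.9],
         [0.1, 0.1, 0.9, 0.9]]
heat = occlusion_map(image, model_score)
```

The resulting map peaks over the bright lower-right patch, mirroring how the researchers confirmed that their algorithm attends to the same retinal regions an ophthalmologist would.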

Wong likens the rise of AI-based diagnosis to that of driverless cars — and in both cases, he is unsure whether people are ready for complete automation. He therefore designed his system to function as either a fully automated process, or a semi-automated one, in which it works in conjunction with a human. It’s similar to ensuring that a driverless car has a steering wheel and brakes so that a person can take over in an emergency. “That gives a lot more confidence, as well as reducing the workload significantly,” says Wong.

This two-tiered model might work well in places where ophthalmologists are readily available. But the technology’s greatest potential lies in improving access to eye care in low-income countries or remote areas. That reasoning led Abràmoff to test IDx-DR in an isolated part of New Mexico, several hours’ drive from the nearest ophthalmologist, and researchers from Google to trial a deep-learning algorithm designed to spot signs of diabetic retinopathy in retinal photographs in eye hospitals in India, where just 15,000 ophthalmologists serve about 70 million people with diabetes.

Existing AI systems require detailed images of the eye to reach decisions, and in many countries the equipment and expertise needed to take those images are in short supply. But smartphones fitted with special cameras for retinal imaging could be combined with cloud-based AI software to screen for diabetic retinopathy, making eye examinations even cheaper and more convenient.

“In my opinion, the greatest benefit to mankind will occur in resource-limited settings, where there is no expert available,” says Lee. “I think AI can play a very big and disruptive role in the delivery of medicine in those settings.”

doi: 10.1038/d41586-019-01111-y

This article is part of Nature Outlook: The eye, an editorially independent supplement.

References

1. Abràmoff, M. D., Lavin, P. T., Birch, M., Shah, N. & Folk, J. C. npj Digit. Med. 1, 39 (2018).

2. Ting, D. S. W. et al. J. Am. Med. Assoc. 318, 2211–2223 (2017).

3. De Fauw, J. et al. Nature Med. 24, 1342–1350 (2018).

4. Kermany, D. S. et al. Cell 172, 1122–1131 (2018).

5. Grassmann, F. et al. Ophthalmology 125, 1410–1420 (2018).

