Artificial intelligence pinpoints nine different abnormalities in head scans

The new algorithms could help emergency clinics identify serious head trauma cases faster


A brain scan (left) showing an intraparenchymal hemorrhage in the left frontal region and a scan (right) showing a subarachnoid hemorrhage in the left parietal region. Both conditions were accurately detected by the tool.

The rise in the use of computed tomography (CT) scans in US emergency rooms has been a well-documented trend1,2 in recent years. At the same time, the rate at which these head scans yield a diagnosis of a life-threatening condition has risen only slightly. One problem ER doctors face is separating serious cases of head trauma from less serious injuries.

A new study suggests that deep-learning algorithms could help automate the triage process for some of these head trauma cases, specifically for patients with brain injury who require immediate attention. The study3, which appeared recently in The Lancet, found that deep-learning algorithms were able to accurately identify as many as nine different critical abnormalities in CT head scans.

The study is the latest in a slew of new research that uses artificial intelligence (AI) to analyze medical images. Eric Topol, a physician and executive vice-president of Scripps Research who wasn’t involved in the research, says that this study represents a step forward because most previous reports of AI in medical imaging gave a yes-or-no answer for one type of abnormality, such as a brain lesion. The algorithms in this study, by contrast, were trained to parse multiple kinds of brain trauma.

“It’s one of the best radiology–AI efforts to date, because it widens the deep-learning interpretation task to urgent referral of many different types of head CT findings,” Topol says.
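
The distinction Topol draws can be sketched in code. The snippet below is a hypothetical illustration, not the study's actual model: a single-finding classifier emits one probability, while a multi-label model scores every finding independently, so one scan can be flagged for several abnormalities at once. The nine label names and the logit values are placeholders, not the study's exact label set or outputs.

```python
import numpy as np

def sigmoid(z):
    """Map a raw model score (logit) to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# A yes/no model for a single abnormality produces one probability.
single_prob = sigmoid(1.2)

# A multi-label model applies an independent sigmoid per finding,
# so several abnormalities can be flagged on the same scan.
findings = [
    "fracture", "hemorrhage_a", "hemorrhage_b", "hemorrhage_c",
    "hemorrhage_d", "hemorrhage_e", "any_hemorrhage",
    "mass_effect", "midline_shift",
]
logits = np.array([2.1, -1.5, 0.8, -3.0, -0.2, -2.4, 1.0, -1.1, -0.7])
probs = sigmoid(logits)

# Flag every finding whose probability clears a 0.5 threshold.
flagged = [f for f, p in zip(findings, probs) if p > 0.5]
print(flagged)
```

In this toy run, three findings clear the threshold on the same scan, which is exactly what a one-abnormality yes/no model cannot express.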

In the new study, funded by a Mumbai-based company that develops AI for radiology, scientists employed by the company and their collaborators collected more than 313,000 anonymized head CT scans from 20 hospitals and outpatient radiology centers in India. They then used these scans to develop and train their algorithms. Next, they randomly selected 21,000 scans from this sample, representing more than 9,000 patients, to validate the algorithms.

The system was able to identify skull fractures and five different types of intracranial hemorrhage. It was also able to detect mass effect and midline shift, both used as indicators of brain injury severity. “These are critical results that need to be communicated to the doctor really fast,” says Sasank Chilamkurthy, the lead author of the study.

The study authors asked three senior radiologists to independently analyze the CT scans. They found that the reviewers agreed with the algorithms’ diagnoses 86–99% of the time, depending on the type of brain abnormality.

Chilamkurthy says one of the challenges of developing these types of algorithms is that a large volume of scans is needed in order to train an AI model and validate the findings. “You have to have a huge sample size because the abnormalities in the dataset are usually of low prevalence,” he says.
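
Chilamkurthy's point can be made with back-of-the-envelope arithmetic. The prevalence figures below are assumed for illustration only; the article reports just that the abnormalities are of low prevalence and that the dataset held more than 313,000 scans.

```python
# Why rare findings demand large datasets: the expected number of
# positive examples is dataset size times prevalence.
TOTAL_SCANS = 313_000  # roughly the size of the study's dataset

def expected_positives(prevalence: float, n: int = TOTAL_SCANS) -> int:
    """Expected count of scans showing a finding at the given prevalence."""
    return int(n * prevalence)

# At an assumed 1% prevalence, ~313k scans yield only ~3,130 positives.
print(expected_positives(0.01))   # 3130
# At an assumed 0.1%, just ~313 positives remain to train and validate on.
print(expected_positives(0.001))  # 313
```

Even with hundreds of thousands of scans, a finding present in one scan per thousand leaves only a few hundred positive examples, which is why smaller datasets cannot support training and validation for rare abnormalities.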

Ideally, Chilamkurthy says, the system could diagnose patients with head trauma faster so that patients in critical condition could be treated as soon as possible. The authors say that their automated system could also be useful in remote locations where a radiologist isn’t immediately available.

Chilamkurthy says the company is pursuing regulatory clearance from the US Food and Drug Administration for its automated system. Earlier this year, the company won approval in Europe to market its AI-based chest X-ray product, which can evaluate 15 different abnormalities.

Eric Oermann, a neurosurgeon at Mount Sinai in New York who recently published similar research4 on using AI to analyze CT head scans, says the biggest challenge of applying AI to medical scans is generalizability. “Images come from significantly different distributions between hospitals, and deep learning models easily over-fit to these local generators,” Oermann says. “Getting models that work everywhere is a notable and open problem.”

A clinical trial would be needed to determine whether the company’s triage system could improve radiologist efficiency and patient care. The real test for these algorithms will be in a real, prospective clinical environment, Topol says.

doi: 10.1038/d41591-018-00003-4


  1. Kocher, K. E. et al. Ann. Emerg. Med. 58, 452–462 (2011).

  2. Korley, F. K., Pham, J. C. & Kirsch, T. D. JAMA 304, 1465–1471 (2010).

  3. Chilamkurthy, S. et al. Lancet 392, 2388–2396 (2018).

  4. Titano, J. J. et al. Nat. Med. 24, 1337–1341 (2018).
