San Francisco

A simple improvement in the way health data are monitored for signs of a bioterror attack could speed up the process and cut the number of false alarms, says a team of specialists at Harvard Medical School.

The findings, reported online this week (B. Y. Reis, M. Pagano and K. D. Mandl Proc. Natl Acad. Sci. USA doi:10.1073/pnas.0335026100; 2003), are expected to influence the current US drive to improve early-warning systems for such attacks.

Public-health officials fear that biological attacks may not be recognized until it is too late to prevent casualties. Smallpox and anthrax, for example, start with 'flu-like symptoms, and the first victims are likely just to be sent home to rest.

Biodefence researchers have been looking at everything from patterns of hospital visits to sales of cough syrup. Dozens of systems are now being field-tested by state and local health departments across the United States. In theory, a sudden outbreak of disease, whether natural or deliberate, will register as a spike in the data, alerting health officials.

But the chief difficulty is separating this from day-to-day variation. “There are a lot of bumps and noise in public healthcare data,” says Ben Reis, a biosurveillance specialist with Harvard Medical School at the Children's Hospital in Boston. To prevent false alarms, most systems set the alert threshold so high that they risk missing the first signs of a real outbreak.

The standard approach is to forecast, one day at a time, the number of emergency cases that hospitals will have to deal with, based on historical data. Departures from the forecast send an alert to a regional epidemiologist for further investigation.
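
As a rough illustration only, not the published method, a day-at-a-time detector of this kind might be sketched in Python as below; the day-of-week mean forecast, the function name daily_alert and the threshold value are all hypothetical simplifications:

```python
# Illustrative sketch of a one-day-at-a-time alerting scheme.
# The forecast here is just a day-of-week historical mean; the systems
# described in the article may use more sophisticated time-series models.
from statistics import mean, stdev

def daily_alert(history, today_count, weekday, threshold_sd=3.0):
    """Flag an alert if today's emergency-visit count exceeds the
    historical forecast for this weekday by more than `threshold_sd`
    standard deviations.

    history: list of (weekday, count) pairs from past data
    today_count: observed visit count for today
    weekday: 0-6, Monday-Sunday
    threshold_sd: alert threshold (hypothetical value)
    """
    same_day = [c for d, c in history if d == weekday]
    forecast = mean(same_day)
    spread = stdev(same_day)
    return today_count > forecast + threshold_sd * spread
```

The high threshold in a scheme like this is what keeps day-to-day noise from triggering alerts, at the cost of sensitivity.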

Reis designed his system to look at the data a week at a time. He reasoned that the wider window would make it easier to disregard blips that might otherwise register as false positives. It should also spot rising trends earlier than the standard software.
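
Again as an illustrative sketch rather than the authors' algorithm, the week-long window might look like the following, with the simple trailing 7-day total and the lower threshold value assumed purely for the example:

```python
# Illustrative sketch of a week-at-a-time alerting scheme.
# Aggregating over seven days smooths out single-day blips, so the
# threshold can be set lower than in the daily scheme.
from statistics import mean, stdev

def weekly_alert(weekly_history, recent_counts, threshold_sd=2.0):
    """Flag an alert if the trailing 7-day visit total exceeds the
    historical distribution of 7-day totals by more than `threshold_sd`
    standard deviations.

    weekly_history: list of past 7-day visit totals
    recent_counts: visit counts for the most recent 7 days
    threshold_sd: alert threshold (hypothetical value)
    """
    week_total = sum(recent_counts)
    forecast = mean(weekly_history)
    spread = stdev(weekly_history)
    return week_total > forecast + threshold_sd * spread
```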

He tested the approach with emergency records from the Children's Hospital, which comprise the main complaint of every patient who checked in from 1992 to 2002, a total of more than 500,000 visits. Because there were no real outbreaks, Reis added simulated ones calculated to look like small or large releases of anthrax or smallpox.
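
One simple way to picture such an injection test (the actual anthrax and smallpox epidemic curves used in the study are not reproduced here) is to add an artificial ramp of extra visits onto the real baseline counts:

```python
# Illustrative sketch of injecting a simulated outbreak into real
# baseline counts for evaluation. The triangular shape and the size of
# the injected signal are assumptions for the example only.
def inject_outbreak(baseline_counts, start_day, peak_extra, duration):
    """Return a copy of baseline_counts with a triangular 'outbreak' of
    extra visits added, ramping up to roughly `peak_extra` at the midpoint."""
    counts = list(baseline_counts)
    for i in range(duration):
        day = start_day + i
        if day >= len(counts):
            break
        # ramp up to the midpoint, then back down
        frac = 1 - abs(i - duration / 2) / (duration / 2)
        counts[day] += round(peak_extra * frac)
    return counts
```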

The week-long average was able to reveal outbreaks that the one-day system missed, Reis and colleagues report, because its detection threshold could be set much lower without triggering false alarms.

Marc Overhage, an expert in healthcare informatics at the Regenstrief Institute in Indianapolis, says that false alarms need to be eliminated. Investigations into possible outbreaks are expensive, costing an average of $50,000, he says. Too many could render a system worthless. “We shouldn't build all these surveillance networks until we know they work,” he says. “Reis is the only one running through how to do it best.”