The US Environmental Protection Agency (EPA) is under fire. Its flagship Integrated Risk Information System (IRIS), which develops risk values for human chemical exposure that are used by regulators and others, is widely criticized as too slow and scientifically flawed. The system needs an overhaul.

Last year, for instance, the US National Academy of Sciences (NAS) castigated the EPA's inadequate assessment of the health risks of formaldehyde1. Evaluations of other chemicals, including dioxin, have been equally controversial2. In December 2011, Congress directed the agency to improve its risk assessments and submit documentation to the NAS for review (see go.nature.com/xmeqyv). But the problems go deeper than the IRIS process.

The town of Times Beach in Missouri was evacuated in 1983 and later demolished after a dioxin spill. Credit: B. PIERCE/TIME LIFE PICTURES/GETTY

Two main challenges render the EPA's risk assessments inadequate for decision-making. First, they take years or even decades to complete, which means that many chemicals have never been examined at all. Second, their scientific credibility is often challenged. Peer reviewers have questioned the EPA's selective use of data and some of the assumptions it has made to plug gaps in the scientific evidence. The NAS has recommended that the EPA better justify and quantify its risk-assessment assumptions.

As scientists who have served at the EPA (G.M.G.) and participated in NAS reviews (J.T.C.), we believe that more is needed. The agency needs to fundamentally alter its approach to risk evaluation. First, it should offer faster summaries for more chemicals. Rough-and-ready estimates are often sufficient for policy-making, and are better than nothing. IRIS should include information from private groups and other governments, and apply available techniques for calculating the risks of chemicals for which there are few data. Second, the EPA needs to acknowledge that its risk estimates are uncertain by reporting a range of plausible values, not just those that support its science-policy goals.

Rooted in the past

Attitudes towards environmental regulation have changed since the agency was founded in 1970. Less than a decade after Rachel Carson exposed the environmental damage caused by the pesticide DDT in her 1962 book Silent Spring, Americans wanting “freedom from risk”3 embraced government protection.

The EPA successfully addressed health threats posed by high-profile pollutants. The phase-out of leaded petrol, which the agency began in 1973, helped to reduce the level of lead in children's blood by nearly an order of magnitude over the decades that followed. Other agency regulations introduced in the early 1970s halved the levels of air pollutants such as sulphur dioxide and carbon monoxide.

'Count down': the number of IRIS risk assessments released each year. Source: EPA

By the mid-1990s, the most glaring environmental problems had been dealt with and the EPA's progress had stalled. Although IRIS now counts 557 finished risk assessments in its repository, releases in each year since 1995 have mostly been in single digits (see 'Count down'). Risk assessments have become mired in controversy and extended review cycles. Worse, the EPA prioritizes revisions to assessments of chemicals that it has already evaluated, such as dioxin and mercury4, rather than assessing crucial chemicals for the first time.

The slow pace of IRIS threatens public health. Many people might assume that chemicals lacking an IRIS risk estimate are safer than those that have been assigned one, even when that is not the case. For example, the EPA's assessment of perchloroethylene, used in dry cleaning, has encouraged a phase-out of the chemical. Some dry cleaners are switching to n-propyl bromide — for which there is no IRIS entry — despite evidence that it may pose a greater health risk than perchloroethylene5.

Other difficulties arise from EPA efforts to characterize risk at ever-lower exposure levels, at which health effects are hard to observe. Reliant on animal experiments, the agency resorts to two critical assumptions: that any adverse health effects seen in rodents are mirrored in humans, and that the high doses used in the lab (to see an effect using a reasonable number of animals) can be extrapolated downwards, often by orders of magnitude, to reflect human population exposures. As the NAS has pointed out, the EPA often fails to justify the data used or explain how risks were estimated at low levels1,2.
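To see how much leverage the second assumption carries, consider a purely hypothetical calculation under a linear, no-threshold extrapolation; the numbers below are illustrative only and are not drawn from any actual assessment.

% Illustrative linear, no-threshold extrapolation; all numbers are hypothetical.
% Suppose 10% of rodents develop tumours at a dose of 10 mg/kg per day.
\begin{align*}
  \text{slope factor} &\approx \frac{0.10}{10\ \text{mg/kg per day}} = 0.01\ \text{per mg/kg per day},\\
  \text{risk}(d) &\approx \text{slope factor} \times d,\\
  \text{risk}(0.001\ \text{mg/kg per day}) &\approx 0.01 \times 0.001 = 1 \times 10^{-5}.
\end{align*}
% A lifetime excess cancer risk of roughly 1 in 100,000 is thus assigned at a human
% exposure four orders of magnitude below the lowest dose at which any effect was
% observed; a threshold model fitted to the same data would assign essentially no
% risk at that exposure.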

In our view, the problem is the EPA's use of assumptions that it claims are “public health protective”, which err on the side of overstating risk when data are lacking. Take dioxin, for example. In its assessment, the EPA assumed the worst case — that low levels of dioxin cause cancer — because that possibility cannot be ruled out. Yet other agencies, including the World Health Organization6, interpret the biological studies of dioxin as suggesting that it is unlikely to cause cancer at low levels because of the way the chemical behaves within cells.

Such inflated risk estimates can lead to overly stringent regulations and can scramble agency priorities because the degree of precaution differs across chemicals. For example, the EPA's National-Scale Air Toxics Assessment from 2005 estimated a tenfold-higher cancer risk from outdoor air exposure to carbon tetrachloride (used in dry cleaning and as a solvent and refrigerant) than from ethylene dibromide (a termite fumigant and former additive in petrol). Yet by taking on board the biological evidence, other agencies around the world have concluded the opposite — that carbon tetrachloride poses little risk because, unlike ethylene dibromide, it has a threshold for its carcinogenic action.

The EPA intended that its air-toxicity results would help to set priorities for improving data in emission inventories, to target risk-reduction activities more effectively and to identify pollutants and industrial sources of greatest concern. But its aggressive use of precautionary assumptions, even when they are scientifically unwarranted, instead misleads decision-makers.

The way forward

To its credit, the EPA has committed to adopting the NAS recommendations, including streamlining presentation of its analyses, making its toxicity evaluations more uniform and incorporating multiple data sets7. To become fit for purpose again, the agency must change its view of risk assessment. It should not see assessments as a search for scientific truth, but as a way to bring available information to bear on regulatory and public-health decisions.

The EPA should expand IRIS to include sources of information that are not currently used, similar to the International Toxicity Estimates for Risk Assessment database (www.tera.org/iter). IRIS should report risk values developed by international public-health agencies, by other health agencies in the United States and by private groups.

The agency should integrate into IRIS information from its internal programmes, such as its Provisional Peer-Reviewed Toxicity Value database, which contains more than 300 rapid-risk estimates developed to inform clean-up decisions at hazardous-waste sites. These estimates draw on information of varying quality, such as short-term toxicity tests, expert judgements and statistical models that predict a chemical's behaviour on the basis of its structure. The associated uncertainties should be reflected in the IRIS entry.

In the longer term, the EPA should expedite its ongoing exploration of high-throughput screening methods. These can quickly ascertain a broad range of properties for a chemical, such as how readily it reacts with biological systems, and so help to evaluate potential health risks8. Once these methods, and an understanding of how they feed into risk estimates, are established, the information should be incorporated into IRIS.

Fundamentally, the EPA should replace risk values that are built on science-policy assumptions with risk estimates that acknowledge underlying uncertainties. For instance, the agency could follow the example of the Intergovernmental Panel on Climate Change9 and report a range of risks that correspond to different models. Users would then be able to see whether a value is sufficiently precise to support a particular course of action.
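A minimal sketch of what such reporting could look like is below. The dose-response models and every parameter value are hypothetical, chosen only to show how an entry might present the estimates implied by several plausible models side by side rather than a single number.

# Illustrative sketch: report the range of risk estimates implied by several
# plausible dose-response models, rather than a single "definitive" value.
# All models and parameters are hypothetical, not taken from any IRIS assessment.

def linear_no_threshold(dose, slope=0.01):
    """Linear extrapolation: excess risk proportional to dose at all levels."""
    return slope * dose

def threshold_model(dose, threshold=0.05, slope=0.01):
    """No excess risk below a threshold dose; linear above it."""
    return 0.0 if dose <= threshold else slope * (dose - threshold)

def sublinear_model(dose, slope=0.01, power=2):
    """Risk rises with the square of dose, falling below the linear line at low doses."""
    return slope * dose ** power

MODELS = {
    "linear, no threshold": linear_no_threshold,
    "threshold": threshold_model,
    "sublinear": sublinear_model,
}

def risk_range(dose):
    """Return the lifetime excess risk implied by each model at a given dose."""
    return {name: model(dose) for name, model in MODELS.items()}

if __name__ == "__main__":
    human_dose = 0.001  # mg/kg per day (hypothetical population exposure)
    for name, risk in risk_range(human_dose).items():
        print(f"{name:>22}: estimated lifetime excess risk {risk:.1e}")

Printing the spread, rather than collapsing it to one number, makes explicit which part of the answer comes from the data and which from the choice of model.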

Critics might argue that decision-makers will suffer 'paralysis by analysis' if confronted with a range of values rather than just one. Yet that is how it should be. The EPA's definitive values are illusions: they conceal uncertainty that cannot be resolved scientifically. Bringing conflicting value judgements into the open will enable honest debate and improve public health.