Last week, US Senator Frank Lautenberg (Democrat, New Jersey) introduced legislation to overhaul one of the key US chemical regulatory laws. The 1976 Toxic Substances Control Act (TSCA), which covers chemicals other than medicines, cosmetics and pesticides, essentially assumes that compounds introduced into the marketplace are safe until proven otherwise: potential risks to the environment or human health are acted on only if the Environmental Protection Agency (EPA) can uncover and prove them. The Lautenberg legislation, and complementary legislative language being drafted in the House of Representatives by Bobby Rush (Democrat, Illinois) and Henry Waxman (Democrat, California), would place the burden of proof where it belongs, requiring industry to provide a minimum set of safety data before releasing a new chemical into wide use. It would also give the EPA the authority to request additional data from manufacturers as it deems necessary. These changes would bring the TSCA more in line with Europe's Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) legislation from 2007 (see Nature 460, 1065; 2009).

This effort at legal reform has won applause from environmental, health and industry groups alike. If passed, the law would be a welcome step in the right direction. Although chemical companies are understandably concerned about the extra costs such testing would entail, the new framework should make navigating the regulatory waters considerably more straightforward.

But scientific reform is needed as well. For decades, regulatory bodies have relied on guideline studies conducted under nationally and internationally agreed standards known as Good Laboratory Practice (GLP), which governs how the studies are planned, performed, monitored, recorded, reported and archived. These standards are invaluable, providing a guarantee of reliability and cross-comparability for studies on chemical safety. But the glacial pace of consensus-building and validation required to update guidelines can leave gaping holes that allow the approval of chemicals of questionable safety.

A case in point is bisphenol A (BPA), a major component of polycarbonate plastics and resins widely used in consumer products, including food-can linings and baby bottles (see page 1122). Research in laboratories around the world has now produced many studies showing cognitive, developmental and reproductive effects associated with exposure to the chemical in lab animals. But because these effects seem to be triggered through BPA's hormone-mimicking qualities, and its long-term, epigenetic influence on gene expression, they can be considerably more subtle than the ones that guideline studies were designed to look for — those arising from more clearly toxic substances such as asbestos or thalidomide. Moreover, detecting BPA's effects generally requires cutting-edge biological techniques whose results, in the eyes of regulatory bodies, carry just a fraction of the weight of those produced by a GLP study.

This situation has to change. The scientists who develop these techniques need to put a high priority on validating and standardizing them in ways that make the results usable by regulators. And regulators need to find faster ways to get the new techniques incorporated into guideline studies. If they don't, keeping pace with the increasing speed at which chemicals are being developed and introduced will be impossible. Regulators are already overwhelmed by a backlog of consumer chemicals on which there are inadequate safety data. Yet now they are also having to grapple with nanomaterials and other novel compounds that need sophisticated science to evaluate, and sophisticated laws to regulate properly.

The new US legislation, although it may take some time to be enacted, could and should be shaped to encourage such changes. Regulators should not be expected to chase down every result produced by new and unproven methods. But they should be able to take new methods into account as rapidly as those methods can be validated.