Nearly one-third of junior scientists spend no time validating antibodies, even though accurate results depend on these reagents working as expected, according to the results of a survey reported today in BioTechniques1.

“This is quite alarming,” says Matthias Uhlén, a protein researcher at the Royal Institute of Technology in Stockholm who heads an international working group on antibody validation, but who was not directly involved in the survey.

Poorly performing antibodies can give false positive signals if they bind to proteins other than their intended targets, and false negative signals when they fail to bind to the correct protein. Such problems have led scientists and journals to retract papers, and have caused researchers to reach invalid and highly disputed conclusions.

Individual laboratories have reported that they have wasted years of work, thousands of human samples, and hundreds of thousands of dollars when antibodies did not work as expected. Yet more than half of the nearly 400 biomedical researchers who answered the survey’s online questions about how they assess antibodies said they had not received specific training on validation.

No time

The survey was carried out by the Global Biological Standards Institute (GBSI), a non-profit group in Washington DC devoted to improving the use of reagents in biomedical experiments. Last October, a separate GBSI survey showed that 52% of researchers failed to authenticate the identity of cell lines, which can easily become contaminated with alien fast-growing cells. By contrast, 70% of respondents in the more recent survey said that they validate antibodies purchased from commercial suppliers.

But Leonard Freedman, president of the GBSI, thinks that unvalidated antibodies probably represent a more significant problem. Because antibodies are so widely used in experiments, he says, the number of researchers who do not validate antibodies is probably much higher than the number who do not authenticate cell lines.

What’s more, the gap in practices between senior researchers and junior researchers was striking. Whereas 76% of researchers with more than 10 years’ experience reported validating commercial antibodies, only 43% of those with 5 or fewer years of experience did. The most common reason given was the time required to do so.

‘Two-headed monster’

Validating antibodies is more complex than cell authentication, says Freedman. Whether or not an antibody works depends on the particular assay. For example, some antibodies detect a protein in the ‘denatured’ state that is found in cell preparations, but not in the protein’s natural, folded state — or vice versa. And an antibody that functions well for one type of tissue or preparation might produce false signals in other contexts. One-third of respondents in the GBSI’s survey said that they did not apply different validation procedures depending on the assay.

Freedman says that reproducibility problems attributed to antibodies can be blamed on “a two-headed monster” of poor antibodies and poor training. Both are fuelled by lack of clear, commonly accepted guidelines about what is required to validate an antibody, and what information companies should supply about the reagent’s performance.

This is a problem that Freedman hopes will soon be solved. In September, his group is hosting a workshop at Asilomar in California, the site made famous for producing guidance on recombinant DNA. The meeting will bring together antibody suppliers and users, as well as funders and journals, he says. “We are going to lock the doors and not let anyone out until we have elucidated a set of practical, user-friendly validation standards.”