Chicago

Further evidence has emerged that money from the pharmaceutical industry is distorting the medical literature.

Meta-analyses — studies that combine the results from several trials — report more favourable results if they are sponsored by industry. The same effect had already been seen for clinical trials in several areas of medicine. But the new finding is even more worrying, say its authors, as policy-makers often give meta-analyses more weight than individual trials.

Bitter pill? Meta-analyses funded by drug companies are more likely to be positive.

The latest result comes from a study of 71 meta-analyses of hypertension medications published between 1996 and 2002. The data in the industry-sponsored analyses were no more or less positive than those in publicly funded ones. But 93% of the meta-analyses funded by a single drug company drew positive conclusions about the medications — compared with 79% of those from academic institutions.

Many of the industry-funded papers reached conclusions not justified by the data, says study author Veronica Yank, an expert on medical publishing at the University of Washington in Seattle. She estimates that little more than half of the industry-funded studies that reached a favourable verdict actually had the data to back it up.

“Conclusions in meta-analyses often spin the results to put them in a favourable light,” says Yank, who presented her results on 17 September at the Fifth International Congress on Peer Review and Biomedical Publication in Chicago, Illinois. And, she notes, “meta-analyses often surpass clinical trials in terms of influence on policy”.

“It's a marvellous study and very disturbing,” says Richard Smith, chief executive of the London-based healthcare company UnitedHealth Europe and a former editor of the British Medical Journal (BMJ).

Editors admit that the results should put journal staff and reviewers on alert. They point out that referees should pick up conclusions that go beyond the data. “This is a massive failure of peer review,” says Jeremy Theobald, an editor with the publishers John Wiley in Chichester, UK. Yank agrees: “I was embarrassed on behalf of the editors.”

Yank did not say where the meta-analyses were published. Some editors say that smaller journals, which lack both the staff to scrutinize referees' reports and a large pool of submissions to choose from, are more likely to accept flawed studies. Cathy DeAngelis, editor of the Journal of the American Medical Association (JAMA), says she is wary of industry-sponsored papers and always checks for bias, but questions whether all journals have the resources to do so.

Educating editors could tackle the problem, she suggests. Before the conference, JAMA paid for 20 editors from small journals, including some based in developing countries, to attend workshops on best practice in peer review.

Yank adds that journals can help by asking authors to place claims in the context of their data, and requiring them to own up to the limitations of the study.