Results are starting to come in from what may be the largest-ever study of the practice of peer review.

Three prestigious medical journals — The Lancet, Annals of Internal Medicine and the BMJ — threw open their doors and allowed the study team unprecedented access. Researchers were able to see all relevant paperwork, including confidential referee reports, and were permitted to film editorial meetings. The result is a data set that documents the passage of more than 1,000 papers through the peer-review process, from submission to publication or, far more often, rejection (see 'The data set').

Having cameras in meetings was a bizarre experience, says Richard Horton, editor of The Lancet; it felt, he recalls, as if the researchers were making a reality television show about medical journals. But Horton adds that the process was well worth it, as "this is a huge study, which makes the results very reliable".

Those results, the first of which were submitted for publication earlier this month, give a qualified thumbs up to current editorial practices. They may also go some way to dispelling widely held doubts about peer review.

One question investigated by the study authors, who are based at the University of California, San Francisco, is whether editors favour positive results over null findings. Many researchers do not attempt to publish negative results, assuming that editors are not interested. This is a particular problem in medicine, because the efficacy of a drug will be exaggerated if trials reporting negative results are not published.

But when the California researchers compared the 68 manuscripts that were published with those that were rejected, they found no evidence of bias towards statistically significant results. "Hopefully, this will encourage authors to submit," says Kirby Lee, an expert in evidence-based health care and an author on the study. "Publication bias is a serious problem; it can really skew results in meta-analyses."

Other findings also tend towards the quantitative (see ‘Dissecting peer review’). But peer-review experts say the qualitative parts of the study, which had to wait for transcripts of the videos, are likely to prove more interesting. “Peer review can be a complex decision-making process involving lots of people,” notes Sara Schroter, who studies journal practice at the BMJ Publishing Group in London. “If you want to understand why something happens it is best to conduct qualitative research.”

Lee says that the qualitative study should shed light on issues such as the criteria that editors use when deciding to review or reject articles, as well as how interactions at editorial meetings help shape decisions about publication. “We want to identify sources of systematic bias in the editorial review process that may result in a publication record that is not representative of the true distribution of study findings submitted to each journal,” he says.

Horton adds that the study could also help improve public understanding of peer review. The timing is good, he says, because questions about editorial standards are being asked in the wake of the scandal surrounding the South Korean stem-cell scientist Woo Suk Hwang, who published two widely acclaimed papers later found to have been faked. Horton says that some criticisms, such as failure to spot fabricated data, stem from a lack of understanding about what peer review can and cannot do.

“Peer review is a black box to the public and politicians,” he says. “Unless we open up that box we are going to get misperceptions.”