Peer review may be a good way to assess research papers, but it can fall short in ranking the journals themselves. That's the reaction of some metrics experts to the first journal rankings based on peer review, launched this week by the Faculty of 1000 (F1000) in London. Critics question the method, which relies on scores awarded to individual papers by the F1000 'faculty' of 10,000 scientists and clinicians. Such scores, they claim, could be skewed by the interests and enthusiasms of individual reviewers.

Richard Grant, associate editor of F1000, says that the rankings give authors a valuable measure, complementary to journal league tables based on citation impact. He says that the first F1000 rankings will be refined, adding that F1000 is "constantly striving to improve coverage in all specialities".

Created in 2002, the F1000 aims to filter the literature by asking experts to select noteworthy papers and rate them as 6 (recommended), 8 (must read) or 10 (exceptional). Now, it has extended the concept by totting up the scores of all a journal's rated articles over a given period, and normalizing the totals — adjusting for the total number of articles that the journal published over that period, for example.
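The article does not spell out the exact normalization F1000 uses, but a minimal sketch of this kind of calculation, assuming the journal score is simply the sum of its articles' ratings divided by the journal's total output over the same period, might look like this (the function name and the formula itself are illustrative, not F1000's published method):

```python
# Illustrative sketch only: the exact F1000 normalization is not given here,
# so this assumes a simple sum of ratings divided by total article output.
# Ratings used by F1000 reviewers: recommended = 6, must read = 8, exceptional = 10.

def journal_score(rated_article_scores, total_articles_published):
    """Sum the ratings a journal's rated articles received over a period and
    normalize by the number of articles the journal published in that period."""
    if total_articles_published == 0:
        return 0.0
    return sum(rated_article_scores) / total_articles_published

# Example: a journal with three rated papers (scores 6, 8 and 10)
# out of 200 articles published in the period.
print(journal_score([6, 8, 10], 200))  # 0.12
```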

The results put the usual suspects at the top. In the rankings for 2010, the latest full year available, Nature leads in biology and the New England Journal of Medicine in medicine. But further down the lists, the F1000 often departs from impact factors. "We're aware the correlation with impact factors isn't exact, and we wouldn't expect it to be," says Grant. The Proceedings of the National Academy of Sciences (PNAS), he adds, "does particularly well by our ranking because there are a lot of papers in there that are obviously valuable to the community".

But some critics say that the limited number of papers reviewed — fewer than 20,000 per year, of more than one million published — could compromise the rankings. "The scores may tell us as much about the composition of the F1000 faculty as they do about the relative quality of various journals," says Carl Bergstrom, a biologist at the University of Washington, Seattle, and an F1000 faculty member who publishes a rival metric, the Eigenfactor.

Philip Davis, a scholarly-publishing expert at Cornell University in Ithaca, New York, says that "a single enthusiastic reviewer could propel a small, specialist journal into a high ranking simply by submitting more reviews". One journal seems to owe its surprisingly high ranking to a series of very positive evaluations of its articles by its own editor.

Grant says that such a competing interest should have been declared, and that the F1000 will look into the matter. In the interim, the journal in question has been withdrawn from the rankings. The F1000 will also make its code of conduct more explicit, says Grant. He notes that all evaluations and methodology are available on the F1000's website, making the ranking process transparent and allowing users to alert the F1000 to any concerns.