Many science journalists rely on peer review to check that their stories are true.

There has been much gnashing of teeth in the science-journalism community this week, with the release of an infographic that claims to rate the best and worst sites for scientific news. According to the American Council on Science and Health, which helped to prepare the ranking, the field is in a shoddy state. “If journalism as a whole is bad (and it is),” says the council, “science journalism is even worse. Not only is it susceptible to the same sorts of biases that afflict regular journalism, but it is uniquely vulnerable to outrageous sensationalism”.

News aggregator RealClearScience, which also worked on the analysis, goes further: “Much of science reporting is a morass of ideologically driven junk science, hyped research, or thick, technical jargon that almost no one can understand”.

How — without bias or outrageous sensationalism, of course — do they judge the newspapers and magazines that emerge from this sludge? Simple: they rank each by how evidence-based and compelling they subjectively judge its content to be. Modesty (almost) prevents us from naming the publication graded highest on both counts (okay, it’s Nature), but some well-known names rank lower than they would like. Big hitters including The New York Times, The Washington Post and The Guardian score relatively poorly.

It’s a curious exercise, and one that fails to satisfy on any level. It is, of course, flattering to be judged as producing compelling content. But one audience’s compelling is another’s snoozefest, so it seems strikingly unfair to directly compare publications that serve readers with interests as different as those of, say, The Economist and Chemistry World. It is equally unfair to damn all who work on a publication because of a few stories that do not make the grade. (This is especially pertinent now that online offerings spread the brand and the content so much thinner.)

The judges’ criterion of evidence-based news is arguably problematic, as well. Many journalists could reasonably point to the reproducibility crisis in some scientific fields and ask — as funders and critics are increasingly asking — just how reliable some of that evidence truly is. Mainstream science reporters have typically taken peer review as an official stamp of approval from the research community that a published finding is sufficiently robust to share with their readers. Yet this kind of evidence-based reporting is only as reliable as the evidence it reports on. And many scientists would complain (even if only among themselves) that some published studies, especially those that draw press attention, are themselves vulnerable to bias and sensationalism.

This is one reason why the rise of the scientist (and non-scientist) as blogger, along with other forms of post-publication review, has been so valuable. Many scientists know about the problems with some fields of research. Many journalists do, too — articles on questionable practices from statistical fishing to under-powered studies are an increasing presence in most of the publications in the infographic. The relationship between science and media reporting is far from simple, and both sides should remember this.
