As the current headlines make all too clear, controversies over scientific conclusions in fields such as climate change can have the effect — deliberate or otherwise — of undermining the public's faith in science. As journals are an essential component of the scientific process, it perhaps makes sense to offer an explanation of how we pick research papers for publication in Nature, focusing on a number of false impressions that we have become aware of in and beyond the research community.

One myth that never seems to die is that Nature's editors seek to inflate the journal's impact factor by sifting through submitted papers (some 16,000 last year) in search of those that promise a high citation rate. We don't. Not only is it difficult to predict what a paper's citation performance will be, but citations are an unreliable measure of importance. Take two papers in synthetic organic chemistry, both published in June 2006. One, 'Control of four stereocentres in a triple cascade organocatalytic reaction' (D. Enders et al. Nature 441, 861–863; 2006), had acquired 182 citations by late 2009, making it the fourth most cited chemistry paper we published that year. Another, 'Synthesis and structural analysis of 2-quinuclidonium tetrafluoroborate' (K. Tani and B. M. Stoltz Nature 441, 731–734; 2006), had acquired just 13 citations over the same period. Yet the latter paper was highlighted as an outstanding achievement in Chemical & Engineering News, the magazine of the American Chemical Society.

Indeed, the papers we publish with citations in the tens greatly outnumber those with citations in the hundreds, although it is the latter that dominate our impact factor. We are proud of our full spectrum.

Another long-standing myth is that we allow a single negative referee to determine the rejection of a paper. On the contrary, there were several occasions last year when all the referees were underwhelmed by a paper, yet we published it on the basis of our own estimation of its worth. That internal assessment has always been central to our role; Nature has never had an editorial board. Our editors spend several weeks a year at scientific meetings and in labs, and are constantly reading the literature. Papers selected for review are seen by two or more referees, with more for multidisciplinary papers. We act on any technical concerns, and we value the referees' opinions about a paper's potential significance or lack thereof. But we make the final call on the basis of criteria such as the paper's depth of mechanistic insight, or its value as a data resource or in enabling applications of an innovative technique.

At the same time, we operate on the strict principle that our decisions are not influenced by the identity or location of any author. Almost all our papers have multiple authors, often from several countries. And we commonly reject papers whose authors happen to include distinguished or 'hot' scientists.

Yet another myth is that we rely on a small number of privileged referees in any given discipline. In fact, we used nearly 5,400 referees last year, and are constantly recruiting more — especially younger researchers with hands-on expertise in newer techniques. We use referees from around the scientifically developed world, whether or not they have published papers with us, and avoid those with a track record of slow response. And in highly competitive areas, we will usually follow authors' requests and our own judgement in avoiding referees with known conflicts of interest.

Myths about journals will continue to proliferate. We can only attempt to ensure that the processes characterized above remain as robust and objective as possible, in our perpetual quest to deliver to our readers the best science that we can muster.