Must try harder

Journal: Nature
Volume: 483
Page: 509
Year: 2012
DOI: 10.1038/483509a

Too many sloppy mistakes are creeping into scientific papers. Lab heads must look more rigorously at the data — and at themselves.

Science: Branch of knowledge or study dealing with a body of facts or truths systematically arranged. So says the dictionary. But, as most scientists appreciate, the fruits of what is called science are occasionally anything but. Most of the time, when attention focuses on divergence from this gold (and linguistic) standard of science, it is fraud and fabrication — the absence of facts and truths — that are in the spotlight. These remain important problems, but this week Nature highlights another, more endemic, failure — the increasing number of cases in which, although the facts and truths have been established, scientists fail to make sure that they are systematically arranged. Put simply, there are too many careless mistakes creeping into scientific papers — in our pages and elsewhere.

A Comment article on page 531 exposes one possible impact of such carelessness. Glenn Begley and Lee Ellis analyse the low number of cancer-research studies that have been converted into clinical success, and conclude that a major factor is the overall poor quality of published preclinical data. A warning sign, they say, should be the “shocking” number of research papers in the field for which the main findings could not be reproduced. To be clear, this is not fraud — and there can be legitimate technical reasons why basic research findings do not stand up in clinical work. But the overall impression the article leaves is of insufficient thoroughness in the way that too many researchers present their data.

“Handling corrections that have arisen from avoidable errors in manuscripts has become an uncomfortable part of the publishing process.”

The finding resonates with a growing sense of unease among specialist editors on this journal, and not just in the field of oncology. Across the life sciences, handling corrections that have arisen from avoidable errors in manuscripts has become an uncomfortable part of the publishing process.

The evidence is largely anecdotal. So here are the anecdotes: unrelated data panels; missing references; incorrect controls; undeclared cosmetic adjustments to figures; duplications; reserve figures and dummy text included; inaccurate and incomplete methods; and improper use of statistics — the failure to understand the difference between technical replicates and independent experiments, for example.
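
To make the last item concrete, consider a minimal Python sketch (hypothetical numbers, not drawn from any paper under discussion) of how mistaking technical replicates for independent experiments understates uncertainty: three independent experiments measured with four technical replicates each yield twelve readings, but only three independent data points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 3 independent experiments, each measured with
# 4 technical replicates (repeat readings of the same sample).
exp_values = rng.normal(loc=10.0, scale=2.0, size=3)                      # biological variation
replicates = [rng.normal(loc=m, scale=0.2, size=4) for m in exp_values]  # instrument noise

# Wrong: pool all 12 readings as if each were independent (n = 12).
pooled = np.concatenate(replicates)
sem_wrong = pooled.std(ddof=1) / np.sqrt(pooled.size)

# Right: average within each experiment first, then use n = 3.
exp_means = np.array([r.mean() for r in replicates])
sem_right = exp_means.std(ddof=1) / np.sqrt(exp_means.size)

print(f"SEM with replicates miscounted as independent: {sem_wrong:.2f}")
print(f"SEM from independent experiments only:         {sem_right:.2f}")
# The pooled error bar is roughly half the honest one, because the 12
# readings contain only 3 independent draws of biological variation.
```

The error bars shrink for free when n is miscounted; the biology has become no more certain.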

It is usually the case that the original data can be produced, mistakes corrected and the findings of the corrected paper still stand. At the very least, however, too little attention is being paid and there are too many corrections, which reflect an unacceptable shoddiness in laboratories that risks damaging trust in the science that they, and others, produce.

The situation throws up many questions. Here are three of them. Who is responsible? Why is it happening? How can it be stopped?

The principal investigators (PIs) of any lab from which the work originates, especially if their names are on the paper, have an absolute and unavoidable responsibility to ensure the quality of the data from their labs, even if the main work is done by experienced postdocs. Officially, postdocs and graduate students are still in training, and it is the PI's job to make sure they are properly trained — in statistics and appropriate image editing, for a start. It is unacceptable for lab heads — who are happy to take the credit for good work — to look at raw data for the first time only when problems in published studies are reported.

In private, scientists who run labs in even the most prestigious universities admit that they have little time to supervise and train all their students. Institutions such as the European Molecular Biology Laboratory in Heidelberg, Germany, have maximum lab sizes for this reason. Funding agencies should require grant applicants to indicate lab size and offer adequate supervision. As is the case in commercial companies, larger labs should introduce formal training and a management hierarchy, with more experienced postdocs and research associates required to sign off data and experiments if PIs cannot do so themselves.

What can journal editors and referees do? Sloppiness is sometimes caught, but so much must be taken on trust. Journals should certainly offer online commenting, so that alert readers can point out errors. Where comments or corrections appear in other journals, these should be linked from the original paper — as the Comment authors recommend.

There should also be increased scope to publish fuller results from an experiment, and subsequent negative or positive corroborations. There is an opportunity here for 'minimum threshold' journals, such as PLoS ONE and Scientific Reports. Editors and referees cannot be expected to divine when only positive data are included and inconvenient results left out, but journals should encourage online presentation of the complete picture. And scientists should offer it. The complete picture is, after all, what this science of ours strives to provide.

Comments

  1. Reto Muller said:

    And that is your explanation for cisplatin, from the 1970s, still being number one among the most widely used anti-cancer drugs? After we have been sequenced, and after all the billions put into applied cancer research? Missing references and bad stats? There wouldn't be something more fundamental going wrong, would there?

  2. Qinglong Zhao said:

    "Journals should certainly offer online commenting, so that alert readers can point out errors. "
    That will be very helpful. Journals should help academic communication, not only print the paper as they did a century ago.

  3. Stephan Lloyd Watkins said:

    What can you say? On one side, I do know that over the last 40 (or more) years some MDs and PhDs have left out of their published protocols, say, the one thing necessary to make them work (a proper buffer or something), to make their competitors waste six months getting them to work, though the competitors do get them to work in the end. As an example, a chemical fabrication technique published here a few years ago had several labs trying to reproduce the results, and only after a year was the magic ingredient for a 300-million-a-year fabrication industry figured out, leaving everyone a year behind.

    Now we also seem to be amassing the tertiary effects of such things, where a group of people has to preserve 20, 30 or 50 years of a norm, or of other published work that is already drawing the money. If new techniques and new research indicate that something in the far-off past was fabricated or wrong, but your funding (or even your whole department's funding) depends on it, then an entire research area can be wiped out while just trying to keep its product moving through the medical or other hurdles to the point of sale. Because applied scientific work spawns small or large industries, or at the least millions or billions in sales for a single product that employs a body of people, this in turn becomes a national-security issue for the government concerned, as it affects the well-being of its "citizens".

    As a purist, one could say there is no way to have real science and money hand in hand, especially when one is required to generate money with the science in question; by definition they are not the same. Now, though, we have an emerging dilemma in the realm mentioned above that I like to call "shooting yourself in the leg". When governments become involved with something like the drug industry, which has revenues greater than the yearly income of 80% of the world's countries, you end up with stagnation by one of twenty means. This stagnation, usually intended to protect a market or some area within a market, however small, ends up preventing the emergence of other markets, or of research that might lead to markets or products at some later point. As two-thirds of the Earth's countries actually saw an increase in yearly revenues over the past several years, the chances grow that somewhere else will simply say "screw it", do the research or make the products, and further damage an economy. It approaches a point where the richer governments have to actually use "force" to maintain market dominance.

    With all that in mind, try getting some individual at the lower end, a PhD student or a young scientist, to actually think beyond their pocketbook, as a scientist is supposed to, and to realize the magnitude of what they do and what their work can mean to society.

    And then we are all still using cisplatin, when there are ten other drugs, or even therapies, that might cure your specific cancer (of 4,335 or more types). Maybe we will all be able to fly to Vietnam, Brazil, Africa or Malaysia...

    Stephan Watkins

  4. M. Talha GOKMEN said:

    I wish I could comment on one particular paper, published in a well-known, high-impact-factor journal, to say that its results are certainly not reproducible, neither by me nor by another scientist working independently.

    The suggestions in this article are not bad. But we know that the problem is in the academic rewarding system. The more papers you have, the better scientist you are. So what do you do? Publish as much and as fast as you can...

    I am not in a position to judge this, but some PIs publish as much as one paper a week! Such a PI still travels to conferences, joins meetings, teaches... What is the limit?

  5. Jim Woodgett said:

    The issue with inaccuracies in scientific publication seems not to be major fraud (which should be correctable) but a level of irresponsibility. When we publish our studies in mouse models, we are encouraged to extrapolate to human relevance. This is almost a requirement of some funding agencies, and certainly a pressure from the press in reporting research progress: when will this enter the clinic? The problem is an obvious one. If the scientific community (most notably the biomedical community) does not take ownership of the problem, then we will be held to account. If we break the "contract" with the funders (a.k.a. taxpayers), we will lose not only credibility but also funding. There is no easy solution. Penalties are difficult to enforce because of the very nature of research uncertainties, but peer pressure is surely a powerful tool. We know of other scientists with poor reputations (largely because their mistakes are cumulative), yet we don't challenge them. Until we realize that doing nothing makes us complicit in the poor behaviour of others, the situation will only get worse. Moreover, this is also a strong justification for fundamental research, since many of the basic principles on which our assumptions rest are incomplete, erroneous or missing data. Building only on solid foundations was a principle understood by the ancient Greeks and Egyptians, yet we are building castles on the equivalent of swampland. No wonder clinical translation fails so often.

  6. Lynn Silver said:

    I agree that there are financial pressures and publish-or-perish pressures, but it seems to me that good old-fashioned scientific method and doing the right controls are being forgotten. Figuring out what controls to do, and designing experiments aimed at testing [and falsifying] a hypothesis, should be a major part of graduate science education. And yet, in many of the papers I referee and in papers in the literature, I see a lack of this basic capability. Better refereeing and editing are needed. The tendency to sloppy science will only increase with more non-peer-reviewed online publication.

    On the other hand....The publication of an experiment that succeeded once out of 6 times [and 0 out of 50 by the Amgen group] is getting pretty close to misconduct.
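
    To put a rough number on that last point: if each attempt has some chance of yielding a false-positive "success" (take 5% purely as an assumed illustration, not a figure from the comment or from the Begley and Ellis piece), the odds of at least one spurious hit grow quickly with repeated attempts. A short sketch:

    ```python
    # Probability of at least one spurious "success" in n attempts,
    # assuming (hypothetically) a 5% false-positive rate per attempt.
    p = 0.05
    for n in (1, 6, 50):
        print(f"{n:2d} attempts -> P(at least one chance success) = {1 - (1 - p) ** n:.0%}")
    # 1 attempt -> 5%; 6 attempts -> 26%; 50 attempts -> 92%.
    # One "hit" out of six tries is well within reach of noise alone.
    ```

    So reporting only the single success out of six attempts can easily amount to reporting noise.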

  7. H T said:

    Ahem, is everyone forgetting peer review? Yes, the same system that pains every paper submission, supposedly brings an air of legitimacy to papers, and is the line that separates biology papers from physics arXiv preprints. In better days, the aged peer-review system would catch honest errors and maintain the quality of publications, but how can it combat borderline fraud such as submitting the same manuscript to different journals until one accepts? The former problem is encouraged by the publish-or-perish system, while the latter is the work of scientists willing to exploit the publication system.

    While many researchers openly criticize the nonsensical and outdated patent system, many are willing to defend the peer-review system. And the reason? There is no "better alternative" available. Well, the pharmaceutical industry has awoken from that delusion. If this trend continues, we'll no longer submit papers to Nature or Science but to Amgen and Bayer.

  8. Mostly Anonymous said:

    I fail to understand how this problem comes as a surprise to anyone.

    Look around you: how many labs are run by a PI who hasn't touched a pipette in 10, 20 or 30 years? How many undergrads, grad students and postdocs are never given formal mentorship? How many PIs or senior lab members have the time to invest in providing any formal oversight of the trainees actually performing experiments and analyzing the data?

    The standard approach taken in most labs is:

    "Here's a bench and a computer, go figure it out. You're expected to do everything on your own since everyone else is busy and doesn't really have the time or interest to provide you with more than a couple minutes of actual help. Usually, you'll just be told to 'read the literature'.

    "Also, I would really prefer that you report positive data since my hypothesis/grant/paper depends on it. And you would really prefer positive data so that you can get a letter of recommendation/graduate/publish/keep your job. Did I mention that a failure to produce positive results will be frowned upon and probably indicates your overall failure as a human being? I'm sure you can have these experiments done by Friday, right?"

    If this is how a lab is run, questionable results are inevitable. Honest people make mistakes, and without proper oversight, mentorship and training, those mistakes will make it all the way to publication.

  9. B. B. Goel said:

    I have met many "senior" postdocs and grad students who say that senior members of their labs, including many PIs, consider them future competitors and potential threats. Even within labs, rivalry among members has increased significantly in recent times, and much of it is not healthy competition.

    Many PIs actively promote such conflicts to "get the best out of them", adding pressure to already almost inhuman working conditions and what USA Today has reported as some of the worst workplace bullying in the US. In this era of cut-throat competition for research jobs (in both academia and private companies) and for funding, many PIs and senior lab members deliberately recruit and/or promote mediocre postdocs whom they consider the least threatening, or the most likely to go back to their native countries. Reduced or non-existent mentoring can also be deliberate. The days of an honest desire to "do something for society and students" seem long gone; now "research" is a profession and "success" is the buzzword. Not many people have the time and/or the ability to understand the cost and the consequences in the long run. Who cares, so long as my job and my "success" are ensured by whatever means necessary!

  10. B. B. Goel said:

    I agree with H T (09:26 a.m.): "we'll no longer submit papers to Nature or Science but to Amgen and Bayer." Some people have already started doing just that, and a few more (mainly members of the general public and private-company executives) have started to believe that articles published in open-access newspapers and media outlets (e.g. the BBC, NPR, CNN, The Guardian) have more credibility than so-called peer-reviewed journals. They believe that more people, including the "subject-matter experts", have an equal opportunity to read and express views for or against any article published. In more extreme cases, they can take legal steps against the news group and/or the author over lies, distorted facts and other issues, which is not possible for a peer-reviewed article. That gives any article published in a general newspaper not only wider coverage among readers but also more credibility.

  11. Lilly Potter said:

    I am not surprised by the conclusions of Dr. Glenn Begley. For many different reasons, scientific fraud has reached endemic levels in academia, and it is about time that someone sounded the alarm.

    The key word in academic labs is "success", and "success" is defined by the number of times your name ends up in articles, regardless of how you got it there. Nobody cares any more about the manner in which the results are obtained.

    I also completely agree with B. B. Goel when he says that the human environment in which science is done in academia is completely insane, unhealthy and pathological. No mentoring, war among postdocs (I really mean wars, not competition...), vicious bullying, cheating, abuse of some lab members by golden kids, aggression over access to equipment, destruction of colleagues' data, etc. I am a scientist and I have worked in several countries in America and Europe, and these behaviours can be observed not only in the USA but also in some European countries. And the worst of it is that these behaviours are encouraged. To become a scientist you need not be smart or have good critical thinking; you must be strong, tough and have very great political abilities, that's all. So, one piece of advice for young people who want to do scientific research: "it is not necessary to read Darwin's On the Origin of Species and to think about it...; instead, watch Rambo movies".

    Very, very sad....

  12. Anita Bandrowski said:

    Wow, this is a really great commentary on the state of science in 2012.
    I agree with the sentiment of many people on this discussion board, the authors of this article and the editors of Nature that we need to do something about this.

    However, I would like to point out that publishing in Nature actually encourages this sort of reckless behavior in science. How are we to publish good science when the methods section is shortened to the point where a reader must find the methods in a paper published elsewhere by the same group? In many cases the methods are moved off to some completely opaque supplementary section and are not peer reviewed in the same way as the main text. Why is it surprising that this sort of "science" is not reproducible?

    In our analysis of the Journal of Neuroscience, fewer than 10% of the antibody reagents reported in a 2010 issue were identified with catalog numbers. Much the same is true for almost all other journals, except for the Journal of Comparative Neurology (JCN), where over 90% of the commercial antibodies were identified by clone IDs and catalog numbers.

    One should consider that:
    1. antibodies are often the critical reagent that establishes a set of conclusions, and
    2. antibody companies buy and sell product lines at such alarming rates that seeing the same reagent at a company from month to month is a relative rarity.
    Having no catalog number for an antibody is equivalent to saying "I obtained a rodent from PetSmart [headquartered in Phoenix, AZ]". Why are editors not up in arms about this? I have no idea, but it is certainly interesting that the journal with the most stringent guidelines for reporting methods, JCN, also has one of the lowest retraction rates (personal communication, C. Saper).

    Why does Nature, with its 'concerned staff', not take a stand and actually change its policies on the reporting of methods? I am positive that authors who are trying to publish good science in Nature will be able to provide simple things such as reagent catalog numbers, animal identifiers, links to supplementary data deposited in a curated repository (where both positive and negative data should be deposited) and other such relatively trivial things.

    On a side note, I work on the Neuroscience Information Framework project and the antibody registry, so my opinions are both informed and biased, being those of someone attempting to make sense of the biomedical literature and data on a daily basis. I applaud loudly both Nature, for taking this paper, and anyone attempting to reproduce results, because this is a part of science that is too often underappreciated.

  13. Jim Smith said:

    Journals have their part to play in this too. Many of us will have been told by an editor: "We'll publish your paper if you can show [some remarkable result or other]." It's not the job of editors to tell nature (lower-case n) how to behave, and while scientists should not be so weak as to tweak their results in any way, neither should they be pressured to do so.

  14. Michael Lichten said:

    Good idea. Nature can take the lead by requiring authors to include complete blots, numbers to back up "representative images", etc. in their articles. These things are often sorely lacking.

  15. Kevin McLure said:

    I recall Tom Curran telling the story of publishing his paper on growth factors regulating fos expression in Nature [1984, 312(5996): 716-720]. The key result was observed in only a fraction of the experiments, which he wanted to state, but the editor asked that only the positive data be mentioned, to present a clear story. Tom stuck to his guns and still noted in the text what fraction of experiments replicated the result. Why don't editors encourage such honest discourse, or even require it for each experiment?

  16. Tom Curran said:

    Kevin,
    I am surprised you remember my anecdote! You are correct, the paper was published in Science (Sonnenberg et al., 1989, Science 246; 1622-1625) and, following the published guidelines regarding variability of results, we included the comment that "the data reported here have been obtained in at least three independent experiments; however, on several occasions no transactivation was detected." The Editor requested that we move the statement from the body of the text into the references and notes section (it is note 16). In discussions with many postdocs working in the field, though not their PIs, I learned that all had encountered this variability but chose to disregard the results when "the experiment didn't work". Actually, I still can't fully explain the variability except that we now know that transcription regulation is a lot more complicated than a couple of proteins binding to a little piece of DNA.

    I have continued to tilt at these windmills but to little avail (see Curran EMBO Mol Med 2010, 2; 386-7). I read the commentaries on this article due to my continuing interest. I am afraid I was a bit reminded of the Scottish Bard's immortal work "Holy Willie's Prayer" (http://www.robertburns.org.uk/Assets/Poems_Songs/holy_willie.htm)

  17. Jukka Westermarck said:

    A commenting tool attached to an article on the journal's pages is a good idea, but if you need to reveal your identity, nobody will comment, for fear of getting into trouble. One very good way to check the reliability of your own data is to ask somebody else in the lab (a technician) to sneak into the project and repeat some key findings. Even better is when some experiments are done in a collaborator's lab. My own lab is so small that we always have several collaborating labs involved, and if somebody in another country can see the same results as we did when following our protocol, I can be sure that at least that piece of data can be counted on. Maybe journals could ask not "who wrote the paper?" but "which participating lab did which experiment?"

  18. Tom Curran said:

    Kevin,
    Your memory is better than mine. I now recall the conversation relating to the 1984 Nature paper. Originally the paper included a very nice experiment conducted by Rodrigo Bravo showing that the induction of fos expression by growth factors was not necessarily linked to cell-cycle progression, as the same immediate-early gene response was observed in variant cell lines whose growth was inhibited by growth-factor treatment. A Nature editor required us to remove these data because they added complexity. The data were eventually published elsewhere (Bravo et al., 1985, EMBO J 4; 1193-1197), but it took quite a while, and a lot of effort, to dispel the notion that the cellular immediate-early gene response was a cell-cycle event.

  19. Jim Caryl said:

    I spent three frustrating years working as a bionanosciences postdoc at a time when bionanoscience was very much a blue-skies, nascent field. I identified numerous nanotechnologies that could be adapted to the new and interesting use we wanted to put them to, but I initially began by trying to repeat the authors' own findings, with their own controls and a few of my own. If my postdoc work had been about debunking the work of others, I would have had plenty of substrate; however, my postdoc was to create something new, and that was rendered impossible by the poor reproducibility of all but a very few studies.

    This much becomes clear when you realise that the lab publishing the work has done nothing further on that line of investigation – they've realised it themselves, but at least got their hit. I got a number of my own things to work in the same way, but the reproducibility was also terrible. I could get something to work with a different approach, once, then find the same reproducibility issue. I was later to learn that – at the time I was working in the field – everyone was finding this a headache. Being an idealist, I was not willing to put my name to something that was not reproducible – which is why a journal of negative results would have been useful.

    But I wonder whether the authors of those papers, whose work I and the other researchers on our grant were unable to reproduce, knew their work was not reproducible before publishing their one and only paper on the subject. I have to imagine they did, or at least that the work was never tested by an independent researcher within their own labs. This discovery, along with the realization that this occurs rather more often than I could have believed, has left me with a bitter taste ever since.

    Fundamental, blue-skies research is a necessary part of science and has to be done, but getting early-stage researchers to work on such endeavours is a double-edged sword. There's plenty of 'room' to publish, but frankly you may find yourself banging your head against the wall of a field with a limited knowledge base. None of this counts towards your permanent record of publications, so it's difficult to communicate the many and creative ways you tried to address problems, albeit fruitlessly. It's useful information for your own lab, and a cautionary tale for the researchers who follow you, but it's a career killer for that first set of researchers. Everyone knows this is how science works; it's just a shame that, under the current system, there's no room for failure in a field of endeavour in which failure is the bread and butter of the daily grind!

  20. Vadim Volkov said:

    Recent indications that up to 90% of publications in cancer research may be incorrect (Nature 483, 509; 2012, doi:10.1038/483509a) pose serious questions about scientific ethics, the research community and funding. They coincide with a drop in the efficiency of research and development in the pharmaceutical industry, where expenses per new drug have risen roughly 80-fold since 1950 (Nature Reviews Drug Discovery 11, 191-200; 2012, doi:10.1038/nrd3681). This might be one reason why money and production are moving out of developed European countries to growing economies with cheaper and less-regulated labour.

    Several measures could relieve the problem: publishing negative results; weakening the strong correlation between career growth and publication activity (especially for younger scientists in biomedical research); and providing students with more opportunities to express opinions and share results. The whole community loses more when hasty, incorrect results are published.

    From another point of view, it might be reasonable to reclassify the large part of cancer research that is not related to medicine as animal/mouse and cell-science research, and to pour fresh blood from practical medicine into the rest of the field.

  21. Alexander Stern said:

    It's actually interesting to see how few really profound works can be found in the journals today; Nature is no exception. It seems that labs should make sure they pass their work through one or two additional independent scientists in the same field, so that the work is verified for correctness and completeness before publication is even attempted. Otherwise we get work that is clearly "home-made" just to prove other scientists wrong, like the studies that claimed to prove Dr. Simeons's diet wrong and managed to prove it to the FDA without much verification from the scientific community.
