The treatment of patients should be underpinned by principles and practices derived from dependable high-quality health and medical research. Much of this research is funded by government through taxes or by philanthropic support. There is a justifiable expectation that science will provide new advances which benefit people with spinal cord injury. Tethered to this is an expectation that researchers themselves should provide high-quality and replicable findings. The number of researchers is growing, probably exponentially, together with the number of peer-reviewed publications, a common measure of scientists' stature.
Amid the growth in publication numbers are disturbing trends affecting all types of medical research, including research in spinal cord injury. First, there is increasing attention on pervasive flaws in methodology. These range from the failure of many models of neurological diseases, the lack of robustness of biomarkers, and contamination of neural cell lines, to broader issues related to the poor reliability of published research findings (especially when participant numbers are low). Not only has it been argued that most published research findings are false, but statistical power is commonly low in the biomedical and clinical sciences. Perhaps not surprisingly then, the rate of translation of new findings to the clinic is slow and problematic. Second, the number of papers retracted from the peer-reviewed literature is also increasing. Third, there is an over-reliance on a scientist's publication metrics (numbers, journal impact factors, citation numbers) for progression, promotion, prizes, and research grants. Indeed, gaming the metrics of science is an occupational requirement for scientists, journal staff and university administrators. Publications now contain more spin (reliance on findings which are not justified by the statistics) and more liberal use of words such as 'novel'. These trends are driven by an unhealthy culture in which it can be more important to publish a result than to publish a correct result [6, 7]. The trends also expose deep flaws in the current systems of peer review.
The term "scientist" was introduced by William Whewell, a Cambridge theologian and polymath, at a meeting of the newly formed British Association for the Advancement of Science in 1833. This was when science, and its growing subspecialties, were pursued by people who nearly always had no need to work for an income. Publication and promotion of results were a much slower and more gentlemanly affair. Not so now. Now the factory of new results is fueled not only by the need to publish new papers but also by the necessity to survive as an employed scientist. That's an unmentionable conflict of interest! What are some consequences?
This research culture can lead to cost- and corner-cutting, with hasty publication of irreproducible results and poor-quality work—it's an era in which scientists can fall prey to the temptation to do whatever they can get away with in order to publish. This leads to scientific misconduct, commonly defined as 'fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results'. A well-known recent case is Professor Paolo Macchiarini at the Karolinska Institutet. He was found guilty of over-hyping life-saving outcomes in patients who received his synthetic tracheas. There are many others. Cases which reach the public arena are a small fraction of those investigated, and these in turn are a fraction of actual cases of scientific misconduct. New data point to the scale of dodgy science. For example, 3.8% of papers contain suspicious images, with many likely to have been manipulated deliberately. Subsequent studies predicted that as many as 35,000 published papers are candidates for retraction just for image duplication. Furthermore, it is likely that most researchers are well aware of what are politely called 'questionable' research practices.
Evidence for the premeditated nature of scientists' misbehavior comes not just from their admission of performing questionable research practices but from their near unanimous view that such practices (e.g., deletion of outliers, searching for significant probability values) should be disclosed in publications, when in fact no such admissions appear. If the herd can get away with it, then join the herd!
One vexed issue is what should be done about the investigation of claims of research misconduct. More than 20 European countries, the UK, USA, Canada, and others have national offices for research integrity. They have variable responsibilities for assessment of scientific misconduct. In the last few weeks we have seen national governments take quite divergent positions. At one extreme, in China, the State Council and Communist Party are implementing a broad crackdown on scientific fraud and misconduct. Scientific misconduct will now be investigated independently by the science ministry. At the other extreme, the Australian government has gone soft in a newly released code of research conduct (and guide), under which an institution can simply choose not to use the term 'scientific misconduct' and can now deal with claims completely in-house, with no independent external investigators. Between these extremes, the UK government has released a parliamentary report into 'Research Integrity' which considers the problems of institutional secrecy and non-disclosure in misconduct investigations. It reveals that most universities have not signed on to the UK Research Integrity Office. However, while the report proposes establishment of a new overarching body, it stops short of mandating an external investigative process that is independent of the universities and free from conflicts of interest.
Pure self-regulation by the universities in dealing with cases of potential scientific misconduct is doomed to fail—look at the manifest failures of this form of regulation by religious organizations. We should no longer tolerate misconduct investigations that take years, that are conducted in secret, and whose outcomes are not reported to the public. After all, it is society that pays for the research and that should ultimately benefit from its findings. So, allegations of scientific misconduct and fraud should be addressed from the start by an external and independent inquiry, preferably organized at a national level. Given that society demands science be highly credible, we need better research governance. With this, the move to 'open' science should be accompanied by more open investigation of scientists.
1. Sarewitz D. The pressure to publish pushes down quality. Nature. 2016;533:147.
2. Ioannidis JP. Why most published research findings are false. PLoS Med. 2005;2:e124.
3. Ioannidis JP. Why most clinical research is not useful. PLoS Med. 2016;13:e1002049.
4. Stern AM, Casadevall A, Steen RG, Fang FC. Financial costs and personal consequences of research misconduct resulting in retracted publications. eLife. 2014;3:e02956.
5. Chiu K, Grundy Q, Bero L. 'Spin' in published biomedical literature: a methodological systematic review. PLoS Biol. 2017;15:e2002173.
6. Smaldino PE, McElreath R. The natural selection of bad science. R Soc Open Sci. 2016;3:160384.
7. Boulbes DR, Costello T, Baggerly K, Fan F, Wang R, Bhattacharya R, et al. A survey on data reproducibility and the effect of publication process on the ethical reporting of laboratory research. Clin Cancer Res. 2018;24:3447–55.
8. Bik EM, Casadevall A, Fang FC. The prevalence of inappropriate image duplication in biomedical research publications. mBio. 2016;7:e00809-16.
9. Bik EM, Fang FC, Kullas AL, Davis RJ, Casadevall A. Analysis and correction of inappropriate image duplication: the Molecular and Cellular Biology experience. Mol Cell Biol. 2018. https://doi.org/10.1128/MCB.00309-18.
10. Héroux ME, Loo CK, Taylor JL, Gandevia SC. Questionable science and reproducibility in electrical brain stimulation research. PLoS One. 2017;12:e0175635.
11. Cyranoski D. China introduces sweeping reforms against misconduct. Nature. 2018;558:171.
12. Vaux D, Brooks P, Gandevia S. Weakened code risks Australia's reputation for research integrity. The Conversation. 28 June 2018.
13. House of Commons Science and Technology Committee. Research Integrity. Sixth Report of Session 2017–19. 26 June 2018.
SG is funded by the National Health and Medical Research Council (of Australia).
Conflict of interest
The author declares that he has no conflict of interest.
Gandevia, S. Publication pressure and scientific misconduct: why we need more open governance. Spinal Cord 56, 821–822 (2018). https://doi.org/10.1038/s41393-018-0193-9