The treatment of patients should be underpinned by principles and practices derived from dependable, high-quality health and medical research. Much of this research is funded by government through taxes or by philanthropic support. There is a justifiable expectation that science will provide new advances that benefit people with spinal cord injury. Tethered to this is an expectation that researchers themselves should provide high-quality and replicable findings. The number of researchers is growing, probably exponentially, together with the number of peer-reviewed publications, a common measure of scientists’ stature [1].

Amid the growth in publication numbers are disturbing trends affecting all types of medical research, including research in spinal cord injury. First, there is increasing attention to pervasive flaws in methodology. These range from the failure of many models of neurological diseases, the lack of robustness of biomarkers, and the contamination of neural cell lines, to broader issues related to the poor reliability of published research findings (especially when participant numbers are low). Not only has it been argued that most published research findings must be false [2], but statistical power is also commonly low in the biomedical and clinical sciences. Perhaps not surprisingly, then, the rate of translation of new findings to the clinic is slow and problematic [3]. Second, the number of papers retracted from the peer-reviewed literature is also increasing [4]. Third, there is an over-reliance on a scientist’s publication metrics (paper counts, journal impact factors, citation counts) for progression, promotion, prizes, and research grants. Indeed, gaming the metrics of science has become an occupational requirement for scientists, journal staff, and university administrators. Publications now contain more spin (claims that are not justified by the statistics) and more liberal use of words such as ‘novel’ [5]. These trends are driven by an unhealthy culture in which it can be more important to publish a result than to publish a correct one [6, 7]. They also expose deep flaws in the current systems of peer review.

The term “scientist” was introduced by William Whewell, a Cambridge theologian and polymath, at a meeting of the newly formed British Association for the Advancement of Science in 1833. At that time, science and its growing subspecialties were pursued almost exclusively by people with no need to work for an income. Publication and promotion of results were a much slower and more gentlemanly affair. Not so now. Today the factory of new results is fueled not only by the need to publish new papers but also by the necessity to survive as an employed scientist. That’s an unmentionable conflict of interest! What are some consequences?

This research culture can lead to cost- and corner-cutting, with hasty publication of irreproducible results and poor-quality work. It is an era in which scientists can fall prey to the temptation to do whatever they can get away with in order to publish. This leads to scientific misconduct, commonly defined as ‘fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results’. A well-known recent case is that of Professor Paolo Macchiarini at the Karolinska Institutet, who was found guilty of exaggerating the supposedly life-saving outcomes of patients who received his synthetic tracheas. There are many others. Cases which reach the public arena are a small fraction of those investigated, and these in turn are a fraction of actual cases of scientific misconduct. New data point to the scale of dodgy science: for example, 3.8% of papers contain suspicious images, many of which appear to have been manipulated deliberately [8]. Subsequent studies estimated that as many as 35,000 published papers are candidates for retraction for image duplication alone [9]. Furthermore, it is likely that most researchers are well aware of what are politely called ‘questionable’ research practices [7].

Evidence for the premeditated nature of scientists’ misbehavior comes not just from their admissions of performing questionable research practices but also from their near-unanimous view that such practices (e.g., deletion of outliers, searching for significant probability values) should be disclosed in publications, when in fact no such disclosures appear (e.g., [10]). If the herd can get away with it, then join the herd!

One vexed issue is what should be done about the investigation of claims of research misconduct. More than 20 European countries, the UK, the USA, Canada, and others have national offices for research integrity, with variable responsibilities for the assessment of scientific misconduct. In the last few weeks, national governments have taken quite divergent positions. At one extreme, in China, the State Council and Communist Party are implementing a broad crackdown on scientific fraud and misconduct [11]; scientific misconduct will now be investigated independently by the science ministry. At the other extreme, the Australian government has gone soft in a newly released code of research conduct (and accompanying guide), under which an institution can simply choose not to use the term ‘scientific misconduct’ and can deal with claims entirely in-house, with no independent external investigators [12]. Between these extremes, a UK parliamentary report into ‘Research Integrity’ considers the problems of institutional secrecy and non-disclosure in misconduct investigations [13]. It reveals that most universities have not signed up to the UK Research Integrity Office. However, while the report proposes the establishment of a new overarching body, it stops short of mandating an external investigative process that is independent of the universities and free from conflicts of interest.

Pure self-regulation by the universities in dealing with cases of potential scientific misconduct is doomed to fail: witness the manifest failures of this form of regulation by religious organizations. We should no longer tolerate misconduct investigations that take years, that are conducted in secret, and whose outcomes are never reported to the public. After all, it is society that pays for the research and society that should ultimately benefit from its findings. Allegations of scientific misconduct and fraud should therefore be addressed from the start by an external and independent inquiry, preferably organized at a national level. Given that society demands that science be highly credible, we need better research governance. With this, the move to ‘open’ science should be accompanied by more open investigation of scientists.