All researchers (including Editors) would like to think they are impartial scientists who are immune to biased thinking, yet they may well believe that others are less resistant to such biases. That is, we tend to have a “bias blind spot” for our own cognitive biases but are much better at detecting biased thinking in others [1]. This and many other insights from cognitive science have persuaded a growing number of researchers that a better understanding of how people think is needed if we are to reduce the level of bias in research [2]. This is important because biased research wastes money, effort, and the time of researchers and participants; more importantly, it moves us away from finding answers for people with SCI and their families. Below are a few examples of our hardwired and subconscious tendencies to introduce bias into our work and thinking. These examples are supported by many decades of research in cognitive science.

The term “heuristic”, as used in cognitive psychology since the 1970s, describes decision-making processes in our brain that are automatic, often subconscious, and allow for faster, more energy-efficient decisions that, in general, help us deal effectively with the world around us [3, 4]. One relevant example can be seen when we automatically accept the first explanation that comes to mind, called by some the “take-the-first heuristic” [4]. The first explanation that comes to mind will be the one most easily and quickly retrieved; hence, when a researcher sees the results of a study, the first explanation will likely be in terms of their own hypothesis or the causal mechanisms they imagine would have taken place. On one level it seems reasonable to accept an explanation that aligns with our understanding of scientific “facts” and the world. It becomes a problem, however, when we do not consider that the results of a study may also support a second, or even a third, possible explanation. Such an alternative explanation might, for example, relate to a confounder in a cohort study that was not measured yet may have distorted the result; or the bias that arises when participants who responded poorly to the treatment are more likely to be lost to follow-up and so are not included in the analysis; or a measurement bias from unblinded outcome assessors unintentionally interviewing participants differently depending on the treatment they received. For studies of interventions, or observational studies of factors that might increase risk, constructing a causal diagram is one method that can help us explore alternative explanations.

Another inclination relevant to science has been called “myside bias”: a concept similar to, but not quite the same as, confirmation bias, and defined as a tendency to look for and interpret evidence in a way that supports our prior beliefs [5]. It can be seen when we unconsciously hunt for “evidence” to justify our opinions instead of trying to determine the truth. One example of myside bias is when the results of a high-quality clinical trial contradict our beliefs and opinions, but instead of accepting the possibility that we may have been wrong, we try to find reasons to show the trial was flawed. It is always wise and reasonable to question the results of any trial, particularly if the results are out of line with previous research or otherwise unexpected. But this becomes problematic when we fail to accept what mounting evidence might be telling us. Myside bias partly explains why it is often difficult to change entrenched clinical practice when the evidence suggests that what we are doing may not be effective, or that we may get better results if we did things differently.

Myside bias possibly arises from the fundamental human motive or drive that psychology calls the “desire for status” [6]. Everything we do, say, and write that is viewed by other people can potentially alter how we are seen by them, and these evaluations form our reputation. And if we cannot justify our beliefs to others, then our reputation will suffer. So, in an attempt to improve, or at least maintain, our reputation, we seek “evidence” (however minimal and tangential) that supports our opinions [5]. While differences in status among colleagues are mostly implicit and informal [6], some explicit indicators include academic rank, publications, citation counts, and personal wealth. These indicators are thus potential targets for conflicts of interest, which may offer a financial, professional, or reputational benefit if a researcher supports a certain belief. We are rightfully less forgiving about these types of “myside biases” because they are not subconscious but rather a semi-deliberate, self-serving act that corrupts science and the truth. But on a positive note, people working in health and medical research are also often heavily influenced by another fundamental human motive: compassion [2].

Lastly, there is also strong evidence for the existence of a “causality heuristic” [2, 3, 7], whereby we automatically search for a causal explanation of the events we see, often as a component of a causal sequence or story. This comes in part from our innate need to understand the world in terms of cause and effect. While the causality heuristic is advantageous most of the time, it can easily lead to a belief in a causal relationship that lacks empirical support, particularly in science and related disciplines like health services. For example, we may assume that one thing causes another simply because the two variables appear to be associated and one precedes the other. Of course, strong methodology will go some way towards addressing these issues. However, there needs to be a widespread understanding of our automatic tendency to assign causal explanations to observations hastily and without good research design. An increased awareness of the “causality heuristic” might make it easier for people to question their own beliefs and assumptions, and to be more open to others questioning their pet theories.

Much of research methodology exists to protect us from ourselves and to try to ensure that research provides unbiased answers to our questions. For example, blinded outcome assessment in clinical trials is a strategy to minimise observer bias by preventing observers from being influenced by their expectations or other factors [8], with myside bias one of the underlying targets. Some have argued that perhaps the best way to combat all forms of bias in original research and reviews is to have some form of “adversarial collaboration” [2, 9]: that is, to include in your research team people or even groups who in some way stand to benefit if they can spot any bias in your part of the project, and vice versa. Similarly, some advocate that those best suited to conducting systematic reviews are those with no pre-existing opinions about the treatments being assessed.

Spinal Cord encourages all readers and researchers to better understand their own biases, to be open to the possibility that bias can creep into their work and the work of others, and to do everything possible to guard against such biases through strong methodology.