Some within the spinal cord injury (SCI) research community are criticising randomised controlled trials (RCTs) and advocating alternative research methodologies to answer questions about treatment effectiveness and to guide practice. The underlying objection, whether stated or not, is that RCTs frequently fail to demonstrate treatment effectiveness. If RCTs consistently confirmed that the treatments we believe in were effective, then most researchers and clinicians would be happy to acknowledge that RCTs provide the best quality evidence and should inform treatment decisions. However, RCTs in the area of SCI are often inconclusive or demonstrate that treatments are ineffective.1 This frustrates researchers, clinicians and people with SCI, fuelling antagonism towards RCTs. Yet the problem is not RCTs. If RCTs throw doubt on the effectiveness of interventions, then perhaps we should be questioning the interventions rather than the research methodology.

Critics of RCTs often argue that they were designed for testing drugs, not for testing complex interventions administered to diverse populations such as patients with SCI. There are two parts to this criticism: one pertaining to the complexity of the interventions and the other to the heterogeneity of people with SCI.

The first concern rests on the erroneous assumption that interventions tested in RCTs must be strictly controlled and regimented. This might be the case for explanatory RCTs designed to determine the efficacy of a drug, but it is not the case for pragmatic RCTs designed to determine the effectiveness of an intervention as it is applied in the real world.2, 3 It is quite usual in pragmatic RCTs to allow some flexibility in the administration of an intervention and to allow clinicians to tailor the intervention to individual patients, provided this reflects clinical practice and provided the researchers make it clear how the intervention was tailored.

Neither does the heterogeneity of people with SCI constitute a reason to abandon RCTs. Heterogeneity is not a problem unique to SCI.2 People with all types of health conditions have widely varying presentations. This can generate noise in RCTs because participants with certain traits might respond to treatment differently from others. Consequently, estimates of treatment effects from RCTs may be imprecise unless the trial is very large. We can limit this problem by restricting the inclusion criteria of our RCTs to target those most likely to benefit. This is a reasonable approach provided the inclusion criteria do not become so restrictive that the results have little relevance to the majority (that is, limited external validity) and provided we can identify those most likely to benefit.2, 3 If we cannot, then the inclusion criteria should reflect those in whom the intervention is typically administered in clinical practice. Alternatively, we can statistically adjust for strongly predictive covariates. If these approaches still yield imprecise estimates of treatment effects, then we should acknowledge that there is uncertainty in the effects of the interventions rather than search for alternative research methodologies.
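To make the point about covariate adjustment concrete, the following is a minimal simulation sketch in Python. All numbers are hypothetical and purely illustrative, not drawn from any SCI dataset: it simply shows that, in a randomised trial, adjusting for a baseline covariate that strongly predicts the outcome shrinks the standard error of the estimated treatment effect, that is, improves precision, without requiring a larger trial.

```python
# Illustrative sketch (hypothetical numbers, not SCI data): precision of an
# unadjusted vs. covariate-adjusted estimate of a treatment effect when a
# baseline covariate strongly predicts the outcome.
import numpy as np

rng = np.random.default_rng(0)
n, true_effect, n_sims = 200, 5.0, 2000

def ols_se(X, y):
    """Return ordinary least squares estimates and their standard errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta, np.sqrt(np.diag(cov))

se_unadj, se_adj = [], []
for _ in range(n_sims):
    baseline = rng.normal(50, 10, n)      # strongly predictive baseline score
    treat = rng.integers(0, 2, n)         # randomised allocation (0/1)
    outcome = 0.8 * baseline + true_effect * treat + rng.normal(0, 5, n)

    X_unadj = np.column_stack([np.ones(n), treat])
    X_adj = np.column_stack([np.ones(n), treat, baseline])
    se_unadj.append(ols_se(X_unadj, outcome)[1][1])
    se_adj.append(ols_se(X_adj, outcome)[1][1])

print(f"mean SE of treatment effect, unadjusted: {np.mean(se_unadj):.2f}")
print(f"mean SE of treatment effect, adjusted:   {np.mean(se_adj):.2f}")
```

Under these assumed numbers the adjusted standard error is roughly half the unadjusted one; the gain depends entirely on how strongly the chosen covariate predicts the outcome.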

Randomised controlled trials are hailed as the gold standard for a reason.4 When conducted well, they have the potential to provide unbiased estimates of treatment effectiveness. Randomisation means that the groups can be expected to be comparable with respect to all known and unknown variables likely to influence outcomes.2, 3 It is only by randomising trial participants to treatment and control groups that we can confidently distinguish between the effects of a treatment and the effects of confounding variables such as natural recovery.5 No other research design can achieve the same degree of control of confounding, regardless of how carefully the research is conducted. So while less rigorous research methodologies are appealing because they sidestep some of the challenges of RCTs and because they often suggest that treatments we believe in are effective, we should avoid them wherever possible.6 Opting for less rigorous research methodologies will not advance the care of patients with SCI. Clinical practices supported by poorly designed studies can waste time, money and resources, and expose patients with SCI to unnecessary risks. We should strive for a situation in which treatments for patients with SCI are supported by well designed, well conducted and well reported RCTs.
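The role of randomisation in separating treatment effects from natural recovery can also be illustrated with a short simulation sketch, again using purely hypothetical numbers: an uncontrolled before-after comparison absorbs natural recovery into the apparent treatment effect, whereas a randomised between-group comparison does not, because natural recovery occurs in both arms.

```python
# Illustrative sketch (hypothetical numbers, not SCI data): confounding by
# natural recovery in an uncontrolled comparison vs. a randomised comparison.
import numpy as np

rng = np.random.default_rng(1)
n, true_effect, natural_recovery = 200, 2.0, 8.0

baseline = rng.normal(40, 10, n)
treat = rng.integers(0, 2, n)                 # randomised allocation (0/1)
followup = baseline + natural_recovery + true_effect * treat + rng.normal(0, 5, n)

# Uncontrolled before-after change in the treated group: biased upwards,
# because natural recovery is counted as part of the "treatment effect".
before_after = np.mean(followup[treat == 1] - baseline[treat == 1])

# Randomised between-group comparison of change scores: isolates the
# treatment effect, since natural recovery occurs in both groups.
change = followup - baseline
rct_estimate = np.mean(change[treat == 1]) - np.mean(change[treat == 0])

print(f"before-after estimate: {before_after:.1f}")   # close to 10, not 2
print(f"randomised estimate:   {rct_estimate:.1f}")   # close to the true 2
```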