Adv. Meth. Pract. Psychol. Sci. https://doi.org/10.1177/2515245919869583

The past decade has seen substantial efforts to improve the reliability of research findings. However, little attention has been devoted to improving the reliability of inferences that scientists draw from data.


Jeffrey J. Starns of the University of Massachusetts, Amherst, and colleagues propose blinded inference, in which researchers use models to infer the states of independent variables without knowing the true experimental manipulations, as a means of assessing (and ultimately improving) the quality of the inferences researchers draw from data. To demonstrate the procedure, recognition memory researchers received seven datasets, each consisting of two conditions, and were asked to determine for each dataset whether memory strength, response bias, or both had been manipulated across the conditions. Starns et al. found substantial variability in the inferences different researchers drew from the same datasets, and many researchers made more inferential errors than would be expected from variability in the data alone.
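The kind of inference involved can be illustrated with the equal-variance signal detection model that is standard in recognition memory research, in which memory strength corresponds to sensitivity (d') and response bias to the decision criterion (c). The sketch below is purely illustrative: the counts are hypothetical, the log-linear correction is one common convention, and the researchers in Starns et al. were free to use whatever analyses (often more sophisticated ones) they preferred.

```python
from scipy.stats import norm

def sdt_estimates(hits, misses, false_alarms, correct_rejections):
    """Estimate sensitivity (d') and criterion (c) from recognition-memory
    counts under the equal-variance signal detection model."""
    # Log-linear correction avoids infinite z-scores when a rate is 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)             # memory strength
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))  # response bias
    return d_prime, criterion

# Hypothetical counts for the two conditions of one dataset
# (old items yield hits/misses; new items yield false alarms/correct rejections).
condition_a = sdt_estimates(hits=70, misses=30, false_alarms=20, correct_rejections=80)
condition_b = sdt_estimates(hits=85, misses=15, false_alarms=18, correct_rejections=82)

for label, (dp, c) in zip("AB", [condition_a, condition_b]):
    print(f"Condition {label}: d' = {dp:.2f}, c = {c:.2f}")

# A clear difference in d' but not c across conditions would point to a
# strength manipulation; the reverse pattern would point to a bias manipulation.
```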

Although Starns et al. examined a very specific memory task, the inferences the participating researchers had to draw were simple ones that have been studied for more than seven decades. If high variability and error rates arise even in such a well-understood case, they are likely pervasive in less familiar settings, highlighting the need to systematically assess and improve the quality of scientific inferences.