In research studies, the need for additional samples to obtain sufficient statistical power often has to be balanced against the experimental costs. One approach is to sequentially collect data until some threshold is reached, e.g. until the p-value drops below 0.05. I outline that this approach is common, yet that unadjusted sequential sampling leads to severe statistical issues, such as an inflated rate of false positive findings. As a consequence, the results of such studies are untrustworthy. I identify the statistical methods that can be implemented to account for sequential sampling.
In experiments, researchers must balance two competing considerations with respect to the sample size. On the one hand, the sample size must be large enough to provide sufficient power for accurate statistical inference. On the other hand, each additional observation comes at a cost and, especially in medical experiments or work with test animals, the researcher has the ethical obligation to avoid unnecessary oversampling.
A seemingly appealing approach is to sequentially collect data, one measurement at a time, and stop when you have sufficient measurements, e.g. when the p-value drops below 0.05. However, this approach invalidates the statistical tests and biases the estimates, which is why it is usually labelled a questionable research practice^{1}. Quite often, the description of the data collection in a paper is insufficient to check whether this approach has been followed. This is peculiar, because explicitly stating how the sample size was decided upon is advised by many academic associations, such as the animal research organisation NC3Rs (item 10b of the ARRIVE guidelines^{2}) and the American Psychological Association (APA)^{3}. Furthermore, in the field of animal research, researchers usually must “assure an ethics committee that the proposed number of animals is the minimum necessary to achieve a scientific goal”^{4}.
In various anonymous large-scale surveys, large numbers of researchers, active in a variety of fields, have admitted to following this strategy at least once: 36.9% of ecologists and 50.7% of evolutionary biologists^{5}, as well as 55.9% of American^{1}, 53.2% of Italian^{6}, and 45% of German^{7} psychologists. Thus, the issue is widespread and occurs across scientific fields.
The problem of multiple statistical testing is better known in the context of multiple independent tests. In this scenario, due to the large number of statistical tests being performed, the number of false positives is inflated and this needs to be corrected for (Fig. 2). Corrections such as the Bonferroni correction are included in most statistical textbooks. If the null hypothesis holds true, a single statistical test will yield a false positive, i.e. p < 0.05, 5% of the time. Many scientists consider this 5% an acceptably small probability for incorrectly rejecting the null hypothesis (although one can make a motivated choice for another rate^{8,9}). When performing, for instance, 10 independent tests whilst H_{0} is true, the probability of finding at least one false positive equals 1 − (1 − 0.05)^{10} = 40.13%, which is very high. The Bonferroni correction, and other corrections, ensure that this so-called family-wise error rate remains at an acceptable level.
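The numbers above are easy to verify directly. A minimal sketch in Python, computing the family-wise error rate with and without the Bonferroni correction for the 10-test example:

```python
# Family-wise error rate (FWER) for k independent tests under H0,
# and the effect of the Bonferroni correction.
alpha = 0.05
k = 10

fwer = 1 - (1 - alpha) ** k                       # 1 - 0.95^10
bonferroni_alpha = alpha / k                      # per-test threshold: 0.005
fwer_corrected = 1 - (1 - bonferroni_alpha) ** k  # capped just below 0.05

print(f"uncorrected FWER: {fwer:.4f}")            # 0.4013
print(f"corrected FWER:   {fwer_corrected:.4f}")
```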
As most editors and reviewers are aware of the need to correct for multiple testing, it rarely happens in published research that authors explicitly abstain from any correction. This does not imply that the practice is without problems. First, it is not straightforward to decide which tests within a single paper constitute the ‘family’ for which the family-wise error rate needs to be capped at 5%^{10,11}. Consider, for instance, the common situation of a two-way ANOVA. Here, one performs three tests: a main effect for each of the two ‘ways’ plus an interaction. Yet, researchers rarely correct for this^{12}.
Second, correcting for many tests deteriorates the statistical power (too often failing to reject H_{0} even though it is false^{13}). Third, one could present fewer comparisons than were actually performed, and thus employ a more lenient correction. For instance, in a study where three groups were mutually compared, the Bonferroni-adjusted α-level would be 0.05/3 = 0.0167. By omitting one group from the paper, the α-level for the comparison between the remaining groups could remain at 0.05. This research practice is clearly questionable, yet not uncommon^{1}.
Things are different, and much less well known, for sequential testing. Sequentially collecting data until some threshold is reached does not have to be problematic, as long as an appropriate correction is employed. Here, I outline the problem and indicate what can be done about it. I demonstrate this using the well-known t-test, as its simplicity suits demonstrative purposes; the issue, however, is not exclusive to the t-test and holds for all significance testing procedures.
Suppose you want to perform an independent samples t-test. You begin with n = 2 measurements per group (with one measurement per group you cannot compute the within-group variance, and thus cannot conduct a t-test). You perform the experiment, take your measurements and conduct your t-test. If p < 0.05, you stop collecting data; otherwise, you collect one more measurement per group and conduct the t-test again. This continues until you either have p < 0.05, run out of resources to collect more data, or reach a pre-decided stopping point.
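This stopping rule is easy to express in code. The sketch below (Python, using SciPy's `ttest_ind`; the cap of 150 measurements per group and the seed are illustrative assumptions) implements the procedure just described, with both groups drawn from the same distribution so that H_{0} is true:

```python
import numpy as np
from scipy import stats

def sequential_ttest(max_n=150, alpha=0.05, seed=None):
    """Add one observation per group at a time; stop once p < alpha.

    Both groups are sampled from the same distribution, so H0 is true
    and any 'significant' stop is a false positive.
    """
    rng = np.random.default_rng(seed)
    a = list(rng.normal(size=2))  # n = 2: smallest group size with a variance
    b = list(rng.normal(size=2))
    while True:
        p = stats.ttest_ind(a, b).pvalue
        if p < alpha or len(a) >= max_n:
            return len(a), p
        a.append(rng.normal())
        b.append(rng.normal())

n, p = sequential_ttest(seed=1)
print(f"stopped at n = {n} per group with p = {p:.3f}")
```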
When performing independent tests, the probability of at least one false positive among k tests can be computed via the formula 1 − 0.95^{k}. When doing sequential comparisons, the situation is somewhat different: the subsequent tests are not independent, as they are partly based on the same observations. For instance, the p-value for the test after 25 measurements is largely based on the 24 observations underlying the previous p-value. Still, the multiple testing issue remains, albeit not as severe as with independent tests. It can be proven mathematically^{14} that, when H_{0} is true, such a sequential approach is guaranteed to yield a p-value below 0.05 at some point, and a p-value above this threshold again at some later point.
An example is given by the thick line in Fig. 1. This figure is based on a computer simulation in which H_{0} is true: there is no effect, the two groups do not differ, and claiming a significant result constitutes a false discovery. The sequential approach outlined above has its first significant result at n = 42. Stopping the data collection here would enable the researcher to write a paper with a significant effect. However, at n = 43, the p-value would no longer be significant. It crosses the significance threshold back and forth a couple of times before the end of the plot. At n = 150, we are more or less back where we started, with a clearly non-significant p-value.
This is of course just a single simulation. With other randomly generated data, the pattern will be different, as can be seen from the thin lines in Fig. 1. Note that across trials of the simulation, the p-value dips below 0.05 at different sample sizes (black dots in Fig. 1). To study how severe the problem is, I simulated 10,000 of these sequential strategies and recorded the sample size at which significance was reached for the first time. Figure 2 displays the results of this simulation.
As can be seen, the issue is very severe, although less severe than for uncorrected multiple independent tests. Even if you apply a rule to stop collecting new data once n exceeds, say, 25, your false discovery rate exceeds 25%. Rather than the one-in-twenty chance of labelling a null result significant, we have a one-in-four chance, five times higher than intended.
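This headline number is straightforward to verify by simulation. The sketch below (Python with SciPy; the number of replications and the seed are my own choices, so the exact rate will vary slightly) estimates the false positive rate of the sequential strategy capped at n = 25 per group:

```python
import numpy as np
from scipy import stats

def sequential_false_positive_rate(reps=2000, cap=25, alpha=0.05, seed=0):
    """Fraction of H0-true runs that ever reach p < alpha for n = 2..cap."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        a = list(rng.normal(size=2))
        b = list(rng.normal(size=2))
        while len(a) <= cap:
            if stats.ttest_ind(a, b).pvalue < alpha:
                hits += 1  # H0 is true, so this 'discovery' is false
                break
            a.append(rng.normal())
            b.append(rng.normal())
    return hits / reps

rate = sequential_false_positive_rate()
print(f"false positive rate with sequential stopping: {rate:.3f}")
```

Compare the printed rate with the intended 0.05.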
Note that this problem affects not only the p-values, but also the estimates themselves. With sequential sampling, the distance between the means of the two groups will sometimes increase and sometimes decrease with each step, purely by chance. If we continue sampling until the means are sufficiently far apart to call the difference significant, we overestimate the effect. Thus, not only is the significance biased, so is the effect size.
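The effect-size bias can also be seen in simulation. In the sketch below (Python with SciPy; the replication count, cap and seed are arbitrary choices of mine), the true difference between the group means is zero, yet among runs that stop at p < 0.05 the observed difference is substantial:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
stopped_effects = []  # |difference in group means| at the moment of stopping
for _ in range(1000):
    a = list(rng.normal(size=2))  # both groups drawn from N(0, 1): H0 true
    b = list(rng.normal(size=2))
    while len(a) <= 25:
        if stats.ttest_ind(a, b).pvalue < 0.05:
            stopped_effects.append(abs(np.mean(a) - np.mean(b)))
            break
        a.append(rng.normal())
        b.append(rng.normal())

# The true difference is 0; conditioning on a 'significant' stop inflates it.
mean_effect = float(np.mean(stopped_effects))
print(f"mean |effect| among stopped runs: {mean_effect:.2f}")
```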
So, in an attempt to require as few measurements as possible for the experiment, whether it concerns animals, participants, or something else, this strategy actually invalidates the study. Even more worrisome, it does so in a way that cannot be corrected for at a later stage. The informational value of the study is thus diminished, such that a new study is needed. In the end, this leads to more test animals, participants, etc. being needed, rather than fewer.
I have outlined why unadjusted sequential testing is problematic. (I am by no means the first to do so; see e.g. refs ^{1,15} and the references therein.) This does not imply, however, that the concept of sequential analysis, i.e. increasing your sample size in small bits until some threshold is met, is a bad idea. It actually is a good idea, provided the necessary corrections are made, as it safeguards against taking a larger sample than necessary (ref. ^{16}, pp. 448–449). There are two classes of such sequential approaches: interim analyses (also known as group sequential analyses) and full sequential analyses.
In interim analysis^{17,18}, one pre-specifies when the data will be inspected, e.g. halfway at n_{1} = 50 and again after collecting n_{2} = 100 measurements. If one tests with α = 0.029 at n_{1}, stops when the result is significant, and otherwise continues until n_{2} and tests again at this α-level, the overall false positive rate equals 0.05. An advantage over non-sequential testing is that, in case of sufficient evidence, data collection can be stopped halfway through the process.
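That the per-look α = 0.029 (close to Pocock's boundary for two looks) keeps the overall rate near 0.05 can be checked by simulation. In this sketch (Python with SciPy; replication count and seed are my own choices), H_{0} is true and the two looks use overlapping samples exactly as described above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
alpha_interim = 0.029  # per-look threshold for two looks
reps, rejections = 5000, 0
for _ in range(reps):
    a = rng.normal(size=100)  # full samples; H0 true: groups are identical
    b = rng.normal(size=100)
    if stats.ttest_ind(a[:50], b[:50]).pvalue < alpha_interim:
        rejections += 1  # significant at the interim look (n1 = 50): stop
    elif stats.ttest_ind(a, b).pvalue < alpha_interim:
        rejections += 1  # final look at n2 = 100

overall_rate = rejections / reps
print(f"overall false positive rate: {overall_rate:.3f}")
```

The printed rate should be close to the intended 0.05, despite two tests being performed.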
In full sequential approaches, one does not check the data at a few pre-specified points, but after every observation. Theories about this by Abraham Wald^{14} and Alan Turing^{19,20} date back to the 1940s. These sequential approaches are more technical than standard methods. Wald's procedure, for instance, involves computing the cumulative log-likelihood ratio after each observation and stopping when this sum leaves a pre-specified interval (a, b). The computation of this log-likelihood ratio is far from straightforward. Statistically, this is the optimal approach to deciding upon the sample size. In interim analysis, one can stop data collection early when there is sufficient evidence to reject H_{0}; with the full sequential method, one can also stop when it is sufficiently clear that H_{0} will not be rejected. In practice, however, it is not always feasible to employ this approach, for instance when participants need to undergo group therapy in groups of size 20. In such contexts, interim analysis is an appealing alternative.
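For a simple case, the bookkeeping of Wald's sequential probability ratio test is manageable. In the sketch below (Python; the hypotheses H0: μ = 0 versus H1: μ = 1 with known σ = 1, and the error rates α and β, are illustrative assumptions of mine), the per-observation log-likelihood ratio simplifies to x − 0.5, and Wald's approximate boundaries define the stopping interval:

```python
import math
import random

alpha, beta = 0.05, 0.10  # target Type I and Type II error rates
# Wald's approximate stopping boundaries for the cumulative log-LR:
upper = math.log((1 - beta) / alpha)  # crossed upward  -> reject H0
lower = math.log(beta / (1 - alpha))  # crossed downward -> accept H0

def sprt(stream):
    """SPRT for H0: mu = 0 vs H1: mu = 1, known sigma = 1."""
    llr = 0.0
    for n, x in enumerate(stream, start=1):
        # log f1(x) - log f0(x) simplifies to x - 0.5 for this pair of normals
        llr += x - 0.5
        if llr >= upper:
            return n, "reject H0"
        if llr <= lower:
            return n, "accept H0"
    return n, "undecided"

random.seed(3)
n_used, decision = sprt(random.gauss(0, 1) for _ in range(10_000))
print(f"decision after {n_used} observations: {decision}")
```

Since the data here are generated under H_{0}, the cumulative log-likelihood ratio drifts downward and the test typically accepts H_{0} after a handful of observations.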
For sequential testing, much less (easy-to-use) software is available than for more conventional methods. Overviews are available^{21}. Apart from specifically programmed software and packages for R, which are not always straightforward for the practical researcher, interim testing is also possible in the statistical program SAS (ref. ^{22}, Chapter 109). For the full sequential method, it seems that the applied researcher cannot yet rely on easy-to-use software; the few R packages that deal with this method lack tutorials. One has to work through extensive technical textbooks^{23,24} in order to use this method, which explains why it is so little used in practice, with the exception of the field of industrial statistics. Fortunately, employing the interim approach, instead of the conventional method of deciding upon the sample size through a power analysis, can already provide large benefits. If researchers employed this method more often, precious resources would be saved.
For years, researchers interested in sequential methods were told to seek professional statistical help (ref. ^{16}, p. 455). Only recently have attempts been made to make sequential and, specifically, interim testing more accessible to researchers in other fields. Table 1 summarizes the various approaches, with references to further reading. Hopefully, such efforts will make this methodology more accessible to non-statisticians.
References
1. John, L. K., Loewenstein, G. & Prelec, D. Measuring the prevalence of questionable research practices with incentives for truth telling. Psychol. Sci. 23, 524–532 (2012).
2. Kilkenny, C., Browne, W. J., Cuthill, I. C., Emerson, M. & Altman, D. G. Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research. PLoS Biol. 8, e1000412 (2010).
3. Wilkinson, L. Statistical methods in psychology journals: guidelines and explanations. Am. Psychol. 54, 594 (1999).
4. Fitts, D. A. Ethics and animal numbers: informal analyses, uncertain sample sizes, inefficient replications, and Type I errors. J. Am. Assoc. Lab. Anim. Sci. 50(4), 445–453 (2011).
5. Fraser, H., Parker, T. H., Nakagawa, S., Barnett, A. & Fidler, F. Questionable research practices in ecology and evolution. PLoS ONE 13, e0200303 (2018).
6. Agnoli, F., Wicherts, J. M., Veldkamp, C. L. S., Albiero, P. & Cubelli, R. Questionable research practices among Italian research psychologists. PLoS ONE 12(3), e0172792 (2017).
7. Fiedler, K. & Schwarz, N. Questionable research practices revisited. Soc. Psychol. Pers. Sci. 7, 45–52 (2015).
8. Benjamin, D. J. et al. Redefine statistical significance. Nat. Hum. Behav. 2, 6–10 (2018).
9. Lakens, D. et al. Justify your alpha. Nat. Hum. Behav. 2, 168–171 (2018).
10. Althouse, A. Adjust for multiple comparisons? It's not that simple. Ann. Thorac. Surg. 101(5), 1644–1645 (2016).
11. Bender, R. & Lange, S. Adjusting for multiple testing: when and how? J. Clin. Epidemiol. 54, 343–349 (2001).
12. Cramer, A. O. J. et al. Hidden multiplicity in exploratory multiway ANOVA: prevalence and remedies. Psychon. Bull. Rev. 23, 640–647 (2015).
13. Fiedler, K., Kutzner, F. & Krueger, J. I. The long way from α-error control to validity proper: problems with a short-sighted false-positive debate. Perspect. Psychol. Sci. 7, 661–669 (2012).
14. Wald, A. Sequential tests of statistical hypotheses. Ann. Math. Stat. 16, 117–186 (1945).
15. Simmons, J. P., Nelson, L. D. & Simonsohn, U. False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol. Sci. 22, 1359–1366 (2011).
16. Altman, D. G. Practical Statistics for Medical Research (Chapman & Hall, Boca Raton, 1991).
17. Schulz, K. F. & Grimes, D. A. Multiplicity in randomised trials II: subgroup and interim analyses. Lancet 365, 1657–1661 (2005).
18. Jennison, C. & Turnbull, B. W. Group Sequential Methods with Applications to Clinical Trials (Chapman & Hall, Boca Raton, 1999).
19. Good, I. J. Studies in the history of probability and statistics. XXXVII. A. M. Turing's statistical work in World War II. Biometrika 66, 393–396 (1979).
20. Albers, C. J. The statistician Alan Turing. Nieuw Arch. voor Wiskd. 5/18, 209–210 (2018).
21. Zhu, L., Ni, L. & Yao, B. Group sequential methods and software applications. Am. Stat. 65, 127–135 (2012).
22. SAS Institute Inc. SAS/STAT 14.3 User's Guide. http://support.sas.com/documentation/onlinedoc/stat/143/seqdesign.pdf (2017).
23. Bartroff, J., Lai, T. L. & Shih, M.-C. Sequential Experimentation in Clinical Trials: Design and Analysis (Springer, New York, 2013).
24. Siegmund, D. Sequential Analysis (Springer, New York, 1985).
25. Lakens, D. Performing high-powered studies efficiently with sequential analyses. Eur. J. Soc. Psychol. 44, 701–710 (2014).
26. Neumann, K. et al. Increasing efficiency of preclinical research by group sequential designs. PLoS Biol. 15, e2001307 (2017).
27. Whitehead, J. The Design and Analysis of Sequential Clinical Trials 2nd edn (Wiley & Sons, New York, 1997).
Author information
Contributions
C.A. conceptualized this work, did the research, made the visualizations, and wrote the manuscript.
Ethics declarations
Competing interests
The author declares no competing interests.
Cite this article
Albers, C. The problem with unadjusted multiple and sequential statistical testing. Nat. Commun. 10, 1921 (2019). https://doi.org/10.1038/s41467-019-09941-0