Data analysis workflows in many scientific domains have become increasingly complex and flexible. Here we assess the effect of this flexibility on the results of functional magnetic resonance imaging by asking 70 independent teams to analyse the same dataset, testing the same 9 ex-ante hypotheses [1]. The flexibility of analytical approaches is exemplified by the fact that no two teams chose identical workflows to analyse the data. This flexibility resulted in sizeable variation in the results of hypothesis tests, even for teams whose statistical maps were highly correlated at intermediate stages of the analysis pipeline. Variation in reported results was related to several aspects of analysis methodology. Notably, a meta-analytical approach that aggregated information across teams yielded a significant consensus in activated regions. Furthermore, prediction markets of researchers in the field revealed an overestimation of the likelihood of significant findings, even by researchers with direct knowledge of the dataset [2,3,4,5]. Our findings show that analytical flexibility can have substantial effects on scientific conclusions, and identify factors that may be related to variability in the analysis of functional magnetic resonance imaging. The results emphasize the importance of validating and sharing complex analysis workflows, and demonstrate the need for performing and reporting multiple analyses of the same data. Potential approaches that could be used to mitigate issues related to analytical variability are discussed.
The full fMRI dataset is publicly available on OpenNeuro (https://doi.org/10.18112/openneuro.ds001734.v1.0.4) and is described in detail in a Data Descriptor [1]. The results reported by all teams are presented in Extended Data Table 2. A table describing the methods used by the analysis teams is available with the analysis code. NeuroVault collections containing the submitted statistical maps are available via the links provided in Extended Data Table 3a. Source data for Figs. 1, 2 are provided with the paper. Readers may obtain access to the data and run the full analysis stream on the team submissions by following the directions at https://github.com/poldrack/narps/tree/master/ImageAnalyses. Access to the raw data requires specifying the dataset URL: https://zenodo.org/record/3528329/files/narps_origdata_1.0.tgz. Results (automatically generated figures, results and output logs) for image analyses are available for anonymous download at https://doi.org/10.5281/zenodo.3709275.
Code for all analyses of the reports and statistical maps submitted by the analysis teams is openly shared on GitHub (https://github.com/poldrack/narps). Image-analysis code was implemented within a Docker container, with software versions pinned for reproducible execution (https://hub.docker.com/r/poldrack/narps-analysis/tags). Python code was automatically tested for quality using the flake8 static analysis tool and the codacy.com code quality assessment tool, and the results of the image-analysis workflow were validated using simulated data. The image-analysis code was independently reviewed by an expert who was not involved in writing the original code. Prediction market analyses were performed using R v.3.6.1; packages were installed using the checkpoint package, which reproducibly installs all package versions as of a specified date (13 August 2019). Analyses reported in this manuscript were performed using code release v.2.0.3 (https://doi.org/10.5281/zenodo.3709273). Although not required to do so, several analysis teams publicly shared their analysis code. Extended Data Table 3d lists these teams along with links to their code.
Botvinik-Nezer, R. et al. fMRI data of mixed gambles from the Neuroimaging Analysis Replication and Prediction Study. Sci. Data 6, 106 (2019).
Dreber, A. et al. Using prediction markets to estimate the reproducibility of scientific research. Proc. Natl Acad. Sci. USA 112, 15343–15347 (2015).
Camerer, C. F. et al. Evaluating replicability of laboratory experiments in economics. Science 351, 1433–1436 (2016).
Camerer, C. F. et al. Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015. Nat. Hum. Behav. 2, 637–644 (2018).
Forsell, E. et al. Predicting replication outcomes in the Many Labs 2 study. J. Econ. Psychol. 75, 102117 (2019).
Wicherts, J. M. et al. Degrees of freedom in planning, running, analyzing, and reporting psychological studies: a checklist to avoid P-hacking. Front. Psychol. 7, 1832 (2016).
Simmons, J. P., Nelson, L. D. & Simonsohn, U. False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol. Sci. 22, 1359–1366 (2011).
Carp, J. On the plurality of (methodological) worlds: estimating the analytic flexibility of FMRI experiments. Front. Neurosci. 6, 149 (2012).
Silberzahn, R. et al. Many analysts, one data set: making transparent how variations in analytic choices affect results. Adv. Methods Pract. Psychol. Sci. 1, 337–356 (2018).
Tom, S. M., Fox, C. R., Trepel, C. & Poldrack, R. A. The neural basis of loss aversion in decision-making under risk. Science 315, 515–518 (2007).
De Martino, B., Camerer, C. F. & Adolphs, R. Amygdala damage eliminates monetary loss aversion. Proc. Natl Acad. Sci. USA 107, 3788–3792 (2010).
Canessa, N. et al. The functional and structural neural basis of individual differences in loss aversion. J. Neurosci. 33, 14307–14317 (2013).
Esteban, O. et al. fMRIPrep: a robust preprocessing pipeline for functional MRI. Nat. Methods 16, 111–116 (2019).
Acikalin, M. Y., Gorgolewski, K. J. & Poldrack, R. A. A coordinate-based meta-analysis of overlaps in regional specialization and functional connectivity across subjective value and default mode networks. Front. Neurosci. 11, 1 (2017).
Gorgolewski, K. J. et al. NeuroVault.org: a web-based repository for collecting and sharing unthresholded statistical maps of the human brain. Front. Neuroinform. 9, 8 (2015).
Nosek, B. A., Ebersole, C. R., DeHaven, A. C. & Mellor, D. T. The preregistration revolution. Proc. Natl Acad. Sci. USA 115, 2600–2606 (2018).
Nosek, B. A. & Lakens, D. Registered reports: a method to increase the credibility of published results. Soc. Psychol. 45, 137–141 (2014).
Markiewicz, C., De La Vega, A., Yarkoni, T., Poldrack, R. & Gorgolewski, K. FitLins: reproducible model estimation for fMRI. Poster W621 in 25th Annual Meeting of the Organization for Human Brain Mapping (OHBM, 2019).
Simonsohn, U., Simmons, J. P. & Nelson, L. D. Specification curve: descriptive and inferential statistics on all reasonable specifications. https://doi.org/10.2139/ssrn.2694998 (2015).
Patel, C. J., Burford, B. & Ioannidis, J. P. A. Assessment of vibration of effects due to model specification can demonstrate the instability of observational associations. J. Clin. Epidemiol. 68, 1046–1058 (2015).
Steegen, S., Tuerlinckx, F., Gelman, A. & Vanpaemel, W. Increasing transparency through a multiverse analysis. Perspect. Psychol. Sci. 11, 702–712 (2016).
LaConte, S. et al. The evaluation of preprocessing choices in single-subject BOLD fMRI using NPAIRS performance metrics. Neuroimage 18, 10–27 (2003).
Gorgolewski, K. J. et al. The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments. Sci. Data 3, 160044 (2016).
Tversky, A. & Kahneman, D. Advances in prospect theory: cumulative representation of uncertainty. J. Risk Uncertain. 5, 297–323 (1992).
Nichols, T. E. et al. Best practices in data analysis and sharing in neuroimaging using MRI. Nat. Neurosci. 20, 299–303 (2017).
Bates, D., Mächler, M., Bolker, B. & Walker, S. Fitting linear mixed-effects models using lme4. J. Stat. Softw. 67, 1–48 (2015).
Lubke, G. H. et al. Assessing model selection uncertainty using a bootstrap approach: an update. Struct. Equ. Modeling 24, 230–245 (2017).
Abraham, A. et al. Machine learning for neuroimaging with scikit-learn. Front. Neuroinform. 8, 14 (2014).
Hughett, P. Accurate computation of the F-to-z and t-to-z transforms for large arguments. J. Stat. Softw. 23, 1–5 (2007).
Turkeltaub, P. E., Eden, G. F., Jones, K. M. & Zeffiro, T. A. Meta-analysis of the functional neuroanatomy of single-word reading: method and validation. Neuroimage 16, 765–780 (2002).
Eickhoff, S. B. et al. Behavior, sensitivity, and power of activation likelihood estimation characterized by massive empirical simulation. Neuroimage 137, 70–85 (2016).
Eklund, A., Nichols, T. E. & Knutsson, H. Cluster failure: why fMRI inferences for spatial extent have inflated false-positive rates. Proc. Natl Acad. Sci. USA 113, 7900–7905 (2016).
Yarkoni, T., Poldrack, R. A., Nichols, T. E., Van Essen, D. C. & Wager, T. D. Large-scale automated synthesis of human functional neuroimaging data. Nat. Methods 8, 665–670 (2011).
Open Science Collaboration. Estimating the reproducibility of psychological science. Science 349, aac4716 (2015).
Arrow, K. J. et al. The promise of prediction markets. Science 320, 877–878 (2008).
Wolfers, J. & Zitzewitz, E. Interpreting prediction market prices as probabilities. https://doi.org/10.3386/w12200 (NBER, 2006).
Manski, C. F. Interpreting the predictions of prediction markets. Econ. Lett. 91, 425–429 (2006).
Fountain, J. & Harrison, G. W. What do prediction markets predict? Appl. Econ. Lett. 18, 267–272 (2011).
Hanson, R. Logarithmic market scoring rules for modular combinatorial information aggregation. J. Prediction Markets 1, 3–15 (2007).
Chen, Y. Markets as an Information Aggregation Mechanism for Decision Support. PhD thesis, Penn State Univ. (2005).
Neuroimaging data collection, performed at Tel Aviv University, was supported by the Austrian Science Fund (P29362-G27), the Israel Science Foundation (ISF 2004/15 to T. Schonberg) and the Swedish Foundation for Humanities and Social Sciences (NHS14-1719:1). Hosting of the data on OpenNeuro was supported by a National Institutes of Health (NIH) grant (R24MH117179). We thank M. C. Frank, Y. Assaf and N. Daw for comments on an earlier draft; the Texas Advanced Computing Center for providing computing resources for preprocessing of the data; the Stanford Research Computing Facility for hosting the data; and D. Roll for assisting with data processing. T. Schonberg thanks The Alfredo Federico Strauss Center for Computational Neuroimaging at Tel Aviv University; A.D. thanks the Knut and Alice Wallenberg Foundation and the Marianne and Marcus Wallenberg Foundation (A.D. is a Wallenberg Scholar), the Austrian Science Fund (FWF, SFB F63) and the Jan Wallander and Tom Hedelius Foundation (Svenska Handelsbankens Forskningsstiftelser); F. Holzmeister, J. Huber and M. Kirchler thank the Austrian Science Fund (FWF, SFB F63); D.W. was supported by the Research Foundation Flanders (FWO) and the European Union’s Horizon 2020 research and innovation programme (https://ec.europa.eu/programmes/horizon2020/en) under the Marie Skłodowska-Curie grant agreement no. 665501; L. Tisdall was supported by the University of Basel Research Fund for Junior Researchers; C.B.C. was supported by grant 12O7719N from the Research Foundation Flanders; E.L. was supported by grant 12T2517N from the Research Foundation Flanders and Marie Skłodowska-Curie Actions under COFUND grant agreement 665501; A. Eed was supported by a predoctoral fellowship La Caixa-Severo Ochoa from Obra Social La Caixa and also acknowledges Comunidad de Cálculo Científico del CSIC for the high-performance computing (HPC) use; C.L. was supported by the Vienna Science and Technology Fund (WWTF VRG13-007) and Austrian Science Fund (FWF P 32686); A.B.L.V. was supported by the Vienna Science and Technology Fund (WWTF VRG13-007); L.Z. was supported by the Vienna Science and Technology Fund (WWTF VRG13-007), the National Natural Science Foundation of China (no. 71801110), MOE (Ministry of Education in China) Project of Humanities and Social Sciences (no. 18YJC630268) and China Postdoctoral Science Foundation (no. 2018M633270); D.P. is currently supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy ‘Science of Intelligence’ (EXC 2002/1, project number 390523135); P.H. was supported in part by funding provided by Brain Canada, in partnership with Health Canada, for the Canadian Open Neuroscience Platform initiative; J.-B.P. was partially funded by the NIH (NIH-NIBIB P41 EB019936 (ReproNim), NIH-NIMH R01 MH083320 (CANDIShare) and NIH RF1 MH120021 (NIDM)) and the National Institute of Mental Health of the NIH under award number R01MH096906 (Neurosynth), as well as the Canada First Research Excellence Fund, awarded to McGill University for the Healthy Brains for Healthy Lives initiative and the Brain Canada Foundation with support from Health Canada; S.B.E. was supported by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement no. 785907 (HBP SGA2); G.M. was supported by the Max Planck Society; S. Heunis has received funding from the Dutch foundation LSH-TKI (grant LSHM16053-SGF); J.F.G.M. was supported by a Graduate Research Fellowship from the NSF and a T32 Predoctoral Fellowship from the NIH; B.M. was supported by the Deutsche Forschungsgemeinschaft (grant CRC1193, subproject B01); A.R.L. was supported by NSF 1631325 and NIH R01 DA041353; M.E.H., T.J. and D.J.W. were supported by the Australian National Imaging Facility, a National Collaborative Research Infrastructure Strategy (NCRIS) capability; P.M.I. was supported by VIDI grant 452-17-013 from the Netherlands Organisation for Scientific Research; B.M.B. was supported by the Max Planck Society; J.P.H. was supported by a grant from the Swedish Research Council; R.W.C. and R.C.R. were supported by NIH IRP project number ZICMH002888; D.M.N., R.W.C. and R.C.R. used the computational resources of the National Institutes of Health High Performance Computing Biowulf cluster (http://hpc.nih.gov); D.M.N. was supported by NIH IRP project number ZICMH002960; C.F.C. was supported by the Tianqiao and Chrissy Center for Social and Decision Neuroscience Center Leadership Chair; R.G.B. was supported by the Max Planck Society; R.M.W.J.B. was supported by the Max Planck Society; M.B., O.C. and R.G. were supported by the Belgian Excellence of Science program (EOS project 30991544) from the FNRS-Belgium; O.C. is a research associate at the FRS-FNRS of Belgium; A.D.L. was supported by grant R4195 “Repimpact” of EraNET Neuron; Q.S. was funded by grant nos. 71971199, 71602175 and 71942004 from the National Natural Science Foundation of China and no. 16YJC630103 of the Ministry of Education of Humanities and Social Science; and T.E.N. was supported by the Wellcome Trust award 100309/Z/12/Z.
The authors declare no competing interests.
Peer review information Nature thanks Martin Lindquist, Marcus Munafo and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Extended data figures and tables
Maps showing at each voxel the proportion of teams (out of n = 65 teams) that reported significant activations in their thresholded statistical map, for each hypothesis (labelled H1–H9), thresholded at 10% (that is, voxels with no colour were significant in fewer than 10% of teams). + or − refers to the direction of effect; gain or loss refers to the effect being tested; and equal indifference (EI) or equal range (ER) refers to the group being examined or compared. Hypotheses 1 and 3, as well as hypotheses 2 and 4, share the same statistical maps as they relate to the same contrast and experimental group but different regions (see Extended Data Table 1). Images can be viewed at https://identifiers.org/neurovault.collection:6047.
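The overlap maps described in this legend can be sketched numerically: treating each team's thresholded map as a binary vector, the displayed value is the voxelwise fraction of teams reporting activation, masked below the 10% display threshold. A minimal sketch (the function name and flat array layout are illustrative assumptions, not the paper's code):

```python
import numpy as np

def team_overlap_map(thresholded_maps, min_prop=0.10):
    """Voxelwise proportion of teams whose thresholded map is non-zero.

    thresholded_maps: array of shape (n_teams, n_voxels); a non-zero entry
    means that team reported a significant activation at that voxel.
    Proportions below `min_prop` are masked to zero, mirroring the 10%
    display threshold used in the figure.
    """
    binary = (np.asarray(thresholded_maps) != 0).astype(float)
    prop = binary.mean(axis=0)       # fraction of active teams per voxel
    prop[prop < min_prop] = 0.0      # hide voxels below the display threshold
    return prop

# Toy example: 10 teams, 4 voxels
maps = np.zeros((10, 4))
maps[:7, 0] = 1                      # voxel 0 active in 70% of teams
maps[0, 1] = 1                       # voxel 1 active in exactly 10% of teams
overlap = team_overlap_map(maps)
```

For the real data, the thresholded NIfTI maps from the linked NeuroVault collection would first need to be resampled to a common space and binarized.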
For each hypothesis, we present a heat map based on Spearman correlations between unthresholded statistical maps (n = 64), clustered according to their similarity, and the average of unthresholded images for each cluster (cluster colours in titles refer to colours in left margin of heat map). Column colours represent hypothesis decisions (green, yes; red, no) reported by the analysis teams; row colours denote cluster membership. Maps are thresholded at an uncorrected value of z > 2 for visualization. Unthresholded maps for hypotheses 2 and 4 are identical (as they both relate to the same contrast and group but different regions), and the colours represent reported results for hypothesis 2. For hypotheses 1 and 3, see Fig. 2.
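The clustering described here can be approximated with standard tools: compute Spearman correlations between the teams' unthresholded maps, convert similarity to distance, and apply hierarchical clustering. A hedged sketch (the linkage method, cluster count and function name are illustrative assumptions; the paper's exact settings may differ):

```python
import numpy as np
from scipy.stats import spearmanr
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_team_maps(unthresholded_maps, n_clusters=3):
    """Cluster teams by the Spearman similarity of their unthresholded maps.

    unthresholded_maps: (n_teams, n_voxels), with n_teams >= 3.
    Returns the team-by-team correlation matrix and a cluster label per team.
    """
    # Rows as variables: pairwise Spearman correlations between teams
    corr, _ = spearmanr(np.asarray(unthresholded_maps), axis=1)
    dist = 1.0 - corr                              # similarity -> distance
    condensed = dist[np.triu_indices_from(dist, k=1)]
    z = linkage(condensed, method='average')       # hierarchical clustering
    labels = fcluster(z, t=n_clusters, criterion='maxclust')
    return corr, labels
```

On the real data the maps would first be masked and flattened to a common voxel grid; the heat map in the figure is the `corr` matrix reordered by cluster membership.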
n = 64. a, Maps of estimated between-team variability (tau) at each voxel for each hypothesis. b, Results of the image-based meta-analysis. A consensus analysis was performed on the unthresholded statistical maps to obtain a group statistical map for each hypothesis, accounting for the correlation between teams owing to the same underlying data (see Methods). Maps are presented for each hypothesis, showing voxels (in colour) in which the group statistic was significantly greater than zero after voxelwise correction for FDR (P < 0.05). Colour bar reflects statistical value (z) for the meta-analysis. Hypotheses 1 and 3, as well as hypotheses 2 and 4, share the same unthresholded maps, as they relate to the same contrast and group but different regions (see Extended Data Table 1). Images can be viewed at https://identifiers.org/neurovault.collection:6051.
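Because all teams analysed the same underlying data, a naive Stouffer combination of their z-maps would overstate the effective sample size. One simple way to account for this, sketched below, inflates the variance of the summed z-values by the mean pairwise correlation between teams (an illustrative stand-in for the paper's consensus estimator, not a reimplementation of it):

```python
import numpy as np

def correlated_stouffer(z_maps):
    """Combine per-team z-maps into a consensus z-map, correcting the
    Stouffer denominator for the mean pairwise inter-team correlation.

    z_maps: (n_teams, n_voxels) array of z-statistics.
    Under independence r_bar ~ 0 and this reduces to sum(z) / sqrt(k).
    """
    z = np.asarray(z_maps, dtype=float)
    k = z.shape[0]
    c = np.corrcoef(z)                        # team-by-team correlation
    r_bar = (c.sum() - k) / (k * (k - 1))     # mean off-diagonal correlation
    # Var(sum z_i) = k + k(k-1) * r_bar, assuming unit-variance z values
    denom = np.sqrt(k + k * (k - 1) * max(r_bar, 0.0))
    return z.sum(axis=0) / denom
```

A sanity check on the correction: if every team submitted an identical map, the consensus map should equal that map rather than being inflated by a factor of sqrt(k).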
n = 64. a, Activation for each hypothesis as determined using consistent thresholding (black, P < 0.001 and cluster size (k) > 10 voxels; blue, FDR correction with P < 0.05) and ROI selection across teams (y axis), versus the actual proportion of teams reporting activation (x axis). Numbers next to each symbol represent the hypothesis number for each point. b, Results from re-thresholding of unthresholded maps, using either uncorrected thresholds (P < 0.001, k > 10) or FDR correction (FDR-corrected P < 5%) and common anatomical ROIs for each hypothesis. A team is recorded as having an activation if one or more significant voxels are found in the ROI. Results for the image-based meta-analysis (IBMA) for each hypothesis are also presented, thresholded at FDR-corrected P < 5%.
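The re-thresholding step relies on standard Benjamini–Hochberg FDR control applied voxelwise. A minimal sketch of FDR thresholding for a one-sided z-map (illustrative only; the paper's pipeline operates on full NIfTI images and anatomical ROI masks):

```python
import numpy as np
from scipy.stats import norm

def fdr_threshold_map(z_map, q=0.05):
    """Benjamini-Hochberg FDR thresholding of a one-sided z-map.

    Returns a boolean mask of voxels surviving FDR at level q. Under the
    re-thresholding scheme in the legend, a team would then be scored as
    'activated' if any voxel inside the ROI survives.
    """
    z = np.asarray(z_map, dtype=float)
    p = norm.sf(z)                              # one-sided p-values
    m = p.size
    order = np.argsort(p)
    crit = q * np.arange(1, m + 1) / m          # BH critical values
    below = p[order] <= crit
    mask = np.zeros(m, dtype=bool)
    if below.any():
        cutoff = p[order][np.nonzero(below)[0].max()]
        mask = p <= cutoff                      # reject up to the last crossing
    return mask
```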
n = 240 observations (10 days × 24 h). a, Panel regressions. The table summarizes the results of preregistered fixed-effects panel regressions of the absolute errors of the predictions (that is, the absolute deviation of the market price from the fundamental value) on an hourly basis (average price of all transactions within an hour) on time and prediction market indicators. Standard errors were computed using a robust estimator. b, Market prices for each of the nine hypotheses separated for the team members (green) and non-team members (blue) prediction markets. The figure shows the average prices of the prediction market per hour, separated for the two prediction markets, for the time the markets were open (10 days, that is, 240 h). The grey line indicates the actual share of the analysis teams that reported a significant result for the hypothesis (that is, the fundamental value).
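The dependent variable of these panel regressions, the absolute deviation of hourly market prices from the fundamental value, is simple to compute. A toy sketch (the numbers below are invented for illustration, not taken from the markets):

```python
import numpy as np

def hourly_absolute_error(prices, fundamental):
    """Absolute deviation of hourly market prices from the fundamental
    value (the actual share of teams reporting a significant result).

    prices: average transaction price per hour; fundamental: scalar in [0, 1].
    """
    return np.abs(np.asarray(prices, dtype=float) - fundamental)

# e.g. prices hovering near 0.8 for a hypothesis where only 35% of teams
# reported a significant result yield persistently large errors
err = hourly_absolute_error([0.80, 0.75, 0.82], 0.35)
```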
This file contains Supplementary Methods and Results, a Supplementary Discussion and Supplementary References. The Supplementary Methods include additional descriptions of some of the methods used, as well as additional analyses of the analysis teams’ results, thresholded and unthresholded statistical maps, and prediction markets. The Supplementary Discussion contains a more detailed discussion of the findings, implications and suggested solutions.
About this article
Cite this article
Botvinik-Nezer, R., Holzmeister, F., Camerer, C.F. et al. Variability in the analysis of a single neuroimaging dataset by many teams. Nature 582, 84–88 (2020). https://doi.org/10.1038/s41586-020-2314-9