Abstract
We describe the multimodal neuroimaging dataset VEPCON (OpenNeuro Dataset ds003505). It includes raw data and derivatives of high-density EEG, structural MRI, diffusion weighted images (DWI) and single-trial behavior (accuracy, reaction time). Visual evoked potentials (VEPs) were recorded while participants (n = 20) discriminated briefly presented faces from scrambled faces, or coherently moving stimuli from incoherent ones. EEG and MRI were recorded separately from the same participants. The dataset contains raw EEG and behavioral data, pre-processed EEG of single trials in each condition, structural MRIs, individual brain parcellations at 5 spatial resolutions (83 to 1015 regions), and the corresponding structural connectomes computed from fiber count, fiber density, average fractional anisotropy and mean diffusivity maps. For source imaging, VEPCON provides EEG inverse solutions based on individual anatomy, with Python and Matlab scripts to derive activity time-series in each brain region, for each parcellation level. The BIDS-compatible dataset can contribute to multimodal methods development, studying structure-function relations, and to unimodal optimization of source imaging and graph analyses, among many other possibilities.
Measurement(s) | visual evoked potentials • brain activity measurement • diffusion weighted images
Technology Type(s) | high-density electroencephalography • magnetic resonance imaging
Factor Type(s) | visual stimulus
Sample Characteristic - Organism | Homo sapiens
Sample Characteristic - Environment | laboratory environment
Machine-accessible metadata file describing the reported data: https://doi.org/10.6084/m9.figshare.17074685
Background & Summary
Visual evoked potentials (VEPs) have a long record of shedding light on the spatial and temporal dynamics of large-scale neural processing in the brain1,2. EEG potentials registered at scalp electrodes result from synchronous activity in large populations of neurons that are distributed across cortical and subcortical areas3,4. Visual stimulation gives rise to a fast sequence of well-known EEG components that reflect initial processing at latencies before ~100 ms5,6, subsequent object and recurrent processes7,8,9, and later components that reflect target detection, integration and decisions10,11,12. The VEP provides a millisecond by millisecond recording of whole-brain activity dynamics, and has a rich distribution of temporal frequencies that provides further insight into the functionality of brain processes13,14.
From VEPs recorded across the scalp, the underlying distributed patterns of brain activity can be estimated using an inverse solution based on anatomical constraints15,16,17,18. The anatomical constraints determine what electrical fields from a neural activity source would look like at the recording electrodes on the scalp, given the conductivities of the various tissues and fluids that lie in between. An inverse solution translates the recorded electrical potential field back into a pattern of distributed source activity in the brain. Source localizations from EEG are necessarily coarse compared to fMRI; improving them further is an active field of research that helps clarify the activity dynamics within areas and their inter-relatedness19,20,21,22.
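As a generic illustration of this logic (not the specific LAURA/LORETA formulation used for the derivatives described below), the linear forward model and a regularized linear inverse can be written as

$$\mathbf{v}(t) = \mathbf{K}\,\mathbf{j}(t) + \boldsymbol{\varepsilon}(t), \qquad \hat{\mathbf{j}}(t) = \mathbf{K}^{\top}\left(\mathbf{K}\mathbf{K}^{\top} + \lambda\,\mathbf{C}\right)^{-1}\mathbf{v}(t),$$

where $\mathbf{v}(t)$ are the scalp potentials at the electrodes, $\mathbf{K}$ is the leadfield (gain) matrix determined by individual anatomy and tissue conductivities, $\mathbf{j}(t)$ are the dipole source amplitudes, $\boldsymbol{\varepsilon}(t)$ is measurement noise, and $\lambda\,\mathbf{C}$ is a regularization term; methods such as LORETA and LAURA differ mainly in the spatial priors that enter this regularization.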
Activity in each area can influence activity throughout the brain in a few steps, due to the dense connectivity of cortico-cortical and cortico-subcortical fibers23,24,25,26. This structural connectivity can be inferred with diffusion weighted imaging (DWI)27,28. The resulting connectome constitutes a road map of sorts over which activity can propagate between areas, and in an important sense constrains how activity within an area can evolve through the influences it receives from others29,30.
The VEPCON dataset31 combines raw EEG data, T1-weighted (T1w) MRI, and DWI for 20 human participants with, as derivatives, VEPs, inverse solution matrices, brain parcellations and connectomes at 5 different spatial scales (Fig. 1). These data were recorded to study the dynamics of functionally specialized processes that support face and motion perception. High-density EEG was recorded in two active paradigms in which participants categorically discriminated face images from scrambled counterparts, or coherent from incoherent motion in random dot kinematograms. VEP sources for face stimuli are known to include inferior temporal and lateral occipital cortex, and for motion stimuli they include dorsal area MT8,9. Part of these data were previously used for improving and validating EEG source imaging methods32 and time-varying functional connectivity methods33,34, for using connectomes to inform inverse solutions35,36, and for developing the multi-modal image processing pipeline software Connectome Mapper 3 (CMP3)37.
The dataset is publicly available on OpenNeuro31, and raw data are structured following the MRI and EEG Brain Imaging Data Structure standards (BIDS38,39). We expect the data to be useful for developing and benchmarking multimodal analysis methods that combine functional and structural information, for exploring structure-function relations, and for testing whether and how the level of parcellation affects results. We also expect unimodal reuse value for the development and benchmarking of EEG source-imaging approaches and functional connectivity analyses. Beyond this, the availability within the same participant of single-trial behavior, EEG data, inverse solutions, anatomical and diffusion MRI, and connectomes based on four different indices allows for many types of analyses that can generate new hypotheses about dynamic brain function in human visual processing and behavior.
Methods
Participants
Twenty participants (3 males; mean age = 23 ± 3.5 years) took part, recruited from the local student population (University of Fribourg, Switzerland). Participants had normal or corrected-to-normal vision. Before the experiment, visual acuity was tested with the Freiburg Acuity test, and a value of 1 had to be reached with both eyes open40. Nineteen participants were right-handed, one was left-handed. All participants provided written informed consent before the experiment. The experimental procedures complied with the Declaration of Helsinki and were approved by the regional ethics board (CER-VD, Protocol Nr. 2016–00060).
Stimuli, display and procedures
EEG data were recorded while participants performed a face detection and a motion discrimination task (see Fig. 1). The order of the two tasks was counterbalanced across participants.
Face stimuli were female and male faces (4° × 4° of visual angle) taken from online repositories and cropped with a Gaussian kernel to smooth the borders. Scrambled images were obtained by fully randomizing the phase spectra of the original images41. In the face detection task, each trial lasted 1.2 s and started with a blank screen (500 ms). After the blank screen, one image (either a face or a scrambled face) was presented at the center of the screen for 200 ms, and participants had the remaining 1000 ms to respond. The task was to report whether a face was seen or not (yes/no task) by pressing one of two buttons on a response box with the right hand. After the response and a random interval (600 to 900 ms), a new trial began. The experiment consisted of four blocks of 150 trials each, for a total of 600 trials, i.e., 300 with faces and 300 with scrambled faces. Faces and scrambled faces were randomly interleaved across trials.
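For orientation, phase scrambling of this kind can be implemented in a few lines; the sketch below is our own minimal illustration of the technique (it is not the authors' original stimulus-generation code), assuming a grayscale face image loaded as a 2D float array.

```python
# Minimal sketch of phase-spectrum scrambling: keep the amplitude spectrum of the
# face image and replace its phase with the phase of a white-noise image.
import numpy as np

def phase_scramble(img, rng=None):
    """Return a phase-scrambled version of a 2D grayscale image."""
    rng = np.random.default_rng() if rng is None else rng
    amplitude = np.abs(np.fft.fft2(img))
    # Phase of a white-noise image is (nearly) conjugate-symmetric, so the
    # inverse FFT is real up to numerical error.
    noise_phase = np.angle(np.fft.fft2(rng.standard_normal(img.shape)))
    scrambled = np.real(np.fft.ifft2(amplitude * np.exp(1j * noise_phase)))
    # Rescale to the intensity range of the original image for display.
    return np.interp(scrambled,
                     (scrambled.min(), scrambled.max()),
                     (img.min(), img.max()))
```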
In the motion discrimination task, motion stimuli were dot kinematograms presented on a circular frame at the center of the screen (dot field size = 8°; dot size = 12 pixels; lifetime = 10 frames; average number of dots inside the field = 75; dot speed in displacements per frame = 0.01°; mean dot luminance = 50%). For coherent motion, 80% of the dots were moving toward either the left or right, with the remaining 20% moving in random directions. For incoherent motion stimuli, all dots moved randomly. Each trial started with a blank interval of 500 ms followed by a centrally presented stimulus for 300 ms (Fig. 1). Participants discriminated whether the presented motion was coherent or incoherent by pressing one of two buttons of a response box. After the response there was a random interval (from 600 to 900 ms) before the next trial began. There were four blocks of 150 trials each, for a total of 600 trials (300 with coherent motion). The two conditions were randomly intermixed within each block.
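A random dot kinematogram with approximately these parameters could be constructed in PsychoPy roughly as follows. This is a sketch under assumed parameter mappings (window size, monitor calibration and exact values are illustrative), not the authors' stimulus script.

```python
# Illustrative coherent-motion dot kinematogram using psychopy.visual.DotStim.
from psychopy import visual, core

win = visual.Window(size=(1920, 1080), monitor='testMonitor', units='deg')

dots = visual.DotStim(
    win,
    units='deg',
    fieldShape='circle',
    fieldSize=8,        # 8 deg circular dot field
    nDots=75,           # approximate number of dots in the field
    dotSize=12,         # dot size in pixels
    dotLife=10,         # dot lifetime in frames
    speed=0.01,         # displacement per frame (deg)
    coherence=0.8,      # 80% of dots move in the signal direction
    dir=180,            # leftward motion; use dir=0 for rightward
)

clock = core.Clock()
while clock.getTime() < 0.3:    # 300 ms stimulus presentation
    dots.draw()
    win.flip()
win.close()
```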
Stimuli were generated using Psychopy42,43 and presented on a VIEWPixx/3D display system (1920 × 1080 pixels, refresh rate of 100 Hz). Responses were collected using a ResponsePixx response box (VPixx technologies).
EEG recording and preprocessing
EEG data were recorded at a sampling rate of 2048 Hz with a 128-channel Biosemi Active Two EEG system (Biosemi, Amsterdam, The Netherlands) in a dimly lit and electrically shielded room. Signal quality was ensured by monitoring and maintaining the offset between the active electrodes and the Common Mode Sense - Driven Right Leg (CMS-DRL) feedback loop under a standard value of ± 20 mV. After each recording session, individual 3D electrode positions were digitized using an ultrasound motion capture system (Zebris Medical GmbH).
Offline, data were preprocessed using EEGLAB 14.1.144. EEG data were downsampled to 250 Hz and detrended (antialiasing filter: cut-off = 112.5 Hz, bandwidth = 50 Hz; detrending below 1 Hz). Line noise (50 Hz) was removed via spectrum interpolation45. Data were then segmented into epochs time-locked to stimulus onset, from −1500 to 1000 ms, in both tasks. Data from participants 05 (face and motion tasks) and 15 (motion task) were discarded due to excessive noise and not processed further. Bad channels and epochs were identified and removed before further preprocessing. Remaining physiological artifacts were isolated using an independent component analysis (ICA) decomposition (FastICA). Bad components were labelled by combining the output of a machine-learning classifier (MARA, Multiple Artifact Rejection Algorithm in EEGLAB) with the criterion of >90% of total variance explained, and were removed manually. Bad channels were then interpolated using the nearest-neighbor spline method and data were re-referenced to the average reference.
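For readers working in Python, the sketch below is a rough MNE-Python analogue of these steps, provided only for orientation; the released derivatives were produced with the EEGLAB pipeline described above, and file paths and parameters here are indicative.

```python
# Rough MNE-Python analogue of the EEGLAB preprocessing described in the text.
import mne

raw = mne.io.read_raw_bdf('sub-01_task-faces_eeg.bdf', preload=True)  # indicative path
raw.resample(250)                                   # downsample to 250 Hz
raw.filter(l_freq=1.0, h_freq=None)                 # remove slow drifts (< 1 Hz)
raw.notch_filter(freqs=50, method='spectrum_fit')   # line-noise handling

events = mne.find_events(raw)                       # triggers from the status channel
epochs = mne.Epochs(raw, events, tmin=-1.5, tmax=1.0,
                    baseline=None, preload=True)

ica = mne.preprocessing.ICA(n_components=0.9, method='fastica')
ica.fit(epochs)
# ... inspect components, mark ocular/muscle sources, then:
# ica.apply(epochs)

epochs.set_eeg_reference('average')                 # average reference
```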
EEG source imaging
EEG source imaging was performed using Cartool (v3.80)46 and custom-made scripts (see the Matlab and Python examples in the code/ directory). Source reconstruction was based on the Cartool-segmented individual T1w MRI data, the co-registered individual electrode positions, and the LORETA47 and LAURA15 algorithms (regularization = 6; spherical model with anatomical constraints, LSMAC). Leadfields were calculated for each of the approximately 5000 freely oriented dipoles, while limiting the solution space to voxels within the gray matter mask46 provided by CMP3.
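Once an inverse matrix has been loaded (reading Cartool's binary .is format is handled by the scripts in code/), applying it is a simple linear projection. The sketch below is a conceptual illustration with placeholder arrays and dimensions, not the provided reconstruction code.

```python
# Conceptual sketch: applying a precomputed linear inverse operator to one EEG epoch.
import numpy as np

n_sources, n_channels, n_times = 5000, 128, 626            # illustrative dimensions
inverse_operator = np.random.randn(3 * n_sources, n_channels)  # placeholder for the .is matrix
eeg_epoch = np.random.randn(n_channels, n_times)                # placeholder for one epoch

# Each source point has three dipole components (x, y, z).
source_activity = inverse_operator @ eeg_epoch                   # (3 * n_sources, n_times)
source_activity = source_activity.reshape(n_sources, 3, n_times)
dipole_norm = np.linalg.norm(source_activity, axis=1)            # magnitude per source point
```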
MRI recording and processing
MR data from the same 20 subjects were acquired on a General Electric Discovery MR750 3 T clinical MRI scanner at the cantonal hospital in Fribourg, Switzerland, using a 32-channel head coil. The acquisition included anatomical T1-weighted images and DTI. T1-weighted images were acquired as magnetization-prepared rapid gradient-echo (MPRAGE) volumes using a COR FSPGR BRAVO pulse sequence (flip angle = 9°; echo time = 2.8 ms; repetition time = 7300 ms; inversion time = 0.9 s; FOV = 220 mm; matrix size = 256 × 256; number of slices = 276; slice thickness = 1 mm; in-plane resolution = 0.9 × 0.9 mm2). DTI data were acquired with a spin-echo single-shot EPI pulse sequence and a diffusion-sensitizing gradient set of 30 different directions plus 5 diffusion-free B0 scans (echo time = 87 ms; repetition time = 8000 ms; interleaved slice order; b-value = 1000 s/mm2; FOV = 260 mm; matrix size = 128 × 128; number of slices = 60; slice thickness = 2.0 mm; slice spacing = 0.2 mm; in-plane resolution = 2.0 × 2.0 mm2).
Processing of all T1w and DTI data was performed using the Connectome Mapper v3.0.0-beta-RC1 pipelines37. All T1w scans were resampled to 1 mm isotropic resolution, from which gray and white matter were segmented using FreeSurfer 6.0.148 and parcellated into 83 cortical and subcortical areas49. The parcels were then further subdivided, following the method of Cammoun et al.50, into 129, 234, 463 and 1015 approximately equally-sized parcels according to the Lausanne anatomical atlas. DTI data were corrected for motion and eddy current distortions using mcflirt and eddy_correct from FSL 5.0.9 and resampled to 1 mm isotropic resolution using mrconvert from MRtrix 3.0.0-RC1. Diffusion directions per voxel were then reconstructed using the constrained spherical deconvolution algorithm implemented in MRtrix 3.0.0-RC151 with a maximum spherical harmonic order of 4, enabling the estimation of multiple fiber directions per voxel.
For sharing purposes, all raw T1w and DTI data were anonymized during BIDS conversion and all anatomical T1w data were de-identified by removing facial features using Quickshear52.
Structural connectomes
For each participant, structural connectivity matrices were estimated from the reconstructed fiber orientation distribution (FOD) image using the SD_stream deterministic streamline tractography algorithm implemented in MRtrix 3.0.0-RC151. Fiber streamline reconstruction started from spatially random seeds in the white matter and continued until 1 million fiber streamlines had been reconstructed. At each streamline step of 0.5 mm, the local FOD was sampled and, starting from the current streamline tangent orientation, the orientation of the nearest FOD amplitude peak was estimated via Newton optimization on the sphere. Streamlines were stopped if the change in direction exceeded 45 degrees, and individual streamlines were terminated when both ends left the white matter mask. Fibers with a length outside the 5 mm to 200 mm range were discarded. Then, for each scale, the parcellation was projected to native DTI space after symmetric diffeomorphic co-registration between the T1w scan and the diffusion-free B0 image using Advanced Normalization Tools (ANTs) 2.2.0. Finally, connectivity matrices at 5 different spatial scales were built by considering all fiber streamlines connecting each pair of parcels, according to the following connectivity measures: number of fibers, fiber density, and mean and median fractional anisotropy (FA) and mean diffusivity (MD).
Data Records
The VEPCON dataset is available via the OpenNeuro repository31 and is fully BIDS-compatible (v1.4.1). Below, we describe all data records following the directory structure.
The main directory contains descriptor files detailing the dataset and the age and sex of each participant. Subfolders (sub-<label>/) hold the raw EEG data (eeg/) in .bdf format, as well as the T1w MRI images (anat/) and diffusion weighted images (DWI) (dwi/) in compressed NIfTI format, with acquisition parameters stored in the associated .json sidecar files. For the raw EEG data, timing and other event properties are stored in *_task-<label>_events.tsv files with corresponding column descriptors (*_task-<label>_events.json), for each task (task-faces and task-motion). The events file contains a table listing, for each single trial, the stimulus onset relative to recording onset (s), condition specifiers, the response made, the response evaluation, the reaction time, and whether the trial was discarded in preprocessing. Trials with behavioral errors or reaction times faster than 200 ms were marked as outliers. The digitized electrode positions (x, y, z; mm) are listed in the *_electrodes.tsv file, and channel status information (good/bad) is listed in the *_channels.tsv file.
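As a starting point, the raw EEG and the trial-level tables can be loaded as in the sketch below; file paths and the column names used for filtering are indicative (the exact column names are documented in the *_events.json sidecars).

```python
# Sketch of loading the raw EEG and single-trial events for one subject and task.
import mne
import pandas as pd

raw = mne.io.read_raw_bdf('sub-01/eeg/sub-01_task-faces_eeg.bdf', preload=True)
events = pd.read_csv('sub-01/eeg/sub-01_task-faces_events.tsv', sep='\t')

electrodes = pd.read_csv('sub-01/eeg/sub-01_electrodes.tsv', sep='\t')   # x, y, z in mm
channels = pd.read_csv('sub-01/eeg/sub-01_task-faces_channels.tsv', sep='\t')  # good/bad status

# Example: keep trials that survived preprocessing and were answered correctly
# ('bad_epoch' and 'response_evaluation' are indicative column names).
good_trials = events[(events['bad_epoch'] == 0) &
                     (events['response_evaluation'] == 'correct')]
```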
The derivatives/ directory includes a dataset_description.json file for each derivatives dataset (mriqc, cmp, cartool, eeglab) at the root level. Subject- and datatype-specific derivatives are housed within each subject's datatype directory (sub-<label>/<datatype>/), where <datatype> can be "anat", "dwi", "eeg", "xfm" or "connectivity". The output reports generated by MRIQC and the CMP3 derivatives of anatomical and diffusion MRI are in the mriqc-<version> and cmp-<version> folders, where the version of the software used is encoded by the <version> label (e.g., cmp-v3.0.0-beta-RC1). EEG derivatives obtained with EEGLAB (eeglab-<version>) and Cartool (cartool-<version>) are contained in their respective folders and are organized to comply with BIDS to the fullest possible extent (specifications for some of these files are currently under development).
In the eeglab/ directory, each participant's eeg/ folder contains the preprocessed single-trial epochs (.fdt, .set). The task-faces files contain epochs for face stimuli and control (scrambled) stimuli, and the task-motion files contain epochs for coherent and incoherent motion stimuli. The directory also holds summaries of preprocessing and data cleaning for the face and motion tasks in two separate .tsv files, which list for each participant the proportion of channels, epochs and ICA components removed during preprocessing.
In the cartool/ directory, each participant's eeg/ folder holds a text file (.xyz) with the individual electrode positions in mm, co-registered to the participant's head (x, y, z, Biosemi electrode name). The .spi file is a similar text file listing the x, y, z coordinates and a text label for each source point for which an inverse solution was calculated. The inverse solution matrices for LAURA and LORETA15,47 are in the respective .is files; they map observed patterns of EEG potentials to a distribution of 3D dipoles across source points. The rois/ folder contains files that indicate which source points belong to which area (region of interest, ROI) at each parcellation level, in a Cartool-readable (.rois) and a Python-readable (.pickle.rois) version.
The cmp/ directory contains anatomical, diffusion and structural connectivity data for each participant. All T1w-derived data are placed in the anat/ folder, which includes the brain segmentations and parcellations in the native T1w space and in the native DTI space (_space-DWI_). All DTI-derived data are placed in the dwi/ folder, which includes the preprocessed DTI, the FOD image and the final tractogram used to build the connectivity matrices. The connectivity/ folder holds the connectomes at each parcellation level, in Matlab-readable (.mat) and Python-readable (.gpickle) versions. Each connectome file contains the following metrics: number_of_fibers, fiber_density, mean_FA and median_FA, mean_MD and median_MD. The transformations from the native T1w space to the native DTI space that were applied to the parcellations are stored in the xfm/ folder. The generated .pdf in the group/ folder visualizes each individual's connectome using the number-of-fibers metric, at parcellation scale 1.
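The Python-readable connectomes can be loaded, for example, with NetworkX; the file name below is only indicative of the layout in the connectivity/ folder, and read_gpickle requires networkx < 3.0.

```python
# Sketch of loading a structural connectome (.gpickle) and extracting weighted
# adjacency matrices for two of the stored connectivity metrics.
import networkx as nx

fname = 'sub-01_connectome_scale1.gpickle'    # indicative; see the connectivity/ folder
G = nx.read_gpickle(fname)

fiber_count = nx.to_numpy_array(G, weight='number_of_fibers')
fiber_density = nx.to_numpy_array(G, weight='fiber_density')
# Other edge attributes: mean_FA, median_FA, mean_MD, median_MD.
```

The Matlab-readable .mat versions contain the same metrics and can be loaded with scipy.io.loadmat for users who prefer a plain matrix representation.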
The code/ directory provides code for the preprocessing and derivation steps for MRI and DWI data, as well as sample scripts to derive time-series of activity per ROI from the preprocessed EEG data in Python and Matlab. Each code folder has its own README file with a more specific user guide and documentation to get started. The MRIQC-Docker/ folder holds the code to run MRIQC on all participants; all outputs generated by MRIQC to support quality assessment of the T1w data can be found in the same directory53. Code to run Connectome Mapper 3 on all participants can be found in the ConnectomeMapper-Docker/ folder. The code to deface the anatomical MRIs52 is provided in the Deface-Python/ folder, together with the generated log file and a report in PDF format. In Source-Reconstruction-Python/, the main.py script performs all steps of source reconstruction, with comments that explain each step; utils.py contains all functions needed to run main.py. The Source-Reconstruction-Matlab/ folder provides the equivalent Matlab code, with the ESI_RECONSTRUCTION.m script performing all steps of source reconstruction, and dependencies and utilities as separate files.
Technical Validation
Behavioral analysis
Analysis of proportion correct and reaction times showed that participants behaved according to task instructions. In the face detection task, the average accuracy was 97 ± 2% correct responses, with mean reaction times of 501 ± 60 ms. Trials with behavioral errors or reaction times faster than 200 ms were marked as outliers (mean proportion of outliers 0.03 ± 0.02).
In the motion discrimination task, the average accuracy was 90 ± 14% and mean reaction times were 680 ± 90 ms. The mean proportion of outliers was 0.10 ± 0.14. One participant (number 16) inverted the response keys for several trials in the motion task, leading to an outlier accuracy value (36% of correct responses).
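These summary statistics can be recomputed per participant from the events files; the sketch below is illustrative, and the column names and reaction-time units are assumptions that should be checked against the *_events.json sidecars.

```python
# Sketch of recomputing accuracy, mean RT and the outlier proportion for one
# subject and task from the single-trial events table (column names indicative).
import pandas as pd

events = pd.read_csv('sub-01/eeg/sub-01_task-faces_events.tsv', sep='\t')

correct = events['response_evaluation'] == 'correct'
accuracy = correct.mean()
mean_rt = events.loc[correct, 'reaction_time'].mean()   # assumed to be in seconds

# Outliers: behavioral errors or anticipatory responses faster than 200 ms.
outliers = (~correct) | (events['reaction_time'] < 0.2)

print(f'accuracy: {accuracy:.2%}, mean RT: {mean_rt * 1000:.0f} ms, '
      f'outliers: {outliers.mean():.2%}')
```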
VEPs
EEG data were visually inspected and noisy trials and channels were excluded from further analysis (see EEG recording and preprocessing). Remaining ocular, muscle and other artifacts were removed using ICA. For the face task, the average proportion of channels removed across participants was 0.12 ± 0.07 (range 0.02–0.28), the mean proportion of epochs removed due to non-stereotyped artifacts, peristimulus eye blinks and eye movements was 0.04 ± 0.06 (range 0.002–0.26), and the proportion of ICA components removed was 0.05 ± 0.03 (range 0.01–0.15). For the motion task, the average proportion of channels removed was 0.14 ± 0.06 (range 0.02–0.23), the average proportion of epochs removed was 0.03 ± 0.03 (range 0.005–0.13), and the proportion of ICA components removed was 0.05 ± 0.04 (range 0.008–0.13).
Figure 2 shows the grand-average VEPs for the Face, Scrambled Face, Coherent Motion and Incoherent Motion conditions; the evoked responses are robust, with components consistent with the existing literature.
MRI and connectomes
After recording, MRI data were visually inspected and checked for neurological anomalies by a radiologist. The quality of the T1w data was further assessed quantitatively by running MRIQC v0.16.1, an automated and robust quality-control tool for T1w data that derives a set of 56 image quality metrics characterizing, among others, noise, motion and imaging artifacts53. Results were summarized in reports at both the individual and the group level, which are included in the mriqc/ derivatives folder.
De-identification of T1w data was checked individually and supported by the creation of a PDF report that is available in the code/ folder.
For each participant, the quality of the output of each processing step of the CMP3 pipelines was assessed using its graphical user interface: we individually inspected the FreeSurfer outputs, the brain parcellations, the DTI data after preprocessing, the co-registration of the T1w-derived parcellations with the DTI data, the estimated FOD image, the reconstructed tractogram, and the reconstructed connectomes. Visual inspection did not reveal any artifacts.
Usage Notes
For optimal sharing and re-usability, the raw data conform to the BIDS standards for MRI38 and EEG39.
The Python and Matlab example code in Source-Reconstruction-Python/ and Source-Reconstruction-Matlab/ shows how to import the various raw and derivative files, which ensures compatibility with commonly used EEG and MRI software, including MNE-Python54, NeuroPycon55, FieldTrip56 and SPM57. The code also shows how to calculate a single time-series of activity from all source points contained in a region: source activity of the approximately 5000 freely oriented dipoles is extracted for the source points inside each cortical area, as defined by the parcellation, and projected onto a single representative direction using singular-value decomposition32.
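The core of this SVD-based collapse is illustrated below with placeholder data; it is our own minimal sketch of the approach of Rubega et al.32, not the provided scripts, which additionally handle file reading and the full set of ROIs.

```python
# Sketch: collapse the source points of one ROI into a single representative
# time series using the singular-value decomposition.
import numpy as np

# roi_sources: dipole time courses for the source points of one ROI, with the
# x, y, z components stacked along the first axis -> shape (n_points * 3, n_times).
roi_sources = np.random.randn(120, 626)        # placeholder data

# The first left singular vector captures the dominant spatial pattern across the
# ROI; the corresponding right singular vector gives one time series for the ROI.
U, s, Vt = np.linalg.svd(roi_sources, full_matrices=False)
roi_timeseries = s[0] * Vt[0]                  # representative ROI activity over time
```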
For creating new inverse solutions, new forward models can be created from the T1w images, using the individual electrode coordinate files. When new leadfields are generated using the defaced MRIs, small differences can occur, but are unlikely to pose problems for the intended reuse purposes58.
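As one possible route, a new EEG forward model can be built with MNE-Python from the FreeSurfer output and the digitized electrode positions. The sketch below is an alternative to the Cartool/LSMAC pipeline used for the released inverse solutions, with indicative paths and parameters; BEM surfaces must first be created (e.g., with mne.bem.make_watershed_bem), and the co-registration transform ('trans') obtained, e.g., with mne.gui.coregistration.

```python
# Hedged sketch of building a new EEG forward model with MNE-Python.
import mne

subjects_dir = 'derivatives/freesurfer'     # indicative location of FreeSurfer output
subject = 'sub-01'

src = mne.setup_source_space(subject, spacing='oct6', subjects_dir=subjects_dir)
model = mne.make_bem_model(subject, ico=4, conductivity=(0.3, 0.006, 0.3),
                           subjects_dir=subjects_dir)
bem = mne.make_bem_solution(model)

raw = mne.io.read_raw_bdf('sub-01/eeg/sub-01_task-faces_eeg.bdf')
# A montage built from the *_electrodes.tsv coordinates must be set on raw.info
# before computing the forward solution.
fwd = mne.make_forward_solution(raw.info, trans='sub-01-trans.fif',
                                src=src, bem=bem, eeg=True, meg=False)
```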
Code availability
The dataset contains a code/ folder with the scripts and .ini configuration files used to generate each derivative. CMP3 is freely available at https://connectome-mapper-3.readthedocs.io, and the Cartool software via https://sites.google.com/site/cartoolcommunity.
References
Lopes da Silva, F. EEG and MEG: Relevance to Neuroscience. Neuron 80, 1112–1128 (2013).
Michel, C. M. & Murray, M. M. Towards the utilization of EEG as a brain imaging tool. NeuroImage 61, 371–385 (2012).
Buzsáki, G., Anastassiou, C. A. & Koch, C. The origin of extracellular fields and currents — EEG, ECoG, LFP and spikes. Nature Reviews Neuroscience 13, 407–420 (2012).
Nunez, P. L. & Srinivasan, R. Electric Fields of the Brain: The Neurophysics of Eeg. (Oxford University Press, 2006).
Di Russo, F., Martínez, A., Sereno, M. I., Pitzalis, S. & Hillyard, S. A. Cortical sources of the early components of the visual evoked potential. Hum. Brain Mapp. 15, 95–111 (2002).
Lehmann, D. & Skrandies, W. Multichannel evoked potential fields show different properties of human upper and lower hemiretina systems. Experimental Brain Research 35, 151–159 (1979).
Foxe, J. J. & Simpson, G. V. Flow of activation from V1 to frontal cortex in humans. A framework for defining ‘early’ visual processing. Exp Brain Res 142, 139–150 (2002).
Itier, R. J. & Taylor, M. J. Source analysis of the N170 to faces and objects. Neuroreport 15, 1261 (2004).
Plomp, G., Michel, C. M. & Herzog, M. H. Electrical source dynamics in three functional localizer paradigms. NeuroImage 53, 257–267 (2010).
Mancuso, G. & Plomp, G. Neural dynamics of cue reliability in perceptual decisions. Journal of Vision 20, 23–23 (2020).
Philiastides, M. G. & Sajda, P. Temporal Characterization of the Neural Correlates of Perceptual Decision Making in the Human Brain. Cereb Cortex 16, 509–518 (2006).
Picton, T. W. The P300 wave of the human event-related potential. J Clin Neurophysiol 9, 456–479 (1992).
Makeig, S. et al. Dynamic Brain Sources of Visual Evoked Responses. Science 295, 690–694 (2002).
Pfurtscheller, G. Spatiotemporal analysis of alpha frequency components with the ERD technique. Brain Topogr 2, 3–8 (1989).
Grave de Peralta Menendez, R., Murray, M. M., Michel, C. M., Martuzzi, R. & Gonzalez Andino, S. L. Electrical neuroimaging based on biophysical constraints. NeuroImage 21, 527–539 (2004).
López, J. D., Litvak, V., Espinosa, J. J., Friston, K. & Barnes, G. R. Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM. Neuroimage 84, 476–487 (2014).
Michel, C. M. et al. EEG source imaging. Clinical Neurophysiology 115, 2195–2222 (2004).
Uutela, K., Hämäläinen, M. & Somersalo, E. Visualization of Magnetoencephalographic Data Using Minimum Current Estimates. NeuroImage 10, 173–180 (1999).
Acar, Z. A. & Makeig, S. Effects of Forward Model Errors on EEG Source Localization. Brain Topogr 26, 378–396 (2013).
Dalal, S. S. et al. Five-dimensional neuroimaging: Localization of the time–frequency dynamics of cortical activity. NeuroImage 40, 1686–1700 (2008).
He, B. et al. Electrophysiological Brain Connectivity: Theory and Implementation. IEEE Transactions on Biomedical Engineering 1–1, https://doi.org/10.1109/TBME.2019.2913928 (2019).
Mahjoory, K. et al. Consistency of EEG source localization and connectivity estimates. NeuroImage 152, 590–601 (2017).
Bassett, D. S. & Sporns, O. Network neuroscience. Nat Neurosci 20, 353–364 (2017).
Hagmann, P. et al. Mapping the Structural Core of Human Cerebral Cortex. PLOS Biology 6, e159 (2008).
Markov, N. T. et al. A Weighted and Directed Interareal Connectivity Matrix for Macaque. Cereb. Cortex 24, 17–36 (2014).
Sporns, O. Contributions and challenges for network models in cognitive neuroscience. Nature Neuroscience 17, 652–660 (2014).
Bammer, R. Basic principles of diffusion-weighted imaging. European Journal of Radiology 45, 169–184 (2003).
Hagmann, P. et al. Understanding Diffusion MR Imaging Techniques: From Scalar Diffusion-weighted Imaging to Diffusion Tensor Imaging and Beyond. RadioGraphics 26, S205–S223 (2006).
Breakspear, M. Dynamic models of large-scale brain activity. Nat Neurosci 20, 340–352 (2017).
Honey, C. J. et al. Predicting human resting-state functional connectivity from structural connectivity. PNAS 106, 2035–2040 (2009).
Pascucci, D. et al. VEPCON: Source imaging of high-density visual evoked potentials with multi-scale brain parcellations and connectomes. OpenNeuro. https://doi.org/10.18112/openneuro.ds003505.v1.0.3 (2021).
Rubega, M. et al. Estimating EEG Source Dipole Orientation Based on Singular-value Decomposition for Connectivity Analysis. Brain Topogr 32, 704–719 (2019).
Pascucci, D., Rubega, M. & Plomp, G. Modeling time-varying brain networks with a self-tuning optimized Kalman filter. PLOS Computational Biology 16, e1007566 (2020).
Rubega, M. et al. Time-varying effective EEG source connectivity: the optimization of model parameters*. in 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) 6438–6441, https://doi.org/10.1109/EMBC.2019.8856890 (2019).
Glomb, K. et al. Connectome spectral analysis to track EEG task dynamics on a subsecond scale. NeuroImage 221, 117137 (2020).
Rué-Queralt, J. et al. The connectome spectrum as a canonical basis for a sparse representation of fast brain activity. NeuroImage 244, 118611 (2021).
Tourbier, S. et al. connectomicslab/connectomemapper3: Connectome Mapper v3.0.0-RC4. Zenodo https://doi.org/10.5281/zenodo.4587906 (2021).
Gorgolewski, K. J. et al. The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments. Scientific Data 3, 160044 (2016).
Pernet, C. R. et al. EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Sci Data 6, 1–5 (2019).
Bach, M. The Freiburg Visual Acuity test–automatic measurement of visual acuity. Optom Vis Sci 73, 49–53 (1996).
Ales, J. M., Farzin, F., Rossion, B. & Norcia, A. M. An objective method for measuring face detection thresholds using the sweep steady-state visual evoked response. Journal of Vision 12, 18–18 (2012).
Peirce, J. W. PsychoPy—Psychophysics software in Python. Journal of Neuroscience Methods 162, 8–13 (2007).
Peirce, J. W. Generating Stimuli for Neuroscience Using PsychoPy. Front Neuroinformatics 2 (2009).
Delorme, A. & Makeig, S. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. Journal of Neuroscience Methods 134, 9–21 (2004).
Leske, S. & Dalal, S. S. Reducing power line noise in EEG and MEG data via spectrum interpolation. Neuroimage 189, 763–776 (2019).
Brunet, D., Murray, M. M. & Michel, C. M. Spatiotemporal Analysis of Multichannel EEG: CARTOOL. Intell. Neuroscience 2011, 2:1–2:15 (2011).
Pascual-Marqui, R. D., Michel, C. M. & Lehmann, D. Low resolution electromagnetic tomography: a new method for localizing electrical activity in the brain. Int J Psychophysiol 18, 49–65 (1994).
Fischl, B. FreeSurfer. NeuroImage 62, 774–781 (2012).
Desikan, R. S. et al. An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest. NeuroImage 31, 968–980 (2006).
Cammoun, L. et al. Mapping the human connectome at multiple scales with diffusion spectrum MRI. J Neurosci Methods 203, 386–397 (2012).
Tournier, J.-D. et al. MRtrix3: A fast, flexible and open software framework for medical image processing and visualisation. NeuroImage 202, 116137 (2019).
Schimke, N. & Hale, J. Quickshear defacing for neuroimages. in Proceedings of the 2nd USENIX conference on Health security and privacy 11 (USENIX Association, 2011).
Esteban, O. et al. MRIQC: Advancing the automatic prediction of image quality in MRI from unseen sites. PLOS ONE 12, e0184661 (2017).
Gramfort, A. et al. MEG and EEG data analysis with MNE-Python. Front. Neurosci. 7 (2013).
Meunier, D. et al. NeuroPycon: An open-source python toolbox for fast multi-modal and reproducible brain connectivity pipelines. NeuroImage 219, 117020 (2020).
Oostenveld, R., Fries, P., Maris, E. & Schoffelen, J.-M. FieldTrip: Open Source Software for Advanced Analysis of MEG, EEG, and Invasive Electrophysiological Data. Computational Intelligence and Neuroscience 2011, e156869, https://www.hindawi.com/journals/cin/2011/156869/ (2010).
Litvak, V. et al. EEG and MEG Data Analysis in SPM8. Computational Intelligence and Neuroscience 2011, e852961 (2011).
Mikulan, E. et al. Simultaneous human intracerebral stimulation and HD-EEG, ground-truth for source localization methods. Scientific Data 7, 127 (2020).
Acknowledgements
This research was supported by Swiss National Science Foundation grants PP00P1_183714, PP00P1_190065 and CRSII5-170873.
Author information
Contributions
Conceptualization G.P., D.P. Investigation D.P. Formal analysis D.P., S.T., J.R.Q., M.C. Data curation and Software S.T., D.P., J.R.Q. Supervision and Funding acquisition G.P., P.H. Writing original draft G.P, D.P, S.T. Writing review & editing All authors
Ethics declarations
Competing interests
The authors declare no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
The Creative Commons Public Domain Dedication waiver http://creativecommons.org/publicdomain/zero/1.0/ applies to the metadata files associated with this article.