Abstract
Hand visibility affects motor control, perception, and attention, as visual information is integrated into an internal model of somatomotor control. Spontaneous brain activity, i.e., activity at rest, in the absence of an active task, is correlated among somatomotor regions that are jointly activated during motor tasks. Recent studies suggest that spontaneous activity patterns not only replay task activation patterns but also maintain a model of the body's and environment's statistical regularities (priors), which may be used to predict upcoming behavior. Here, we test whether spontaneous activity in the human somatomotor cortex, as measured using fMRI, is modulated by visual stimuli displaying hands versus non-hand stimuli, and by the use/action they represent. A multivariate pattern analysis was performed to examine the similarity between spontaneous activity patterns and the patterns evoked by the presentation of natural hands, robot hands, gloves, or control stimuli (food). In the left somatomotor cortex, we observed a stronger (multivoxel) spatial correlation between resting state activity and the patterns evoked by natural hand pictures compared to the other stimuli. No task-rest similarity was found in the visual cortex. Spontaneous activity patterns in somatomotor brain regions thus code for the visual representation of human hands and their use.
Introduction
The hand is an active sensory organ. Daily, we rely on its physical and motor properties to grasp and manipulate objects, often under visual guidance. This ability also depends on proprioception and touch. These two sensory systems track the movement of the hand and convey information about the objects, respectively. Therefore, visual and haptic information co-occurs1 with the motor counterparts, especially during everyday manual behavior. Sensory afferents and movements are represented in the primary somatosensory (S1) and motor cortices (M1)2,3,4,5. Notably, recent studies in healthy individuals showed non-afferent processing of visual stimuli within the primary somatosensory cortex6. It is unknown, however, whether such multisensory response maps reflect the co-occurrence statistics of natural manual behavior. Having “access” to the visual and sensorimotor properties of the hand can be an efficient strategy for the brain to rapidly interact with the features of everyday objects. Recent studies show that visual inputs can reach the motor cortex at a relatively short latency and that the connection is facilitated during visuomotor stimuli, possibly contributing to visuomotor integration7. The perception of static body stimuli suggestive of fluent movement recruits the motor system, as shown by increased blood-oxygen-level-dependent (BOLD) responses in M1 and the supplementary motor area (SMA) and increased functional connectivity between the two areas8.
Recent theoretical and empirical studies9,10,11 suggest that frequent environmental stimuli and body movements (statistical regularities), also referred to as prior experience12,13, may be coded in resting brain activity. The spontaneous activity observed when the participant lies quietly at rest, without any sensory input or motor output14, may have a role in encoding such a multisensory representation of the hand more than that of other biological stimuli. Previous electrophysiological and neuroimaging studies show that, rather than being random noise, spontaneous activity is highly structured in space and time (for reviews15,16). However, there is still no consensus on its functional role and underlying computations.
The first systematic investigation addressing the spatiotemporal structure of spontaneous activity focused on the hand region of the primary somatomotor cortex. Through interregional correlations at rest, or resting state functional connectivity, the authors found that the topography of the somatomotor areas is similar to that evoked by finger movements17. One possible interpretation of Biswal’s (1995) findings is that the internal representations of the hand at rest resemble those during touch and action18. This raises the question of whether the resting (somatomotor) brain retains traces of everyday experience with the hand, whether visual, motor, or somatosensory. If this hypothesis holds, it would suggest that the internal hand model is stable, present even at rest, and multisensory, encompassing both the visual and somatomotor aspects.
Stimulus-evoked patterns are linked to spontaneous multivertex activity patterns, mainly in the stimulus’s preferred brain region19. More recently, other fMRI evidence shows that activity patterns related to hand movements replay in spontaneous activity in the motor cortex and association networks11,20. More precisely, the human motor cortex codes more frequently for ecological hand movements than for uncommon hand movements11. These findings support the idea that the co-occurrence of vision and usage of the hand may be represented in the generic patterns of the resting somatomotor cortex. Building on this body of evidence, we hypothesized that the multivariate patterns of BOLD resting state activity in the somatomotor cortex, as measured using functional magnetic resonance imaging (fMRI), retain higher similarity with the multivariate task-evoked activity patterns elicited by visual hand stimuli compared to other non-hand stimuli. If the internal model adapts to natural stimuli21, this effect should be specific to natural hands but not to hand-shaped objects (such as robot hands or gloves) or control stimuli (i.e., food). Given that intrinsic representations are specific to the stimuli coded in a given area19, we did not expect this effect in the early visual areas, which code low- to mid-level features. Differences between rest and task distributions were assessed by employing both the Kolmogorov–Smirnov (KS) statistic and a multivoxel similarity analysis11,19,20. Unlike the Kolmogorov–Smirnov test, this second analysis captures the highest similarity between the task-evoked multivoxel pattern of a category and the patterns observed on each frame of the resting scan. These two analyses confirmed that an internal representation of the static hand, not suggestive of any motion, is coded in the left somatomotor region. This representation was found in the early somatomotor areas but not in the visual cortex, suggesting that it might be employed for its inferred use. This is further confirmed by the trend analysis showing that the internal hand representation is stronger than that of the robot hand or glove.
Results
In this study, we examined whether the human somatomotor cortex codes at rest for a visual representation of the hand and its inferred use. Using fMRI, we acquired an eight-minute resting state scan in which observers kept their eyes on a fixation point without any explicit cognitive or motor task (Fig. 1A). We then presented observers with pictures of four categories of stimuli, i.e., natural hands, robot hands, gloves, and control stimuli (i.e., food) (Fig. 1D). We were interested in testing two main predictions. First, we predicted that the somatomotor cortex maintains a stronger similarity between the activity patterns evoked by hand stimuli and resting state patterns compared to those evoked by non-hand stimuli. This prediction stems from the nearly continuous vision of our hands in everyday life, compared to hand-like stimuli such as robot hands or gloves. These stimuli share visual attributes with hands but are much less common and are not parts of the human body. Second, inferred action/use may also modulate spontaneous activity in the somatomotor cortex. Robot hands perform similar actions to hands, while gloves have no autonomous motor attributes despite their relative visual similarity. To test the association between resting state spontaneous activity and task-evoked responses, we selected three separate regions of interest (ROIs, Fig. 2A): left and right somatomotor areas (i.e., precentral and postcentral gyri) and bilateral early visual areas (V1–V2–V3). The left somatomotor area was identified with a three-minute finger tapping scan (Fig. 1C) by performing a Student's t-test group analysis on all subjects. The mask was then mirrored to identify the right somatomotor area. Finally, the early visual areas were selected using a functional atlas of the visual cortex.
Correlation between resting state and task-evoked multivoxel activity in the selected ROIs
First, we extracted task-evoked and resting state multivoxel activity for each ROI and measured the overall differences between distributions of pattern similarities using the KS test. We then employed a one-way repeated-measures ANOVA on the KS statistic (i.e., the maximum difference between the task and rest empirical cumulative distribution functions, ECDFs) to assess the differences between resting state activity and the four categories. Moreover, since the KS statistic does not indicate which portions of the ECDFs differ among task categories, we performed the same analysis using only the upper 90th percentile (U90 value) of the ECDF, as done in previous studies19. Briefly, to highlight differences characterised by distributions with heavy tails, we compared the mean multivoxel activity of each stimulus category to the patterns of every time point of the resting state (Fig. 2B). This resulted in four vectors of Pearson’s r values (one per stimulus category) representing the strength of the relationship between the stimulus and rest patterns, which can be represented as four ECDFs. We compared the ECDFs, considering only the upper 90th percentile (U90 value) as a cutoff, and used a one-way repeated-measures ANOVA in each region of interest.
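For concreteness, the sketch below illustrates how such a group-level test could be run in MATLAB on a precomputed subjects-by-categories matrix of KS statistics (or U90 values). It is a minimal illustration, not the authors' code; all variable names are placeholders and the Statistics and Machine Learning Toolbox is assumed.

```matlab
% Minimal sketch (not the original analysis code): one-way repeated-measures
% ANOVA on per-subject KS statistics or U90 values, one column per category.
vals = rand(19, 4);                               % placeholder data: [subjects x categories]
t = array2table(vals, 'VariableNames', {'hand','robot','glove','food'});
within = table(categorical({'hand'; 'robot'; 'glove'; 'food'}), ...
               'VariableNames', {'Category'});
rm = fitrm(t, 'hand-food ~ 1', 'WithinDesign', within);   % within-subject design
ranovatbl = ranova(rm, 'WithinModel', 'Category');        % repeated-measures ANOVA table
disp(ranovatbl)
% Post-hoc pairwise comparisons with a Sidak-type correction, as in the paper:
multcompare(rm, 'Category', 'ComparisonType', 'dunn-sidak')
```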
Using the ANOVA on the KS statistic, significant differences (after Bonferroni-corrected threshold) between rest-task category correlations were found in the left somatomotor ROI (Fig. 3A) (F(3,54) = 4.194, p = 0.01, ηp2 = 0.189), but not in the right somatomotor ROI (Fig. 3C) (F(3,54) = 0.847, p = 0.47, ηp2 = 0.045) or the early visual areas (Fig. 3E) (F(3,54) = 1.133, p = 0.34, ηp2 = 0.059). When focusing on the upper portion of the right tail (i.e., U90 values), the significant results after Bonferroni correction were confirmed only in the left somatomotor cortex, where the hand stimuli yielded the strongest rest-task similarity compared to non-hand stimuli or objects (Fig. 3B) (F(3,54) = 4.932, p = 0.004, ηp2 = 0.215). Post-hoc comparisons using Sidak correction (p = 0.04) indicated that the hand condition (M = 0.281, SD = 0.016) was significantly higher than the food condition (M = 0.232, SD = 0.011). Instead, in the right somatomotor area, the ANOVA on U90 correlation values showed no significant main effect of condition (F(3,54) = 0.664, p = 0.578, ηp2 = 0.036), confirming that the effect was left-lateralized (Fig. 3D). Finally, as expected, there were no differences in correlation in the non-hand-preferred early visual areas (Fig. 3F), which extract the low- to mid-level visual features controlled for in our stimuli22,23; here the ANOVA showed no significant main effect of condition (F(3,54) = 1.266, p = 0.295, ηp2 = 0.066).
As a control, we also considered distribution percentiles from U1 to U99 in the three ROIs. We obtained significant effects at the extremes of the distribution in the left somatomotor area (< 30th and > 70th percentiles), suggesting that our results did not depend on the specific choice of U90 (Fig. 4). The observation that task-rest similarities in a stimulus category-preferred area emerge at the extremes of the cumulative distribution function is neither surprising nor novel. Indeed, Kim and colleagues (2020) reported effects on the positive and negative tails in the visual cortex. As noted by those authors, high positive and negative correlation coefficients indicate that the resting multivertex patterns on a given frame were strongly related (positively or negatively) to the pattern evoked by the preferred category. Here, this is true for the hand category and specifically for the left somatomotor area. Values in the left tail of the distribution (i.e., < 30th percentile) indicate an inverse, yet significant, relationship, supporting the results found in the right tail.
On the characterization of the task-rest similarity in the left somatomotor area
In the left somatomotor ROI, we followed up with a trend analysis on the U90 values to explore the ANOVA effect post hoc and test for ordered differences among stimulus categories (hand > robot > glove > food) (Fig. 5A). Here, the aim was to better understand the multimodal activity highlighted by the visually induced effect. This investigation revealed a significant trend of decreasing task-rest similarity in the multivoxel spatial pattern going from the natural hand to hand-shaped objects (e.g., glove) (F(1,18) = 8.463, p = 0.009, ηp2 = 0.320) (Fig. 5B). Finally, a searchlight analysis (radius = 6 mm) over a larger extent of the left somatomotor cortex was performed to better highlight the subregion of the postcentral and precentral gyri involved in the association between task and rest activity. Interestingly, this region falls in the postcentral gyrus in correspondence with the hand notch (Fig. 5C).
Discussion
Encoding of the hand form in the resting somatomotor region
In the case of the hand, use and visibility often co-occur, with beneficial effects on behavioral performance: hand visibility improves the accuracy of volitional movements24, reduces the perception of pain25, increases tactile perception26 and provides motor-visual regularities that support the sense of agency27. During interaction with external objects, we also rely on an intrinsic model of the body structure to mediate an understanding of position28,29. Tactile perception relies on prior knowledge rather than accurate spatiotopic representations based on current sensory input30. Other evidence suggests that the brain employs a standard posture or a Bayesian prior for guiding body-space perception and action31. For example, automatic hand postural configurations can be quantified with near-optimal Bayesian inference on somatosensory signals32. This is interesting because spontaneous activity is thought to maintain statistical regularities (priors) to anticipate and even predict environmental demands15. This hypothesis has been tested using natural visual stimuli and cognitive tasks11,19,33,34,35. More specifically, the idea is that during offline periods, the brain forms generic priors or low-dimensional representations, such as categories or synergies rather than individual instances or movements, that summarise the relative abundance of visual stimuli, objects, or motor patterns in the natural environment (statistical regularities). Interestingly, this reduced subspace of summary representations is formed along a hierarchy with the somatomotor cortices at the lower level10. Consistently, our results show task-rest pattern similarity in the cortical hand region.
The multivoxel correlation between rest and task tests whether similar spatial patterns of activity occur in a given region during visual stimulation and during the resting state. Correlation, which does not imply causation, tests the relationship between intrinsic and task-evoked activity. In this context, the results show that, at rest, the hand somatomotor region maintains a multivoxel pattern of activity that resembles the one evoked by the presentation of natural hands more closely than those evoked by control stimuli (e.g., food). We used two methods to correlate our data: the Kolmogorov–Smirnov test evaluates the two distributions as a whole, whereas in the multivoxel similarity analysis we correlated task-related patterns with those extracted from each time point of the resting state to generate cumulative distribution functions, using the upper 90th percentile as a measure of the strength of correlation between stimulus category and rest patterns for each ROI and subject. This allows us to highlight differences in the tails of the distributions. As shown in Fig. 2B, while the distribution of task-rest similarity values is centred on zero in all conditions, the tail of the distribution for the hand category is more positive (higher number of values > 90th percentile) compared to the other conditions. A control analysis performed on the lower tail, i.e., < 30th percentile (Fig. 4), supports this finding for the hand category, in agreement with previous studies11,19,20. Therefore, the somatomotor regions, acting as a central node for the processing of afferent and efferent inputs fundamental to active tactile feedback and proprioception, may retain low-dimensional representations (e.g., the body form) during offline periods, instrumental to the interaction with the environment. We often rely on the physical properties of our body (especially the hand) to grasp and manipulate objects, and the co-occurrence of sight and use contributes to generating priors tied to actual experience. According to the idea that things occurring nearly coincidentally in time are represented together in the cortex36, i.e., cutaneous, proprioceptive, and visual signals, the co-occurrence statistics of usage and visibility may be represented in the somatomotor regions. Despite diverse spatial and temporal resolutions, previous findings in humans have demonstrated the existence of preferred tuning of single neurons to visually cued non-grasp-related hand shapes in the posterior parietal cortex37. Moreover, in monkeys, a substantial number of neurons in the arm/hand region of the postcentral gyrus are activated by both somatosensory and visual stimulation38. Here, for the first time, we found that the rest-task similarity in the somatomotor cortex is driven by the hand form, and that it can be accessed through a visually cued paradigm without explicit motor processing.
Our results align with a multimodal role of M1/S1, which embodies different body/motor-related representations, including those mediated by the hands. Linguistic studies show correlates of action words in the somatotopic activation of the motor and premotor cortex (e.g.,39,40). Similarly, embodied cognition theories suggest that understanding action verbs relies on the involvement of action-related areas; this representation has been found to be body-specific. For example, right- and left-handers perform actions differently and use different brain regions for semantic representation41. In summary, the stability of spontaneous activity suggests that this set of neural signals is a possible candidate for preserving long-term models and priors of common behaviours and natural stimuli9,10. These prior representations are the result of statistical learning mechanisms that store the co-occurrence statistics of hand visibility and usage, instrumental to the exploration and manipulation of the surroundings.
From birth, humans learn to use their hands in an increasingly refined and precise fashion to interact with external objects. Spontaneous activity has been hypothesised to reflect a recapitulation of previous experiences or expectations of highly probable sensory events. More precisely, the ongoing activity could be related to the statistics of habitual cortical activations during real life, both in humans19,35,42 and in animals21,43. For example, a higher similarity between spontaneous patterns and task-evoked patterns in the motor11 and association20 cortices has been shown for natural hand sequences than for novel sequences. Based on these findings, our results can be interpreted as evidence that rest-task similarity reflects natural stimuli, or more specifically, hand-like objects compared to artificial ones. Recent studies have found that this effect is stronger in stimulus-selective regions19.
Beyond being natural, hands have sensory and motor attributes. From a visual point of view, gloves and robotic hands share with hands both size and shape, but they are non-living items with either synthetic motor properties, such as the robot, or no independent motor attributes, such as the glove. While the visual attributes may explain the similarity between rest and task activity induced by the natural hand compared to control objects (i.e., food), the inferred action/use can bias such similarities along a continuum where hands rank highest, being tied to natural movements, followed by robotic hands, which perform similar yet unnatural movements, and gloves, which have no autonomous motor attributes (Fig. 3A/C and Fig. 5B). The interplay of these factors offers an interpretation of the ranking obtained: at the top of the continuum, the natural hand, the most abundant in the environment, has the necessary visual features and motor attributes; the robot hand, though not as environmentally abundant, has the same visual features and synthetic motor attributes; the glove retains only the visual features but cannot act on its own; and finally, the food objects share neither the visual nor the motor attributes.
Studies in the visual cortex demonstrated that long-term natural experience shapes the response profile. The high-level cortical representations of these regions capture the statistics with which visual stimuli occur44,45. Furthermore, animal studies demonstrated that when visual stimuli are natural scenes, the reliability of visual neurons' responses increases and persists in the subsequent spontaneous activity; these effects are not observed with noise or flashed-bar stimuli43. In the ferret’s visual cortex, the tuning function of neurons “learns” the statistics of natural but not artificial stimuli as the animal grows21. Here, for the first time, we found evidence of visual representations encoded at rest in non-visual regions, yet in regions still specific to our hand stimuli (the hand notch area). Thus, we believe that the cumulative impact of the statistics with which natural stimuli occur during development and experience shapes the ongoing activity of the whole brain, not limited to the visual cortex. Our results are partly supported and partly contrasted by Stringer and coworkers46, who show representations of natural motor sequences at rest in the mouse visual cortex and across the forebrain. Similar to our results, they confirm that natural sequences are coded in the resting state and shape the activity of the whole brain; conversely, however, they find motor sequence representations in the visual system. The discrepancy could be explained by the fact that our stimuli did not represent any actions (i.e., static open hands), and by the fact that spontaneous activity in mice is recorded differently than in humans (mice running in darkness vs. humans staring at a cross with eyes open). Nevertheless, their results similarly show evidence of generic multimodal representations encompassing both motor and visual attributes.
Our results could alternatively be interpreted as a result of motor imagery. Early work found that motor imagery (i.e., imagining a movement without executing it) shares overlapping networks with motor performance47. However, we can exclude the possibility that our participants were engaged in hand motor imagery during the resting state scan, since it was acquired before the presentation of the visual stimulation task and participants were naive to the aim of the study.
Our study shows that, of the categories tested, the multivoxel pattern evoked by hands in the somatomotor area is the one most represented in resting state activity. This effect was lateralized to the left, not the right, somatomotor area (Fig. 5B). From a theoretical point of view, the lateralization result is well aligned with the existing literature: compared to other body parts and objects, static pictures of hands and tools have overlapping activations in the LOTC that are selectively connected to the left intraparietal sulcus and left premotor cortex48,49,50. Moreover, a body of literature shows that bilateral motor cortical activations are produced by movements of the left non-dominant hand, whereas movements of the dominant right hand induce only contralateral (left) activations51. Specifically, visuospatial orienting of attention, measured with eye movements, activates a network of premotor and parietal areas in the right hemisphere, while motor attention and selection, measured as the attention needed to redirect a hand movement, activate the left hemisphere52,53. TMS and lesion studies further support this left lateralization for motor selection, motor attention, and motor learning51,53. Effector-independent activation in the left hemisphere is also found in kinesthetic motor imagery, which activates common circuits for motion in the premotor, posterior parietal, and cerebellar regions52. More recently, Karolis and colleagues54, using fMRI, built a functional taxonomy of lateralization along four axes representing symbolic communication, perception/action, emotion, and decision-making. Along the perception/action axis, the categories of movement, finger tapping, motor observation, and touch were all found to selectively activate the sensorimotor areas of the left hemisphere. Interestingly, all these categories had the terms hand or finger among the principal components with the highest loadings.
In summary, we provide the first evidence that the ongoing activity in the left somatomotor regions maintains a long-term representation of the hand shape in the absence of any motor task or sensory stimulation. Furthermore, this result may support the representation of visually related information in M1/S1, reinforcing a multimodal role of these areas.
Limitations
The most significant limitation of our study is the use of highly controlled still images. The stimuli were presented in grayscale and embedded in noise to control for low- and mid-level visual features. This was necessary as a first step because we were looking for generic representations in resting state activity and used early visual regions as a control. According to current theoretical accounts, resting state patterns represent the statistical regularities of the environment. Therefore, future studies may employ naturalistic stimuli (for example, videos).
Our sample included only right-handed participants, similar to previous neuroimaging studies testing the relationship between spontaneous and task-evoked activity, employing motor paradigms11,17,20, visual stimulation19,34,35 or cognitive tasks55,56, to give some examples. Moreover, it is noteworthy that the sample size employed here (n = 19) is in line with, or larger than, that of these previous studies.
Another limitation is the lack of hand motion recordings. However, such recordings were beyond the aim of our study, i.e., testing the relationship between resting-state and visually evoked activity patterns, and minimal hand movements would not interfere with our results.
On a methodological note, our ROIs encompass the pre- and postcentral gyri. Even though the searchlight analysis shows that the task-rest association is spatially localized to the postcentral gyrus, further studies should isolate the hand region alone. Moreover, since kinesthetic motor imagery is effector-independent and activates the left premotor, posterior parietal, and cerebellar regions52, such a finer localization would also help rule out mental imagery as a possible explanation of our results.
Conclusion
Sensory and perceptual analysis depends not only on external stimuli but also on the coding of expected features of the surroundings, concurrent with the incoming flow of information57. Building models of the environment creates generic priors that are stable and common across individuals, yet malleable with experience and age9. If motor-sensory interactions are entrained throughout development into spontaneous cortical oscillations, our internal model must hold a reservoir of natural behaviours. Our experiment shows that spontaneous activity, which expresses the internal model despite its noisy structure, reliably encodes the visuospatial topography of the natural human hand in somatomotor areas. This suggests that the human hand acts as a prior for effective motor interaction with the external environment, allowing exploration, learning, and adaptation. In line with the malleability of the cerebral cortex in response to behavior and other input manipulations, this is, to our knowledge, the first experiment to show a visually conveyed representation of hands in resting state activity in frontoparietal somatomotor areas by examining the relationship between evoked and spontaneous activity. By measuring the correlation between evoked activity and resting-state activity, we shed light on the multimodal role of the somatomotor areas.
Materials and methods
Subjects
Twenty healthy individuals (10 females) were enrolled in the study (mean age ± SD: 29 ± 2.59 years, range 24–34 years). All subjects underwent medical interviews and examinations to rule out the history or presence of any disorders that could affect brain function and development. Participants were provided with a detailed description of all the experimental procedures and signed written informed consent. All methods were carried out in accordance with the relevant guidelines and regulations of the ethical review board. The study was conducted under a protocol approved by the Area Vasta Nord Ovest Ethics Committee (protocol n. 24579/2018). The Area Vasta Nord Ovest Ethics Committee is an independent and multidisciplinary body that is competent for the evaluation of clinical and non-clinical/no-profit studies carried out in healthcare and academic structures of the Tuscany North West Area, including the IMT School for Advanced Studies and the Tuscany Gabriele Monasterio Foundation in Massa (where the MRI data were acquired). All subjects were right-handed, as assessed with the 10-item Edinburgh Handedness Inventory58. We discarded one subject from subsequent analyses because of excessive movement artifacts in the fMRI data, leading to a final sample of nineteen subjects.
Design and stimuli
The paradigm was divided into three parts (Fig. 1). The first part included an eight-minute resting state scan (pre-task scan). During the resting state scan (Fig. 1A), subjects fixated a red cross (24 × 24 degrees) at the center of the screen (VisuaStimDigital dual-display goggles, 32 × 24 degrees of visual angle, Resonance Technology Inc.), without performing any cognitive or motor task. In the second part (Fig. 1B), we presented subjects with pictures of four categories of objects: natural hands, robot hands, gloves, and control stimuli (i.e., food). All stimuli had the same vertical orientation. Left and right hands presented from the front and from the back were randomly selected and presented. We used six different exemplars for each object category. We presented subjects with five runs lasting four minutes each. In each run, subjects attended twelve randomized blocks (four categories, three repetitions each). Each block lasted 12 s, followed by 8 s of fixation, and included 20 randomized repetitions of the six stimulus variations, each presented for 0.3 s and followed by fixation for 0.3 s. Participants performed a covert one-back working memory task, in which they had to identify repetitions of the same visual stimulus. Each run began and ended with 20 s of rest to acquire baseline fMRI activity. For the stimulus set, we used pictures of items pertaining to the four categories, which were converted to grayscale and matched for global luminance and root-mean-squared (RMS) contrast to control for low-level visual biases. The object pictures were centered and embedded in a circular pink-noise display with a fixed circumference (16 degrees) blending into a grey background (Fig. 1D). For the third part (Fig. 1C), subjects performed a three-minute finger-tapping localizer scan. We instructed them to tap their thumb on every other finger of their right dominant hand sequentially and at their own pace (blocks of 15 s of activity followed by 15 s of rest).
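To make the block timing concrete, below is a minimal Psychophysics Toolbox sketch of a single 12-s stimulation block (20 presentations of 0.3 s stimulus followed by 0.3 s fixation). It is an illustrative reconstruction, not the original presentation script; the image file names, background level, and the simplified WaitSecs-based timing are assumptions.

```matlab
% Minimal Psychtoolbox-3 sketch (illustrative, not the original script):
% one 12-s block of 20 stimulus presentations, each 0.3 s on / 0.3 s fixation.
% Image file names and screen parameters are hypothetical; precise timing
% would normally be enforced with scheduled flips rather than WaitSecs.
AssertOpenGL;
screenId = max(Screen('Screens'));
[win, rect] = Screen('OpenWindow', screenId, 128);           % grey background
imgFiles = {'hand1.png','hand2.png','hand3.png','hand4.png','hand5.png','hand6.png'};
tex = zeros(1, numel(imgFiles));
for i = 1:numel(imgFiles)
    tex(i) = Screen('MakeTexture', win, imread(imgFiles{i}));
end
order = randi(numel(tex), 1, 20);                             % 20 randomized repetitions
for k = 1:numel(order)
    Screen('DrawTexture', win, tex(order(k)));                % stimulus for 0.3 s
    Screen('Flip', win);
    WaitSecs(0.3);
    DrawFormattedText(win, '+', 'center', 'center', [255 0 0]); % red fixation cross for 0.3 s
    Screen('Flip', win);
    WaitSecs(0.3);
end
Screen('CloseAll');
```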
All the visual stimuli were presented using MR-compatible display goggles (VisuaStimDigital, Resonance Technology Inc.) covering 32 × 24 degrees of visual angle, and a PC running MATLAB (MathWorks Inc, Natick, MA, USA) and the Psychophysics Toolbox version 359.
MRI data acquisition
We used a Philips 3 T Ingenia scanner with a 32-channel phased-array coil. We used a gradient-recalled echo-planar (GRE-EPI) sequence with TR/TE = 2,000/30 ms, FA = 75°, FOV = 256 mm, acquisition matrix = 84 × 82, reconstruction matrix = 128 × 128, acquisition voxel size = 3 × 3 × 3 mm, reconstruction voxel size = 2 × 2 × 3 mm, 38 interleaved axial slices, and 240 volumes. We also acquired three-dimensional high-resolution anatomical images of the brain using a magnetization prepared rapid gradient echo sequence (MPRAGE) with TR/TE = 7/3.2 ms, FA = 9°, FOV = 224 mm, acquisition matrix = 224 × 224, acquisition and reconstruction voxel size = 1 × 1 × 1 mm, 156 sagittal slices.
Preprocessing
We used the AFNI software package60 and a standard preprocessing pipeline to preprocess the fMRI data, separately for the tasks (i.e., one-back task and finger tapping) and the resting state. We temporally aligned the runs (3dTshift), then corrected for head motion (3dvolreg), and used the transformation matrices to compute the framewise displacement, identifying time points affected by excessive motion61. We then spatially smoothed the data using a Gaussian kernel and an iterative procedure up to 6 mm Full Width at Half Maximum (3dBlurToFWHM). We then normalized the runs by dividing the intensity of each voxel by its mean over the time series and applied a multiple regression analysis (3dDeconvolve) to estimate the activation patterns for each category. The model included the four stimulus categories (hand, robot, glove, food) as regressors for the visual working memory task, and finger movement blocks for the localizer of the hand motor area. The output was a stimulus-evoked BOLD multivoxel activity pattern (beta weights) for each category. We also included head movement parameters, framewise displacement, and signal trends as nuisance variables. For the resting state data, we performed a generalized least squares time series fit to account for the nuisance regressors defined above and for signal autocorrelation, using an autoregressive moving average model (order 1; 3dREMLfit). We registered single-subject results and preprocessed rest scans to MNI152 standard space62 using nonlinear registration.
ROIs
We selected three regions of interest (ROIs; Fig. 2A) to test the association between resting state spontaneous activity and task-evoked activity. We identified the left somatomotor area after thresholding (p < 10–6 uncorrected) the finger tapping localizer group-level t-test, resulting in a small ROI (~ 5000 μL) encompassing the precentral and postcentral gyri, as also suggested by the overlap with the HCP atlas63. The center of gravity of this ROI is at MNI coordinates (−42, −24, 54). We then selected the right somatomotor area by left/right flipping the ROI mentioned above (3dLRflip). Finally, we defined the bilateral early visual cortex (V1, V2, V3) using VisFAtlas64 as a further early sensory control region.
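As an illustration of the thresholding and mirroring steps, the sketch below shows one way this could be done in MATLAB, assuming the group t-map has already been resampled to a left/right-symmetric MNI grid whose first array dimension runs left-right. In the study the mirroring was done with AFNI's 3dLRflip; the file name and helper variables here are hypothetical.

```matlab
% Illustrative sketch (not the AFNI-based procedure used in the paper).
% Assumes a group-level t-map on a left/right-symmetric MNI grid, with the
% first array dimension running along the left-right axis.
tmap  = niftiread('group_finger_tapping_tstat.nii');   % hypothetical file name
df    = 18;                                             % degrees of freedom (n - 1)
pThr  = 1e-6;                                           % uncorrected threshold from the paper
tThr  = tinv(1 - pThr, df);                             % one-tailed t cut-off
supra = tmap > tThr;                                    % suprathreshold voxels
leftSM  = supra;                                        % in practice, keep only the left-hemisphere cluster
rightSM = flip(leftSM, 1);                              % mirror across the mid-sagittal plane
                                                        % (3dLRflip in the original pipeline)
```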
Task-rest multivoxel similarity analysis
In the main analysis, we first analysed the association between resting state and each task-evoked multivoxel activity pattern using a Kolmogorov–Smirnov test. Briefly, for each ROI, we obtained two distributions of correlation coefficients: one measuring the similarities among patterns of brain activity during rest, and one representing the similarities between each stimulus category during the task (hand, robot, glove, food) and rest. We converted correlation coefficients to z-scores by means of Fisher’s r–z transformation. Using these two distributions, we performed a two-sample Kolmogorov–Smirnov test and extracted from each participant the D statistic, which represents the maximum difference between the two empirical cumulative distribution functions (ECDFs; kstest2 in MATLAB). Then, for each ROI, we performed a one-way repeated-measures ANOVA according to the four stimulus categories.
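The following minimal MATLAB sketch illustrates this per-subject step for one ROI and one stimulus category. restPatterns and taskPattern are placeholder variables standing for the preprocessed resting-state frames and the category's evoked pattern; the sketch is illustrative rather than the authors' code.

```matlab
% Minimal sketch of the per-subject Kolmogorov-Smirnov step for one ROI and
% one category (placeholder variables, illustrative only).
% restPatterns : [nFrames x nVoxels] resting-state multivoxel activity
% taskPattern  : [1 x nVoxels] task-evoked pattern (beta weights) for one category
restRestR = corr(restPatterns');                          % rest-rest pattern similarities (frames x frames)
restRestR = restRestR(triu(true(size(restRestR)), 1));    % keep unique frame pairs
taskRestR = corr(taskPattern', restPatterns')';           % task-rest similarities (one per frame)
zRestRest = atanh(restRestR);                             % Fisher r-to-z
zTaskRest = atanh(taskRestR);
[~, ~, D] = kstest2(zRestRest, zTaskRest);                % D = max difference between the two ECDFs
```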
Given that task specificity was represented by heavy-tailed distributions, we also used the multivoxel similarity analysis introduced by Kim and colleagues19. For each subject and ROI, we extracted the patterns of task-evoked activity of the four stimulus categories (hand, robot, glove, food). A total of four averaged task-evoked vectors were computed, one for each category. The length of these vectors corresponds to the number of voxels within each ROI. A vector of the same length was computed for each resting state frame. Then, we correlated (using 1-pdist2 with the 'correlation' distance in MATLAB) the z-scored multivoxel activity of the task conditions with the patterns of all resting state time points. We then converted all r values to z scores using Fisher’s r–z transformation. For each category, we then computed a cumulative distribution function that represents the strength (Pearson’s z score) of the correlation between the average multivoxel category representation and the patterns from every time point of the resting state signal. Following previous analyses11,19,20, we identified the upper 90th percentile of the distribution (U90 value) to measure task-rest similarity (Fig. 2B). This approach, recently adopted by other studies11,19,20, constitutes a representative measure of the relationship between patterns of resting state activity and those evoked by a category (averaged task-evoked vectors). The U90 value is a transparent estimate of spatial similarity, since it is the value of a correlation coefficient indicating the degree of similarity between task-evoked and resting state activity patterns. This cut-off has a statistical rationale: values below it lie closer to the mean of the distribution (mean = 0), while values far above the 90th percentile are few and may not be representative of the population. To compare our data with Kim and colleagues19, we additionally inspected the skewness (calculated as the mean of the pattern association minus its median, divided by its standard deviation) and the spread (calculated as the variance of the pattern association) of the data. In the main analysis, we performed a repeated-measures ANOVA for each ROI (stimulus category, 4 levels) and applied Sidak corrections post hoc.
As a control for the choice of the U90 value, we repeated the analysis using different cut-offs. For each category, we identified the percentiles of the cumulative distribution function from U1 to U99 to assess task-rest similarity for each subject. We then performed a separate repeated-measures ANOVA for each cut-off.
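A compact MATLAB sketch of the two preceding steps is shown below; it computes the task-rest correlations per category, the U90 values, and the full U1-U99 sweep for one subject and one ROI. Variable names are placeholders, and the sketch is illustrative rather than the authors' code.

```matlab
% Minimal sketch of the U90 similarity measure for one subject and one ROI.
% restPatterns : [nFrames x nVoxels] resting-state activity
% catPatterns  : [4 x nVoxels] averaged task-evoked vectors (hand, robot, glove, food)
r    = 1 - pdist2(zscore(catPatterns, 0, 2), zscore(restPatterns, 0, 2), 'correlation');
z    = atanh(r);                         % Fisher r-to-z, size [4 x nFrames]
U90  = prctile(z, 90, 2);                % one U90 value per category
Uall = prctile(z, 1:99, 2);              % control analysis: all cut-offs U1..U99, [4 x 99]
```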
Further follow-up analyses in the left somatomotor ROI included a trend analysis to test the a priori hypothesis of ordered differences among stimulus categories (hand > robot > glove > food). This analysis is suited to exploring, post hoc, the pattern of the ANOVA effect across conditions.
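One common way to implement such a trend test is a linear polynomial contrast over the ordered categories, evaluated per subject and then tested at the group level; the sketch below follows that approach as an assumption, since the exact implementation is not specified here. U90 stands for the per-subject matrix of similarity values.

```matlab
% Sketch of a linear trend (polynomial contrast) across the four ordered
% categories hand > robot > glove > food, applied to per-subject U90 values.
% U90 : [nSubjects x 4] matrix, columns ordered hand, robot, glove, food.
w     = [3 1 -1 -3];                    % linear contrast weights for 4 ordered levels
trend = U90 * w';                       % one contrast score per subject
[~, p, ~, stats] = ttest(trend);        % group-level test of the trend against zero
F     = stats.tstat^2;                  % equivalent to F(1, nSubjects-1) for this contrast
```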
Finally, a searchlight analysis (radius = 6 mm) was performed over a larger extent of the left somatomotor cortex, identified by performing a Student's t-test group analysis on the localizer data (p < 0.0001, uncorrected), to further explore and highlight the subregions of the postcentral and precentral gyri involved in the association between task and rest activity. Specifically, for each searchlight, we extracted the U90 in each participant and performed a one-way repeated-measures ANOVA to assess similarities between resting-state activity and the four stimulus categories. Results are reported at an uncorrected p value < 0.05.
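The sketch below shows schematically how a 6-mm-radius searchlight over the left somatomotor mask could be combined with the U90 measure for one subject. The helper function computeU90 and the data variables are hypothetical, and voxels are assumed to be isotropic.

```matlab
% Schematic searchlight sketch (radius = 6 mm) over a left somatomotor mask.
% mask      : 3-D binary mask of the searchlight volume
% restData  : [nFrames x nMaskVoxels] resting-state activity within the mask
% taskBetas : [4 x nMaskVoxels] averaged task-evoked patterns within the mask
% computeU90 is a hypothetical helper returning four U90 values (see the sketch above).
voxSize = 3;                                      % assumed isotropic voxel size in mm
radius  = 6 / voxSize;                            % searchlight radius in voxels
[ix, iy, iz] = ind2sub(size(mask), find(mask));   % coordinates of mask voxels
centers = [ix iy iz];
U90map  = nan(size(centers, 1), 4);               % hand, robot, glove, food
for c = 1:size(centers, 1)
    d = sqrt(sum((centers - centers(c, :)).^2, 2));
    inSphere = d <= radius;                       % voxels inside the sphere
    U90map(c, :) = computeU90(restData(:, inSphere), taskBetas(:, inSphere));
end
% Per-subject U90 maps then enter a one-way repeated-measures ANOVA across
% the four categories at each searchlight, as in the ROI analysis.
```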
Data availability
This work is part of a European Research Council (ERC) project. The data have been deposited at Open Science Framework (OSF) and are publicly available (https://osf.io/d9y7j/?view_only=053a3129a2ca45afb6ea7d191109c78b). Any additional information required to reanalyze the data reported in this paper is available from the lead contact upon reasonable request.
References
Ernst, M. O. & Banks, M. S. Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415, 429–433 (2002).
Penfield, W. & Boldrey, E. Somatic motor and sensory representation in the cerebral cortex of man as studied by electrical stimulation. Brain 60, 389–443 (1937).
Merzenich, M. M., Kaas, J. H., Sur, M. & Lin, C. Double representation of the body surface within cytoarchitectonic areas 3b and 1 in “SI” in the owl monkey (Aotus trivirgatus). J. Comp. Neurol. 181, 41–73 (1978).
Rizzolatti, G. & Luppino, G. The cortical motor system. Neuron 31, 889–901 (2001).
Schieber, M. H. Constraints on somatotopic organization in the primary motor cortex. J. Neurophysiol. 86, 2125–2143 (2001).
Kuehn, E., Haggard, P., Villringer, A., Pleger, B. & Sereno, M. I. Visually-driven maps in area 3b. J. Neurosci. 38, 1295–1310 (2018).
Strigaro, G. et al. Interaction between visual and motor cortex: A transcranial magnetic stimulation study. J. Physiol. 593, 2365–2377 (2015).
Orgs, G. et al. Constructing visual Perception of body movement with the motor cortex. Cereb. Cortex. 26, 440–449 (2016).
Betti, V., Penna, S. D., de Pasquale, F. & Corbetta, M. Spontaneous beta band rhythms in the predictive coding of natural stimuli. Neuroscientist 27, 184–201 (2021).
Pezzulo, G., Zorzi, M. & Corbetta, M. The secret life of predictive brains: What’s spontaneous activity for?. Trends Cogn. Sci. 25, 730–743 (2021).
Livne, T. et al. Spontaneous activity patterns in human motor cortex replay evoked activity patterns for hand movements. Sci. Rep. UK 12, 16867 (2022).
Slijper, H., Richter, J., Over, E., Smeets, J. & Frens, M. Statistics predict kinematics of hand movements during everyday activity. J. Motor. Behav. 41, 3–9 (2009).
Fiser, J., Berkes, P., Orbán, G. & Lengyel, M. Statistically optimal perception and learning: From behavior to neural representations. Trends Cogn. Sci. 14, 119–130 (2010).
Fox, M. D. & Raichle, M. E. Spontaneous fluctuations in brain activity observed with functional magnetic resonance imaging. Nat. Rev. Neurosci. 8, 700–711 (2007).
Raichle, M. E. The restless brain. Brain Conn. 1, 3–12 (2011).
Deco, G. & Corbetta, M. The dynamical balance of the brain at rest. Neuroscientist 17, 107–123 (2011).
Biswal, B., Yetkin, F. Z., Haughton, V. M. & Hyde, J. S. Functional connectivity in the motor cortex of resting human brain using echo-planar mri. Magnet. Reson. Med. 34, 537–541 (1995).
Wang, Z. et al. The relationship of anatomical and functional connectivity to resting-state connectivity in primate somatosensory cortex. Neuron 78, 1116–1126 (2013).
Kim, D., Livne, T., Metcalf, N. V., Corbetta, M. & Shulman, G. L. Spontaneously emerging patterns in human visual cortex and their functional connectivity are linked to the patterns evoked by visual stimuli. J. Neurophysiol. 124, 1343–1363 (2020).
Zhang, L., Pini, L., Kim, D., Shulman, G. L. & Corbetta, M. Spontaneous activity patterns in human attention networks code for hand movements. J. Neurosci. 43, 1976–1986 (2023).
Berkes, P., Orbán, G., Lengyel, M. & Fiser, J. Spontaneous cortical activity reveals hallmarks of an optimal internal model of the environment. Science 331, 83–87 (2011).
Burkhalter, A. & Essen, D. V. Processing of color, form and disparity information in visual areas VP and V2 of ventral extrastriate cortex in the macaque monkey. J. Neurosci. 6, 2327–2351 (1986).
Hubel, D. & Livingstone, M. Segregation of form, color, and stereopsis in primate area 18. J. Neurosci. 7, 3378–3415 (1987).
Desmurget, M., Rossetti, Y., Jordan, M., Meckler, C. & Prablanc, C. Viewing the hand prior to movement improves accuracy of pointing performed toward the unseen contralateral hand. Exp. Brain Res. 115, 180–186 (1997).
Longo, M. R., Betti, V., Aglioti, S. M. & Haggard, P. Visually induced analgesia: Seeing the body reduces pain. J. Neurosci. 29, 12125–12130 (2009).
Kennett, S., Taylor-Clarke, M. & Haggard, P. Noninformative vision improves the spatial resolution of touch in humans. Curr. Biol. 11, 1188–1191 (2001).
Wen, W. & Haggard, P. Prediction error and regularity detection underlie two dissociable mechanisms for computing the sense of agency. Cognition 195, 104074 (2020).
Longo, M. R. & Haggard, P. An implicit body representation underlying human position sense. Proc. National. Acad. Sci. 107, 11727–11732 (2010).
Longo, M. R., Azañón, E. & Haggard, P. More than skin deep: Body representation beyond primary somatosensory cortex. Neuropsychologia 48, 655–668 (2010).
Badde, S. & Heed, T. The hands’ default location guides tactile spatial selectivity. Proc. Natl. Acad. Sci. 120, e2209680120 (2023).
Romano, D. et al. Behavioral and physiological evidence of a favored hand posture in the body representation for action. Cereb. Cortex. 31, 3299–3310 (2021).
Peviani, V. C., Miller, L. E. & Medendorp, W. P. Biases in hand perception are driven by somatosensory computations, not a distorted hand model. BioRxiv https://doi.org/10.1101/2023.12.06.570330 (2024).
Spadone, S. et al. Dynamic reorganization of human resting-state networks during visuospatial attention. Proc. Nat. Acad. Sci. 112, 8112–8117 (2015).
Betti, V. et al. Natural scenes viewing alters the dynamics of functional connectivity in the human brain. Neuron 79, 782–797 (2013).
Betti, V., Corbetta, M., de Pasquale, F., Wens, V. & Penna, S. D. Topology of functional connectivity and hub dynamics in the beta band as temporal prior for natural vision in the human brain. J. Neurosci. 38, 3858–3871 (2018).
Blake, D. T., Byl, N. N. & Merzenich, M. M. Representation of the hand in the cerebral cortex. Behav. Brain Res. 135, 179–184 (2002).
Klaes, C. et al. Hand shape representations in the human posterior parietal cortex. J. Neurosci. 35, 15466–15476 (2015).
Iriki, A., Tanaka, M. & Iwamura, Y. Coding of modified body schema during tool use by macaque postcentral neurones. NeuroReport https://doi.org/10.1097/00001756-199610020-00010 (1996).
Hauk, O., Johnsrude, I. & Pulvermüller, F. Somatotopic representation of action words in human motor and premotor cortex. Neuron 41, 301–307 (2004).
Raposo, A., Moss, H. E., Stamatakis, E. A. & Tyler, L. K. Modulation of motor and premotor cortices by actions, action words and action sentences. Neuropsychologia 47, 388–396 (2009).
Willems, R. M., Hagoort, P. & Casasanto, D. Body-specific representations of action verbs. Psychol. Sci. 21, 67–74 (2009).
Strappini, F. et al. Resting-state activity in high-order visual areas as a window into natural human brain activations. Cereb. Cortex. 29, 3618–3635 (2018).
Yao, H., Shi, L., Han, F., Gao, H. & Dan, Y. Rapid learning in cortical coding of visual scenes. Nat. Neurosci. 10, 772–778 (2007).
Chan, A.W.-Y., Kravitz, D. J., Truong, S., Arizpe, J. & Baker, C. I. Cortical representations of bodies and faces are strongest in commonly experienced configurations. Nat. Neurosci. 13, 417–418 (2010).
Simoncelli, E. P. & Olshausen, B. A. Natural image statistics and neural representation. Annu. Rev. Neurosci. 24, 1193–1216 (2001).
Stringer, C. et al. Spontaneous behaviors drive multidimensional, brainwide activity. Science 364, eaav7893 (2019).
Porro, C. A. et al. Primary motor and sensory cortex activation during motor performance and motor imagery: A functional magnetic resonance imaging study. J. Neurosci. 16, 7688–7698 (1996).
Bracci, S. & Peelen, M. V. Body and object effectors: The organization of object representations in high-level visual cortex reflects body-object interactions. J. Neurosci. 33, 18247–18258 (2013).
Bracci, S., Ietswaart, M., Peelen, M. V. & Cavina-Pratesi, C. Dissociable neural responses to hands and non-hand body parts in human left extrastriate visual cortex. J. Neurophysiol. 103, 3389–3397 (2010).
Lingnau, A. & Downing, P. E. The lateral occipitotemporal cortex in action. Trends Cogn. Sci. 19, 268–277 (2015).
Rushworth, M. F. S., Krams, M. & Passingham, R. E. The attentional role of the left parietal cortex: The distinct lateralization and localization of motor attention in the human brain. J. Cogn. Neurosci. 13, 698–710 (2001).
Kuhtz-Buschbeck, J. P. et al. Effector-independent representations of simple and complex imagined finger movements: A combined fMRI and TMS study. Eur. J. Neurosci. 18, 3375–3387 (2003).
Schluter, N. D., Rushworth, M. F., Passingham, R. E. & Mills, K. R. Temporary interference in human lateral premotor cortex suggests dominance for the selection of movements. A study using transcranial magnetic stimulation. Brain 121, 785–799 (1998).
Karolis, V. R., Corbetta, M. & de Schotten, M. T. The architecture of functional lateralisation and its relationship to callosal connectivity in the human brain. Nat. Commun. 10, 1417 (2019).
Cole, M. W., Bassett, D. S., Power, J. D., Braver, T. S. & Petersen, S. E. Intrinsic and task-evoked network architectures of the human brain. Neuron 83, 238–251 (2014).
Spadone, S. et al. Spectral signature of attentional reorienting in the human brain. Neuroimage 244, 118616 (2021).
Engel, A. K., Fries, P. & Singer, W. Dynamic predictions: Oscillations and synchrony in top–down processing. Nat. Rev. Neurosci. 2, 704–716 (2001).
Oldfield, R. C. The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia 9, 97–113 (1971).
Kleiner, M. et al. What’s new in psychtoolbox-3. Perception 36, 1–16 (2007).
Cox, R. W. AFNI: Software for analysis and visualization of functional magnetic resonance neuroimages. Comput. Biomed. Res. 29, 162–173 (1996).
Power, J. D., Barnes, K. A., Snyder, A. Z., Schlaggar, B. L. & Petersen, S. E. Spurious but systematic correlations in functional connectivity MRI networks arise from subject motion. Neuroimage 59, 2142–2154 (2012).
Fonov, V., Evans, A., McKinstry, R., Almli, C. & Collins, D. Unbiased nonlinear average age-appropriate brain templates from birth to adulthood. Neuroimage 47, S102 (2009).
Glasser, M. F. et al. A multi-modal parcellation of human cerebral cortex. Nature 536, 171–178 (2016).
Rosenke, M., van Hoof, R., van den Hurk, J., Grill-Spector, K. & Goebel, R. A probabilistic functional atlas of human occipito-temporal visual cortex. Cereb. Cortex. 31, 603–619 (2020).
Acknowledgments
This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 759651) to VB. We thank Stefania Bracci for her comments on an early version of this manuscript.
Author information
Contributions
Y.E.R., A.L., P.P., E.R. and V.B. developed the study design. Y.E.R. programmed the experimental scripts. Y.E.R., G.H., and A.L. developed the pre-processing and analysis scripts. Y.E.R. and A.L. tested the participants. Y.E.R., C.P. and V.B. drafted the manuscript. E.R. and M.C. provided critical revisions. All authors revised the manuscript and approved its final version.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
El Rassi, Y., Handjaras, G., Perciballi, C. et al. A visual representation of the hand in the resting somatomotor regions of the human brain. Sci Rep 14, 18298 (2024). https://doi.org/10.1038/s41598-024-69248-z