Abstract
Visual exploration of the environment is achieved through gaze shifts, or coordinated movements of the eyes and the head. The kinematics and contributions of each component can be decoupled to fit the context of the required behavior, such as redirecting the visual axis without moving the head or rotating the head without changing the line of sight. A neural controller of these effectors, therefore, must carry a code relating to multiple muscle groups, and it must also differentiate its code based on context. In this study we tested whether the ventral premotor cortex (PMv) in the monkey exhibits a population code relating to various features of eye and head movements. We constructed three different behavioral tasks, or contexts, each with four variables, to explore whether PMv modulates its activity in accordance with these factors. We found that the task-related population code in PMv differentiates between all task-related features and conclude that PMv carries information about task-relevant features during eye and head movements. Furthermore, this code represents both lower-level (effector and movement direction) and higher-level (context) information.
Introduction
Redirection of the visual axis to an object of interest causes its image to project onto the fovea, the part of the retina with the greatest photoreceptor density, and permits visual inspection with high spatial acuity. Such changes in gaze are typically produced as a coordinated movement of the eyes-in-orbits and head-on-body. Previous reports on head and eye movements focus largely on the lawful relationship between the two effectors and less on features that allow them to be decoupled1,2,3,4,5,6. For example, we can keep eye contact and nod our head in agreement when talking to someone; or, conversely, we can keep our head still and move just our visual axis to scan our environment. Additionally, we can perform the two movements in sequence: nod first and then redirect the eyes within the orbits, or make a saccadic eye movement first and then nod. In both cases the individual eye and head movements are comparable, and yet the context for each movement is different. Even in conditions where we are not consciously forcing a dissociation of the two effectors, the interactions between eye and head movements can change depending on context6,7,8,9,10,11.
Investigations of the neural control of coordinated eye-head movements, let alone head-only movements dissociated from an accompanying gaze shift, are rare. Such studies have primarily focused on traditional oculomotor areas such as the superior colliculus (SC), the frontal eye fields (FEF), and the supplementary eye fields (SEF)5,12,13,14,15,16,17,18,19,20. However, the general features of the movement, as well as the presence of a skeletomotor effector (the head), led us to consider the role of a non-oculomotor area, the ventral premotor cortex (PMv), in the governance of head and eye movements. Stimulation of PMv produces eye and head movements that are often integrated with more complex and seemingly socially meaningful (e.g., defensive) movements of the hands and body21,22,23. These results were not presented in a traditional “eye movement” context, as the focus was largely on skeletomotor actions, and yet eye and head movements were produced. Additionally, neurophysiological work has reported modulation of PMv activity during eye and hand movements24. The results were presented from a hand–eye coordination viewpoint, rather than from an oculomotor perspective, but the parallels between hand–eye and head–eye coordination reinforce our hypothesis that PMv encodes eye and head movement properties. More directly, Fujii et al. reported that saccades can be evoked by electrical stimulation of PMv25. Finally, a preliminary, trans-synaptic rabies virus injection study reported that PMv is only three or four synapses removed from eye and neck motoneurons26, suggesting possible control over those muscle groups. Given this set of results, we reasoned that PMv may also mediate movements of the eyes and the head, including instances when the two are decoupled.
The general consensus is that PMv does not exert direct control over movements. Its inactivation produces at most a mild bias in movement direction and, in some cases, a reduced EMG response in the fingers27,28. However, PMv does mediate higher-level properties of behaviors. For example, PMv has been shown to modulate its activity based on the sequence of motor actions in grasping29, reaching30, and even in linguistic domains31. Additionally, PMv neural activity differentiates between the contexts for movements. This has been established primarily in the grasping domain, where several studies show that PMv modulates its activity differently when a particular grasp is produced under different contexts, be it performing the same grip to grasp different objects or producing the same grip for different goals, such as grasping to eat a treat vs. simply grasping32. Therefore, we hypothesized that PMv shows similar contextual, higher-level properties associated with eye and head movements.
Considering all the aforementioned information, we constructed an experimental design through which we assessed whether PMv participates in the generation of head and eye movements in a task-dependent manner. The design separated the gaze shift into individual head and eye movements performed in different orders. We found significant correlations between single-cell firing rates and task-related properties, which establishes the foundation for a PMv role in eye and head movements. However, despite the statistical modulation, we could not identify any tuning properties in the activity of individual neurons. A population-level analysis, in contrast, surprisingly revealed a robust code for each individual task feature, which was additionally verified by a standard decoder. Collectively, these results show that PMv carries a population code for eye and head movement task features, such as the identity of the effector, the order of effector movement, and the hemifield to which the movement is directed.
Methods
Data were obtained from one rhesus macaque (Macaca mulatta). All procedures were approved by the Institutional Animal Care and Use Committee at the University of Pittsburgh and were in compliance with the U.S. Public Health Service policy on the humane care and use of laboratory animals. They are also in accordance with ARRIVE guidelines.
Surgical procedures were performed under aseptic conditions. A stainless-steel head post was affixed to the skull with bone cement and reinforced with titanium screws. A Teflon-coated coil wire was implanted around the sclera of one eye, and the leads of the wire were attached to a plug that was embedded in the head post. Craniotomies were performed over the left and right PMv at coordinates estimated from MRI scans (stereotaxic coordinates: 15 mm anterior and 23 mm lateral to the right and left of the midline), and stainless-steel receptacles were placed over each opening. Behavioral and neural recording sessions were initiated following a recovery period after each surgery.
Behavioral tasks
The animal sat in a custom primate chair approximately 100 cm from an LCD TV. The magnetic field induction method was used to sense eye position with the scleral search coil implanted around the eye. Head movements were tracked by placing another Teflon-coated coil on a custom-made pedestal on top of the head. To maintain consistency, we define eye and head movements in the external reference frame, which consequently equates an eye movement to a gaze shift. The animal received feedback about his head position via a live cursor on the screen, which is different from a head-mounted laser that we used in our previous works33,34. Reward, in the form of water or juice, was delivered through a tube that moved with his head33. Stimulus display and behavioral data acquisition at 1000 Hz were managed using custom software35.
Since we hypothesize that PMv encodes both motor- and context-related signals, we designed behavioral tasks to specifically differentiate these features. Special consideration was given to modifying the sequential order of eye-only gaze shifts and head-only movements. The tasks were designed to keep the characteristics of gaze shifts and head movements comparable across different trial types while the context of the performed movement varied. Even in the most extreme case (head velocity in the eye-first and head-first tasks), the variance of behavior within each trial type was large enough, and the difference between types small enough, that much of the kinematics overlapped across trial types (peak head velocity, eye-first: 20.2 deg/s, SD 11.7 deg/s; head-first: 38.3 deg/s, SD 27.1 deg/s); we therefore considered the kinematics similar across trial types. We reinforce this notion later in the manuscript by performing a subset of comparisons on trials that were matched by peak head velocity. This design allowed us to assess whether PMv activity is associated with movement execution, task type, or a combination of both. In total, we constructed three different task types, or contexts: “eye-first”, “head-first”, and “together” (Fig. 1). These contexts were presented pseudo-randomly to approximately match the number of successful trials across all three types. All trials started with two fixation points 1° in diameter (red for head, white for gaze) appearing next to each other with no space in between. The animal positioned the head cursor within 3°–5° of the red target and directed gaze within 3° of the white target to initiate a trial.
In the “eye-first” case, the white point jumped from the center to one of four positions (r = 10°; θ = 45°, 135°, 225°, 315°). This jump instructed the animal to redirect its gaze to the new location while keeping the head directed at the initial red point. After a variable delay (150–450 ms) the red point jumped to the same eccentric location as the white target (with a slight offset so the two did not overlap). This instructed the animal to align its head with the new location while maintaining gaze on the white point (Fig. 1, top). In the “head-first” condition the order of these events was reversed (Fig. 1, middle): the animal was instructed to first move its head, followed by an eye-only gaze shift. In the “together” condition, both points jumped to the eccentric position simultaneously, and the animal was required to fixate the white point and align the head cursor with the new location (Fig. 1, bottom). No constraints were placed on the relative timing of the two effectors.
Neural data collection
Electrophysiology experiments commenced once the animal performed the behavioral tasks consistently. In an initial set of experiments, a tungsten electrode was acutely lowered using a Narishige micromanipulator with the aid of a placement grid, which guided the penetrations through holes separated by 1 mm. To locate PMv within the chamber, we inserted the electrode at various grid locations, delivered electric pulses (20–100 µA, 50–300 ms, 200–400 Hz, biphasic pulses), and observed the evoked response, if any. We identified FEF as the region where microstimulation evoked a reliable saccade. We then moved posterior one millimeter at a time until we no longer observed reliable FEF-like saccades, which indicated that we were in PMv. In our case, we did not induce any saccades outside the FEF. Since PMv is a functionally diverse region, we aimed to explore only a specific region of interest (ROI) corresponding to eye and head movements. We identified this area based on external work, notably that of Fujii et al.25, which placed our ROI approximately centered on the “spur” in the mediolateral axis, and ~ 1–4 mm posterior to the “spur”. This ROI also closely matches the polysensory zone described by Graziano and Gandhi36, which is associated with defensive movements involving both head and eyes. Electrical microstimulation of this region in other studies evoked saccade-like eye movements with high reliability24,25, although we were not able to reliably elicit saccades in this region. We suspect the lack of microstimulation-induced behavior resulted from differences in experimental design, notably the lack of head restraint. Figure 2 shows the locations of the tracks that evoked saccades (blue) and those that did not evoke an identifiable movement (x shapes) superimposed on the MRI scans used for chamber placement coordinates.
After mapping the regions available within the chamber, we used Plexon™ 16- or 24-channel linear arrays with 200-micron intercontact spacing to obtain neural recordings. We acutely lowered electrodes until we observed isolated neurons across a span of about 3 mm, or 15 channels. Well-isolated neurons were seen on one to eleven separate channels, depending on the session. The locations of the recording sites are indicated in red in Fig. 2. Microstimulation did not evoke saccades at these sites. Although we did not systematically record the muscle activity of other effectors (e.g., arms, shoulders, torso, etc.), observation of the animal through a clear plate in the primate chair revealed no evoked movements. Data filtering, acquisition, and spike detection were handled by a Grapevine Scout system (Ripple Neuro). Raw signals were high-pass filtered at 750 Hz to remove spike-irrelevant content, and spike detection was generally done using a threshold of 3×RMS above the mean. In some cases, a manual threshold was set to fine-tune spike detection. The channels with detected spikes were manually spike-sorted using MKsort (Ripple Neuro); only one unit was isolated per channel, as any additional units in this dataset were indistinguishable from noise.
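The thresholding rule can be illustrated with a minimal sketch (Python rather than the actual Grapevine/MKsort pipeline; the function name is ours, and the trace is assumed to be already high-pass filtered):

```python
import numpy as np

def detect_spike_onsets(filtered, rms_mult=3.0):
    """Flag the first sample of each excursion beyond
    mean + rms_mult * RMS on an already high-pass filtered trace."""
    mu = filtered.mean()
    rms = np.sqrt(np.mean((filtered - mu) ** 2))
    above = filtered > mu + rms_mult * rms
    # indices where the trace first crosses the threshold
    return np.flatnonzero(np.diff(above.astype(int)) == 1) + 1
```

In practice the manual-threshold cases described above would simply replace `mu + rms_mult * rms` with a hand-set value.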
Data analysis
All data were analyzed in MATLAB using custom scripts. Eye movement data were smoothed with a 5 ms square kernel to remove equipment noise. Saccade onset was detected when eye velocity exceeded 35°/s and remained over the threshold for at least 8 consecutive milliseconds. Head movement onset proved more challenging to detect, as the head rarely remained perfectly still and head velocity was generally an order of magnitude lower than eye velocity. To detect head movement, we used an epoch starting 100 ms prior to the head go-cue and ending ~ 1000 ms after it. We smoothed the velocity profile with a 30 ms square kernel and normalized it to the range of 0–1. Head movement onset was defined as the point when the normalized velocity reached a threshold of 0.3 and remained above it for 15 consecutive milliseconds. A temporal shift of −5 ms was applied to account for an apparent lag in detection. We then used visual inspection to remove any trials where automatic head or eye movement detection failed. Averages of detected eye and head movement velocities are shown in Fig. 1 (middle and right columns).
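The onset-detection rules can be sketched as follows (a Python approximation of our MATLAB scripts; function names are illustrative, and velocity traces are assumed to be pre-smoothed as described above):

```python
import numpy as np

def saccade_onset(eye_vel, thresh=35.0, min_dur=8):
    """Index of the first sample where eye velocity (deg/s, 1 kHz)
    exceeds `thresh` and stays above it for `min_dur` consecutive ms."""
    run = 0
    for i, above in enumerate(eye_vel > thresh):
        run = run + 1 if above else 0
        if run == min_dur:
            return i - min_dur + 1
    return None

def head_onset(head_vel, thresh=0.3, min_dur=15, shift=-5):
    """Normalize the (pre-smoothed) head velocity to [0, 1], then apply
    the same consecutive-threshold rule; `shift` compensates the
    apparent detection lag."""
    v = head_vel - head_vel.min()
    v = v / v.max()
    run = 0
    for i, above in enumerate(v > thresh):
        run = run + 1 if above else 0
        if run == min_dur:
            return i - min_dur + 1 + shift
    return None
```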
A total of 86 isolated cells were recorded. Neural data were smoothed using a Gaussian kernel with σ = 10. To reduce the number of possible conditions, we collapsed across directions by only considering whether the movement was ipsiversive or contraversive to the side of the recorded neuron. This reduction yielded a total of 12 different conditions: three task types \(\times\) two directions (ipsi- and contraversive) \(\times\) two effectors (eye or head). For example, one condition is a contraversive eye-only saccade performed in the head-first task.
Statistical significance of the modulation of each neuron was determined by a custom application of a paired t-test. We first grouped the trials into the twelve categories. For each trial, we extracted a baseline period of 0–300 ms after the animal aligned both its gaze and head with the initial fixation targets; during this baseline the animal was unaware of the context of the task. We then averaged each trial’s neural activity during the baseline period, giving us an \(N\times 1\) vector (\(A\)), where \(N\) is the number of trials and each element is the mean baseline firing rate for that trial. For each trial’s movement epoch, we extracted the neural activity from − 80 ms to + 80 ms around movement onset, resulting in an \(N\times t\) matrix for each gaze and head movement (matrices \(B\) and \(C\), respectively), where \(N\) has the same trial indices as vector \(A\) and \(t\) is time in millisecond increments. We then conducted a paired t-test with a \(p\) threshold of 0.01 comparing each column of matrix \(B\) or \(C\) to column vector \(A\). We considered a cell significantly modulated for a specific category when the test detected a significant difference in distributions for at least 20 consecutive timepoints. Using this method, we identified 74 out of 86 cells that significantly modulated their firing rate during at least one of the conditions.
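The significance test for one category can be sketched as follows (a Python approximation using SciPy; the actual analysis was done in MATLAB, and the function name is ours):

```python
import numpy as np
from scipy.stats import ttest_rel

def is_modulated(baseline_means, movement_epoch, p_thresh=0.01, min_consec=20):
    """baseline_means: (N,) vector A of mean baseline rates per trial.
    movement_epoch:  (N, t) matrix B or C of rates around movement onset.
    The cell counts as modulated for this category if at least
    `min_consec` consecutive 1-ms columns differ from baseline
    (paired t-test, p < p_thresh)."""
    run = 0
    for j in range(movement_epoch.shape[1]):
        sig = ttest_rel(movement_epoch[:, j], baseline_means).pvalue < p_thresh
        run = run + 1 if sig else 0
        if run >= min_consec:
            return True
    return False
```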
For population-level representation, we used dimensionality-reduction techniques on all 86 cells, thereby including the 12 neurons that did not show statistically significant modulation. Since these cells were recorded individually and across many sessions, we created a pseudo-population by arranging the data in such a way that the resulting structure can be treated as a simultaneously recorded set. For each neuron, we first categorized each trial into one of the three trial types and by the direction of the movements. Next, we took a snippet of activity centered on each effector movement onset (± 150 ms), one aligned on the gaze shift and the other aligned on the head movement. We then created virtual trials by pulling either gaze-shift or head-movement snippets from 5 random trials from each cell’s dataset and averaging them to create representative activity for the pseudo-trial. The averaging step accomplished three crucial things: it diminished outlier effects from noisy data, allowed us to increase the number of trials for sessions with low sample sizes, and reduced the possibility of repeated identical pseudo-trials. By repeating this process for each cell in the set, we obtained one pseudo-dataset in which all cells can be treated as if they were recorded simultaneously. We created 500 virtual trials for each of the 12 conditions.
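The pseudo-trial construction can be sketched as follows (a Python illustration of the MATLAB procedure; array shapes and names are ours):

```python
import numpy as np

def make_pseudo_trials(per_cell_trials, n_virtual=500, n_avg=5, seed=0):
    """per_cell_trials: one (n_trials_i, T) array per cell holding
    single-trial snippets (±150 ms around onset => T = 301) for ONE
    condition. Returns an (n_virtual, T, n_cells) pseudo-population in
    which each virtual trial averages `n_avg` randomly drawn real
    trials per cell."""
    rng = np.random.default_rng(seed)
    T = per_cell_trials[0].shape[1]
    out = np.empty((n_virtual, T, len(per_cell_trials)))
    for c, trials in enumerate(per_cell_trials):
        for v in range(n_virtual):
            picks = rng.choice(len(trials), size=n_avg, replace=False)
            out[v, :, c] = trials[picks].mean(axis=0)
    return out
```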
After constructing the pseudo-population, we used principal component analysis (PCA) to determine the first two principal components of the population activity, which gave us insight into population activity patterns. PCA finds the axis of the multidimensional dataset that captures the greatest variance across all neurons’ firing rates: the first principal component (PC). Given that high variance sets a high ceiling for information capacity, one can hypothesize that the first PC spans the dimension with the highest amount of information. This typically captures the most relevant patterns of the multidimensional signal and discards the patterns that carry less information. To establish the principal components, we concatenated all trials in the pseudo-population, giving us a \(\left(301 \mathrm{ms} \times 500 \;trials \times 12 \;conditions\right)\times 86 \; neurons\) matrix. Applying PCA to this structure created a universal space for the entirety of the neural activity, onto which we could project the activities of individual conditions to establish whether they reside in unique subspaces. We then plotted each trial (individual 301 ms snippets of the overall PCA matrix) corresponding to one of the 12 conditions, aligned on movement onset, to visualize any patterns of activity unique to that condition.
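The PCA step reduces to a singular value decomposition of the mean-centered concatenated matrix; a minimal sketch (Python, not the original MATLAB code; the function name is ours):

```python
import numpy as np

def population_pcs(X, n_components=2):
    """X: (time * trials * conditions, n_neurons) concatenated rates.
    Mean-center and take the SVD; the right singular vectors are the
    principal axes, and X_centered @ Vt.T gives the PC projections."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T, Vt[:n_components]
```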
Although this analysis made evident that there are differences between conditions, we used PCA mainly as a visualization tool to gain insight into population activity patterns. To quantify the differences between the high-dimensional patterns corresponding to each condition, we used linear discriminant analysis (LDA), which finds the dimension of highest separation. We then determined whether there was a significant pairwise difference between the activities associated with each condition along that dimension. The linear discriminant model was developed using MATLAB’s Statistics and Machine Learning Toolbox. Since LDA is a pairwise discriminator, we created a separate model for each pair of conditions tested. As in the PCA analysis, we concatenated the data across trials, but here we only used the subset of trials corresponding to the two selected conditions; thus, the resulting training matrix was \(\left(301 \mathrm{ms} \times \left[500 \; trials \; condition \; a+500 \; trials \; condition \; b\right]\right) \times 86 \; neurons\), with the corresponding label vector of \(\left(301 \mathrm{ms} \times \left[500 \; trials \; condition \; a+500 \; trials \; condition \; b\right]\right)\times 1\), where each value indicates which of the two conditions the millisecond sample belongs to. The resulting linear model maximizes the distance between category means while minimizing within-category variance, thus establishing pairwise separability. We then performed a t-test (p threshold < 0.001) on the distributions in the first latent factor to determine whether the means were significantly separable. To address the concern that the neural patterns simply differentiate kinematic differences across trial types, we performed some of these analyses on a subset of trials in which the distributions of peak head velocities between trial types were matched.
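The pairwise discrimination step can be sketched as a two-class Fisher discriminant followed by a t-test on the projected distributions (a simplified Python stand-in for the MATLAB toolbox model; the ridge term and names are ours):

```python
import numpy as np
from scipy.stats import ttest_ind

def lda_separable(Xa, Xb, p_thresh=0.001, ridge=1e-6):
    """Fisher discriminant for one pair of conditions.
    Xa, Xb: (samples, n_neurons). Project both onto
    w = Sw^-1 (mu_a - mu_b), then t-test the projected distributions."""
    mu_a, mu_b = Xa.mean(axis=0), Xb.mean(axis=0)
    Sw = np.cov(Xa, rowvar=False) + np.cov(Xb, rowvar=False)
    w = np.linalg.solve(Sw + ridge * np.eye(Sw.shape[0]), mu_a - mu_b)
    return ttest_ind(Xa @ w, Xb @ w).pvalue < p_thresh
```

The ridge term only guards against a singular within-class covariance; it does not change the maximization of between-class over within-class variance.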
Decoder
We wanted to determine whether the information in the PMv code was robust enough to assign a class to an unknown signal. Although we could determine this using LDA as a classifier, the linear process inherent in this method meant that we would have to make many pairwise comparisons to determine a class. And in the case where LDA could not reliably separate between classes, we would have to create some choice rule to maximize classification accuracy. Thus, we instead trained a naïve Bayes classifier to assign one of 12 categories to an input trial and assessed the resulting decoding accuracy. We chose this particular classifier because of its relative simplicity and expect similar results regardless of the classifier chosen.
To maximize the rigor with which to test the classifier, we created separate ‘training’ and ‘testing’ pseudo-populations. To remove any possibility of a trial appearing in both ‘training’ and ‘testing’ datasets, we only pulled random even-numbered trials from individual sessions for the ‘training’ set and odd-numbered trials for the ‘testing’ set. This ensured that when obtaining the 5 random trials needed to create one pseudo-trial, none of these individual trials were duplicated between the two resulting sets. Since this cut our available data in half, we removed all sessions with fewer than 35 remaining trials for any condition, which reduced our population from 86 to 67 cells.
The naïve Bayes classifier was trained in a similar fashion as the LDA method, except all conditions were used simultaneously. The input matrix had the same structure as the one used for PCA, albeit with fewer neurons: \(\left(301 \mathrm{ms}\times 500 \;trials\times 12 \;conditions\right)\times 67 \;neurons\), and the label vector spanned all the resulting rows: \(\left(301 \mathrm{ms}\times 500 \;trials\times 12 \;conditions\right)\times 1\). After the classifier was trained on the full-dimensional ‘training’ set, we used individual pseudo-trials from the ‘testing’ set as input to the resulting model. The structure of the input was \(301 \mathrm{ms}\times 67 \;neurons\), and the output was a 301-element vector classifying each millisecond sample of the trial as one of the 12 conditions. The mode of the resulting vector indicated the chosen category. We repeated this procedure for 250 pseudo-trials per category from the ‘testing’ set to obtain an average classifier performance.
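A minimal Gaussian naïve Bayes classifier with sample-wise prediction and a trial-level mode can be sketched in Python (the original analysis used MATLAB; class and method names are illustrative, and equal class priors are assumed):

```python
import numpy as np

class NaiveBayes:
    """Minimal Gaussian naive Bayes: fit per-class feature means and
    variances, classify every 1-ms sample, then take the mode over
    the trial to pick one condition."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-6 for c in self.classes])
        return self

    def predict_samples(self, X):
        # log-likelihood of each sample under each class (equal priors)
        ll = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                     + np.log(2.0 * np.pi * self.var)).sum(axis=-1)
        return self.classes[ll.argmax(axis=1)]

    def predict_trial(self, X):
        labels = self.predict_samples(X)   # one label per ms sample
        vals, counts = np.unique(labels, return_counts=True)
        return vals[counts.argmax()]       # mode over the trial
```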
To establish the minimum number of cells required for accurate classification, we ranked cells by the magnitude of their projection onto the first PC of the population. We repeated our PCA procedure on the ‘training’ set to obtain this ranking, and then re-trained the decoder first on the data composed of just the single highest-ranking cell, \(\left(301 \mathrm{ms}\times N \;trials\right)\times 1 \;neuron\). We then tested this classifier with the ‘testing’ data from the same neuron to obtain classification accuracy. Next, we trained a classifier on the two best-ranked neurons combined, \(\left(301 \mathrm{ms}\times N \;trials\right)\times 2 \;neurons\), then the three best neurons, and so on, adding one neuron at a time until we reached the full population.
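The ranking step can be sketched as follows (a Python illustration; we rank neurons by the absolute magnitude of their loading on the first PC of the ‘training’ set, and the function name is ours):

```python
import numpy as np

def rank_neurons_by_pc1(X_train):
    """Rank neurons by the absolute magnitude of their loading on the
    first PC of the 'training' pseudo-population
    (X_train: samples x neurons). Best neuron first."""
    Xc = X_train - X_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return np.argsort(-np.abs(Vt[0]))
```

The decoder is then re-trained on the top 1, 2, 3, … neurons of this ordering, adding one neuron at a time.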
Cortical depth
To explore the features of this neural code even further, we asked whether some features varied as a function of depth within the cortex. After all, our linear probes spanned the entire cortical cross-section, and we therefore recorded cells from different layers. It is important to note that this was not part of the initial hypothesis, and the experimental design therefore did not control for the depth of the neural recordings. Regardless, we could estimate the relative depth of each neuron from the channel it was recorded on. Since we attempted to span the entire depth of the cortex when we inserted the electrodes, it is reasonable to assume that superficial channels resided in superficial layers and deep channels resided in deep layers.
We assigned one of three depth markers to each neuron, ‘deep’, ‘middle’, or ‘superficial’, based on the channel it was recorded on. Most of the recording was done on deep channels, so to keep the number of cells in each group roughly equal, the channel-to-label assignments spanned unequal numbers of channels. Channels 1–3 were considered ‘deep’, channels 4–9 ‘middle’, and channels 10–16 ‘superficial’. This gave us roughly the same number of neurons in the ‘deep’ and ‘superficial’ groups (24 ‘deep’, 36 ‘middle’, and 23 ‘superficial’), thus allowing a more direct comparison between those two groups.
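The channel-to-depth assignment amounts to a simple lookup (illustrative Python; the function name is ours):

```python
def depth_label(channel):
    """Map a 1-16 recording channel to its depth group
    (channels 1-3 'deep', 4-9 'middle', 10-16 'superficial')."""
    if 1 <= channel <= 3:
        return "deep"
    if 4 <= channel <= 9:
        return "middle"
    if 10 <= channel <= 16:
        return "superficial"
    raise ValueError("channel outside the 16-channel span")
```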
For those sessions where 24-channel electrodes were used, only the first 16 channels were considered. Since the channel spacing was the same for the 16- and 24-channel electrodes, no transformation of electrode distances was needed to directly compare the two. By and large, channel 1 depth was comparable between different electrodes, as they were all lowered to a similar depth (~ 4000–5000 µm) as measured by a microdrive (Narishige). It is worth reiterating that the depth was not controlled, and any variation in scar tissue at the insertion site or cortex deformation would make the depth measurements unreliable.
The primary question we addressed is whether the task relevant information is only present at a certain depth in the cortex. To establish this, we repeated the LDA analysis on the depth relevant subgroups of neurons. We focused on any differences in this separability as a function of depth.
Results
Extracellular spiking activities of PMv neurons were recorded with a multicontact laminar probe as a trained rhesus monkey performed three randomly interleaved tasks (Fig. 1). The ‘eye-first’ task required an eye-only saccade to a target followed later by a head-only movement to that location. The ‘head-first’ task required a head-only movement to an eccentric target followed by an eye-only saccade to that location. The ‘together’ task required a combined eye-head movement to a stimulus in the visual periphery. To establish the role of PMv in coordinated gaze shifts and head movements, we analyzed 86 isolated neurons in the context of 12 separate categories: 3 tasks × 2 effectors × 2 movement directions (ipsiversive or contraversive). We report here analyses on activities of single neurons as well as on pseudo-populations constructed from these cells. Statistical analysis of individual neurons showed significant, although qualitatively unimpressive and uninterpretable, modulation during movement epochs, yet examination of these activities as a population, through PCA and LDA methods, showed a clear separation of the neural code by category. The robustness of this code was then verified by stressing the parameters of a naïve Bayes classifier.
Single neuron analysis
We first compared each cell’s firing rate during the movement epoch (− 80 ms to + 80 ms from movement onset) to that of a baseline period (see Methods). If 20 or more consecutive samples in the movement epoch were outside the baseline distribution (paired t-test, threshold p < 0.01), the cell was considered to have significant modulation for that condition. An example neuron’s spike density waveforms are shown in Fig. 3. In this case, the cell increased its firing rate above baseline for all categories, and no noticeable tuning for any one condition is evident. When examining average firing rates over time for all significantly modulated cells, we notice a lack of features common to traditional motor areas: the cells exhibit neither burst-like properties, nor visible suppression signatures, nor any event-locked modulation (Fig. 4, left column). Qualitatively, the responses are flat throughout the movement epoch. To further illustrate this lack of apparent modulation, we contrast these cells with those that did not show significant modulation for a given category (Fig. 4, right column). One is hard-pressed to identify concrete differences between the cells that were significantly active for each category and those that were not. In short, when examining individual cells, we see no obvious or characterizable relationship between the cells’ firing rates and the animal’s behavior.
We cannot discount statistical results based simply on visual inspection. To gain insight into the validity of the t-test results, we plotted the significant activity against baseline activity for single cells (Fig. 4, middle column, red). We contrasted those results with cells in which the activity was not different from baseline in the same figure (Fig. 4, middle column, black). All significant firing rate averages and their 95% confidence intervals lie far from the unity line, signifying that this analysis was not overly contaminated with Type I errors. Overall, we found that 74 out of 86 isolated cells showed significant firing rate modulation from baseline during the movement epoch for at least one category (modified paired t-test, p < 0.01) (Fig. 5). This indicates that although visual examination of the data does not show any obvious or reliable patterns, the cells do in fact carry some information during the movement epochs. This information, however, is not immediately characterizable into a usable pattern.
Pseudo population analysis
To gain a deeper understanding of the role PMv plays in context-guided eye and head movements, we examined whether a clearer picture could be obtained if we considered the individually recorded neurons as a combined single population. We did not record all the cells at the same time: the electrode was inserted acutely each day, and therefore sessions across days sampled separate neurons at different times. Fortunately, the homogeneity of the tasks allowed us to restructure the data to simulate simultaneous recordings. We then used this pseudo-population with dimensionality-reduction techniques to test for the presence of a task-related code.
We constructed a dataset that mimics a simultaneous 86-channel recording (see Methods for details). For visualization purposes, we reduced the dimensionality of the data from 86 to the top 2 dimensions using PCA. Figure 6 plots the first dimension as a function of time to illustrate any temporal patterns that may be present. We also plot PC1 against PC2 as a phase-plane plot to illustrate any patterns in a multidimensional space. The legend (right) indicates the colors associated with each of the 12 conditions. As a guide, we used lighter colors to denote conditions associated with the eye movement epoch and darker colors for the head movement epoch. Unlike the single neuron analysis, for which the pattern of activity was impossible to characterize, the population activity clearly exhibits a unique trajectory for each category (Fig. 6A). When matching for all factors except trial type (Fig. 6B), we see that during each movement epoch the activity traverses a different path during the “head-first” task (blue traces) as compared to the “eye-first” task (green traces). Additionally, the paths for the “eye-first” and “together” task types are similar (compare light green vs. orange traces in the left panel of Fig. 6B). This could be because in natural gaze shifts, like the ones we mimicked with the “together” task, the eye tends to move before the head, and the condition therefore resembles the “eye-first” task. Despite their similarities, we are still able to differentiate between the two trajectories. We observe these separable neural trajectories when we examine head movement epochs as well (Fig. 6B, right). The most pronounced difference, however, can be seen when we isolate the trajectories associated with head movements (Fig. 6C, dark green) and gaze shifts (light green), suggesting that representations for head movements and gaze shifts are exceptionally separable in the PMv population code.
For completeness, we examined whether the last remaining task-related factor (ipsi- vs. contraversive movement) is also encoded in PMv. Figure 6D shows that the neural trajectories for contraversive (green) and ipsiversive (yellow) movements appear the most similar to each other. This similarity is not surprising, as PMv has a large proportion of ipsilateral projections37,38, which could imply that a population code would not show a strong laterality preference. Nevertheless, the trajectories still show some differentiation (Fig. 6D).
Next, we used LDA to quantify the separability of the neural code during each condition. We found the dimension of maximum separability between pairwise-matched conditions and determined whether each condition’s neural activity was significantly different from the other using a standard t-test. Every pairing of conditions was significantly separable (p < 0.01). Figure 7 contains LDA results for the same conditions as examined in Fig. 6B–D. In essence, we show here that the neural trajectories visualized with PCA traverse measurably separable neural subspaces. Since LDA performs best on pairwise comparisons, in Fig. 7A we compare differences in trial types (eye-first, head-first, and together) in a pairwise fashion. Notice that even the neural representations of the conditions that seemed very similar in the PCA results, namely the activity associated with the eye-first and together trial types, can be reliably separated using LDA (Fig. 7A, middle row). We repeated this analysis with neural activity representing head movements and gaze shifts, and, separately, ipsi- and contraversive movements (Fig. 7B). All pairwise matches showed significant separability. Although we show only a representative sample of these matchups, we performed LDA on all 24 relevant pairings and all were significantly separable (for clarity, Fig. 7 omits the results for ipsi- vs. contraversive movements in the together and head-first trial types, but these comparisons were also separable). We also analyzed LDA separability between eye-first and head-first trial types using a subset of trials with comparable movement parameters. When selecting trials from each condition so that the distributions of peak head velocity were equal across conditions, we still observed a significant separation between trial types (with 86% accuracy). This is notable, considering that this analysis reduced the number of usable trials by approximately half.
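The pairwise LDA test can be sketched as follows. This is a simplified stand-in for the analysis described above (assumed details: scikit-learn's `LinearDiscriminantAnalysis` for the projection, an independent-samples t-test on the projected values, and random placeholder data):

```python
import numpy as np
from scipy import stats
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def lda_separability(trials_a, trials_b, alpha=0.01):
    """Project two conditions' trials onto the LDA axis of maximum
    separability and t-test the projected distributions."""
    X = np.vstack([trials_a, trials_b])
    y = np.r_[np.zeros(len(trials_a)), np.ones(len(trials_b))]
    lda = LinearDiscriminantAnalysis(n_components=1).fit(X, y)
    za = lda.transform(trials_a).ravel()
    zb = lda.transform(trials_b).ravel()
    t, p = stats.ttest_ind(za, zb)
    return p < alpha, p

# placeholder trials: 40 trials x 86 "neurons" per condition,
# with a small mean shift so the conditions are separable
rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, size=(40, 86))
b = rng.normal(0.5, 1.0, size=(40, 86))
separable, p = lda_separability(a, b)
print(separable)
```

Note that a t-test on training-set LDA projections is optimistic (LDA is fit to maximize exactly this separation); a held-out or cross-validated version would be the more conservative check.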
Our multidimensional analyses demonstrate that even though the single-cell analysis showed no clear pattern, a definite population code becomes apparent when we visualize high-dimensional data in lower dimensions. PMv neurons carry a task-relevant code for eye and head movements when considered as a population.
Classification
Although LDA shows clear pairwise separation between different task features, it does not reflect how these neural signals are likely used in the brain. It is improbable that the brain employs a pairwise decoder that compares each pair of conditions individually. We therefore sought to establish that the PMv code is robust enough to categorize activity into one of the 12 categories at once. To accomplish this, we trained a relatively simple Naïve Bayes classifier (see “Methods”). Using all 67 cells for decoding, we achieved relatively good results, with an average accuracy of ~ 80%. The decoding accuracy was near 100% for some conditions and slightly above 50% for others, still well above chance (Fig. 8, bottom).
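A 12-way decode of this kind can be sketched with a Gaussian Naïve Bayes classifier and cross-validation. This is a generic illustration under stated assumptions (scikit-learn's `GaussianNB`, 5-fold cross-validation, and synthetic firing-rate data with per-condition mean patterns), not the paper's exact decoder configuration:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_cells, n_conditions, trials_per = 67, 12, 20

# synthetic data: each condition has its own mean rate pattern across cells
means = rng.normal(0, 2, size=(n_conditions, n_cells))
X = np.vstack([rng.normal(means[c], 1.0, size=(trials_per, n_cells))
               for c in range(n_conditions)])
y = np.repeat(np.arange(n_conditions), trials_per)

# cross-validated 12-way classification accuracy
acc = cross_val_score(GaussianNB(), X, y, cv=5).mean()
print(acc > 1 / n_conditions)  # well above the 1/12 chance level
```

Chance level for a balanced 12-way problem is 1/12 (~8.3%), which is the baseline against which the ~80% average accuracy reported above should be read.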
We tested the robustness of this code by performing the classification on systematically increasing numbers of neurons. After sorting the neurons by the weight of their first component in the PCA projection, we built a classifier on just the one “best” neuron and evaluated its performance. We then continued to add neurons and evaluate subsequent classifiers. It takes 45 neurons to achieve an average classifier performance of ~ 70%. In comparison, some neural classifiers from other cortical motor regions require as few as 15 neurons to achieve this accuracy39. This could suggest several possibilities: either the code lacks robustness, or the complexity of the task requires a large number of cells to encode it.
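This "neuron-adding" analysis can be sketched as follows. It is a hedged illustration of the procedure described above, with assumed details (ranking neurons by the absolute magnitude of their PC1 loading, a `GaussianNB` decoder, and synthetic data) that may differ from the authors' implementation:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_cells, n_conditions, trials_per = 67, 12, 20

# synthetic trials with condition-specific mean patterns across cells
means = rng.normal(0, 2, size=(n_conditions, n_cells))
X = np.vstack([rng.normal(means[c], 1.0, size=(trials_per, n_cells))
               for c in range(n_conditions)])
y = np.repeat(np.arange(n_conditions), trials_per)

# rank neurons by the magnitude of their weight on the first PC
pc1 = PCA(n_components=1).fit(X).components_[0]
order = np.argsort(-np.abs(pc1))

# retrain the classifier on the top-k neurons for increasing k
accs = [cross_val_score(GaussianNB(), X[:, order[:k]], y, cv=5).mean()
        for k in range(1, n_cells + 1)]
print(len(accs))  # one cross-validated accuracy per population size
```

Plotting `accs` against k yields the performance-vs-population-size curve used to find the ~45 neurons needed for ~70% accuracy in the real data.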
Eye rotation in the orbits
One can apply a different interpretation of the results by taking a completely eye-centric approach. A difference in PMv neural activity between a contraversive eye movement and a contraversive head movement could be explained by the fact that in each case the eye rotates in a different direction in the orbit: to the right in the former and to the left in the latter. To account for this possibility, we compared conditions in which the eye rotation in the orbit was in the same direction.
Figure 9 shows the case in which we compared PMv activity during contraversive eye movements and ipsiversive head movements. In both cases the eye rotated in the ipsiversive direction in the orbit. Even when the rotation of the eye in the orbit is in the same direction, the PMv activity is separable through LDA, indicating a difference in the population code for the two conditions.
Cortical depth
When comparing the ability of the PMv code to separate task-relevant categories as a function of cortical layer, we do not observe a systematic effect. The separation between categories remains comparable regardless of the depth of the recorded neuron; as an example, we show the separation between eye and head subspaces at different depths in Fig. 10. However, we cannot make a strong statement about this feature of PMv, as our control over recording depth was poor.
Discussion
The ventral premotor cortex exhibits a complex relationship with eye and head movement generation. During the exploration of the region via microstimulation, we observed no consistent effects on eye or head movements (see “Methods”). Single-cell analysis uncovered modulation of neural activity during the movement epoch compared to a baseline period (Figs. 3, 4), but there was no obvious tuning by effector, task type, or direction of movement (Fig. 5). We also did not observe many of the familiar features present in traditional motor areas, such as movement-locked bursts40,41,42. In contrast, the pseudo-population analysis demonstrated that the neural code can easily differentiate across the 12 conditions we tested in our paradigm (Fig. 6). We also demonstrated that a simple decoder (Fig. 7) could access this code and determine the action (gaze shift or head movement), the direction of the movement (ipsiversive or contraversive), and the context under which the muscle group was recruited (eye-first, head-first, or together). Thus, we interpret our data as revealing that PMv does exhibit a clear code for gaze and head movements, but through a framework that cannot be appreciated or recognized with single-cell analysis.
Stimulation of the primary motor cortex evokes skeletomotor movements at short latencies; finger movements can be produced with very low currents43,44. Eye and head movements can also be induced, typically as a coordinated gaze shift, with relatively low currents delivered to the superior colliculus and frontal eye fields. Regions that project to these primary motor structures, in contrast, require substantially stronger currents to generate a movement, and even then the effect is inconsistent and heterogeneous. One important reason for the departure from consistent stimulation-evoked movements is, we believe, the degradation of topography, which enables a region to gain additional functionality. Indeed, eye and/or head movements produced from the lateral intraparietal cortex and supplementary eye field require high currents, on the order of 5–10 times higher than in SC and FEF, and the evoked movements seem more reflective of reference frame computations. Equally important, not every site in a region is equally effective at eliciting a stimulation-evoked movement. Stimulation of the dorsal premotor cortex (PMd) also evokes a variety of movements. Activation of skeletomotor musculature is possible with low currents, and a rough topography is indicated in some studies23,45. However, stimulation can also recruit ipsilateral limb muscles and can moreover have suppressive effects46, further emphasizing the increase in functional complexity.
A similar pattern has been described for movements evoked from PMv. The superior portion of PMv has large representations of forelimbs and shoulders, and inferior PMv contains a large orofacial representation45. However, orofacial and forelimb movements are also evoked from sites in superior PMv, indicating that the topography is coarse. The PMv is most closely associated with evoking complex and socially meaningful movements, which are observed only with long duration stimulation (500 ms) and large currents (> 100 μA). There are also reports of ipsilateral head and goal-directed eye movements evoked by microstimulation of this region21,24, but the number of sites producing such effects is small. We previously reported similar effects, also with low yield, and the results were comparable to previous reports47. Thus, we did not continue to systematically study the effects of microstimulation. We speculate that the coarse topography may be the key reason why stimulation of PMv, despite its close connectivity with primary skeletomotor and oculomotor areas, is unable to elicit movements reliably and homogeneously.
Neural activity of PMv neurons exhibits ample heterogeneity as well. Given our experimental design, we were positioned to test the sensitivity of PMv neurons to the innervation of multiple effectors, the impact of task type, and the direction of movement. We first approached this examination through the lens of traditional oculomotor studies, analyzing firing rates of individual neurons aligned on stimulus and movement onsets. However, no such features were prevalent in the data. On visual inspection, cells did not appear to modulate their activity appreciably with any experimental factor. There were many cases of statistically significant firing rate modulation, but the magnitude of the effect was small and of questionable physiological significance. Cells in our database exhibited exceptionally distinct firing rate codes, and this heterogeneity obscured the mechanism by which these cells encode trial-relevant features. Thus, we were not comfortable attempting to extract a satisfactory description of PMv involvement in gaze shifts based on single-unit analyses.
Previous studies have highlighted examples of PMv neurons that exhibit saccade-related bursts during head-restrained eye movements produced either in isolation or in association with hand movements24,25,48. There are also many cases in which the activity modulation is modest at best and not burst-like48. We are not aware of previous studies that investigated PMv activity during head movements or head-unrestrained eye movements, but we do not believe the dearth of strong modulations in our dataset is due to the focus on a different effector system. We speculate that the complexity of our experimental paradigm may have altered the network properties of PMv neurons. Perhaps if the task had been less complex, requiring the cells to encode fewer parameters, the firing rates might have exhibited different features. The notion is that if cells respond to multiple task parameters at once, their single-cell code might be obscured, and a population response may be needed to test our hypothesis. Hence, the logical next step in our data analysis was to examine the data as a unified pseudo-population. Indeed, a clear code was easily discernible. After a simple PCA dimensionality reduction, we could see the separation of neural trajectories as a function of task properties in the first two principal components (Fig. 6). This effect was even more obvious in an LDA projection of the data (Fig. 7).
The results of this study ascribe interesting properties to the PMv neural code. They reveal that a weighted linear combination of PMv activity can differentiate movement-related activity for the eye (or gaze) from that of the head. Importantly, it does so with features, such as an absence of bursts, that differ from those of traditional motor areas. Thus, PMv activity likely does not control movement kinematics or dynamics, unlike the motor cortex49,50,51 and superior colliculus52,53,54,55,56. Our analysis also demonstrates that PMv can multiplex additional dimensions of information. Notably, the population response also encodes the context or task type and the directionality of the movement. These findings are consistent with insights gained through a machine-learning-inspired approach to population-level analysis in various cortical51,57,58 and subcortical59 areas. They also align with previous single-unit studies linking PMv to determining the context for actions, the order of events, and decision making27,31,60: all roles which would not be out of place for the prefrontal cortex (PFC). This may corroborate the proposed functional connectivity between dorsolateral PFC, the PC, and M161.
Although we show that PMv has task-relevant information spread across the neural population, it is prudent to speculate further about why we do not observe traditional firing rate properties at the single-cell level. As mentioned above, it is possible that the complexity of a particular task determines how a neuron encodes information about it. We know that some neural populations enter specific subspaces based on the nature of the task the subject is performing, as if to prime the population for efficient encoding of the task to come51,58. It is possible that when a population must encode a task with one or two factors, it can encode these factors in one or two features of the firing rate. Here we can imagine that a high firing rate could encode factor “A”, a medium firing rate could encode factor “B”, and an unchanged firing rate could mean the absence of either factor. Once the population is tasked with encoding 12 factors, a simple firing-rate-based code might not be sufficient. In this case, the neural population can enter a mode in which it spreads the encoding of factors across many cells. In a natural setting, PMv might need to monitor several dozen features, which is practically impossible at the single-cell level. This raises a question: when we observe a specific encoding pattern in cortical areas, is that pattern task-specific? Do encoding patterns that we record in a lab setting, where behaviors are distilled to a few factors, transfer to the natural environment, where behaviors depend on many conditions? Advances in recording technologies, such as wireless in-cage neural recordings62, should take us closer to answering these questions. In the meantime, we must be cognizant of the effects that task parameters have on the way a neural population encodes behavior.
Data availability
The data that support the findings of this study are available from the corresponding author, N.J.G, upon reasonable request.
References
Freedman, E. G. & Sparks, D. L. Eye-head coordination during head-unrestrained gaze shifts in rhesus monkeys. J. Neurophysiol. 77, 2328–2348 (1997).
Goldring, J. E., Dorris, M. C., Corneil, B. D., Ballantyne, P. A. & Munoz, D. P. Combined eye-head gaze shifts to visual and auditory targets in humans. Exp. Brain Res. 111, 68–78 (1996).
Crawford, J. D., Ceylan, M. Z., Klier, E. M. & Guitton, D. Three-dimensional eye-head coordination during gaze saccades in the primate. J. Neurophysiol. 81, 1760–1782 (1999).
Freedman, E. G. & Sparks, D. L. Coordination of the eyes and head: Movement kinematics. Exp. Brain Res. 131, 22–32 (2000).
Walton, M. M. G., Bechara, B. & Gandhi, N. J. Role of the primate superior colliculus in the control of head movements. J. Neurophysiol. 98, 2022–2037 (2007).
Oommen, B. S., Smith, R. M. & Stahl, J. S. The influence of future gaze orientation upon eye-head coupling during saccades. Exp. Brain Res. 155, 9–18 (2004).
Goonetilleke, S. C. et al. Cross-species comparison of anticipatory and stimulus-driven neck muscle activity well before saccadic gaze shifts in humans and nonhuman primates. J. Neurophysiol. 114, 902–913 (2015).
Haji-Abolhassani, I., Guitton, D. & Galiana, H. L. Modeling eye-head gaze shifts in multiple contexts without motor planning. J. Neurophysiol. 116, 1956–1985 (2016).
Goossens, H. H. L. M. & Van Opstal, A. J. Human eye-head coordination in two dimensions under different sensorimotor conditions. Exp. Brain Res. 114, 542–560 (1997).
Solman, G. J. F., Foulsham, T. & Kingstone, A. Eye and head movements are complementary in visual selection. R. Soc. Open Sci. 4, 160569 (2017).
Corneil, B. D. & Munoz, D. P. Human eye-head gaze shifts in a distractor task. II. Reduced threshold for initiation of early head movements. J. Neurophysiol. 82, 1406–1421 (1999).
Chen, L. L. & Walton, M. M. G. Head movement evoked by electrical stimulation in the supplementary eye field of the rhesus monkey. J. Neurophysiol. 94, 4502–4519 (2005).
Knight, T. A. & Fuchs, A. F. Contribution of the frontal eye field to gaze shifts in the head-unrestrained monkey: Effects of microstimulation. J. Neurophysiol. 97, 618–634 (2007).
Elsley, J. K., Nagy, B., Cushing, S. L. & Corneil, B. D. Widespread presaccadic recruitment of neck muscles by stimulation of the primate frontal eye fields. J. Neurophysiol. 98, 1333–1354 (2007).
Monteon, J. A., Constantin, A. G., Wang, H., Martinez-Trujillo, J. & Crawford, J. D. Electrical stimulation of the frontal eye fields in the head-free macaque evokes kinematically normal 3D gaze shifts. J. Neurophysiol. 104, 3462–3475 (2010).
Chen, L. L. Head movements evoked by electrical stimulation in the frontal eye field of the monkey: Evidence for independent eye and head control. J. Neurophysiol. 95, 3528–3542 (2006).
Corneil, B. D., Olivier, E. & Munoz, D. P. Neck muscle responses to stimulation of monkey superior colliculus. II. Gaze shift initiation and volitional head movements. J. Neurophysiol. 88, 2000–2018 (2002).
Stryker, M. P. & Schiller, P. H. Eye and head movements evoked by electrical stimulation of monkey superior colliculus. Exp. Brain Res. 23, 103–112 (1975).
Freedman, E. G., Stanford, T. R. & Sparks, D. L. Combined eye-head gaze shifts produced by electrical stimulation of the superior colliculus in rhesus monkeys. J. Neurophysiol. 76, 927–952 (1996).
Freedman, E. G. & Sparks, D. L. Activity of cells in the deeper layers of the superior colliculus of the rhesus monkey: Evidence for a gaze displacement command. J. Neurophysiol. 78, 1669–1690 (1997).
Boulanger, M., Bergeron, A. & Guitton, D. Ipsilateral head and centring eye movements evoked from monkey premotor cortex. NeuroReport 20, 669–673 (2009).
Graziano, M. S. A., Yap, G. S. & Gross, C. G. Coding of visual space by premotor neurons. Science 266, 1054–1057 (1994).
Graziano, M. S. A., Taylor, C. S. R. & Moore, T. Complex movements evoked by microstimulation of precentral cortex. Neuron 34, 841–851 (2002).
Neromyliotis, E. & Moschovakis, A. K. Response properties of motor equivalence neurons of the primate premotor cortex. Front. Behav. Neurosci. 11, 61 (2017).
Fujii, N., Mushiake, H. & Tanji, J. An oculomotor representation area within the ventral premotor cortex. Proc. Natl. Acad. Sci. U. S. A. 95, 12034–12037 (1998).
Billig, I. & Strick, P. Anatomical evidence for overlap of neck and oculomotor control systems in ventral premotor cortex. 1 (2012).
Schieber, M. H. Inactivation of the ventral premotor cortex biases the laterality of motoric choices. Exp. Brain Res. 130, 497–507 (2000).
Schmidlin, E., Brochier, T., Maier, M. A., Kirkwood, P. A. & Lemon, R. N. Pronounced reduction of digit motor responses evoked from macaque ventral premotor cortex after reversible inactivation of the primary motor cortex hand area. J. Neurosci. 28, 5772–5783 (2008).
Vargas-Irwin, C. E., Franquemont, L., Black, M. J. & Donoghue, J. P. Linking objects to actions: Encoding of target object and grasping strategy in primate ventral premotor cortex. J. Neurosci. 35, 10888–10897 (2015).
Mushiake, H., Inase, M. & Tanji, J. Neuronal activity in the primate premotor, supplementary, and precentral motor cortex during visually guided and internally determined sequential movements. J. Neurophysiol. 66, 705–718 (1991).
Opitz, B. & Kotz, S. A. Ventral premotor cortex lesions disrupt learning of sequential grammatical structures. Cortex 48, 664–673 (2012).
Bonini, L. et al. Selectivity for grip type and action goal in macaque inferior parietal and ventral premotor grasping neurons. J. Neurophysiol. 108, 1607–1619 (2012).
Gandhi, N. J. & Sparks, D. L. Experimental control of eye and head positions prior to head-unrestrained gaze shifts in monkey. Vis. Res. 41, 3243–3254 (2001).
Walton, M. M. G., Bechara, B. & Gandhi, N. J. Effect of reversible inactivation of superior colliculus on head movements. J. Neurophysiol. 99, 2479–2495 (2008).
Bryant, C. L. & Gandhi, N. J. Real-time data acquisition and control system for the measurement of motor and neural data. J. Neurosci. Methods 142, 193–200 (2005).
Graziano, M. S. A. & Gandhi, S. Location of the polysensory zone in the precentral gyrus of anesthetized monkeys. Exp. Brain Res. 135, 259–266 (2000).
Dancause, N. et al. Ipsilateral connections of the ventral premotor cortex in a new world primate. J. Comp. Neurol. 495, 374–390 (2006).
Gerbella, M., Belmalih, A., Borra, E., Rozzi, S. & Luppino, G. Cortical connections of the anterior (F5a) subdivision of the macaque ventral premotor area F5. Brain Struct. Funct. 216, 43–65 (2011).
Bansal, A. K., Truccolo, W., Vargas-Irwin, C. E. & Donoghue, J. P. Decoding 3D reach and grasp from hybrid signals in motor and premotor cortices: Spikes, multiunit activity, and local field potentials. J. Neurophysiol. 107, 1337–1355 (2012).
Donoghue, J. P. Contrasting properties of neurons in two parts of the primary motor cortex of the awake rat. Brain Res. 333, 173–177 (1985).
Kermadi, I., Liu, Y., Tempini, A., Calciati, E. & Rouiller, E. M. Neuronal activity in the primate supplementary motor area and the primary motor cortex in relation to spatio-temporal bimanual coordination. Somatosens. Mot. Res. 15, 287–308 (1998).
Tanji, J. & Kurata, K. Comparison of movement-related activity in two cortical motor areas of primates. J. Neurophysiol. 48, 633–653 (1982).
Asanuma, H. & Rosén, I. Topographical organization of cortical efferent zones projecting to distal forelimb muscles in the monkey. Exp. Brain Res. 14, 243–256 (1972).
Sato, K. C. & Tanji, J. Digit-muscle responses evoked from multiple intracortical foci in monkey precentral motor cortex. J. Neurophysiol. 62, 959–970 (1989).
Godschalk, M., Mitz, A. R., van Duin, B. & van der Burg, H. Somatotopy of monkey premotor cortex examined with microstimulation. Neurosci. Res. 23, 269–279 (1995).
Montgomery, L. R., Herbert, W. J. & Buford, J. A. Recruitment of ipsilateral and contralateral upper limb muscles following stimulation of the cortical motor areas in the monkey. Exp. Brain Res. 230, 153–164 (2013).
Smalianchuk, I. & Gandhi, N. Ventral premotor control of head and eye movements. In Society for Neuroscience (Society for Neuroscience, 2018).
Neromyliotis, E. & Moschovakis, A. K. Response properties of saccade-related neurons of the post-arcuate premotor cortex. J. Neurophysiol. 119, 2291–2306 (2018).
Guigon, E., Baraduc, P. & Desmurget, M. Coding of movement- and force-related information in primate primary motor cortex: A computational approach. Eur. J. Neurosci. 26, 250–260 (2007).
Mollazadeh, M., Aggarwal, V., Thakor, N. V. & Schieber, M. H. Principal components of hand kinematics and neurophysiological signals in motor cortex during reach to grasp movements. J. Neurophysiol. 112, 1857–1870 (2014).
Churchland, M. M., Yu, B. M., Ryu, S. I., Santhanam, G. & Shenoy, K. V. Neural variability in premotor cortex provides a signature of motor preparation. J. Neurosci. 26, 3697–3712 (2006).
Klier, E. M., Wang, H. & Crawford, J. D. The superior colliculus encodes gaze commands in retinal coordinates. Nat. Neurosci. 4, 627–632 (2001).
van Opstal, A. J. & Goossens, H. H. L. M. Linear ensemble-coding in midbrain superior colliculus specifies the saccade kinematics. Biol. Cybern. 98, 561–577 (2008).
Smalianchuk, I., Jagadisan, U. K. & Gandhi, N. J. Instantaneous midbrain control of saccade velocity. J. Neurosci. 38, 10156–10167 (2018).
Goossens, H. H. L. M. & van Opstal, A. J. Optimal control of saccades by spatial-temporal activity patterns in the monkey superior colliculus. PLoS Comput. Biol. 8, e1002508 (2012).
Goossens, H. H. L. M. & van Opstal, A. J. Dynamic ensemble coding of saccades in the monkey superior colliculus. J. Neurophysiol. 95, 2326–2341 (2006).
Mante, V., Sussillo, D., Shenoy, K. V. & Newsome, W. T. Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature 503, 78–84 (2013).
Afshar, A. et al. Single-trial neural correlates of arm movement preparation. Neuron 71, 555–564 (2011).
Jagadisan, U. K. & Gandhi, N. J. Population temporal structure supplements the rate code during sensorimotor transformations. Curr. Biol. 32, 1010-1025.e9 (2022).
Hepp-Reymond, M. C., Kirkpatrick-Tanner, M., Gabernet, L., Qi, H. X. & Weber, B. Context-dependent force coding in motor and premotor cortical areas. Exp. Brain Res. 128, 123–133 (1999).
Spedden, M. E. et al. Directed connectivity between primary and premotor areas underlying ankle force control in young and older adults. Neuroimage 218, 116982 (2020).
Shin, H. et al. Interference-free, lightweight wireless neural probe system for investigating brain activity during natural competition. Biosens. Bioelectron. 195, 113665 (2022).
Acknowledgements
We thank various lab members for scientific discussions and critical feedback on previous versions of the manuscript. This work was supported by National Institute of Health Grants R01 EY022854, R01 EY024831, F31 EY027688, T32 DC011499, and P30 EY008098.
Author information
Authors and Affiliations
Contributions
I.S. and N.J.G. conceived and designed the study. I.S. performed the experiments and analyzed the data. I.S. and N.J.G. wrote the manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Smalianchuk, I., Gandhi, N.J. Ventral premotor cortex encodes task relevant features during eye and head movements. Sci Rep 12, 22093 (2022). https://doi.org/10.1038/s41598-022-26479-2