Visual spatial information is paramount in guiding bimanual coordination, but anatomical factors, too, modulate performance in bimanual tasks. Vision conveys not only abstract spatial information, but also informs about body-related aspects such as posture. Here, we asked whether, accordingly, visual information induces body-related, or merely abstract, perceptual-spatial constraints in bimanual movement guidance. Human participants made rhythmic, symmetrical and parallel, bimanual index finger movements with the hands held in the same or different orientations. Performance was more accurate for symmetrical than parallel movements in all postures, and additionally benefited when homologous muscles were concurrently active, for instance when parallel movements were performed with differently rather than identically oriented hands. Thus, both perceptual and anatomical constraints were evident. We manipulated visual feedback with a mirror between the hands, replacing the image of the right with that of the left hand and creating the visual impression of bimanual symmetry independent of the right hand’s true movement. Symmetrical mirror feedback impaired parallel, but improved symmetrical bimanual performance compared with regular hand view. Critically, these modulations were independent of hand posture and muscle homology. Thus, visual feedback appears to contribute exclusively to spatial, but not to body-related, anatomical movement coding in the guidance of bimanual coordination.
Whether we type on a keyboard, applaud, or ride a bike – bimanual coordination is crucial in many of our everyday activities. Therefore, the principles that guide bimanual coordination have received much interest, not least to inform treatment to restore regular bimanual function in clinical settings. Beyond therapeutic considerations, coordinative action can be viewed as an ecologically valid model to understand the principles of movement planning1. Accordingly, experiments have studied the factors that constrain bimanual movement execution. A prominent and consistent finding has been that humans can perform symmetrical movements – with symmetry usually defined relative to the sagittal body midline – with higher precision and at higher speeds than parallel movements2,3,4. During symmetrical movements, the two effectors move towards opposite sides of space; for instance, one hand moves to the right while the other concurrently moves to the left. Conversely, parallel movements involve motion in the same direction of space; for instance, both hands synchronously move to the left or to the right.
The symmetry bias has been demonstrated across a variety of effectors and movement types, such as finger flexion and extension5,6, finger tapping7, wrist movements2, line drawing8, elbow flexion and extension9, and circling arm movements10. Given its stability across many qualitatively different movements, symmetry is thought to constitute a general organizing principle of bimanual coordination11. One popular experimental paradigm has been finger abduction and adduction, that is, sideways movements of the two index fingers with the hands held palm down. Participants perform these movements rhythmically, and we therefore refer to this task as “finger oscillations”. With the palms down, movement accuracy is high when both fingers are abducted at the same time, that is, when fingers are moved in symmetry. Accuracy is lower when one finger is abducted while the other one is concurrently adducted, that is, when fingers are moved in parallel3.
The mechanisms underlying the symmetry bias have been under debate. Early reports suggested that it originates from anatomical constraints within the motor system, that is, from interactions rooted in muscle synergies caused by hemispheric crosstalk2,3,12. Muscle synergies may arise through reciprocal connections between the cortical regions that control homologous muscles of the two body sides and result in preferred activation of homologous limb movements. In this view, symmetrical movements are stable because they involve the same muscles in both limbs, allowing efficient integration of contra- and ipsilateral motor signals. In contrast, parallel finger movements involve different muscles in the two limbs, resulting in reduced stability due to ongoing interference from conflicting ipsi- and contralateral muscle commands13.
However, others have suggested that, instead, the symmetry bias originates from interactions rooted in perception7,14. The key finding supporting this proposal was that the symmetry bias prevailed when participants performed oscillatory finger movements with the two hands held in opposite orientations, that is, one palm facing up and the other down. In this situation, symmetrical movements involve non-homologous muscles, whereas parallel movements are achieved through homologous muscles. The persistent advantage of symmetrical over parallel movements despite a reversal of the muscles involved in the bimanual movement is at odds with the idea that muscle synergies alone are responsible for the symmetry bias7,13,14.
Several studies have suggested that the previous findings of external vs. anatomical symmetry constraints are not contradictory, but that both factors jointly influence coordination behavior1,9,15,16. According to this view, anatomical and external contributions flexibly determine bimanual coordination, with their relative weighting depending on context and task demands13. In line with this proposal, we recently observed that the perceptual symmetry bias in the finger oscillation task coexisted with an advantage for using homologous muscles17, rather than relying on perceptual coding alone, as had been previously suggested7.
Whereas the role of perceptual and anatomical codes has, thus, been firmly established, it is less clear what kind of perceptual information these biases are based on. The prevalent experimental approach has been to contrast vision with posture, and to interpret performance biases induced by vision as evidence for perceptually induced, spatial guidance, and biases induced by posture as evidence for anatomical constraints of movement coordination7,12. Yet, visual information conveys not just abstract spatial information, but also information about the body, presumably to contribute to the construction of a body representation. Indeed, we have found that muscle homology affected bimanual finger oscillations less in congenitally blind than in sighted individuals; this finding suggests that vision may induce not just a spatial bias, but may, in addition, contribute body-related information, such as postural and muscle-related information, for motor coordination17.
One experimental method to investigate the role of body-related visual information is the use of mirror visual feedback. A mirror is placed along the body midline in the sagittal plane; participants look into the mirror from one side, so that the view of the hand behind the mirror is occluded and replaced by the mirror image of the still visible hand. Thus, although one arm is hidden from view, participants have the impression of seeing both of their hands moving in synchrony18. The strong influence of this visual manipulation on body-related, anatomical aspects is maybe most impressively demonstrated in mirror visual feedback therapy (MVT). MVT is used to treat pathological conditions involving unilateral upper extremity pain and motor dysfunction. The mirror replaces visual feedback of the affected arm with that of the intact arm. Viewing mirrored hand movements of the intact arm has been reported to aid recovery of upper extremity function and/or to alleviate pain in different pathological conditions, including stroke, complex regional pain syndrome, and orthopedic injuries, and can even reduce phantom pain after limb amputation when the mirror image of the remaining hand fills the place of the now missing limb (for reviews see19,20,21,22). Thus, in such setups, the visual manipulation of anatomical aspects strongly modulates perception.
The mirror setup can also increase movement coupling between the hands, that is, bimanual symmetrical movements are spatially more similar when mirror visual feedback is available, relative to when only one hand is visible23. In the finger oscillation paradigm, mirror feedback can create incongruence between the visually perceived and the truly performed bimanual movement; for instance, during parallel finger movements, mirror feedback feigns symmetrical movement through vision while proprioceptive information signals the true, parallel movement. In this incongruent situation, performance declines compared to regular viewing of the hands and relative to when vision is prevented entirely by closing the eyes24. In other experimental paradigms, such incongruent visual feedback can even induce phantom sensations, such as tickling or numbness, in healthy participants18,25,26,27.
Thus, a large body of evidence suggests an important role of vision for bimanual coordination, but the specific role of vision for the different aspects to which it can contribute, such as abstract spatial or body-related information, is less clear. One account, the perception-action model put forward by Bingham and colleagues, posits that bimanual coordination performance critically depends on the performer’s ability to perceptually detect the phase relationship between the two limbs, expressed in their relative movement directions14,28,29,30. Thus, the model specifies visual direction as the aspect of visual information that is relevant for coordination. Difficulty in reliably detecting relative direction presumably leads to maladaptive error detection and correction, which, in turn, impedes performance14,28,29,30. According to Bingham’s model, bimanual coordination is but a special case of any form of visually driven coordination. In fact, they point out that similar constraints appear to govern coordination of a single limb with either a visual stimulus or the limb of another person16,31,32,33. Accordingly, most experiments that have explored Bingham’s theory have employed paradigms that required unimanual coordination of a limb with moving visual stimuli presented on a display32,34,35,36. However, this experimental approach implicitly presumes that the brain abstracts from all movement parameters and, in particular, that it dismisses other specific, body-related visual information. Yet, the findings that have demonstrated an influence of anatomical in addition to perceptual factors9,15,16,17 suggest that visual information pertaining to posture and muscles may also be relevant for bimanual coordination.
Here, we used the finger oscillation task as a strictly bimanual paradigm to scrutinize the proposal that bimanual coordination relies predominantly on visual direction information, and to integrate the findings from visuo-motor and bimanual coordination that have used different experimental paradigms. The finger oscillation task allowed us to disentangle the three body-related visual aspects that could each potentially be relevant for successful bimanual coordination: first, visual feedback about the spatial direction of the performed movement (parallel vs. symmetrical); second, visual feedback about the posture of the hands (same vs. different orientation); and third, visual feedback about the muscles involved in executing the movements (homologous vs. non-homologous).
We conducted the present study to delineate the role of these three aspects of visual information for bimanual coordination. Participants executed oscillatory finger movements that were either parallel or symmetrical relative to the sagittal body midline, with the two hands held either in the same or in different orientations. Participants either viewed their two hands directly, or alternatively viewed their left hand directly and its mirror image at the location in space occupied by the hidden right hand.
Twenty participants performed the finger oscillation task, that is, they made symmetrical and parallel finger abduction and adduction movements with the index fingers of the two hands with gradually increasing speed7,17. In different blocks, the two hands had either the same orientation with both palms up or down, or different orientations, with one hand facing palm up and the other palm down. This latter manipulation reverses the muscles involved in symmetrical vs. parallel movements: whereas symmetrical movements usually require the use of homologous muscles in the two hands, this muscle configuration is now required for parallel movements. To manipulate visual afferent information, a mirror was placed between the hands in half of the experiment; it hid the right hand, and participants saw the mirror image of the left hand in its place, creating the impression that the currently performed movement was symmetrical, and that both hands had the same posture, independent of the true movement type and hand posture. We tested how the congruence and incongruence of these aspects of visual feedback with the truly performed movement affected the accuracy of bimanual movement coordination.
Operationalization of performance
Following previous work, we operationalized movement accuracy by classifying movements as “correct” when the phase difference of the two fingers deviated less than 50° from the instructed movement phase in a single movement cycle of abducting and adducting the fingers7,17. For symmetrical movements, the phase difference in external space should be 180° (note that studies focusing on muscle symmetry have referred to this phase difference as 0°, based on muscle space rather than external space)7,17. Analogously, for parallel movements the phase difference should be 0° in external space. Other groups have used different analysis approaches; we show in the supplemental material that the results presented here are independent of the specific analysis method.
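This classification criterion can be sketched in a few lines of code (a minimal illustration with hypothetical function and variable names, not the analysis code used in the study). The key detail is that phase is circular, so the deviation from the instructed phase must be computed with wrap-around: a measured phase difference of 350° is only 10° away from the 0° target for parallel movements.

```python
import numpy as np

def classify_cycles(phase_diff_deg, instructed_deg, tolerance=50.0):
    """Mark movement cycles as correct when the measured inter-finger
    phase difference deviates less than `tolerance` degrees from the
    instructed phase (180 deg symmetrical, 0 deg parallel, in
    external space), respecting the circular wrap-around of angles."""
    phase_diff_deg = np.asarray(phase_diff_deg, dtype=float)
    # circular deviation, mapped into the range [0, 180]
    deviation = np.abs((phase_diff_deg - instructed_deg + 180.0) % 360.0 - 180.0)
    return deviation < tolerance
```

With this sketch, a cycle measured at 350° counts as correct for parallel movements (10° deviation), whereas a cycle at 120° is incorrect for symmetrical movements (60° deviation).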
We assessed the statistical significance of performance differences across conditions with a Bayesian model that included parameters (referred to as beta weights) reflecting the main effects of our factorial design and all their interactions, equivalent to a frequentist GLMM approach. Bayesian model analysis assesses the interval in which a given parameter lies with 95% probability, referred to as the highest density interval (HDI) or credible interval. If the HDI of a group-level beta weight excludes zero, this indicates that the parameter contributes to the estimation of the dependent variable, here, movement accuracy. In contrast, an HDI that includes zero indicates that a beta weight does not contribute to the estimation of the dependent variable.
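The HDI criterion can be illustrated with a short sketch (hypothetical helper names; this is not the model code used in the study). For a unimodal posterior, the 95% HDI is the narrowest interval that contains 95% of the posterior samples, and a beta weight is taken to contribute to the model when that interval excludes zero:

```python
import numpy as np

def hdi(samples, credible_mass=0.95):
    """Smallest interval containing `credible_mass` of the posterior
    samples (the highest density interval for a unimodal posterior)."""
    s = np.sort(np.asarray(samples, dtype=float))
    n = len(s)
    n_in = int(np.ceil(credible_mass * n))
    # widths of all candidate intervals that contain n_in samples
    widths = s[n_in - 1:] - s[: n - n_in + 1]
    i = int(np.argmin(widths))
    return s[i], s[i + n_in - 1]

def excludes_zero(samples, credible_mass=0.95):
    """True when the HDI lies entirely above or entirely below zero."""
    lo, hi = hdi(samples, credible_mass)
    return lo > 0.0 or hi < 0.0
```

For a posterior centered well away from zero, `excludes_zero` returns True; for one centered on zero, the HDI straddles zero and the weight is treated as non-contributing.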
Anatomical and external contributions to bimanual coordination
We first tested whether both external and anatomical influences were at all present in our study; the following analyses then focused on which type of visual information modulated these biases. We compared conditions in which correct performance required the use of homologous and non-homologous muscles in the two hands to make symmetrical or parallel movements. If bimanual coordination were solely constrained by anatomical factors, performance should be superior whenever homologous muscles as opposed to non-homologous muscles must be used, regardless of hand posture and movement instruction. Alternatively, if movement coordination were solely constrained by external factors, the symmetry advantage should prevail regardless of whether homologous muscles are involved in the instructed movement. If both anatomical and external factors constrained bimanual coordination, performance in either movement condition should benefit from the use of homologous muscles, in addition to a general advantage of symmetrical over parallel movements.
Whether the instructed movement required the use of homologous muscles depended on the experimental factors movement instruction and hand posture. When both palms had the same orientation, symmetrical movements required using homologous muscles, and parallel movements required using non-homologous muscles. In contrast, when the hands were held in different postures, symmetrical movements required using non-homologous muscles, and parallel movements required using homologous muscles.
Performance declined with increasing movement speed, but more so for parallel than for symmetrical movements, evident in a steeper decline of the percentage of movement cycles in which the phase difference was classified as correct (i.e., deviating at most ±50° from the expected phase difference of 180° for symmetrical, and 0° for parallel movements). In addition, performance was better with the hands in the same than in different postures for symmetrical movements, whereas the opposite performance pattern emerged for parallel movements (Fig. 1: left panels; Fig. 2). The posterior distributions of the beta weights that together reflected modulations by anatomical and external spatial coding (βinstruction, βposture, βspeed, βinstruction_posture, βinstruction_speed, βinstruction_posture_speed) excluded zero, indicating that each factor, as well as their interactions, contributed to bimanual coordination performance (Table 1, Fig. 3).
Hypothesis-driven, direct comparison of the model posterior predictions for conditions that involved homologous vs. non-homologous muscles, separately for symmetrical and parallel movements at slow and fast speeds (parameter: βinstruction_posture_speed), revealed two key findings. First, the resulting credible difference distributions excluded zero, and the estimated mean performance was larger for symmetrical than parallel movements, both at slow and fast speeds. This result confirmed superior performance of symmetrical over parallel movements and, thus, implied external-spatial contributions to performance. Second, all resulting credible difference distributions were positive, suggesting that performance benefitted from the use of homologous muscles and, thus, indicated that performance was modulated by anatomical factors. These differences were more pronounced at fast than at slow speeds (homologous minus non-homologous conditions: same-differentsymmetrical_fast: M = 2.66 [2.43 2.92]; different-sameparallel_fast: M = 1.56 [1.43 1.70]; same-differentsymmetrical_slow: M = 2.17 [1.85 2.49]; different-sameparallel_slow: M = 1.33 [1.13 1.53]).
In sum, these results indicate that bimanual coordination is constrained by external factors, but is additionally modulated by anatomical factors, replicating the result of our previous report17 in an independent sample and supporting previous accounts of a mixed influence of both in bimanual coordination1,9,15,16.
Body-related visual information integrated for action
The present study’s main aim was to determine whether, and if so, which specific kind of abstract spatial or body-related visual information constrains movement coordination. Therefore, our experiment was designed to disentangle different kinds of visual feedback: about movement direction, about hand posture, and about the muscles involved in the current action.
Each of these potential influences makes distinct predictions about the pattern of bimanual coordination performance across our experimental factors, and we will briefly introduce each predicted pattern (see Fig. 4 for a visual illustration of the three different visual feedback conditions induced by the mirror).
Visual feedback about movement direction
One potential source of information could be the direction of movement, independent of the further specification of how this movement is achieved, that is, irrespective of posture and involved muscles. In our paradigm, this influence of visual information about movement direction (symmetrical vs. parallel) would be evident in a difference between conditions in which visual and proprioceptive modalities provided congruent versus incongruent information about the type of performed movement (Fig. 4A). Without the mirror, visual and proprioceptive information about the executed movement were always congruent (odd-numbered conditions in Figs 1 and 2). With the mirror, visual-proprioceptive feedback was incongruent whenever the fingers moved parallel; in these conditions, visual feedback indicated that the fingers were moving symmetrically. If visual feedback about movement direction were relevant for bimanual coordination, performance in congruent feedback conditions (numbered 2 and 4 in Figs 1, 2, and 4A) should be superior to that in conditions with incongruent visual-proprioceptive information (numbered 6 and 8 in Figs 1, 2, and 4A). Critically, this difference should be independent of hand posture. Accordingly, congruence of visual-proprioceptive information about movement direction would be reflected in the interaction of movement instruction and mirror view.
Visual feedback about posture
A potential influence of visual information about hand posture would be evident in a difference between conditions with congruent vs. incongruent information about posture from vision and proprioception (Fig. 4B). Without the mirror, visual-proprioceptive information about posture was always congruent (odd-numbered conditions in Figs 1 and 2). With the mirror, visual-proprioceptive information was incongruent when the two hands had different postures; in these conditions, mirror feedback indicated that the hands had the same orientation. If visual feedback about hand posture were relevant for bimanual coordination, performance should be superior in congruent (numbered 2 and 6 in Figs 1, 2, and 4B) over incongruent (numbered 4 and 8 in Figs 1, 2, and 4B) visual-proprioceptive posture conditions. Critically, this performance advantage should be independent of movement instruction, that is, of whether executed movements were symmetrical or parallel. Accordingly, congruence of visual and proprioceptive feedback about hand posture would be reflected in the interaction of mirror view and hand posture.
Visual feedback about the involved muscles
A potential influence of visual information about the muscles involved in the current action would be evident in a difference between congruent vs. incongruent visual-proprioceptive information about the currently active muscles (Fig. 4C). Without the mirror, visual-proprioceptive information about involved muscles was always congruent (odd-numbered conditions in Figs 1 and 2). With the mirror, the combination of movement instruction and hand posture determined whether visual-proprioceptive feedback was congruent or not. Visual-proprioceptive information was, for instance, incongruent when participants made symmetrical movements with differently oriented hands. In this situation, the hands appeared to be oriented in the same posture due to the mirror, and, thus, vision suggested that homologous muscles were used, although participants actually had to use non-homologous muscles. Further conflict conditions are illustrated in Fig. 4C. If visual feedback about muscles were relevant for bimanual coordination, performance in congruent apparent muscle conditions (numbered 2 and 8 in Figs 1, 2, and 4C) should be superior to that in incongruent conditions (numbered 4 and 6 in Figs 1, 2, and 4C). Accordingly, congruence of visual-proprioceptive feedback about involved muscles would be reflected in the interaction of movement instruction, mirror view, and hand posture.
Visual feedback about movement direction is relevant for bimanual coordination
With the mirror present, performance improved for symmetrical movements, but deteriorated for parallel movements, both relative to regular viewing without the mirror. These effects were evident in a gradual decline of the percentage of correctly executed movement cycles with increasing movement speed (Figs 1 and 2). For symmetrical movements, this effect was small due to performance near ceiling even at high speeds with the hands held in the same posture. Crucially, the effect of visual feedback varied systematically with movement instruction, but not with hand posture. The posterior distributions of the relevant model beta weights, βinstruction_mirror and βinstruction_mirror_speed, excluded zero, confirming that they contributed to explaining the probability of moving the two fingers correctly (Table 1 and Fig. 3). This result indicates an effect of visual information about movement direction, but not about hand posture and involved muscles.
To further scrutinize this result, we subtracted posterior model predictions in the non-mirrored conditions from those in the mirrored conditions, separately for symmetrical and parallel movements at slow and fast speeds (parameter: βinstruction_mirror_speed). The credible difference distributions are displayed in Fig. 5. Performance deteriorated during parallel movements in mirror as compared to non-mirrored conditions, as evident in the negative distribution of credible differences at both slow and fast speeds, all of which excluded zero. In contrast, performance improved during symmetrical movements in mirrored relative to non-mirrored conditions, as evident in the positive distribution of credible differences at fast speeds, which again excluded zero. This performance improvement was not evident at low speeds, presumably because performance was more similar overall during slow movements, in line with previous reports (see Fig. 2).
Visual information about hand posture and involved muscles is irrelevant for bimanual coordination
To further test whether, indeed, coordination relied solely on visual direction information, we directly examined the parameter estimates relevant for the potential alternatives, namely, hand posture and involved muscles.
For hand posture, the posterior distributions of the model beta weights βmirror_posture, and βmirror_posture_speed included zero, suggesting that this experimental factor did not contribute to explaining the probability of moving correctly (Table 1 and Fig. 3). Thus, statistical analysis did not provide any evidence that visual information about hand posture constrained movement coordination in the present experiment.
An effect of visual information about involved muscles would be evident in the interaction of movement instruction, mirror view, and hand posture (Fig. 4C). Note that a modulation of visual information about involved muscles would thus encompass the same factors that also indicate a modulation of visual information about movement direction, namely movement instruction and mirror view, but would warrant an additional modulation by hand posture. The posterior distribution of the corresponding model beta weight βinstruction_mirror_posture just barely excluded zero (Table 1 and Fig. 3). Nonetheless, we followed up on this finding by subtracting posterior model predictions for incongruent from congruent mirror conditions, separately for symmetrical and parallel movements. The distributions of credible differences were positive and excluded zero, indicating that performance in congruent feedback conditions was superior to performance in incongruent conditions, as would be predicted if visual information about involved muscles were relevant for coordination (congruent minus incongruent conditions: same-differentsymmetrical_mirrored: M = 2.53 [2.23 2.83]; different-sameparallel_mirrored: M = 1.56 [1.39 1.72]).
We further reasoned that, if visual feedback about the involved muscles indeed determined coordination, performance in congruent mirror conditions should be indistinguishable from performance in corresponding conditions without mirror, because in both cases, visual and proprioceptive feedback unanimously indicate that corresponding muscles are used. Additionally, along with altering visual feedback concerning muscle identity, the mirror manipulation presumably affected visual feedback concerning the relative timing of bimanual muscle activation. With regular visual feedback of the hands, the dominant hand has been observed to lead the non-dominant hand by about 25 ms in bimanual coordination tasks10. Correspondingly, mirrored feedback about the timing of muscle activation would not correspond exactly to its actual timing, given the slight lag of the non-dominant hand. Therefore, we predicted that performance in congruent mirrored conditions should be worse than in congruent non-mirrored conditions if visual information concerning involved muscles determined coordination. To test this prediction, we subtracted posterior model predictions for congruent non-mirrored from congruent mirror conditions, separately for symmetrical and parallel movements. Note that a differential effect of mirror view depending on movement instruction cannot be accounted for by a visual effect of involved muscles, as both conditions are identical concerning muscle information. If nonetheless the effect of mirror view depends on the movement instruction, this would further corroborate the effect of visual movement direction, as parallel and symmetrical movements differ concerning this aspect.
The effect of mirror view indeed differed according to the movement instruction. Performance improved with mirrored feedback, relative to non-mirrored conditions, when moving symmetrically (mirrored-non-mirroredsymmetrical_same: M = 0.41 [0.06 0.76]). The opposite pattern was evident when moving in parallel, that is, mirrored visual feedback was detrimental to performance (mirrored-non-mirroredparallel_different: M = −0.35 [−0.52 −0.16]).
Contrary to the comparison of congruent vs. incongruent mirrored conditions concerning involved muscles, the comparison of congruent mirrored with congruent non-mirrored conditions, thus, did not support the notion that visual feedback about the involved muscles constrains bimanual coordination. Instead, the credible, but differential effect of mirrored visual feedback on performance depended on the movement instruction and corroborates that visual movement direction affected coordination performance.
Temporal aspects of visual feedback concerning movement direction
The performance improvement during the viewing of mirrored symmetrical feedback struck us as surprising, as one might expect that non-veridical visual feedback about movement timing would be detrimental to, rather than supportive of, the production of coordinated movement. This finding led us to speculate that the temporal synchrony of visual feedback in the mirrored condition may actually reduce the true lag between the dominant and non-dominant hands in our experiment, a mechanism that could explain the mirror-induced performance improvements observed here.
When movement direction was visually and proprioceptively congruent, performance was better in mirrored than non-mirrored conditions; this difference was small, but associated with a credible difference parameter estimate in our model. Performance of symmetrical movements was generally near ceiling, so that even substantial differences on the logit scale translate to very small differences in performance measured as percentage correct. Accordingly, the 0.45 improvement on the logit scale translates to only a 0.3% improvement in percentage correct at high movement speeds (beta weight in the model: βinstruction_mirror_speed). Conversely, smaller differences on the logit scale in other conditions were much more clearly evident on the percentage correct scale. The performance improvement with mirrored relative to non-mirrored feedback (beta weight in the model: βinstruction_mirror_posture) and hands held in different orientations was estimated at 2.3% (logit: 0.19; baseline performance level: 85.2%, logit: 1.75), as compared to a 1.0% improvement (logit: 0.26; baseline performance level: 95.5%, logit: 3.05) with hands held in the same orientation. Nonetheless, we are hesitant to capitalize on this result, because the beta weight including posture (βinstruction_mirror_posture) just barely excluded zero, and because the performance decline when performing parallel movements with the mirror present, relative to non-mirrored visual feedback, was larger (13.7%; 0.63 logits; beta weight in the model: βinstruction_mirror_speed).
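The logit-to-percentage arithmetic above follows directly from the logistic transform; a brief sketch (with hypothetical function names) makes the ceiling effect explicit: the same logit-scale improvement yields a smaller percentage-point gain the closer baseline performance is to 100%.

```python
import math

def logistic(x):
    """Inverse-logit transform: map a logit value to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def pct_change(baseline_logit, delta_logit):
    """Percentage-point change in accuracy implied by a logit-scale
    improvement of `delta_logit` on top of `baseline_logit`."""
    return 100.0 * (logistic(baseline_logit + delta_logit) - logistic(baseline_logit))
```

For instance, a baseline logit of 1.75 corresponds to about 85.2% correct, and an improvement of 0.19 logits from that baseline yields roughly a 2-percentage-point gain, whereas the larger 0.26-logit improvement from the higher 3.05-logit baseline (about 95.5% correct) yields only about 1 percentage point, mirroring the pattern reported in the text.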
The present study aimed at specifying anatomical and external-spatial contributions to bimanual coordination performance. Previous findings, mainly from experiments requiring the coordination of limb movements with visual cues, have led to a theoretical account of bimanual coordination, and motor coordination more generally, that stresses the relevance of the perceivability of phase synchrony implied in visual direction information14,28,29,30. In contrast, findings from some bimanual coordination paradigms have also stressed the importance of anatomical factors, such as the muscles involved in a particular bimanual movement, suggesting that visual information about factors other than movement direction alone may play a role in coordinative behavior of the limbs15,16,17. We exploited the well-known bias towards symmetrical over parallel finger movements to delineate different potential sources of visual modulation by introducing a mirror through which participants saw the reflection of one hand projected onto the location of the hidden other hand. Our study revealed three key results. First, anatomical factors modulated bimanual coordination: participants performed better when bimanual movements required the concurrent activation of homologous rather than non-homologous muscles. Second, external-spatial factors, too, modulated bimanual coordination: an advantage of symmetrical movements prevailed regardless of hand posture and, thus, irrespective of whether homologous muscles had to be activated. Third, of the three kinds of visual information manipulated in the present study (movement direction, hand posture, and the muscles involved in the performed movements), only movement direction information modulated bimanual performance.
In contrast, visual information pertaining to hand posture appeared to be irrelevant for coordination performance, and there was only weak evidence that visual information pertaining to the muscles involved in the current movement may play a role in coordination performance.
In line with the specific modulation by visual direction information we observed in the present experiment, previous studies have demonstrated that visual directional cues are relevant for bimanual coordination. For instance, most coordination tasks result in inherently stable performance only when the bimanual phase patterns are symmetrical or parallel, but not for intermediate phase differences3. Yet, participants can execute such out-of-phase movements if their movement is yoked to concurrent symmetrical or parallel visual information while the hands are hidden from view. For instance, human participants can execute four circular hand movements with one hand, and concurrently five with the other hand, only if these movements are translated into equally fast visual circular movements7. Furthermore, performance of orthogonal bimanual movements, such as one hand moving up and down, while the other hand moves to the left and right, improves if visual feedback is given in one plane, that is, as if both hands were moving up and/or down8. These studies suggest that performance of less stable coordination patterns improves if directional visual feedback indicates that an inherently stable coordination pattern, that is, symmetrical or parallel movement, is performed.
Bimanual movements can also be stable when visual feedback is neither symmetrical nor parallel, provided that the movement paths of both hands can be visually perceived as forming a common, coherent shape37. In a similar vein, participants can execute polyrhythmic two-hand movements when guided by visual displays that integrate directional information of the two hands into one common visual signal13. These so-called Lissajous displays integrate the position of the two hands into a single point on the display by mapping the movement of each limb onto one axis. Performance in this setup is best if the display shows both the visual target pattern and a cursor that indicates the current (transformed) limb position38,39,40,41. Performance declines rapidly if the display is turned off, which has been interpreted to suggest that the integration of immediate visual direction information about the to-be-performed coordination pattern is a prerequisite for its execution38,41.
Kovacs and colleagues have interpreted these findings as empirical support for the perception-action model proposed by Bingham and colleagues, which capitalizes on visual direction information as the cardinal factor for successful bimanual coordination14,28,29,30,40. Visual conditions such as those created by the above-mentioned experimental setups then presumably aid error detection, because they facilitate the perceivability of relative movement direction39,40. In line with the idea of visual movement direction driving coordinative behavior, typical coordination phenomena, such as the advantage of symmetrical over parallel movements, persist even if movements are coordinated only visually. This is the case, for instance, when two people must coordinate their movements16,31 and when participants must coordinate their movements with moving visual stimuli on a display32,33. Using such a visual coordination paradigm, it has been demonstrated, for example, that training participants’ ability to detect relative movement direction improves coordination performance with a moving visual stimulus on a display36. In a similar vein, perceptual detection of relative phase has been shown to be largely unaffected by alternative candidate movement parameters, such as frequency and speed, further underscoring the importance of relative movement direction for the perceivability of relative phase42. In light of these results, it has been suggested that bimanual coordination is but a special case of visually driven coordination in general and as such similarly relies on the perceptual ability to detect relative phase from movement direction. Crucially, this conclusion presumes that the brain abstracts movement direction and discards all other body-specific visual information.
We provide direct experimental evidence for this assumption here, using a strictly bimanual paradigm and thus bridging the gap between findings from visuo-motor and bimanual coordination that have used different experimental approaches.
Collectively, then, these results stress the importance of visual movement direction for bimanual coordination and provide a comprehensive account of the dominant role of visual direction information we observed in the present study. In contrast, a general degradation of vision does not impair performance7,24,43, or leads only to a minor destabilization44. Similarly, visual augmentation by marking fingers that have to move together to produce symmetrical or parallel tapping patterns does not affect performance45. Moreover, previous studies have suggested that movement execution is modulated by the level of abstraction of visual effector feedback46,47. Our study did not abstract visual direction information but, through the mirror setup, provided participants with visual feedback that appeared to reflect the real hands. This experimental situation thus closely resembles the veridical visual feedback of everyday situations, in which we usually have full vision of our effectors48. Our results show that the brain indeed abstracts movement direction from body-related visual feedback during bimanual coordination, while discarding visual information about hand orientation and the involved muscles, and thus validate a generalization of the findings obtained with more abstract feedback, such as cursors on a screen, to realistic feedback situations.
It is under debate whether continuous, rhythmic movements and short, goal-directed movements rely on similar brain mechanisms. The role of visual information has been investigated in the context of bimanual goal-directed movement49,50,51 and especially in the context of unimanual goal-directed movement52. In these studies, visual information about effector position affected performance, in line with the requirement of integrating target location with current limb position53,54. For instance, visual information about the limb can dominate proprioceptive position information, a phenomenon termed ‘visual capture’55,56. Furthermore, specific resources appear to be devoted to monitoring hand position during goal-directed movement49. The relative contribution of – usually redundant – visual and proprioceptive signals to movement planning depends on the reliability of each informational source57,58,59,60,61,62, and the relative weighting of visual and proprioceptive signals differs according to the stage of motor planning60,63. Visual information appears to be most important when inferring external spatial movement parameters, whereas primarily proprioceptive feedback is used when inferring muscular-based, position-related information, as is necessary to translate a motor plan into body- or hand-centered coordinates for movement execution60,63,64.
To relate the present study to these findings from studies on goal-directed movement, one can conceptualize the present repetitive finger oscillation task in an analogous framework. Here, visual direction information outweighed proprioceptive and motor signals to guide continuous bimanual coordination, in line with the finding that goal-directed movements primarily rely on visual information when external spatial movement parameters must be inferred. In contrast, visual information about hand posture and involved muscles did not affect performance, suggesting that proprioceptive information outweighed visual feedback for these properties in the present task. This pattern of results is in line with the prominent role of proprioceptive signals when muscular-based, position-related information must be derived for goal-directed movement to translate a motor plan into body- or hand-centered coordinates for movement execution. However, the repetitive nature of the present bimanual task prohibits formally distinguishing between planning and execution stages of the movements, and thus makes it difficult to draw firm conclusions about the potential overlap regarding the processing principles of goal-directed, unimanual and continuous, bimanual movements.
In the present task, mirrored visual movement information was always integrated for bimanual coordination, but the behavioral consequences of integration depended on whether visual movement information was congruent or incongruent with proprioceptive and motor signals. This pattern of results seems to be at odds with previous studies that reported that integration of mirrored visual feedback scaled with the degree of congruency of visual and proprioceptive movement information18,65,66. In these studies, synchronous movements led to reliance primarily on visual information, whereas asynchronous movements led to reliance primarily on proprioceptive information. Notably, the dependent measures marking integration of visual information in these studies – gap detection at, or pointing movements with, the hidden hand – were acquired after bimanual movements with mirrored visual feedback had been performed for some time. Thus, the dependent measures were unimanual and as such not indicative of visual contributions to bimanual coordination performance. Furthermore, both measures might differ considerably with regard to the reliability and relevance assigned to bimanual visual information, as compared to continuous bimanual coordination performance assessed in the present task.
Incongruence of movement-related visual, proprioceptive, and motor information led to a decline of bimanual coordination performance in our study. This result is in line with reports of MVT suggesting that incongruent sensory feedback induces phantom sensations, such as tickling and numbness, in healthy participants18,25,26,27. In contrast, congruence of mirrored visual, proprioceptive, and motor information led to a performance improvement, possibly because the mirrored movement information during symmetrical movements provided perfectly timed visual feedback of the instructed bimanual movements. These findings are relevant to clinical applications of the mirror manipulation. So far, few standardized MVT treatment protocols exist, and those that do have specified that movements should be bilateral and performed in synchrony, but have not stressed that they should be symmetrical as well67,68. It has even been suggested that the “[…] actual manner of movement appears not to matter as long as it is bilateral and synchronized”68. Additionally, it has been suggested that therapeutic aids should be used unilaterally with the healthy arm in front of the mirror67. These and similar instructions possibly produce incongruence of proprioceptive and visual movement direction, which might produce undesired effects and explain why scientific evidence in favor of MVT as a tool to aid bimanual function is still scarce to date. Consequently, the selective performance benefit of mirrored symmetrical movements and the detrimental effect of incongruent visual movement information for bimanual coordination we report here suggest that applications of MVT should stringently ensure that congruent, symmetrical movements are performed, and further imply that unimanual mirrored handling of therapeutic aids may be disadvantageous to the facilitation of bimanual coordination.
In conclusion, bimanual coordination is guided both by anatomical, muscle-based constraints and by perceptual, visual constraints. For the latter, information about movement direction appears to play the key role, whereas effects of posture and muscle homology appear to be mediated only through non-visual channels; visual cues pertaining to these aspects did not further modulate performance. These results integrate well with current models of bimanual control and goal-directed movement that posit a guiding role of abstract visual direction information for movement planning and execution.
We report how we determined sample size, all experimental manipulations, all exclusions of data, and all evaluated measures of the study. Data and analysis scripts are available online (see https://osf.io/g8jrt/).
Previous studies have typically reported significant results pertaining to posture in the finger oscillation task with N < 107,17. Here, we defined, in advance, a target sample size of 20 participants because we expected that mirror-induced effects would be smaller than posture effects, requiring a larger number of participants for statistical power. Data were acquired from 23 participants, because the data of 3 participants had to be excluded from analysis (see below). None of the participants had participated in our earlier study17. All participants were students of the University of Hamburg. They were right-handed according to questionnaire-guided self-report (average Oldfield laterality quotient of 80.4, range: 50–10069), had normal or corrected-to-normal vision, and did not report any neurological disorders, movement restrictions, or tactile sensitivity problems. They provided written informed consent and received course credit for their participation. The experiment was approved by the ethics committee of the German Psychological Society (DGPs) and all methods were performed in accordance with the relevant guidelines and regulations. Two participants aborted the first experimental session after a few trials, because they were unable to perform the bimanual coordination task. Data of a third participant was excluded because movements were accidentally instructed incorrectly. The final sample thus consisted of 20 students, 15 of them female, mean age 23.6 years (range: 20–32 years).
The experiment was designed based on previous studies that used the same paradigm7,17. Figure 6 illustrates the setup and the experimental conditions. Participants performed a finger oscillation task; they executed adduction and abduction movements, that is, right-left movements, with the two index fingers. Instructed movements were either symmetrical, that is, the index fingers moved in- or outwards at the same time, or parallel, that is, fingers moved to the right or left side in space at the same time (Fig. 6B). There were two viewing conditions: non-mirrored and mirrored (Fig. 6A). In the non-mirrored conditions, participants viewed both hands directly and, thus, received regular visual feedback. In the mirrored conditions, a mirror blocked the view of the right hand, so that participants saw the mirror image of the left hand in place of their real right hand; however, this manipulation gives rise to the subjective impression of seeing both hands just like in the non-mirrored condition. The hands were either held in the same (both palms up or down) or in different hand orientations (right palm up, left palm down, or vice versa; Fig. 6C).
The experiment comprised four experimental factors. The factors movement instruction (symmetrical vs. parallel), mirror view (non-mirrored vs. mirrored), and hand posture (both palms down vs. both palms up vs. left palm up and right palm down vs. right palm up and left palm down) were varied block-wise in randomized order. The factor speed (10 discrete speeds from 1.4 to 3.4 Hz) was varied within trials. Whereas participants are usually able to perform symmetrical and parallel movements (almost) equally well at low speeds, their performance regularly declines markedly for parallel, but not symmetrical, movements at high speeds3. During a trial, each speed level was maintained for 5 beats, yielding 50 beats and a trial duration of about 22 seconds per trial. Each of the 16 combinations of the factors instruction, mirror view, and hand posture was presented 4 times across two sessions held on separate days.
Materials and apparatus
Participants sat at a table with both hands resting comfortably in front of the body. Finger movements were tracked with a camera-based motion tracker (Visualeyez II VZ4000v PTI; Phoenix Technologies) using infrared markers sampled at 100 Hz. Four markers were attached to each index finger, one on the finger nail, one opposite the nail on the fingertip, and one on each side between nail and tip. As a result, at least one marker per hand was visible during movement execution in all postures. Movements were instructed by metronome-like sounds presented through two loudspeakers positioned in front of the participant. Experimental protocols were controlled via MATLAB (version 7.14, The Mathworks).
In each trial, participants rhythmically moved both outstretched index fingers to the metronome sounds. Participants were instructed to complete a full movement cycle per beat, that is, move both fingers at the same time in- and outwards when moving symmetrically, or, move both fingers at the same time to the left and right in space when moving in parallel. Instructions stressed that participants should execute movements as correctly as possible, but could change to a more comfortable movement pattern if they were unable to maintain the instructed movement pattern70. Participants had to look at both hands (both real or left real/right mirrored) throughout the experiment. They rested and stretched after every 2 trials.
Data selection and trajectory analysis
We excluded two trials from one participant because the hand position on the table had accidentally been instructed incorrectly. We excluded two further trials because a participant had partially closed his/her eyes to ease performance.
We analyzed the left-right component of the finger movement trajectories. Within trials, we interpolated occasional missing data (e.g., when a marker was temporarily not visible), smoothed trajectories with a low-pass filter (first-order Butterworth filter at 7.5 Hz), and normalized them by demeaning.
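As an illustration of this preprocessing step, the following pure-Python sketch implements a first-order Butterworth low-pass (designed via the bilinear transform) and mean removal. The actual pipeline ran in MATLAB; details such as zero-phase (forward-backward) filtering may have differed, so this is a sketch of the filter design, not the original code.

```python
import math

def butter1_lowpass(x, fc=7.5, fs=100.0):
    """First-order Butterworth low-pass via the bilinear transform.
    fc: cutoff frequency (Hz); fs: sampling rate (Hz, 100 Hz in the study)."""
    k = math.tan(math.pi * fc / fs)   # prewarped analog cutoff
    b0 = b1 = k / (1.0 + k)           # feed-forward coefficients
    a1 = (k - 1.0) / (k + 1.0)        # feedback coefficient
    y, x_prev, y_prev = [], x[0], x[0]
    for sample in x:
        out = b0 * sample + b1 * x_prev - a1 * y_prev
        y.append(out)
        x_prev, y_prev = sample, out
    return y

def demean(x):
    """Normalize a trajectory by subtracting its mean."""
    m = sum(x) / len(x)
    return [v - m for v in x]
```

The filter has unit gain at DC, so slow postural drift passes through unchanged while tracker jitter above the cutoff is attenuated.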
We then identified individual movement cycles as the interval between a consecutive maximum and minimum of the right finger’s trajectory. For symmetrical movements the phase difference should be 180°, because one finger is at its rightmost position when the other is at its leftmost position. For parallel movements the phase difference should be 0°, because both fingers move in synchrony to the left and right in space. For each cycle, we fitted a sine wave to the trajectory of each finger (see Y.Q. Chen, 2003, http://www.mathworks.com/matlabcentral/fileexchange/3730-sinefit; see Figure S1 in the supplemental results for an illustration of the sine wave fit to the raw data)17. We determined the relative phase of the two fingers as the phase difference of the two fitted sine curves. Sine fitting explicitly models movement velocities as sinusoidal and has the advantage that it produces a single phase difference value that represents performance of a given cycle. To validate this approach, we conducted an alternative analysis that extracted the phase difference along the continuous, unfitted trajectory; this analysis rendered equivalent results (see Figures S2–S5 in the supplemental results).
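The per-cycle sine fit can be sketched as follows. Assuming the cycle's metronome frequency is known, fitting A·sin(ωt) + B·cos(ωt) to the demeaned trajectory by linear least squares yields each finger's phase as atan2(B, A), and the relative phase as the wrapped difference of the two phases. This is a hypothetical pure-Python stand-in for the referenced MATLAB routine, not a reproduction of it.

```python
import math

def fit_sine_phase(x, freq, fs):
    """Least-squares fit of A*sin(wt) + B*cos(wt) to a demeaned trajectory x,
    sampled at fs Hz; returns the phase atan2(B, A) in radians."""
    w = 2.0 * math.pi * freq / fs
    s = [math.sin(w * n) for n in range(len(x))]
    c = [math.cos(w * n) for n in range(len(x))]
    # Solve the 2x2 normal equations for the coefficients A and B.
    sss = sum(si * si for si in s)
    scc = sum(ci * ci for ci in c)
    ssc = sum(si * ci for si, ci in zip(s, c))
    bs = sum(xi * si for xi, si in zip(x, s))
    bc = sum(xi * ci for xi, ci in zip(x, c))
    det = sss * scc - ssc * ssc
    a = (bs * scc - bc * ssc) / det
    b = (bc * sss - bs * ssc) / det
    return math.atan2(b, a)

def relative_phase(left, right, freq, fs):
    """Phase difference of the two fingers in degrees, wrapped to [-180, 180)."""
    d = math.degrees(fit_sine_phase(left, freq, fs) - fit_sine_phase(right, freq, fs))
    return (d + 180.0) % 360.0 - 180.0
```

For perfectly symmetrical movements the two fitted sines are in antiphase, so the relative phase comes out at ±180°; for parallel movements it is near 0°.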
The final data set comprised a total of 62,536 movement cycles from 20 participants, with an average of 39 movement cycles per condition and participant (range: 25–46). The reasons for the variability of the number of movements are that participants sometimes paused or made unidentifiably small movements, especially at high speeds; furthermore, participants were sometimes off-beat and then executed fewer movement cycles than instructed.
Statistical inference: Bayesian hierarchical logistic regression
In Bayesian statistical analysis approaches, credibility is reallocated across candidate parameter values, here the slopes indicating main effects and interactions of our experimental factors, as data is cumulatively considered against prior beliefs about the parameters71. Parameter values are given a non-committal a-priori credibility, termed the ‘prior’. Bayesian model estimation determines a posterior distribution of jointly credible parameter values, given the evidence and the prior72. Conveniently, the resulting posterior distribution is directly indicative of a parameter’s most likely true value within the parameter space. In contrast to frequentist approaches, Bayesian methodology thus provides a direct estimate of parameter magnitude and uncertainty73.
For statistical inference, we dichotomized the phase difference of the two fingers into correct (1) and incorrect (0). To this end, the relative location of the two fingers during a movement cycle was compared to the expected relative difference in each condition (±50° around 0° and 180° for parallel and symmetrical movements, respectively7,17). The results we report were qualitatively and statistically equivalent when accuracy was dichotomized with a stricter criterion of 20° (see Figures S7–S9 in the supplemental material). We validated this dichotomization approach by submitting the data to an alternative analysis that determined the amount of time during which the phase difference between the two fingers was near the target relative phase36, with qualitatively equivalent results (see Figure S6 in the supplemental material).
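The dichotomization rule can be sketched as follows, assuming the relative phase is given in degrees; the function name and signature are illustrative, not taken from the published analysis scripts.

```python
def is_correct(rel_phase_deg, instruction, tol=50.0):
    """Dichotomize a cycle's relative phase: True (correct) if it lies within
    +/- tol degrees of the instructed target (0 deg for parallel, 180 deg for
    symmetrical movements), measured as circular distance."""
    target = 180.0 if instruction == "symmetrical" else 0.0
    # Wrap the deviation from the target into [-180, 180).
    diff = (rel_phase_deg - target + 180.0) % 360.0 - 180.0
    return abs(diff) <= tol
```

The circular wrap matters at the symmetrical target: a measured phase of −175° is only 5° away from 180° and therefore counts as correct. The stricter supplemental criterion corresponds to calling the same function with `tol=20.0`.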
We dichotomized movement speed into slow and fast by collapsing over the five slowest and the five fastest movement speeds. This analysis step greatly reduces the computational demands of model fitting, but preserves the well-known pattern of higher performance at slow than at fast speeds under parallel instructions. Note that we illustrate all 10 speed levels in our figures of the raw data, both for comparison with previous studies and to demonstrate consistency across lower and higher speed levels. Finally, we subsumed hand postures into a two-level factor by pooling both hands down and both hands up as ‘same hand orientation’, and left up/right down and left down/right up as ‘different hand orientation’17.
In response to the concern of a reviewer based on several earlier reports74,75, we furthermore ascertained that the effects in the right-left dimension of finger movements in our study were not due to a transfer of movement into another movement dimension (such as up-down). To this end, we verified that (1) the number of movement cycles identified was comparable across speeds; (2) the highest velocities were observed in the relevant, and not in an irrelevant, dimension; and (3) the standard deviation of movement velocity was, accordingly, highest in the relevant dimension (see Figures S10–S13 in the supplemental results).
We fitted a hierarchical Bayesian logistic regression model to the dichotomized performance measure to estimate the probability of moving correctly in a given movement cycle through the linear combination of group-level regression beta weights and participant-level intercepts. Regression beta weights are denoted βinstruction for the main effect of the factor movement instruction, βmirror for the main effect of the factor mirror view, βposture for the main effect of the factor hand posture, and βspeed for the main effect of the factor speed. Furthermore, regression beta weights were included for all possible factor combinations and are denoted βi_n, with i, n denoting i factors interacting with n other factors76. For instance, the model parameter denoted βinstruction_mirror_posture represents the regression beta weight for the three-way interaction of movement instruction, mirror view, and hand posture. Beta weights were constrained to sum to zero, with the first level of each factor effect-coded as 1 and the second as −1 (βinstruction: symmetrical = 1, parallel = −1; βmirror: non-mirrored = 1, mirrored = −1; βposture: same = 1, different = −1; βspeed: fast = 1, slow = −1). Uninformative priors were chosen for all model parameters. Specifically, priors were modeled as normal distributions centered on zero, corresponding to a 0.5 probability of moving correctly. The precision, that is, the inverse variance, of each prior’s normal distribution was drawn from an inverse gamma distribution with shape parameter 1 and scale parameter 0.01 to allow for a large range of possible values77. We re-sampled our model with several alternative specifications of the uninformative priors to ensure that the posterior distributions were robust. For instance, we drew the normal distributions’ precision from an inverse gamma distribution with shape parameter 0.01 and scale parameter 0.01, rendering qualitatively identical results (not reported).
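The structure of the linear predictor under this coding scheme can be illustrated as follows. The beta values, dictionary layout, and function names are hypothetical; the real model was specified in JAGS and additionally carried hierarchically estimated participant-level intercepts. The sketch only shows how sum-to-zero ±1 codes and their products build main effects and interactions.

```python
import math
from itertools import combinations

FACTORS = ("instruction", "mirror", "posture", "speed")
# Sum-to-zero effect codes for the first/second level of each factor.
CODES = {
    "instruction": {"symmetrical": 1, "parallel": -1},
    "mirror": {"non-mirrored": 1, "mirrored": -1},
    "posture": {"same": 1, "different": -1},
    "speed": {"fast": 1, "slow": -1},
}

def p_correct(condition, betas, intercept=0.0):
    """Probability of a correct cycle from group-level beta weights.
    `betas` maps factor subsets (tuples in FACTORS order) to weights;
    interaction terms multiply the codes of the involved factors."""
    eta = intercept
    for r in range(1, len(FACTORS) + 1):
        for subset in combinations(FACTORS, r):
            beta = betas.get(subset, 0.0)
            code = 1
            for f in subset:
                code *= CODES[f][condition[f]]
            eta += beta * code
    return 1.0 / (1.0 + math.exp(-eta))  # logistic link
```

Because of the ±1 coding, flipping one factor level simply flips the sign of every term containing that factor, which is what makes the beta weights directly interpretable as (half) condition differences on the logit scale.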
We used JAGS version 4.0.078, R version 3.2.279, and the R package runjags version 2.0.2-880 to perform Markov chain Monte Carlo (MCMC) sampling. Specifically, we sampled 60,000 representative credible values from the joint posterior distribution of the model parameters in four independent chains. The chains were burned in (1,500 samples) and every 20th sample was saved, rendering a total of 12,000 recorded samples. Stable and accurate representation of the parameter posterior distributions was ensured visually using trace, autocorrelation, and density plots, as well as numerically by examining the effective sample size (ESS) and the shrink factor81. All model parameters of interest had a minimum ESS of 11,550, ensuring stable and accurate estimates of the limits comprising 95% of the posterior samples (i.e., their HDI71).
For statistical inference, the model parameters of interest are the normalized group-level regression beta weights, which indicate the influence of each factor or factor combination (i.e., interaction) in determining the probability of moving correctly in the finger oscillation task. If the HDI of a beta weight representing a specific factor or interaction does not span zero, this implies that the factor contributes to the prediction of movement accuracy. In contrast, an HDI that spans zero indicates that the corresponding beta weight does not contribute to the prediction of movement accuracy. In analogy to post-hoc testing in frequentist approaches, we assessed condition differences only if the HDI of the corresponding beta weight representing the overall effect or interaction did not span zero. For such comparisons, we contrasted the posterior predictive distributions of the factor level combinations that represented our hypotheses in the model. When multiple beta weights containing the hypothesis-relevant factors did not span zero, we took the beta weight representing the highest-order interaction as the basis for deciding whether a contrast should be evaluated. Contrasts are reported in the form differencea_b, with a, b indicating a factor level interacting with b other factor levels76. The distributions resulting from contrasting factor-level posterior predictive distributions are denoted credible difference distributions. Similar to the inferential strategy applied to the beta weight posterior distributions, an HDI of a credible difference distribution that does not span zero indicates that the model predictions for the two conditions of interest differ from each other, whereas an HDI that spans zero indicates that the model predictions for the two conditions do not differ statistically.
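The HDI underlying this decision rule is simply the narrowest interval containing 95% of the posterior samples. A minimal sketch of its computation from MCMC samples (illustrative pure Python, not the original R code):

```python
import math

def hdi(samples, mass=0.95):
    """Highest density interval: the shortest interval covering `mass`
    of the (unimodal) posterior samples."""
    s = sorted(samples)
    n = len(s)
    k = max(1, int(math.ceil(mass * n)))  # number of samples the interval must contain
    # Slide a window of k consecutive sorted samples and keep the narrowest one.
    best = min(range(n - k + 1), key=lambda i: s[i + k - 1] - s[i])
    return s[best], s[best + k - 1]

def excludes_zero(interval):
    """The decision rule applied to beta weights and credible differences:
    an effect is credible if its HDI does not span zero."""
    lo, hi = interval
    return not (lo <= 0.0 <= hi)
```

Unlike an equal-tailed credible interval, the HDI is pulled toward the densest region of the posterior, which matters for skewed posteriors such as those of variance parameters.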
In the text, tables, and figures, beta weight and credible difference distributions are characterized by their mean and their upper and lower 95% HDI limit. Figures were prepared using the R package ggplot2 version 2.0.082.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This work was supported by the German Research Foundation (DFG Emmy Noether Grant HE 6368/1–1 to TH) and by a doctoral scholarship pursuant to the Hamburg Act for the Promotion of Young Researchers and Artists (HmbNFG) provided by the University of Hamburg awarded to JB. We acknowledge support for the Article Processing Charge by the Deutsche Forschungsgemeinschaft and the Open Access Publication Fund of Bielefeld University.