Caloric vestibular stimulation has no effect on perceived body size

It has been suggested that the vestibular system not only plays a role in our sense of balance and postural control but also modulates higher-order body representations, such as the perceived shape and size of our body. Recent findings using virtual reality (VR) to realistically manipulate the length of whole extremities of first-person biometric avatars under vestibular stimulation did not support this assumption. It has been argued that these negative findings were due to the availability of visual feedback on the subjects' virtual arms and legs. The present study tested this hypothesis by excluding such feedback. A newly recruited group of healthy subjects had to adjust the position of blocks in the 3D space of a VR scenario such that they had the feeling they could just touch them with their left/right hand/heel. Caloric vestibular stimulation did not alter the perceived size of the subjects' own extremities. These findings suggest that vestibular signals do not serve to scale the internal representation of (large parts of) our body's metric properties. This is in obvious contrast to the egocentric representation of our body midline, which allows us to perceive and adjust the position of our body with respect to the surroundings. These two qualia appear to belong to different systems of body representation in humans.

Beginning with early work by, e.g., Bonnier 1 or Schilder 2, it has been suggested that the vestibular system not only plays a role in our sense of balance and postural control but also contributes to the perception of the state and presence of our body relative to the environment. Along these lines, it has been proposed that the multisensory (vestibular) cortex in humans 3,4 may serve as a convergence zone for various sensory inputs, involved in perceiving the shape and size of the body and in generating bodily self-representation 1,2,5–8. Consistent with these assumptions, studies have reported that vestibular stimulation at the peripheral organ may temporarily increase the perceived length 9,10 and width 9 of one's own hands or decrease the perceived width of one's own thighs 11. These observations suggested that vestibular information may indeed be used to scale the internal representation of body segments. Accordingly, one could expect that not only the perception of single body segments, such as the hand or thigh, but also the perception of larger parts of one's own body image can be manipulated by vestibular input. This latter prediction was recently investigated. Karnath et al. 12 used an immersive virtual reality setup with biometric avatars to investigate the effects of vestibular stimulation on size estimation of whole extremities. Healthy subjects were asked to adjust their own arms and legs (which they perceived by seeing a virtual first-person avatar) to the 'correct' length from various start lengths before, during, and after vestibular stimulation. Neither vestibular stimulation of the horizontal semicircular canal by caloric irrigation (CVS) nor of the whole vestibular nerve by galvanic stimulation over the mastoid (GVS) had a modulating effect on the estimated size of the extremities. Subjects showed unaltered body size perception despite a clearly induced tonic imbalance in the vestibular system.
The straightforward explanation for this unexpected finding is that size perception of (large parts of) the body is not mediated by vestibular information. However, it is also possible that visual input plays a decisive, if not the decisive 13,14, role in the perception of one's own extremities. In that case, the visual feedback on arms and legs may have overridden possibly existing modulations of body perception by vestibular stimulation in the study by Karnath et al. 12. In fact, the previously reported effects of vestibular stimulation on hand size by Lopez et al. 9,10 and on thigh width by Schönherr and May 11 were based on tactile and/or proprioceptive information only; vision was excluded in these experiments. If visual feedback indeed plays this decisive role in the representation of our body image, modulations of the perceived size of arms and legs by vestibular stimulation should only be detectable if vision of the extremities is excluded. The present study investigates this hypothesis. In contrast to the procedure used by Karnath et al. 12, we altered the virtual reality (VR) scenario so that it presented only two blocks in 3D space, on either the left or the right body side. Subjects were instructed to adjust the position of the blocks from different start positions until they had the feeling that they could just touch them with the tip of the left/right middle finger and with the left/right heel, respectively.

Methods
Participants. The number of participants was determined by a power analysis 12 : effect sizes were calculated based on previous data by Lopez et al. 9 on the mean perceived length of one's own hand obtained in healthy subjects under CVS in contrast to sham stimulation. Assuming a minimum improvement of 0.9 cm, the corresponding effect size indicates a medium effect according to Cohen. For our power calculation, we therefore assumed a medium effect size of f = 0.25 for the body side (left, right) × condition (pre, stimulation, post) interaction, alpha = 0.05, power = 0.80 15 , and an assumed correlation of r = 0.65 between repeated measures, using GPower 3.1 software 16 . This calculation yielded a required sample size of 20 participants. Our final sample consisted of 23 newly recruited, healthy subjects with a mean age of 24.17 ± 3.13 years (range 18-30 years; 7 male; 2 left-handed). They had no previous or current mental disorders, chronic diseases, vestibular disorders, or motor impairments. Two other participants were excluded: one due to extreme outlier values in all conditions, the other due to technical problems. All subjects provided written informed consent (i) to participate in the study and (ii) for publication of identifying information/images in an online open-access publication. The study was conducted in accordance with the ethical guidelines of the Declaration of Helsinki and was approved by the local ethics committee of the University of Tübingen.
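The a-priori power calculation described above can be approximated numerically. The sketch below is our own illustration, not the authors' code: it assumes the noncentrality formula λ = f²·n·m/(1 − r) commonly used by GPower for within-subject factors, with m = 6 repeated measurements (2 sides × 3 conditions), interaction numerator df = (2 − 1)·(3 − 1) = 2, and sphericity; since the exact GPower settings are not fully specified here, the result need not reproduce the reported n = 20.

```python
# Sketch of a repeated-measures ANOVA power computation in the spirit of
# GPower's "ANOVA: repeated measures, within factors" procedure.
# Assumptions (ours, not from the paper): lambda = f^2 * n * m / (1 - r),
# sphericity, m = 6 measurements (2 sides x 3 conditions), df1 = 2.
from scipy import stats

def rm_anova_power(n, f=0.25, r=0.65, m=6, df1=2, alpha=0.05):
    lam = f**2 * n * m / (1.0 - r)              # noncentrality parameter
    df2 = (n - 1) * df1                          # error degrees of freedom
    f_crit = stats.f.ppf(1.0 - alpha, df1, df2)  # critical F under H0
    return stats.ncf.sf(f_crit, df1, df2, lam)   # P(F > crit | lambda)

def required_n(target_power=0.80, **kw):
    """Smallest n whose approximate power reaches the target."""
    n = 2
    while rm_anova_power(n, **kw) < target_power:
        n += 1
    return n
```

Because GPower's handling of nonsphericity corrections and of within × within interactions differs across versions and settings, this sketch should be read as showing the shape of the computation rather than reproducing the exact sample-size figure.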
Study design. At the beginning, the experimenter confirmed the inclusion criteria and explained the study procedure, then assessed self-reported height and weight, measured arm length, leg length, and interpupillary distance, and tested stereoscopic vision with parts of the Titmus test (Stereo Optical Co., Chicago, Illinois, USA). The measures of arm and leg length were taken at least twice to increase validity. The participants lay down on a medical examination couch in supine position, the head tilted ~30° forwards (Fig. 1A). In a virtual reality (VR) scenario, they saw two blocks (one further forward, one further back) presented either left or right on a blue background (Fig. 1B). The subjects were instructed that the block located further forward should correspond to the position of the tip of the middle finger, while the block located further back should correspond to the position of the heel. The participants had to adjust the position of the blocks from different start positions until they had the feeling that they could just touch them with the fingertip and with the heel, respectively.
The experiment lasted about 105 min and was analogous to the CVS experiment performed by Karnath et al. 12 . The participants experienced three conditions of 20 trials each, starting with the baseline condition before stimulation. The second condition was under caloric vestibular stimulation (CVS) of the left external auditory canal. The third condition started 15 min after the end of the second condition, i.e. at a time when the caloric nystagmus had long decayed, to ensure that the effects of the stimulation had stopped. Finally, the participants completed a short questionnaire on perceived vertigo, side effects such as nausea or other physical changes, and strategies used during the task.
CVS consisted of cold-water irrigation of the left external auditory canal with about 30 ml of ice water for about 1 min. Before irrigation, the experimenter inspected the left tympanic membrane with an otoscope. Directly after stimulation, eye movements were observed using Frenzel glasses. All participants showed a brisk nystagmus with the slow phase to the left side. Assessment procedures started immediately after the Frenzel glasses were taken off and the VR head-mounted display was put on, i.e. ~15 sec after irrigation. Participants were asked to inform the experimenter as soon as the induced vertigo had ceased. Since the assessment procedure under CVS typically lasted longer than the vertigo, this condition comprised trials in which subjects were under vertigo as well as trials after participants had reported that the vertigo had ceased. Analogous to the CVS experiment performed by Karnath et al. 12 , only trials under vertigo were included in the data analysis.

VR setup to measure perceived arm and leg lengths. The setup was presented through an Oculus Go head-mounted display. Technically, we used a modification of the setup presented in Karnath et al. 12 so that we were able to relate the virtual distances to a body that matched the participant in terms of height, weight, arm length, and leg length. Participants were in supine position, and the two blocks of 10 cm size were presented either on the right or on the left side. They could choose with which block they wanted to start and indicated their decision to the experimenter by lifting either the arm or the leg, so that its position was in line with the block they were currently adjusting. The start positions of the blocks varied in each trial between ±40%, ±35%, ±30%, ±25%, and ±20% of the person's arm/leg length. Arm and leg length were both adjusted within a single trial; the order was up to the participant. Participants were told that the starting positions of the arm and leg blocks were random. Presentation of the 10 trials each on the left and on the right side was counterbalanced. As a result, there were 20 trials per condition, covering all start positions and sides, with ten repetitions per arm/leg (as in Karnath et al. 12 ). The sequence was the same for each participant. The participants adjusted the position of a block by giving verbal instructions ("closer", "further away") to the experimenter, who adjusted its position accordingly. Block movements were displayed to the participant by increasing and decreasing the block's size along the visual axes. The morphing step size was 2.5% of the recorded arm and leg length (as in Karnath et al. 12 ). When participants had the feeling that they could just touch the blocks with the fingertip of their middle finger (arm) or the heel (leg), the experimenter started the next trial. The experimenter gave no feedback on accuracy.
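The trial structure just described can be sketched as follows. This is a hypothetical reconstruction for illustration only: function and field names are our own, and in the actual study one fixed, counterbalanced sequence was used for all participants rather than a per-participant shuffle.

```python
import random

# Signed start offsets as fractions of the participant's limb length
START_FRACTIONS = (0.40, 0.35, 0.30, 0.25, 0.20)

def build_trials(arm_len_cm, leg_len_cm, seed=None):
    """Build the 20 trials of one condition: each of the 10 signed start
    offsets (+/-20..40% of limb length) appears once per body side.
    Arm and leg blocks are both adjusted within a single trial."""
    offsets = [sign * f for sign in (+1, -1) for f in START_FRACTIONS]
    trials = [
        {
            "side": side,
            "arm_start_cm": arm_len_cm * (1 + off),
            "leg_start_cm": leg_len_cm * (1 + off),
        }
        for side in ("left", "right")
        for off in offsets
    ]
    if seed is not None:                       # the study used one fixed
        random.Random(seed).shuffle(trials)    # counterbalanced sequence
    return trials

def step_cm(limb_len_cm):
    """Morphing step size: 2.5% of the recorded limb length."""
    return 0.025 * limb_len_cm
```

For a participant with a 75 cm arm, the extreme arm-block start positions under this sketch would lie at 45 cm (−40%) and 105 cm (+40%), adjusted in 1.875 cm morphing steps.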
Outcome parameters. The participant's actual leg length was measured as the distance between the broadest point of the hips and the heel in a standing position without shoes. Arm length was measured as the distance between the tuberculum majus and the fingertip of the middle finger in T-pose. The experimenter recorded the measured distances. Estimated arm and leg length was derived from the VR setup analogously to Karnath et al. 12 . That is, the blocks were technically linked to an invisible body whose arm and leg lengths matched the participant's individual body. We therefore derived estimated arm and leg length from the same physical landmarks (distance tuberculum majus to fingertip and distance hips to heel) as the actual arm and leg length. Body perception indices (BPI) for the arms and the legs were calculated with the following formula: BPI = (estimated size/actual size) × 100. Arm and leg length estimates were averaged for each side of the body because we did not assume a priori any differences between length perception of arms and legs. The resulting BPIs for the left and the right side of the body in the three conditions were used in the data analysis (procedure as in Karnath et al. 12 ).

Data analysis. A 3 × 2 ANOVA was conducted on BPIs with condition (pre, stimulation, post) and side (left, right) as within-subject factors. For the second condition, only trials under vertigo were included. This resulted in 6.3 ± 2.2 (range 3-10) trials instead of 20 and led to more short start positions on the right and more long start positions on the left. We further evaluated non-significant effects of condition by comparing the baseline condition with the stimulation condition using equivalence tests. The smallest effect size of interest (SESOI) was defined as the effect size we had 80% power to detect (as in Karnath et al. 12 ). The tests were conducted in R with the TOSTpaired function of the TOSTER package 17,18 .
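The BPI formula and the logic of the paired equivalence test can be illustrated with a short sketch. This is our own Python approximation, not the study's analysis code: the study used R's TOSTER::TOSTpaired, and the equivalence bound used in the example below is hypothetical, not the study's SESOI.

```python
import numpy as np
from scipy import stats

def bpi(estimated_cm, actual_cm):
    """Body perception index: estimated limb length as a percentage
    of the measured limb length (BPI = estimated/actual * 100)."""
    return estimated_cm / actual_cm * 100.0

def tost_paired(pre, stim, bound):
    """Two one-sided paired t-tests (TOST): equivalence is supported when
    the mean difference lies within +/-bound (in raw units). Returns the
    larger of the two one-sided p-values; p < alpha supports equivalence."""
    d = np.asarray(stim, dtype=float) - np.asarray(pre, dtype=float)
    se = d.std(ddof=1) / np.sqrt(d.size)
    t_lower = (d.mean() + bound) / se    # H0: mean difference <= -bound
    t_upper = (d.mean() - bound) / se    # H0: mean difference >= +bound
    p_lower = stats.t.sf(t_lower, d.size - 1)
    p_upper = stats.t.cdf(t_upper, d.size - 1)
    return max(p_lower, p_upper)
```

Note that TOSTER::TOSTpaired parameterizes the bounds as standardized effect sizes (dz) rather than raw units; the raw-unit version above conveys the same two-one-sided-tests logic.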

Results
All participants experienced vertigo after CVS application, with an average strength of 7.13 ± 1.69 on a rating scale from 0 (no vertigo) to 10 (very strong vertigo). The vertigo lasted on average 2.59 ± 0.55 min. Fifteen participants (65%) reported nausea. Three participants (13%) reported other bodily changes, e.g. sweating (for further details see Table 1). As strategies, participants reported, for example, comparisons between the estimated leg and arm length, visualization of legs and arms in the scenario, and the attempt to grab/kick the blocks (Table 1). The main results are presented in Figure 2 and Table 2.

Discussion
It had been discussed that the negative findings on a possible effect of vestibular signals on the size perception of one's own extremities might have been caused by the availability of visual feedback from the subjects' arms and legs − though not from their real but from their virtual arms/legs. The present study followed the general procedure of the VR experiment by Karnath et al. 12 , but this time no visual information was provided, neither of the subjects' real arms/legs nor of the arms/legs of a first-person biometric avatar. Subjects only saw two blocks in an otherwise empty space. However, despite a clearly induced tonic imbalance in the vestibular system, the newly recruited group of subjects again showed unaltered perception of arm and leg size. Adjusted arm and leg lengths remained unchanged by stimulation, while over time there was a descriptive, unspecific trend toward increasing variability and speed in completing trials. In combination with the previous results 12 , our findings allow the conclusion that vestibular information does not serve to scale the internal representation of (large parts of) our body.
Given the assumptions on the role of vestibular input in body perception 5-8 and the empirical observations on alterations of the perceived size of hand and thigh 9-11 , this conclusion is surprising. However, the present as well as our earlier work 12 represent two independent experiments with two independent groups of subjects, leading to the same conclusion. Our observations are also supported by previous work by Ferrè et al. 19 , who showed that the perceived length and width of the hand was not influenced by vestibular stimulation. In this latter experiment, healthy subjects used a stick to indicate with one hand the perceived location of verbally identified landmarks on the other, occluded hand. While the present study very clearly ruled out visual feedback of one's own extremities as the decisive factor that might prevent vestibular manipulation of the size perception of large parts of one's own body, other contributing factors are conceivable (for discussion see Karnath et al. 12 ). Further studies are needed to investigate such possible influences. However, it may also be permitted to question at this point whether the general hypothesis that vestibular signals have an impact on our internal representation of the body's metric properties needs to be downsized.

Which strategies did you use to solve the task? Comparison between the estimated arm and leg length (N = 1), looking at arms and legs during the breaks (N = 1), body/gut feeling (N = 4), visualization/imagination of arms and legs/body in the scenario (N = 5), moving arms/legs (N = 2), attempt to grab/kick the blocks (N = 3), trying to mask out the vertigo (N = 1). Table 1. Questions and answers from the questionnaire given to the 23 subjects. M = mean, SD = standard deviation.
Brandt and coworkers 20 have suggested a reciprocal relationship between the human vestibular and visual systems throughout the vestibulo-thalamo-cortical pathways. On this basis, Lopez 8 proposed that one of the main functions of vestibular signals could be to link our body to the higher-order concept of the "self" and, in that context, to modulate higher-order body representations, such as the perceived shape and size of the body. A conceptual problem with this idea is that the benefit of such a mechanism remains unclear. Wouldn't it be expected, on the contrary, that negative effects would tend to result from such a link? If any head turn or other body movement − the natural stimuli for our vestibular system − were indeed to induce changes of, e.g., perceived arm length, one could expect misreaches to occur when we aim to grasp an object while moving the head or body at the same time. Likewise, alterations of perceived leg length resulting from head/body movements should induce a risk of stumbling or other instabilities while walking, climbing, etc. Thus, although it is obvious that vestibular signals are integrated with body representations, it seems that this happens in different contexts or at different levels of representation.
For example, it is quite conceivable that vestibular signals could be involved in the coding of somatosensory detection, i.e. lower-order body representations. Ferrè and coworkers 21 observed increased somatosensory perceptual sensitivity as well as an increased threshold for detecting pain 22 immediately after left caloric vestibular stimulation (CVS), both ipsilaterally and contralaterally. In a further study, Ferrè et al. 23 reported evidence not only for separate but also for combined vestibular and visual modulation of somatosensation. One could well imagine that vestibular and somatosensory signals interact in such a way that the vestibular signal, e.g., modulates the gain and/or synaptic connections of signal processing in the somatosensory pathway(s), converges with somatosensory signals on the same higher-order neurons, and/or changes connection weights between the brain areas involved 21,24,25 .
Also, there is no doubt that vestibular signals serve our perception of the spatial relation of our body midline with respect to the world. Based on neurophysiological studies in monkeys as well as fMRI and psychophysical results in humans, it has been suggested that the brain uses internal maps of our surroundings that code topographical positions in head- and trunk-centered as well as world-centered frames of reference, rather than retinotopic positions 26-37 . The coordinate transformation into these body-related internal maps is based on the afferent input from the retina, neck muscle spindles, and vestibular organs (cf. Figs 5 and 6 in ref. 38 ) and is cortically represented in the so-called 'multisensory (vestibular) cortex' in humans 3,4 . Accordingly, artificial vestibular or neck-proprioceptive stimulation in healthy subjects has been demonstrated to alter our perception of our own body orientation in space ('subjective straight ahead'/'body midline'), under good control for possible effects of the stimulation on motor pointing responses 39-41 . These results demonstrate that the input from the vestibular organs and neck muscle spindles contributes to the computation of egocentric representations by affecting the internal representation of the body midline.
A decisive site for these computations is the multisensory (vestibular) cortex in humans 3,4 . Damage of (parts of) the right-hemispheric 'multisensory (vestibular) cortex' leads to spatial neglect 3,42 − a spontaneous and sustained deviation of eyes and head toward the ipsilesional side. This rightward bias is in the same direction as the neglect patients' perceived spatial egocenter, i.e. their perception of the "straight ahead" orientation of the body midline 39,43 . While artificial vestibular or neck-proprioceptive stimulation has been demonstrated to cause a lateral bias of the perceived spatial egocenter in healthy subjects (see above), manipulation of the afferent input from these peripheral sensory organs conversely compensates for the lateral bias of neglect patients (vestibular stimulation: Silberpfennig 44 , Rubens 45 ; visual, optokinetic stimulation: Pizzamiglio et al. 46 ; neck-proprioceptive stimulation: Karnath et al. 47 ).

Caveats and limitations. So far, the previous experiments that reported an effect of vestibular stimulation on the perceived length and width of one's own hands and thighs 9-11 were performed in a real rather than a virtual reality environment. Thus, it cannot be excluded that the present findings are due to vestibular stimulation having an impact on body perception in real but not in virtual environments. To investigate this possibility, the present observations could be supplemented by testing for possible effects of vestibular input on the perception of large parts of the subject's real body.
A further limitation of the study might be the non-randomized order of the conditions. However, we deliberately chose an A-B-A design for all subjects to control for a possible positive behavioral effect, should one occur.
In order to maximize the impact of vestibular input on the behavioral measure, the present study used CVS. Cold-water caloric vestibular stimulation is a highly effective type of vestibular stimulation, causing nausea and vertigo (cf. Table 1). We chose this type of stimulation to make sure we would capture a possible positive behavioral effect if one occurred. The downside is that the effect of this very effective type of vestibular stimulation does not last long (minutes). Consequently, we could only analyze about a third of the CVS trials, which resulted in an unequal number of trials between the conditions. However, the total number of analyzed trials remained reasonable for the chosen statistics.

Conclusion
In conclusion, previous findings in monkeys and humans have demonstrated that the brain uses the input from different afferent channels, including vestibular signals, to elaborate internal representations of egocentric space. Integration of input from the retina, vestibular organs, and neck muscle spindles is used to represent our body midline with respect to the surroundings. In contrast, our present and previous 12 results have revealed that vestibular signals have no influence on our internal representation of the shape and size of (large parts of) our body. It seems as if the egocentric representation of external space, which allows us to perceive and adjust the position of our body relative to the environment, is clearly distinct from those internal representations of our body that serve the perception of our body's metric properties. These two qualia appear to belong to different systems of body representation in humans.

Data Availability
The datasets generated and analyzed during the current study are not publicly available due to the data protection agreement of the Center of Neurology at Tübingen University (approved by the local ethics committee) signed by the participants. The agreement covers data storage for a duration of 10 years at the Center of Neurology at Tübingen University. The datasets are available via the corresponding author as well as the local ethics committee (ethik.kommission@med.uni-tuebingen.de) on reasonable request.