Introduction

When visual targets, such as letters, faces or pictures, are presented in rapid succession, correct identification of the first target (T1) can interfere with identification of a subsequent target (T2) that follows T1 closely in time. During a brief period after T1, visual attention is thought to be suppressed while T1 is processed, substantially reducing the ability to process a subsequent target correctly. This post-T1 processing deficit is known as the attentional blink1,2,3, by analogy to the reduced vision during the blink of an eye. The attentional blink reflects the ability to allocate attention over time and provides a measure of how quickly the brain can react to successive visual targets. While this phenomenon has often been explained on the basis of an attentional bottleneck, i.e. a processing stage that can handle only one item at a time, recent work suggests a more complex picture involving attentional regulation and an extensive neural network2.

Recent studies suggest that abnormal visual experience early in life not only affects the development of low-level visual functions, but may also affect high-level attentional capacities. The attentional blink curve (probability of correct T2 report vs. time after T1) seen through the amblyopic eye is less sharply tuned, or shallower, than that seen through the fellow sound eye, depending on the depth of amblyopia4. Despite the reduced vision in the affected eye, the ability to detect the second target (T2) at 200 ms with the amblyopic eye is unexpectedly better than with the fellow stronger eye, but is worse at 100 ms. Unlike normal observers, amblyopes rarely confuse the second target with the letters immediately before and after it: they make fewer neighbor errors, and more of their erroneous responses are letters that were not actually presented.

The deployment of rapid visual attention seems to be malleable5,6. Despite some recently reported contrary findings7,8,9, many earlier studies suggest that attentional capacities can be stretched and enhanced by playing action video games10,11,12, and subsequent studies have demonstrated that other visual functions, such as spatial resolution and contrast sensitivity, can be enhanced as well13,14. Perhaps most importantly, videogame training might have clinical applications in the treatment of a variety of brain disorders, such as amblyopia. Aside from refractive error, amblyopia is one of the leading causes of vision loss in children. This visual disorder is generally considered irreversible beyond a “critical period” of visual development; thus, clinical treatment, which generally consists of patching the strong eye, is rarely undertaken in adults or older children. Surprisingly, a number of recent studies seem to challenge this concept and show that the amblyopic brain still retains a certain degree of functional plasticity15,16,17. One potentially useful new tool for inducing visual recovery in amblyopia may be video game play, which is highly engaging.

Video game experience improves a range of visual functions in the amblyopic brain18,19. In the current study we asked whether videogame play improves the ability to identify targets rapidly in adults with amblyopia, as it does in observers with normal vision10,20, and whether any improvement in performance transfers across the two eyes. We examined the specificity of visual learning in order to further explore the possible mechanisms for the plasticity. Interocular transfer from the trained eye to the untrained eye would be expected if the changes in performance were centrally driven. However, if the neural modifications occurred solely at the early stages of visual processing, the learning effects would be largely specific to the trained eye.

Results

Testing paradigm

We measured the attentional blink in eight adults with amblyopia before and after action video game play (first-person-shooter game, 2 hrs/day, 4–5 days/week). A fast sequence of single letters was displayed one by one using a rapid serial visual presentation (RSVP) technique (Fig. 1). The observer had two tasks: (i) to identify a white letter (Target 1, T1: one out of twenty-five uppercase letters “A”-“Z”, with “X” excluded) embedded in a sequence of black letters and (ii) to signal the presence or absence of a black letter “X” (Target 2, T2), which was presented in half of the trials at a random time position after the onset of T1 (lag = 100–800 ms; for example, for Lag 100: “X” was the first letter to appear at 100 ms after the appearance of T1; each letter cycle was 100 ms). Additional random letters were appended to the end of the letter sequence.

Figure 1
figure 1

The attentional blink test.

We applied a standard attentional blink test to evaluate temporal attention. A sequence of letters was displayed one by one in rapid serial visual presentation. The test consisted of two visual tasks: first, to identify a white letter (T1, randomly selected from the 25 uppercase letters “A” to “Z” with “X” excluded; letter “H” in this example); second, to detect the presence or absence of a letter “X” (T2, response = yes/no; “yes” in this example) in the letter sequence. T2 was presented with a probability of 0.5, at a randomly assigned temporal position after the appearance of T1 (lag = 100–800 ms; Lag 100: “X” was the 1st letter to appear at 100 ms after the onset of T1; Lag 800: “X” was the 8th letter to appear at 800 ms after the onset of T1). A small number of random letters were appended to the end of the sequence. Each letter cycle was 100 ms (1 letter frame, 17 ms + 5 blank frames, 83 ms).

T1 detection

Since our observers had reduced visual acuity in the amblyopic eye, the letters presented were highly suprathreshold in size, at least ~5 times larger than the visual acuity limit (letter size = Snellen 20/450; isolated letter acuity range: 20/95 or better), and thus quite visible to the affected eye. The mean hit rate for detecting T1 with the amblyopic eye was 85% (Fig. 2a, solid symbols), not significantly different from that of the fellow sound eyes (Fig. 2b, solid symbols; paired t test: t = −1.371, p = 0.204), indicating that the letter targets were large enough to equalize T1 detection performance between the stronger dominant eye and the weaker amblyopic eye. The visual acuity data are summarized in Table 1.

Table 1 Clinical profile of amblyopia. All participants went through a thorough eye examination. A Bailey-Lovie logMAR letter chart (National Vision Research Institute of Australia, 1978) was used for visual acuity measurement. All participants had unilateral amblyopia, except participant MM in the occlusion therapy group, who had bilateral amblyopia
Figure 2
figure 2

T1 detection.

(a) T1 detection in the amblyopic eye. T1 detection accuracy was defined as the probability of correctly detecting T1, p(T1); it is also expressed as a percentage throughout the main text. The letter targets displayed to observers were suprathreshold in size, 4–20 times the acuity threshold. The high hit rates indicate that the visual targets were highly visible to the amblyopic eyes in both the treatment and control groups. The figure also illustrates the visual acuity characteristics of each participant. (b) Comparison of T1 accuracy between the amblyopic eye and the fellow sound eye. Most data points cluster tightly around the grey unity line, and no significant difference in accuracy was found between the two eyes in either group. Our attentional task was not limited by the reduced visual acuity in the amblyopic eye. (c) The effects of videogame training and occlusion therapy on T1 detection in the amblyopic eye. No significant change in T1 accuracy was found after the intervention in either group. VG: videogame training group. OT: occlusion therapy (patching) group. VAi: isolated letter acuity.

Since videogame play improves visual acuity in the amblyopic eye18, one might expect to see an enhancement in T1 detection accuracy resulting from improved vision. However, that is not what we observed. There was no significant change in T1 detection accuracy after video game play (Fig. 2c, solid symbols; mean post-training p(T1) = 0.881; paired t test: t = −1.502, p = 0.167). This is not surprising given that the letter size was initially set to be highly suprathreshold, but it is important because it ensures that our attentional task (T2 detection) was not limited by reduced vision in the amblyopic eye and that any change in T2 detection would truly reflect improved attentional performance.

T2 detection

The attentional blink refers to the impairment in detecting a second target (T2), in this study the letter “X”, when it is presented within 200 to 500 ms after the onset of the first target (T1). This can be seen clearly for both eyes of our amblyopic observers in Fig. 3a (open symbols). Prior to videogame play, when T2 lagged T1 by 800 ms, detection accuracy was high, p(T2|T1) ≈ 85% correct. However, when the lag was reduced below 500 ms, accuracy declined sharply, to ≈35% correct at 200 ms, and then increased to 50% at the shortest lag, i.e., when T2 was displayed 100 ms after T1. Note that for both eyes, the maximum inhibition occurred at 200 ms, which is characteristic of the attentional blink. The shape of the mean attentional blink curve of the amblyopic eyes was slightly shallower than that of the fellow preferred eyes, as reported previously4. The mean detection accuracy at 200 ms was 36.7% in the amblyopic eyes, slightly greater than that in the fellow sound eyes, 33.1%. It should be noted that the accuracy differences between the two eyes were not statistically significant in the lag range between 100 and 300 ms (two-way RM ANOVA, f = 0.999, p = 0.351; Lag 200: paired t = 0.834).
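For readers who wish to reproduce this measure, the sketch below illustrates how p(T2|T1) can be computed per lag from raw trial records. It is a minimal illustration in Python, assuming a simple hypothetical trial format (fields lag_ms, t1_correct, t2_present and t2_response); it is not the analysis code used in the study.

```python
from collections import defaultdict

def t2_given_t1_accuracy(trials):
    """Compute p(T2|T1) for each lag: the proportion of T2-present trials
    reported 'yes', counting only trials on which T1 was correctly identified.

    Each trial is a dict with hypothetical fields:
      lag_ms (int), t1_correct (bool), t2_present (bool), t2_response (bool)
    """
    hits = defaultdict(int)
    opportunities = defaultdict(int)
    for tr in trials:
        if tr["t1_correct"] and tr["t2_present"]:
            opportunities[tr["lag_ms"]] += 1
            if tr["t2_response"]:
                hits[tr["lag_ms"]] += 1
    return {lag: hits[lag] / n for lag, n in sorted(opportunities.items())}
```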

Figure 3
figure 3

Effects of videogame play on attentional blink.

(a) T2 detection accuracy as a function of time lag. T2 detection accuracy was defined as the conditional probability of the correct detection of T2, given T1 was correctly identified. Open symbols: pre-training performance. Solid symbols: post-training performance. (b) T2 data for each individual observer. Left column: Lag 100–300. Middle column: Lag 400–500. Right column: Lag 600–800. Solid symbols: the lag condition at which the accuracy difference between pre- and post-training measurements reached a 0.05 significance level. White area represents improved accuracy after videogame training. (c) The effect of eye patching on T2 detection performance. In a control experiment, ten participants received occlusion therapy and no significant change in performance was found. AE: amblyopic eye. NAE: non-amblyopic eye.

After 40 hours of video game play, there was a marked enhancement in performance in both eyes of our amblyopic observers, most notable at 200 ms. First, consider the results for the amblyopic eyes. The U-shaped attentional blink curves shifted upward relative to the baseline (Fig. 3a, compare red solid line vs. red dashed line; two-way RM ANOVA, Lag 100–800: f = 15.973, p = 0.005). There was a significant and substantial improvement of ~38% in the attentional blink: p(T2|T1) at Lag 200 increased from 0.367 to 0.505 seen through the amblyopic eye (paired t = 3.972, p = 0.005). No significant interaction was found between the two factors, pre/post and lag (f = 0.721; p = 0.654). The pre- and post-training data for individual observers are illustrated in the first row of Fig. 3b (left: Lag 100–300; middle: Lag 400–500; right: Lag 600–800); data points that reached the 0.05 level of significance are highlighted with solid symbols (i.e. AE, Lag 200; NAE, Lag 100–300). Note that the changes in accuracy at Lag 100 (paired t = 1.831, p = 0.110) and Lag 300 (paired t = 1.192, p = 0.272) were not significant at the 0.05 level.
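The pre/post comparisons reported here are within-subject. As an illustration only (not the study's analysis scripts), a paired comparison at a single lag could be run as follows, assuming one p(T2|T1) value per observer before and after training.

```python
import numpy as np
from scipy import stats

def paired_pre_post(pre, post):
    """Paired t-test on per-observer accuracies (e.g., p(T2|T1) at Lag 200)."""
    pre, post = np.asarray(pre, dtype=float), np.asarray(post, dtype=float)
    t, p = stats.ttest_rel(post, pre)        # within-subject comparison
    return t, p, float(np.mean(post - pre))  # t statistic, p value, mean gain

# Usage (placeholder arrays, not the study's data):
# t, p, gain = paired_pre_post(pre=[...], post=[...])
```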

Interocular transfer

Surprisingly, the untrained fellow eye (which was patched during videogame play) also showed similar gains in detecting T2 (Fig. 3a, blue solid line vs. blue dashed line: two-way RM ANOVA, Lag 100–800: f = 21.377, p = 0.002). Response accuracy was significantly improved for lags ranging from 100 to 300 ms (paired t ≥ 3.051, p ≤ 0.019), with an interaction between the two factors, pre/post and lag (f = 3.346; p = 0.005). The training effects generalized substantially from the trained amblyopic eye to the untrained sound eye, and the two post-training curves (solid lines) largely overlap with each other. Similar to the pre-training measurements, the post-training curve of the amblyopic eyes was slightly shallower than that of the fellow non-amblyopic eyes. Data for individuals can be found in the second row of Fig. 3b; solid symbols indicate significant improvements (i.e. column a: NAE, Lag 100–300).

Patching the strong eye

During video game play, our amblyopic observers wore an eye patch over their strong eye. Thus, one might argue that the enhanced performance could have been the result of wearing an eye patch alone. To control for this factor, a control group of another 10 amblyopic adults wore an opaque eye patch over their dominant eye, but instead of playing video games they engaged in other visually demanding activities with the amblyopic eye, such as knitting, watching television, reading books and surfing the internet. Their attentional blink performance was evaluated before and after 20 hours of eye patching (2 hrs/day, 5 days/week). In contrast to the videogame group, no significant change in mean attentional blink performance was found in the amblyopic eyes (Fig. 3c, two-way RM ANOVA, Lag 100–800: f = 0.138, p = 0.719). Clearly, the enhanced attentional blink performance cannot be explained simply by eye patching alone or by improved test-retest performance, but rather results from videogame play.

Visual acuity (VA)

Videogame play results in improved visual acuity in the amblyopic eye18. Those findings are included here, along with new data, in Fig. 4a (solid symbols). The mean acuity improvement is approximately 1.5 lines (0.16 logMAR on a standard logMAR letter chart; paired t = 12.305, p < 0.001). No significant change in acuity was observed in the participants who were patched (open symbols; mean acuity difference = 0.005 logMAR, paired t = −0.653, p = 0.522), indicating that more active, visually demanding tasks are necessary to trigger visual plasticity.

Figure 4
figure 4

Improving amblyopic vision with videogame training.

(a) After 40 hours of video-game play (solid symbols), visual acuity improved significantly. In contrast, no significant change in acuity was observed in those participants in the patching group (open symbols). Red symbols: crowded letter acuity. Blue symbols: isolated letter acuity. (b) Two-stage training paradigm. One amblyopic participant (squares in Fig. 4a) practiced a Vernier acuity task, as illustrated in the inset, until acuity improvement reached a plateau. The visual task was to identify the misaligned pair of Gabor patch groupings out of three choices (top, middle or bottom; in the example, the bottom pair is misaligned). Each grouping consisted of 8 Gabor patches. (c) A dramatic boost of T2 detection performance was found at Lag 200 after stage 2 videogame training, but not after stage 1 Vernier acuity training.

Is the reduced attentional blink in our observers merely a consequence of improved visual acuity? We believe that this is not the case, because our attentional blink task was not acuity limited. Our observers were able to identify T1 quite accurately with the amblyopic eye and the T1 letter identification performance was equalized between the two eyes. Importantly, unlike T2, there was no increase in T1 accuracy following videogame play.

To isolate the role of visual acuity in our attentional blink task, we adopted a two-stage training paradigm in one observer. Our earlier studies showed that practicing positional discrimination improves amblyopic vision15; this perceptual learning (PL) task is quite different from videogame play, requiring focused spatial attention. The idea was to capture most, or all, of the acuity improvement in the first stage of spatial acuity training, prior to videogame training in stage two. We were interested in whether improved acuity through the static perceptual learning task would result in a reduced attentional blink, and whether improvements in visual acuity are necessary for reducing the attentional blink.

An amblyopic observer (Fig. 4a, solid squares: participant AS) practiced a Vernier acuity task (illustrated in the inset of Fig. 4b) for an extended period of time (a total of 40 hrs over 6 weeks) to reach a learning plateau. This observer showed a ~2-line improvement on a standard logMAR letter chart in both crowded and uncrowded acuity, corresponding to gains of 30.8% and 39.7% in visual resolution, respectively (Fig. 4b). Although this observer demonstrated substantial acuity improvements after positional acuity training, there was no significant change in T2 detection accuracy at Lag 200 (Fig. 4c, open vs. semi-filled symbols). This observer then engaged in videogame play for 40 hours (same training protocol as the videogame group) and showed no further acuity improvement (Fig. 4b). Interestingly, a marked increase in T2 accuracy was observed when compared with the post-Vernier curve (Lag 200: 48.0%; Lag 300: 62.4%; Fig. 4c, semi-filled symbols vs. filled symbols). These observations support the notion that it was the videogame experience, not the improved visual acuity, that boosted visual attention in detecting T2. Note that there was some change at Lags 300 and 400 after positional acuity training. This represents an effect of perceptual learning on the blink curve with a very different signature from that of video game play, with the main effects occurring at long lags, possibly reflecting the more sustained nature of the perceptual learning task compared with video game play.

It is not entirely clear whether the reduced attentional blink plays a role in the recovery of amblyopic visual acuity. To assess this, we replotted the improvement in visual acuity as a function of the reduction in attentional blink. As can be seen in Fig. 5a (black solid circles), the amount of acuity improvement remained roughly constant (at ≈23%) when the improvement in T2 accuracy was less than ~20%; thereafter, visual acuity increased as the attentional blink was reduced. A similar trend was obtained for another visual performance measure. We previously reported that video game play enhanced the accuracy of counting briefly presented targets in amblyopic observers18. Fig. 5b shows the improvement in visual counting performance as a function of the reduction in attentional blink. Similar to visual acuity, counting performance improved markedly when the improvement in T2 accuracy was greater than 20%.

Figure 5
figure 5

The role of visual attention in the normalization of visual acuity.

(a) Acuity improvement as a function of T2 accuracy improvement. The data of observer AS are included here for comparison. After phase one positional acuity training, this observer showed a substantial improvement in visual acuity, but no change in attentional performance was observed (open gray circle). After phase two videogame training, this observer did not show any further acuity improvement, but instead a dramatic boost in T2 accuracy was obtained (solid gray circle). (b) Correlation between spatial attention and temporal attention. A visual counting task was used to examine how many locations in the visual field the brain can direct attention to. A number (1–10) of black circular dots was presented for 200 ms against a gray background. The dots were randomly positioned within a 10 × 10 grid of square cells. Detailed procedures can be found elsewhere18.

These findings suggest that high-level mechanisms mediating the attentional blink might be important in the recovery of visual acuity. However, the large circles in Fig. 5a reveal a double dissociation. The large open circle shows the results of the PL control experiment described above: practicing a static positional acuity task resulted in improved visual acuity, with no improvement in attentional blink. Furthermore, for this observer, subsequent videogame play resulted in a reduced attentional blink, with no further improvement in visual acuity (filled gray circle). Thus, further study might be necessary to quantify the direct effects of attentional blink training on amblyopic vision and the role of spatial versus temporal attention.

Discussion

Here we show that videogame play reduces the attentional blink in the amblyopic brain and that the enhanced performance cannot be explained simply by patching, improved visual acuity, or “test-retest” instrumental learning. Rather, we suggest that the improved performance is a direct outcome of the videogame experience. Video game play requires the player to act rapidly in response to numerous fast-moving visual objects. The brain needs not only to direct attention rapidly to different locations in the visual field, but also to monitor the same location over time. The intense sensory-motor interactions during immersive video game play might push brain functions to the limit, enabling the visual brain to adjust and providing the basis for functional plasticity.

The attentional blink deficit has generally been believed to be the consequence of the first target depleting limited attentional resources that are in turn needed for processing the second one21. Backward masking22,23,24 may also play a role, although recent studies have reported findings25 that are inconsistent with this notion. In this model, it is assumed that a processing stage can handle only one item at a time. When the processing stage is occupied by the first target, processing of the second target is delayed; the delayed item is then overwritten and masked by subsequent trailing items. Along these lines, the enhanced performance observed with video game experience could be, to some extent, the result of reduced backward masking. An earlier study indeed found that video game players suffer less interference from backward maskers than non-video game players26 and pointed out that low-level visual mechanisms, in addition to high-level attentional processes, might also be involved in the optimization process.

An alternative explanation could be improved processing speed. There is evidence that video game play trains the visual system to react faster and process information more efficiently27,28. Enhanced temporal processing, or perhaps finer temporal resolution, could help alleviate the deficit by enabling more rapid processing of T1, thus freeing up the resources needed to process T2. Another potential factor to consider when targets are presented in rapid succession is temporal crowding. Abnormal visual crowding is not uncommon in observers with amblyopia29,30. Unfortunately, we did not include a test of temporal crowding before and after the video game intervention, and it is not yet clear how temporal crowding influences attentional blink performance.

The reduced attentional blink in the amblyopic eye generalizes to the fellow eye with no direct video game experience. We speculate that the attentional network is likely to be located beyond the site where the information from the two eyes converges. Thus, the experience-dependent functional modifications triggered by the visual stimulation of one eye are passed on to the other eye. It is worth noting that this interpretation should be taken with caution, since there may be alternative explanations for the transfer. However, it is consistent with recent work showing that perceptual learning in adults with amblyopia can transfer completely through high level mechanisms, rather than through low level plasticity31.

Despite reduced visual acuity, in a previous study the amblyopic eye was found to outperform the fellow sound eye in the attentional blink task4. We note that this is not the only instance in which the “weaker” amblyopic eye shows superior capability to the “stronger” non-amblyopic eye. An earlier study reported that the amblyopic eye can actually perform better than the sound eye with interpolative motion stimuli32, reflecting the dominance of low-spatial-frequency mechanisms, which are tuned for high-speed target presentation. In the current study, the letter size was set to be highly suprathreshold. Further studies might be needed to quantify attentional blink performance near the acuity threshold in the amblyopic eye.

Over the past two decades, research by us and others15,16,33,34 has shown that, through intensive perceptual learning, the mature amblyopic brain can at least partially recover from the vision loss. This was a surprising discovery, since adult amblyopia had generally been considered irreversible. However, perceptual learning may not be very practical as a clinical treatment because it is highly repetitive and monotonous. In contrast, video games are engaging and may therefore be a more appealing tool for treating amblyopia, especially in children. Video game play has been shown to be effective in inducing a generalized recovery of visual acuity and a range of visual functions, including positional acuity and stereoacuity18. More recent studies have demonstrated that cognitive functions in the aging brain35 and reading abilities in dyslexic children36 can also be enhanced by playing video games. Taken together, these findings point to the effectiveness of video games in boosting brain function in clinical settings.

Methods

Experimental design

Altogether, 18 adults with amblyopia (age range: 18 to 78 years; mean age: 31.1 ± 4.1 (SE) years) participated in three experiments. Their clinical data are summarized in Table 1. In the main experiment, participants were required to play video games in our laboratory for a total of 40 hours (2 hrs/day, 4–5 days/week) using the amblyopic eye, with the fellow non-amblyopic eye occluded with a standard black eye patch. The video game chosen was an off-the-shelf first-person-shooter (FPS) action game (Medal of Honor: Pacific Assault, Electronic Arts, USA). Our participants either had no previous video game experience or had not played video games for a long while (>4 years) before participating in the study. Since there was no previous clinical evidence indicating that video games can modify vision in adult amblyopia in any way, in this pilot trial we decided to recruit participants into the videogame treatment groups first, in order to evaluate the feasibility of this treatment approach. It is important to note that subject allocation was not based on the clinical characteristics of participants.

A conventional attentional blink test was used to monitor how video-game experience modifies temporal visual attention in the amblyopic brain10. Attentional blink performance was evaluated before and after a period of videogame sessions. Visual stimuli generated using the Matlab Psychophysics Toolbox were displayed on a 19-inch flat Sony G400 monitor screen at 1024 × 768 resolution (screen dimensions: 350 × 265 mm; pixel-pixel distance = 3 × 3 arcmin [horizontal × vertical]) with a 60 Hz refresh rate. All participants were given full optical corrections. Appropriate plus lens power for the viewing distance was prescribed to presbyopic observers when necessary.
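As a consistency check (ours, not part of the original methods), the stated ~3 arcmin pixel spacing follows directly from the screen geometry above and the 40 cm viewing distance given in the stimulus description below:

```python
import math

screen_width_mm, h_pixels = 350.0, 1024       # monitor specifications above
viewing_distance_mm = 400.0                   # 40 cm viewing distance

pixel_pitch_mm = screen_width_mm / h_pixels   # ~0.34 mm between pixel centers
pixel_arcmin = math.degrees(math.atan(pixel_pitch_mm / viewing_distance_mm)) * 60
print(f"{pixel_arcmin:.1f} arcmin per pixel") # ~2.9 arcmin, i.e. roughly 3 arcmin
```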

The experimental procedures were approved by the University Committee for the Protection of Human Subjects and the research was conducted according to the principles expressed in the Declaration of Helsinki. Informed consent was obtained from each participant. This research study was registered with ClinicalTrials.gov (National Institutes of Health NIH, USA) as a Phase I clinical trial (NCT01223716, May 2008; NCT01115283, May 2010). The study took place in our research laboratory at the University of California, School of Optometry in Berkeley, California, from December 2006 to December 2012.

Visual Stimuli - Attentional Blink

A paradigm similar to that adopted in previous studies14 was used to measure attentional performance. A series of black letters was presented against a gray background using a rapid serial visual presentation technique (Fig. 1). A white letter (T1) to be recognized was briefly displayed at a random temporal position, between the 4th and 9th letter screen-frames in the letter sequence. All letters in the sequence were randomly selected without replacement from the 25 uppercase Geneva letters “A” to “Z”, with “X” excluded. A black letter “X” (T2) to be detected was presented in half of the trials (p = 0.5) at a random temporal position after T1 (lag = 100–800 ms; Lag 100: “X” was the 1st letter to appear at 100 ms after the onset of T1; Lag 800: “X” was the 8th letter to appear at 800 ms after the onset of T1). Each run consisted of 160 trials: 80 trials with T2 (10 trials for each of the 8 lag conditions) and 80 trials without T2, randomly interleaved. A number of random letters (n = 3 to 8, drawn from the same 25 uppercase Geneva letters with “X” excluded) were appended to the end of the sequence following T2, or following T1 for the no-“X” trials. The observer's task was first to identify the white letter (1 out of 25 letters) and then to indicate whether or not an “X” appeared after the white letter (yes or no). Of primary interest was whether the observer was able to detect the presence of the letter “X” when the white target letter was correctly identified, i.e. p(T2|T1).
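To make the trial structure concrete, the following Python sketch generates one RSVP letter stream with the parameters described above (letters drawn without replacement, T1 in the 4th to 9th frame, T2 present on roughly half of the trials at a lag of 1 to 8 letter cycles, and 3 to 8 trailing letters). It is an illustrative reconstruction rather than the original Psychophysics Toolbox stimulus code; in the actual runs, T2-present trials were balanced at 10 per lag rather than randomized per trial.

```python
import random
import string

LETTERS = [c for c in string.ascii_uppercase if c != "X"]  # 25 letters, "X" excluded

def make_rsvp_trial():
    """Return (letter_stream, t1_index, lag_ms) for one illustrative trial.

    Each list position corresponds to one 100-ms letter cycle; the letter at
    t1_index is drawn in white (T1), all other letters in black.
    """
    t1_index = random.randint(3, 8)            # T1 is the 4th-9th letter (0-based index)
    t2_present = random.random() < 0.5         # "X" shown on half of the trials
    lag_cycles = random.randint(1, 8)          # 1-8 cycles = 100-800 ms after T1 onset
    n_trailing = random.randint(3, 8)          # random letters appended after the last target

    last_target = t1_index + (lag_cycles if t2_present else 0)
    stream_len = last_target + n_trailing + 1
    stream = random.sample(LETTERS, stream_len - (1 if t2_present else 0))
    if t2_present:
        stream.insert(last_target, "X")        # T2 placed lag_cycles letters after T1
    lag_ms = 100 * lag_cycles if t2_present else None
    return stream, t1_index, lag_ms
```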

Each letter cycle consisted of one letter screen-frame (16.7 ms) followed by five blank screen-frames (a total of 83.3 ms), adding up to 100 ms for each letter lag. The font size was 50; the physical dimensions of the letters E, H, X and O on the screen were 7.5 × 13, 9.17 × 13, 9.75 × 13 and 10.67 × 13.75 (H × V) mm, respectively, approximately Snellen 20/450 on a standard letter chart. Careful precautions were taken to ensure that the letter size used in the measurements was visible to each amblyopic observer, at least 4.7 times larger than the acuity threshold for single letters. The same letter font size was used when testing the fellow stronger eye. The monitor screen was viewed directly at 40 cm. The background luminance was 55 cd/m2 and the letter luminances were 4.65 cd/m2 and 105.5 cd/m2 for black letters and white letters, respectively (Weber contrast: black letters 91.5%, white letters 91.3%). The test was performed monocularly; a standard black eye patch was used to occlude the fellow untested eye. An attentional blink curve was constructed from an average of 3 to 4 runs. The test was self-paced; a break was given between runs and whenever requested. Each run took about 15 mins. A slow demonstration version was provided to illustrate the visual tasks before the actual measurements.
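Several of the stated display values can be checked directly from the numbers above. The short sketch below (our illustration, not part of the original methods) verifies the 100-ms letter cycle at a 60 Hz refresh rate, the Weber contrast of the black letters, and the suprathreshold margin of the Snellen 20/450 letters relative to a 20/95 single-letter acuity limit.

```python
frame_ms = 1000.0 / 60                      # one screen frame at 60 Hz ~ 16.7 ms
cycle_ms = frame_ms * (1 + 5)               # 1 letter frame + 5 blank frames
print(f"letter cycle: {cycle_ms:.1f} ms")   # 100.0 ms

background, black_letter = 55.0, 4.65       # luminances in cd/m^2
weber_black = (background - black_letter) / background
print(f"black-letter Weber contrast: {weber_black:.1%}")  # ~91.5%

# Suprathreshold margin: Snellen 20/450 letters vs. a 20/95 single-letter acuity limit
print(f"size margin: {450 / 95:.1f}x the acuity threshold")  # ~4.7x
```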