We present a dataset of eye-movement recordings collected from 60 participants, along with their empathy levels towards people with movement impairments. During each round of gaze recording, participants were divided into two groups, each completing one task. One group performed a free exploration of structureless images, while the second performed a gaze-typing task, i.e. writing sentences through eye-gaze movements on a letter board. The eye-tracking data recorded from both tasks are stored in two data sets, which, besides gaze position, also include pupil diameter measurements. The empathy levels of participants towards non-verbal movement-impaired people were assessed twice through a questionnaire, before and after each task. The questionnaire is composed of forty questions, extending an established questionnaire of cognitive and affective empathy. Finally, our dataset offers an opportunity to analyse and evaluate, among other things, the statistical features of eye-gaze trajectories in free viewing, as well as how empathy is reflected in eye features.
Background & Summary
The popular saying “The eyes are the mirror of the soul” has some basis in scientific fact: our eyes are the only part of our sensory system whose dynamics during information perception and processing can be observed directly. Because of that, eye-movement data are used in diverse fields, such as marketing, to understand what catches people’s attention1,2,3; neuroscience, to assess the developmental stages of infants before they can talk4; artificial intelligence, to improve mechanisms of free viewing5; virtual reality, to predict when people will load images6; and adaptive interfaces, to improve the performance of partially automated vehicles7 or to help non-verbal movement-impaired people (e.g. people suffering from tetraplegia, cerebral palsy, or locked-in syndrome) to interact with computers8,9. In medicine, namely in neurology and psychiatry, electroencephalogram data can be useful to know ‘where’ some pathology may originate, but they are often not enough to quantify ‘how’ serious the pathology is. Different eye-tracking methods have been used in recent years to assess the severity of pathologies such as dementia10, Parkinson’s and Huntington’s disease11,12, simple migraines13, or the effect of ketamine and other drugs on the nervous system14.
In general terms, the diagnosis protocol based on gaze-tracking technologies15 aims to extract statistical features of the two main components of gaze dynamics, namely fixations and saccades. Fixations are the periods when the eyes focus on a small region of the image; the parts of eye-gaze trajectories corresponding to fixations are typically characterized by small fluctuations around the focus point16. Saccades, in turn, alternate with fixation events and are characterized by fast relocations of the gaze point. This classification is important for several applications, for example, for systems that allow people with movement impairments to interact with computers17,18.
Another important feature of human eyes is the pupil size. Its changes are correlated with the intensity of information processing and our cognitive states in general19. Pupil dilation indicates arousal20, emotional state (fear, sympathy or hostility), and sudden dilation may indicate stress or anxiety. For these reasons, pupil size measurements are used in psychology, psychiatry, and psycho-physiological research21.
Of particular interest is the use of pupil dilation as a marker of empathy. Typically, our pupils dilate when we feel empathy towards someone, e.g. when we perceive sadness in someone’s face22, when we share a laugh with someone23, or when we hear someone crying24. A recent study on empathy was performed at the Oslo Metropolitan University, in which healthy adults were placed in a situation where they were forced to communicate through a computer using only eye movements25. Figure 1a,b shows the experimental set-up and a close-up of the eye-tracker used, respectively.
The experimental methodology comprised a test group, in which the participants were asked to use eye movements on a letter cardboard to write sentences, and a control group, in which the participants were asked to identify objects and/or shapes in an image constructed with random pixels. We term this latter activity “foraging for visual information”. To assess the level of empathy towards non-verbal movement-impaired people, all participants in both groups answered a questionnaire before and after the intervention. The data collected with the questionnaires were stored as “Data Set IA” (before intervention) and “Data Set IB” (after intervention). The data collected from the control group, foraging for visual information, were stored as “Data Set II”, whereas the data collected from the test group, gaze typing, were stored as “Data Set III”. Together, the three data sets compose the dataset we call EyeT4Empathy. See Fig. 1c and Table 1.
Our dataset differs from other open-access datasets26,27,28 in both the test and control groups. The data corresponding to the test group can be used e.g. to assess how people can communicate with their eyes, while the control group can be used to observe the “pure process” of foraging for visual information, undisturbed by actually finding objects. This latter group allows gathering eye-movement data while the participant is engaged only in trying to find patterns, enabling the study of the dynamical properties of gaze trajectories in a foraging mode, i.e. the eye gaze is forced to move through the image much like a roaming predator that never finds its prey.
The study of the mechanisms humans employ to forage for visual information is crucial to answering fundamental questions about eye movements. For over 20 years prior to the publication of this study, it has been repeatedly reported that gaze follows a Lévy-flight-like dynamics29,30,31,32. This is similar to typical foraging dynamics, from albatrosses or sharks foraging for food33,34 to the strategies of hunter-gatherers35 or the way children explore a theme park36. This is in direct contradiction to the canonical distinction between fixations and saccades. That distinction rests on the underlying assumption that gaze trajectories are modelled by an intermittent process (also called a “composite random walk”)37, composed of two alternating processes with a feed-and-fly structure (see ref. 38 for details).
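To make the contrast concrete, the two competing pictures can be sketched as minimal two-dimensional simulations: a Lévy flight with heavy-tailed step lengths, versus an intermittent (composite) random walk alternating between a low-amplitude “fixation” state and a high-amplitude “saccade” state. This is an illustrative sketch only, not the model of any cited study; all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def levy_flight(n_steps, alpha=1.5):
    """Isotropic 2-D Levy flight: power-law step lengths, uniform angles."""
    lengths = rng.pareto(alpha, n_steps) + 1.0        # heavy-tailed step sizes
    angles = rng.uniform(0.0, 2.0 * np.pi, n_steps)
    steps = lengths[:, None] * np.c_[np.cos(angles), np.sin(angles)]
    return np.cumsum(steps, axis=0)

def composite_walk(n_steps, p_switch=0.05, sigma_fix=0.1, sigma_sacc=5.0):
    """Intermittent ('composite random walk') process: small Gaussian jitter
    (fixation state) alternating with large jumps (saccade state), switching
    between the two states with probability p_switch at each step."""
    pos = np.zeros((n_steps, 2))
    state = 0  # 0 = fixation, 1 = saccade
    for t in range(1, n_steps):
        if rng.random() < p_switch:
            state = 1 - state
        sigma = sigma_fix if state == 0 else sigma_sacc
        pos[t] = pos[t - 1] + rng.normal(0.0, sigma, 2)
    return pos
```

Comparing the step-length distributions of the two simulated trajectories (a single power law versus a mixture of two scales) reproduces the statistical signature that distinguishes the two hypotheses.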
This question has implications beyond simple mathematical curiosity. The distinction between saccades and fixations is crucial in the field, with several algorithms proposed to distinguish between them. The properties of saccades and fixations are also used to identify individuals with autism10 or ADHD39, among others. On the other hand, the Hurst exponent40 has been proposed as a defining characteristic of gaze trajectories. This quantity is well defined only for Lévy-flight-type dynamics and not for an intermittent process. Nonetheless, it has been suggested to be a robust, fundamental quantity of eye movements41,42 and an efficient way to classify individuals43. Compared to other datasets, the one presented here extends the time participants are engaged in searching for patterns beyond the typical 60 seconds. This is particularly relevant since gaze trajectories have been shown to have dynamics that change with viewing time (saccades are longer and more frequent when participants first encounter an image)44.
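As a minimal illustration of the quantity in question, the Hurst exponent can be estimated from the scaling of the standard deviation of lagged increments, std(x[t+lag] − x[t]) ~ lag^H. The simple estimator below is only a sketch of one of several possible methods (ref. 40 describes a more accurate algorithm); lag bounds are illustrative.

```python
import numpy as np

def hurst_exponent(series, min_lag=2, max_lag=50):
    """Estimate the Hurst exponent H from the power-law scaling of the
    standard deviation of lagged differences: std(x[t+lag]-x[t]) ~ lag**H."""
    series = np.asarray(series, dtype=float)
    lags = np.arange(min_lag, max_lag)
    tau = [np.std(series[lag:] - series[:-lag]) for lag in lags]
    # slope of the log-log fit gives the exponent H
    H, _ = np.polyfit(np.log(lags), np.log(tau), 1)
    return H
```

For ordinary Brownian motion the estimator returns a value close to 0.5, the benchmark against which persistent (H > 0.5) or anti-persistent (H < 0.5) gaze dynamics are judged.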
Several studies can be carried out with this dataset. One can, for example, use AI methods to replicate gaze trajectories45 in order to generate large anonymous datasets or to study the potential limitations of such methods; use natural language processing to improve autocorrect functions for gaze typing; study algorithms for classifying gaze movements as “saccades” or “fixations”; or use pupil diameter data along with the empathy questionnaires to study the previously proposed correlation between them22.
While experiments similar to those of our control group have been done before27,28, they ran for much shorter periods of time. The inclusion of empathy levels allows us to assess an important psychological dimension. As far as gaze typing is concerned, previous studies have described an optimized algorithm for gaze typing focused on improving its speed46. Still, some questions remain on how to optimize the task in the presence of spatial noise.
Eye tracking has also been increasingly used for emotion recognition47. Through the empathy questionnaire, this dataset can provide novel insights into the previously demonstrated dependence between gaze dynamics and empathy48. It can also relate gaze dynamics to the attention of each participant by evaluating the accuracy of their writing. Besides the focus on disorders such as ADHD39, the growth of online learning and remote working has created the need to evaluate patterns consistent with close attention to a computer task.
The rest of the paper is organized as follows. First, in Section “Methods”, we will describe the two groups of participants and their prescribed tasks, the experimental set-up, design and protocol, hardware and the algorithm of saccade/fixation classification and the ethical considerations. In Section “Data records” we describe how the data was recorded and is organized. Considerations about hardware, equipment and experimental conditions are presented in Section “Technical validation”. Finally, Sections “Usage notes” and “Code availability” respectively contain the main instructions to assist other researchers to use the data sets in their research activities and the code to properly process the data and rebuild the results and figures presented in this paper.
Experimental design and protocol
The experiment was conducted in the AI Lab of Oslo Metropolitan University (Norway).
Participants in the control and test groups
We performed both interventions, gaze typing and foraging for visual information, with 60 participants aged 20–40 years. Most of the participants were students: out of the 60, 4 were nurses by profession and part-time students, and the remaining 56 were bachelor’s and master’s students from the University of Oslo, NTNU, and Oslo Metropolitan University. The data were collected at Oslo Metropolitan University. Each session was recorded separately, and approximately 10 different data sets of eye-gaze trajectories were collected for each participant, resulting in a total of 502 data sets. Each participant answered the questionnaire for assessing the empathy level twice, before and after the respective intervention. Eye-tracker recordings with fewer than 600 data points or with over 60% NaN values were not included in the total of 502 data sets.
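The inclusion criteria just described can be sketched as a simple filter. The column names gaze_x and gaze_y below are hypothetical placeholders; the actual column names are documented in the file columns_explained.pdf distributed with the dataset.

```python
import numpy as np
import pandas as pd

def passes_quality_check(df, min_points=600, max_nan_ratio=0.60,
                         cols=("gaze_x", "gaze_y")):
    """Sketch of the inclusion criteria: keep a recording only if it has at
    least 600 data points and no more than 60% of its samples are NaN.
    Here a sample counts as NaN if any of the listed columns is NaN."""
    if len(df) < min_points:
        return False
    nan_ratio = df[list(cols)].isna().any(axis=1).mean()
    return nan_ratio <= max_nan_ratio
```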
The task for the control group: free viewing
In the control group, participants were asked to identify shapes in two images (stimuli). See Fig. 2 (left and middle). Each stimulus was shown twice, during one-minute intervals, alternating between trials. Illustrative output of the eye-gaze trajectories collected during these trials is shown in Fig. 3 (top). After the recording, each participant was asked about what they saw in the image. In the process of recording, we encouraged the participants to take a short break between trials (approx. 10–30 seconds).
The task for the test group: gaze typing
For the test group, we started by explaining to each participant the uses of eye-tracking and how non-verbal movement-impaired people rely on this device to communicate and run their daily lives. This group was trained to use the digital E-tran board and the eye-tracker to write messages on the computer screen. See Fig. 2 (right). After this training, the participants were requested to write sentences, using the eye-tracking interface and the E-tran board exclusively. The sentences were the following ones: “Type your name”, “Can you help me?”, “Can you take me to the restroom?”, “I am hungry, can you do me some eggs?”, “I am uncomfortable. Can you change my position?”, “I am in pain. Can you give me medication?”. Results of the trajectories on the E-tran board are shown in Fig. 3 (bottom).
Tasks to assess empathy: an extended questionnaire from QCAE
We used and expanded a thirty-one-question questionnaire called the questionnaire of cognitive and affective empathy (QCAE)49. The extended questionnaire has nine additional questions, focusing on empathy towards people with motion disabilities25. The results with the extended questionnaire showed a more significant difference between the control and test groups25. Our dataset includes the empathy scores and the answers to all individual questions in both the extended questionnaire and the standard QCAE, for both data sets IA and IB.
Hardware and software
The hardware consisted of a Tobii Pro X3–120 eye-tracker together with the software Tobii Pro Lab Presenter Edition (Fig. 1b). The eye-tracker has an infrared light projector and a camera. The projector emits a pattern of light that is reflected by the eyes, and the camera captures the coordinates of the eye movements, enabling the collection of the users’ eye-gaze data. This particular eye-tracker is a good compromise between the typical equipment used in the context of universal design and the equipment one would choose for studying the kinetic properties of eye movements. While the former is typically inexpensive and as small as possible, with frequencies around 40 or 60 Hz, the latter prioritizes accuracy and a high sampling frequency at the expense of both affordability and portability.
Before recording the data, the person responsible for data collection explained the experiment to the participants. We also explained to the participants how to maintain their posture during the experiment, to ensure the quality and accuracy of the data. To stabilize head movement, we used chin and forehead rests (Fig. 1a), keeping the distance from the eyes to the device between 47 and 70 cm, with a mode around 60 cm. See Table 2. The room chosen for the procedure received no sunlight, and the luminous flux per unit area, i.e. the illuminance, was kept constant at 115 lx. These light conditions are considerably lower than an overcast sky, which provides around 1000 lx.
The native algorithm for classifying saccades and fixations is described in detail by the manufacturer (see ref. 50) and generally follows the approach introduced in ref. 51. In a first iteration, the algorithm classifies data points into fixations and saccades based on a velocity threshold of 30°/s (degrees of visual angle per second). Afterwards, it merges fixations that are close: a set of points in between two fixations is re-labelled as fixations if they are separated in space and time by no more than 0.5° and 75 ms, respectively. Finally, a minimum fixation time of 60 ms is imposed, and shorter fixations are discarded. Data points that neither have a saccadic velocity nor form a fixation lasting 60 ms are labelled ‘Unclassified’.
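The first pass of such a velocity-threshold (I-VT-style) classification can be sketched as follows. This is a simplified illustration only, not the manufacturer's implementation: it omits the fixation-merging and minimum-duration steps described above, and assumes gaze coordinates already expressed in degrees of visual angle.

```python
import numpy as np

def classify_ivt(x, y, t, velocity_threshold=30.0):
    """First pass of an I-VT-style filter: label each sample 'Fixation' or
    'Saccade' by comparing angular speed (deg/s) against a threshold.
    x, y are gaze coordinates in degrees of visual angle; t is time in s."""
    vx = np.gradient(np.asarray(x, dtype=float), t)
    vy = np.gradient(np.asarray(y, dtype=float), t)
    speed = np.hypot(vx, vy)  # angular speed in deg/s
    return np.where(speed > velocity_threshold, "Saccade", "Fixation")
```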
For the ethical consideration of data, an authorization was provided by Norsk senter for forskningsdata (NSD), which allows for the collection and publication of data (application number 119986). In compliance with ethical requirements, participants’ personal information (full name and contact information) was deleted after the procedure.
The dataset is available on Figshare and distributed under a Creative Commons license. The dataset of eye-tracking movements52 is organized in 502 files, each one with a name following the syntax:
Here, AAA can take the values dataset_II or dataset_III, depending on whether the participant performed the “foraging for visual information” or the “gaze typing” experiment, respectively. As for BBB, it can take the value letter_card, grey_blue, or grey_orange. The different stimuli are shown in Fig. 3. CCC stands for the participant ID number in a given test/control group. This number is sequential and makes it possible to find the files corresponding to the same person without identifying the participant.
Finally, DDD is the trial number for each participant, ranging typically from around 4 for the control group to around 12 for the test group. The exact number of trials for each participant depended on their availability. Each eye-tracker data record is composed of several columns, an explanation of which can be found in the same folder, in the files named columns_explained.pdf and coordinate_system.pdf.
The eye-tracker raw data53, as extracted directly from the eye-tracker, can also be found under the syntax:
ParticipantCCC.tsv
where, as before, CCC represents the participant number. Participants with an even ID number performed the “foraging for visual information” experiment, while participants with an odd ID number performed the “gaze typing” experiment.
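The even/odd convention for the raw-data files can be encoded directly when loading the records, as in the following small helper:

```python
def participant_group(participant_id: int) -> str:
    """Map a participant ID to its experimental group: even IDs performed
    'foraging for visual information', odd IDs performed 'gaze typing'."""
    if participant_id % 2 == 0:
        return "foraging for visual information"
    return "gaze typing"
```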
The empathy assessment questionnaire contains the answers to each individual question and the final empathy score54. The file name of each questionnaire is just:
Questionnaire_AAA.csv
where AAA can take the values datasetIA or datasetIB, depending on whether the questionnaire was answered before or after the eye-tracker experiments. The list with all the questions can be found in the same directory.
One of the most fundamental aspects of data quality is the reliability of the eye-tracker used. The Tobii Pro X3–120 eye-tracker is used in state-of-the-art research all over the world in the fields of medicine and cognition55,56, robotics and informatics57,58,59, marketing60, and educational sciences61. It is also among the best devices used in the field of Universal Design62, where it has been used to develop universally designed solutions, i.e. solutions suited to people with and without specific disabilities. This allows access to detailed statistics of the gaze trajectories while ensuring that subjects in the test group face difficulties similar to those of non-verbal movement-impaired individuals interacting with a computer.
The experiment was performed with a head stand designed to keep the head position fixed during the experiment. Furthermore, luminosity conditions and the distance to the measurement device were calibrated in order to minimize measurement errors. A five-point calibration and validation session, provided by Tobii’s software63, preceded every data collection. Moreover, according to the product specifications, under the luminosity conditions of the experiment and taking into consideration the distance to the screen and the screen length, the mean error should be below 1° of visual angle. See the section “Hardware and software” above.
Missing sample records were labelled ‘NaN’. This can happen due to blinks or to loss of the eye image by the eye-tracker. The ratio of ‘NaN’ values in each time series is represented in Fig. 4a; most time series have less than 15% of their points labelled ‘NaN’. Pupil measurements were extracted at a 40 Hz frequency, and thus the number of NaNs in this part of the dataset is about triple that of the eye movements. Nevertheless, pupil changes are typically much slower than eye movements, and this frequency is therefore considered adequate to study the time evolution of the pupil diameter64,65.
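The NaN ratio reported in Fig. 4a can be reproduced for any column of the published files with a one-line computation, e.g.:

```python
import numpy as np

def nan_ratio(series):
    """Fraction of samples labelled NaN (blinks or lost eye image)."""
    arr = np.asarray(series, dtype=float)
    return float(np.mean(np.isnan(arr)))
```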
Another important aspect is the time resolution of the eye-tracker. Capturing frames at a frequency of 120 Hz, the resolution is adequate to separate saccades from fixations. We see in the inset of Fig. 4a that most series have less than 20% of data points ‘Unclassified’, i.e. labelled as neither ‘Saccade’ nor ‘Fixation’.
For most time series, fixations represent the majority of data points (Fig. 4b), while saccades represent between 10% and 20% of data points (Fig. 4c). The criteria used to distinguish saccades from fixations were provided directly by Tobii’s eye-tracking software. Fixations have on average 27 points each, while 85% of the saccades have only between two and ten consecutive points. See insets of Fig. 4b,c. These figures show that both regimes can be reasonably separated by the eye-tracker: the sampling rate is higher than the transition rate between fixations and saccades.
Notice that the sizes of the stimuli are different. See Table 3. To cope with the different sizes, we labelled as NaN all points of eye-gaze trajectories beyond the limits of the stimuli.
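This masking step can be sketched as below. The assumption that stimulus coordinates start at the origin is illustrative; the actual stimulus sizes are those given in Table 3.

```python
import numpy as np

def mask_outside_stimulus(x, y, width, height):
    """Label as NaN all gaze points falling outside the stimulus area,
    assumed here to span [0, width] x [0, height]."""
    x = np.asarray(x, dtype=float).copy()
    y = np.asarray(y, dtype=float).copy()
    outside = (x < 0) | (x > width) | (y < 0) | (y > height)
    x[outside] = np.nan
    y[outside] = np.nan
    return x, y
```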
The data processing of the data sets in the repository consisted mainly in the decomposition of each data set into saccades and fixations, as described above. This decomposition was done using the software Tobii Pro Lab Presenter Edition. No other processing was needed.
The .csv files composing the dataset were extracted directly from Tobii Pro X3–120 Eye Tracker, as well as images such as heatmaps. The original raw-data files and the python code used to generate the .csv files, figures and statistics here are available at Figshare66.
Zamani, H., Abas, A. & Amin, M. Eye tracking application on emotion analysis for marketing strategy. Journal of Telecommunication, Electronic and Computer Engineering (JTEC) 8, 87–91 (2016).
Wang, L. Test and evaluation of advertising effect based on EEG and eye tracker. Translational Neuroscience 10, 14–18 (2019).
Neomániová, K. et al. The use of eye-tracker and face reader as useful consumer neuroscience tools within logo creation. Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis 67, 1061–1070 (2019).
Hessels, R. S. & Hooge, I. T. Eye tracking in developmental cognitive neuroscience–the good, the bad and the ugly. Developmental Cognitive Neuroscience 40, 100710 (2019).
Hu, Z. et al. Dgaze: CNN-based gaze prediction in dynamic scenes. IEEE Transactions on Visualization and Computer Graphics 26, 1902–1911 (2020).
Clay, V., König, P. & Koenig, S. Eye tracking in virtual reality. Journal of Eye Movement Research 12 (2019).
Ulahannan, A., Jennings, P., Oliveira, L. & Birrell, S. Designing an adaptive interface: Using eye tracking to classify how information usage changes over time in partially automated vehicles. IEEE Access 8, 16865–16875 (2020).
Spataro, R., Ciriacono, M., Manno, C. & La Bella, V. The eye-tracking computer device for communication in amyotrophic lateral sclerosis. Acta Neurologica Scandinavica 130, 40–45 (2014).
Loch, F. et al. An adaptive virtual training system based on universal design. IFAC-PapersOnLine 51, 335–340 (2019).
Burrell, J., Hornberger, M., Carpenter, R., Kiernan, M. & Hodges, J. Saccadic abnormalities in frontotemporal dementia. Neurology 78, 1816–1823 (2012).
Perneczky, R. et al. Saccadic latency in Parkinson’s disease correlates with executive function and brain atrophy, but not motor severity. Neurobiology of Disease 43, 79–85 (2011).
Antoniades, C. A., Xu, Z., Mason, S. L., Carpenter, R. & Barker, R. A. Huntington’s disease: changes in saccades and hand-tapping over 3 years. Journal of Neurology 257, 1890–1898 (2010).
Chandna, A., Chandrasekharan, D. P., Ramesh, A. V. & Carpenter, R. Altered interictal saccadic reaction time in migraine: a cross-sectional study. Cephalalgia 32, 473–480 (2012).
Pouget, P., Wattiez, N., Rivaud-Péchoux, S. & Gaymard, B. Rapid development of tolerance to sub-anaesthetic dose of ketamine: An oculomotor study in macaque monkeys. Psychopharmacology 209, 313–318 (2010).
Antoniades, C. et al. An internationally standardised antisaccade protocol. Vision Research 84, 1–5 (2013).
Rucci, M. & Poletti, M. Control and functions of fixational eye movements. Annual review of vision science 1, 499–518 (2015).
Caligari, M., Godi, M., Guglielmetti, S., Franchignoni, F. & Nardone, A. Eye tracking communication devices in amyotrophic lateral sclerosis: Impact on disability and quality of life. Amyotrophic Lateral Sclerosis and Frontotemporal Degeneration 14, 546–552 (2013).
Proudfoot, M. et al. Eye-tracking in amyotrophic lateral sclerosis: A longitudinal study of saccadic and cognitive tasks. Amyotrophic Lateral Sclerosis and Frontotemporal Degeneration 17, 101–111 (2016).
Otero, S. C., Weekes, B. S. & Hutton, S. B. Pupil size changes during recognition memory. Psychophysiology 48, 1346–1353 (2011).
Kret, M. E. The role of pupil size in communication. Is there room for learning? Cognition and Emotion 32, 1139–1145 (2018).
Kret, M. E. & Sjak-Shie, E. E. Preprocessing pupil size data: Guidelines and code. Behavior Research Methods 51, 1336–1342 (2019).
Harrison, N. A., Wilson, C. E. & Critchley, H. D. Processing of observed pupil size modulates perception of sadness and predicts empathy. Emotion 7, 724 (2007).
Egawa, S., Sejima, Y., Sato, Y. & Watanabe, T. A laughing-driven pupil response system for inducing empathy. In 2016 IEEE/SICE International Symposium on System Integration (SII), 520–525 (IEEE, 2016).
Cosme, G. et al. Pupil dilation reflects the authenticity of received nonverbal vocalizations. Scientific Reports 11, 1–14 (2021).
Bhurtel, S., Lind, P. G. & Mello, G. B. M. For a new protocol to promote empathy towards users of communication technologies. In International Conference on Human-Computer Interaction, 3–10 (Springer, 2021).
Griffith, H., Lohr, D., Abdulin, E. & Komogortsev, O. Gazebase, a large-scale, multi-stimulus, longitudinal eye movement dataset. Scientific Data 8, 1–9 (2021).
Wilming, N. et al. An extensive dataset of eye movements during viewing of complex images. Scientific Data 4, 1–11 (2017).
Kümmerer, M., Wallis, T. S. A. & Bethge, M. Saliency benchmarking made easy: Separating models, maps and metrics. In Ferrari, V., Hebert, M., Sminchisescu, C. & Weiss, Y. (eds.) Computer Vision – ECCV 2018, Lecture Notes in Computer Science, 798–814 (Springer International Publishing, 2018).
Błażejczyk, P. & Magdziarz, M. Stochastic modeling of Lévy-like human eye movements. Chaos: An Interdisciplinary Journal of Nonlinear Science 31, 043129 (2021).
Brockmann, D. & Geisel, T. The ecology of gaze shifts. Neurocomputing 32, 643–650 (2000).
Brockmann, D. & Geisel, T. Are human scanpaths Lévy flights? In 9th International Conference on Artificial Neural Networks: ICANN, 263–268 (IET, 1999).
Stephen, D. G., Mirman, D., Magnuson, J. S. & Dixon, J. A. Lévy-like diffusion in eye movements during spoken-language comprehension. Physical Review E 79, 056114 (2009).
Viswanathan, G. M. et al. Lévy flight search patterns of wandering albatrosses. Nature 381, 413–415 (1996).
Sims, D., Humphries, N., Bradford, R. & Bruce, B. Lévy flight and brownian search patterns of a free-ranging predator reflect different prey field characteristics. Journal of Animal Ecology 81, 432–442 (2012).
Raichlen, D. A. et al. Evidence of lévy walk foraging patterns in human hunter–gatherers. Proceedings of the National Academy of Sciences 111, 728–733 (2014).
Rhee, I. et al. On the levy-walk nature of human mobility. IEEE ACM Trans Netw 19, 630–643 (2011).
Bénichou, O., Loverdo, C., Moreau, M. & Voituriez, R. Intermittent search strategies. Reviews of Modern Physics 83, 81 (2011).
Boccignone, G. & Ferraro, M. Feed and fly control of visual scanpaths for foveation image processing. Annals of Telecommunications-Annales des Télécommunications 68, 201–217 (2013).
Goto, Y. et al. Saccade eye movements as a quantitative measure of frontostriatal network in children with ADHD. Brain and Development 32, 347–355 (2010).
Fernández-Martnez, M., Sánchez-Granero, M., Segovia, J. T. & Román-Sánchez, I. An accurate algorithm to calculate the hurst exponent of self-similar processes. Physics Letters A 378, 2355–2362 (2014).
Marlow, C. A. et al. Temporal structure of human gaze dynamics is invariant during free viewing. PLoS ONE 10, e0139379 (2015).
Freije, M. et al. Multifractal detrended fluctuation analysis of eye-tracking data. In European Congress on Computational Methods in Applied Sciences and Engineering, 476–484 (Springer, 2017).
Suman, A. A. et al. Spatial and time domain analysis of eye-tracking data during screening of brain magnetic resonance images. PLoS ONE 16, e0260717 (2021).
Unema, P. J., Pannasch, S., Joos, M. & Velichkovsky, B. M. Time course of information processing during scene perception: The relationship between saccade amplitude and fixation duration. Visual cognition 12, 473–494 (2005).
Fuhl, W., Bozkir, E. & Kasneci, E. Reinforcement learning for the privacy preservation and manipulation of eye tracking data. In International Conference on Artificial Neural Networks, 595–607 (Springer, 2021).
Majaranta, P., Ahola, U.-K. & Špakov, O. Fast gaze typing with an adjustable dwell time. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 357–360 (2009).
Lim, J. Z., Mountstephens, J. & Teo, J. Emotion recognition using eye-tracking: taxonomy, review and current challenges. Sensors 20, 2384 (2020).
Villani, D. et al. Visual exploration patterns of human figures in action: an eye tracker study with art paintings. Frontiers in Psychology 6, 1636 (2015).
Reniers, R. L., Corcoran, R., Drake, R., Shryane, N. M. & Völlm, B. A. The QCAE: A questionnaire of cognitive and affective empathy. Journal of Personality Assessment 93, 84–95 (2011).
Olsen, A. The Tobii I-VT fixation filter. Tobii Technology 21 (2012).
Komogortsev, O. V. et al. Standardization of automated analyses of oculomotor fixation and saccadic behaviors. IEEE Trans Biomed Eng 57, 2635–2645 (2010).
Lencastre, P. Eye tracker data. Figshare https://doi.org/10.6084/m9.figshare.19729636.v2 (2022).
Lencastre, P. Raw data. Figshare https://doi.org/10.6084/m9.figshare.19209714.v1 (2022).
Lencastre, P. Questionnaires. Figshare https://doi.org/10.6084/m9.figshare.19657323.v2 (2022).
Feng, Y. et al. Virtual pointer for gaze guidance in laparoscopic surgery. Surgical Endoscopy 34, 3533–3539 (2020).
Shi, Y., Zheng, Y., Du, J., Zhu, Q. & Liu, X. The impact of engineering information complexity on working memory development of construction workers: An eye-tracking investigation. In Construction Research Congress 2020: Infrastructure Systems and Sustainability, 89–98 (American Society of Civil Engineers Reston, VA, 2020).
Vrabič, N., Juroš, B. & Pompe, M. T. Automated visual acuity evaluation based on preferential looking technique and controlled with remote eye tracking. Ophthalmic Research 64, 389–397 (2021).
Netzel, R. et al. Comparative eye-tracking evaluation of scatterplots and parallel coordinates. Visual Informatics 1, 118–131 (2017).
Niehorster, D. C., Andersson, R. & Nyström, M. Titta: A toolbox for creating psychtoolbox and psychopy experiments with tobii eye trackers. Behavior Research Methods 52, 1970–1979 (2020).
Zhou, L. & Xue, F. Show products or show people: An eye-tracking study of visual branding strategy on instagram. Journal of Research in Interactive Marketing (2021).
Fayed, K., Franken, B. & Berkling, K. Understanding the use of eye-tracking recordings to measure and classify reading ability in elementary children school. CALL for Widening Participation: Short Papers from EUROCALL 2020 69 (2020).
Krohn, O. A., Varankian, V., Lind, P. G. & Mello, G. B. M. Construction of an inexpensive eye tracker for social inclusion and education. In International Conference on Human-Computer Interaction, 60–78 (Springer, 2020).
Tobii-AB. Tobii Pro X3-120 eye tracker user’s manual. Available at https://www.tobiipro.com/siteassets/tobii-pro/user-manuals/tobii-pro-x3-120-user-manual.pdf/?v=1.0.9 (2019).
Schmitz, S., Krummenauer, F., Henn, S. & Dick, H. B. Comparison of three different technologies for pupil diameter measurement. Graefe’s Archive for Clinical and Experimental Ophthalmology 241, 472–477 (2003).
Brisson, J. et al. Pupil diameter measurement errors as a function of gaze direction in corneal reflection eyetrackers. Behavior Research Methods 45, 1322–1331 (2013).
Lencastre, P. Code to read data. Figshare https://doi.org/10.6084/m9.figshare.21608238.v1 (2022).
The authors would like to thank Reiner Gross for the permission to use the painting “Precope Twins” represented in top of Fig. 3.
The authors declare no competing interests.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Lencastre, P., Bhurtel, S., Yazidi, A. et al. EyeT4Empathy: Dataset of foraging for visual information, gaze typing and empathy assessment. Sci Data 9, 752 (2022). https://doi.org/10.1038/s41597-022-01862-w