Predicting choice behaviour in economic games using gaze data encoded as scanpath images

Sean Anthony Byrne et al. Sci Rep. 2023 Mar 23;13(1):4722. doi: 10.1038/s41598-023-31536-5.

Abstract

Eye movement data have been extensively utilized by researchers interested in studying decision-making within the strategic setting of economic games. In this paper, we demonstrate that both deep learning and support vector machine classification methods are able to accurately identify participants' decision strategies before they commit to action while playing games. Our approach focuses on creating scanpath images that best capture the dynamics of a participant's gaze behaviour in a form that the machine learning models can exploit for prediction. Our results demonstrate a classification accuracy 18 percentage points higher than that of a baseline logistic regression model, which is traditionally used to analyse gaze data recorded during economic games. In a broader context, we aim to illustrate the potential for eye-tracking data to create information asymmetries in strategic environments in favour of those who collect and process the data. These information asymmetries could become especially relevant as eye tracking is expected to become more widespread in user applications, given the seemingly imminent mass adoption of virtual reality systems and the development of devices able to record eye movements outside of a laboratory setting.
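To make the comparison in the abstract concrete, here is a minimal sketch of the two modelling routes on synthetic stand-in data: a logistic regression baseline fit on a handful of aggregate gaze features per trial, versus a support vector machine fit on flattened scanpath image pixels. All shapes, feature counts, and hyperparameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials = 200

# Baseline route: a few hand-crafted summary statistics per trial
# (e.g., dwell-time shares per area of interest) -- illustrative only.
agg_features = rng.random((n_trials, 4))

# Image route: flattened 64x64 grayscale scanpath renderings (stand-in data).
images = rng.random((n_trials, 64 * 64))

labels = rng.integers(0, 2, size=n_trials)  # hypothetical binary strategy label

(agg_tr, agg_te,
 img_tr, img_te,
 y_tr, y_te) = train_test_split(agg_features, images, labels,
                                test_size=0.3, random_state=0)

baseline = LogisticRegression(max_iter=1000).fit(agg_tr, y_tr)
svm = SVC(kernel="rbf").fit(img_tr, y_tr)

print("logistic regression baseline:", baseline.score(agg_te, y_te))
print("SVM on scanpath pixels:      ", svm.score(img_te, y_te))
```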


Conflict of interest statement

The authors declare no competing interests.

Figures

Figure 1
Overview of approach: (i) In this eye-tracking study, the gaze behaviour of the participants was recorded as they played games presented on a computer screen, using a tower-mounted EyeLink eye tracker with a sampling rate of 1000 Hz. (ii) An example of how we presented the games during the eye-tracking experiment. The payoffs of the Row player are coloured in blue and the payoffs of the Column player in red. The payoffs are separated by the maximum distance, allowing for the clearest possible distinction between ocular events that happened in different areas of interest. In this example, the raw gaze recording of a single participant is displayed on the game board to highlight the difference between the raw gaze data and the scanpath images used as model input. (iii) An example of how a two-player strategic interaction can be represented using games presented in normal form. The Row player (the human participant in our experimental task) can choose between the actions “Top”, “Middle”, and “Bottom”. The Column player (the algorithm in our experimental task) can choose between the actions “Left”, “Middle”, and “Right”. The action selected by the Row player affects the payoff received by the Column player and vice versa. The game’s outcome is the cell given by the intersection of the action selected by the Row player and the action selected by the Column player. The Row player receives the payoff located in the bottom-left part of the cell, and the Column player receives the payoff located in the upper-right part. The equilibrium of the game is highlighted in grey. (iv) Before performing any analysis, we split the data into three sets of participants: approximately 70% of the participants for training the model, 20% for validation of the results, and 10% as a hold-out test set. (v) Using the hold-out test set, we consider a series of independent model tests in which we create sets of shortened scanpaths, either as a percentage of the full sequence or by restricting the scanpath to gaze recorded within a certain amount of time (e.g., 2 s or 5 s). (vi) We pass these sets of scanpaths through our fully trained model, testing its predictive ability. (vii) A stylised graphical representation of our findings: using model accuracy as the main metric to compare the results, we observed only a small decrease in accuracy relative to the amount of data we removed from the scanpath.
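The participant-level split described in step (iv) could look like the following sketch; the participant count and seed are hypothetical. The key point is that whole participants, not individual trials, are assigned to each set, so the hold-out test contains only unseen people.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

participant_ids = np.arange(60)        # hypothetical participant count
shuffled = rng.permutation(participant_ids)

n = len(shuffled)
n_train = int(0.7 * n)                 # ~70% of participants for training
n_val = int(0.2 * n)                   # ~20% for validation

train_ids = shuffled[:n_train]
val_ids = shuffled[n_train:n_train + n_val]
test_ids = shuffled[n_train + n_val:]  # remaining ~10% as hold-out test

print(len(train_ids), len(val_ids), len(test_ids))  # 42 12 6
```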
Figure 2
(i) A prototypical representation of a scanpath displaying fixation locations, fixation durations, and saccades. (ii) An example of how we represent the data using scanpaths to increase the salience of information to the models. The circles represent the locations of the payoffs for both the participant (light grey) and the opponent (dark grey). We use sequential colourmaps to represent the temporal evolution of the linear saccades and to convey fixation information to the model.
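A hedged sketch of rendering such a scanpath image with matplotlib follows; the fixation coordinates, colormap names, and figure size are illustrative assumptions rather than the authors' actual encoding.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection

# Stand-in fixation data: x, y in screen pixels, duration in ms.
fixations = np.array([[100.0, 200.0, 180.0],
                      [400.0, 220.0, 250.0],
                      [420.0, 500.0, 120.0],
                      [150.0, 480.0, 300.0]])

xy = fixations[:, :2]
segments = np.stack([xy[:-1], xy[1:]], axis=1)  # one segment per saccade

fig, ax = plt.subplots(figsize=(2.24, 2.24), dpi=100)  # ~224x224 px output

# A sequential colormap encodes the temporal order of the saccades.
saccades = LineCollection(segments, cmap="viridis", linewidths=2)
saccades.set_array(np.arange(len(segments)))
ax.add_collection(saccades)

# Circle colour encodes fixation duration with a second sequential colormap.
ax.scatter(xy[:, 0], xy[:, 1], c=fixations[:, 2], cmap="plasma", s=120)

ax.set_xlim(0, 600)
ax.set_ylim(0, 600)
ax.invert_yaxis()   # screen coordinates: y increases downwards
ax.axis("off")
fig.savefig("scanpath.png", bbox_inches="tight", pad_inches=0)
```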
Figure 3
Scanpaths generated from the full sequence and from subsequences of the data from one participant in a single game. In total, there are eight sets of test scanpaths made from subsequences. (i) Example of a full image. (ii) The colourmap used for saccades: the left side corresponds to the colours of earlier saccades and the right side to those of later saccades. (iii, iv, v, vi) Images generated from subsequences stemming from the same participant-game at increasing percentage intervals. (vii) The colourmap chosen for fixations. The upper threshold of 20 fixations was chosen because 99% of the areas of interest, across trials and participants, received 20 fixations or fewer. (viii, ix, x, xi) Images generated from subsequences stemming from the same participant-game at increasing time intervals.
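The percentage- and time-based subsequences shown in panels (iii-vi) and (viii-xi) can be sketched as simple truncation rules over a time-ordered fixation list; the data and durations below are illustrative stand-ins.

```python
import numpy as np

# Stand-in trial: time-ordered fixations as (x, y, duration_ms).
fixations = np.array([[100, 200, 380],
                      [400, 220, 650],
                      [420, 500, 520],
                      [150, 480, 900],
                      [300, 300, 700]])

def by_percentage(fix, pct):
    """Keep the first pct% of fixations (at least one)."""
    k = max(1, int(np.ceil(len(fix) * pct / 100)))
    return fix[:k]

def by_time(fix, cutoff_ms):
    """Keep fixations whose cumulative duration fits within cutoff_ms."""
    return fix[np.cumsum(fix[:, 2]) <= cutoff_ms]

for pct in (25, 50, 75, 100):
    print(f"{pct}% -> {len(by_percentage(fixations, pct))} fixations")
for cutoff in (2000, 5000):  # the 2 s and 5 s windows from the caption
    print(f"{cutoff} ms -> {len(by_time(fixations, cutoff))} fixations")
```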
Figure 4
Accuracy of VGG-19 model in CT 1 using subsequences via percentages (i) and time points (ii).
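An evaluation loop matching this figure might look like the sketch below, assuming the shortened-scanpath test images are stored in ImageFolder-style directories (one per decision-strategy label) and that a fine-tuned VGG-19 checkpoint exists; the file paths, directory layout, and batch size are all assumptions, not the authors' code.

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),  # VGG-19's expected input size
    transforms.ToTensor(),
])

# Hypothetical layout: one directory per decision-strategy label.
test_set = datasets.ImageFolder("scanpaths/test_2s", transform=transform)
loader = DataLoader(test_set, batch_size=32)

model = models.vgg19(weights=None)
model.classifier[6] = torch.nn.Linear(4096, len(test_set.classes))
model.load_state_dict(torch.load("vgg19_scanpaths.pt"))  # assumed checkpoint
model.eval()

correct = total = 0
with torch.no_grad():
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()

print(f"accuracy on shortened scanpaths: {correct / total:.3f}")
```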
