PLoS One. 2013 Nov 13;8(11):e79659. doi: 10.1371/journal.pone.0079659. eCollection 2013.

Joint attention without gaze following: human infants and their parents coordinate visual attention to objects through eye-hand coordination


Chen Yu et al. PLoS One.

Abstract

The coordination of visual attention among social partners is central to many components of human behavior and human development. Previous research has focused on one pathway to the coordination of looking behavior by social partners, gaze following. The extant evidence shows that even very young infants follow the direction of another's gaze but they do so only in highly constrained spatial contexts because gaze direction is not a spatially precise cue as to the visual target and not easily used in spatially complex social interactions. Our findings, derived from the moment-to-moment tracking of eye gaze of one-year-olds and their parents as they actively played with toys, provide evidence for an alternative pathway, through the coordination of hands and eyes in goal-directed action. In goal-directed actions, the hands and eyes of the actor are tightly coordinated both temporally and spatially, and thus, in contexts including manual engagement with objects, hand movements and eye movements provide redundant information about where the eyes are looking. Our findings show that one-year-olds rarely look to the parent's face and eyes in these contexts but rather infants and parents coordinate looking behavior without gaze following by attending to objects held by the self or the social partner. This pathway, through eye-hand coupling, leads to coordinated joint switches in visual attention and to an overall high rate of looking at the same object at the same time, and may be the dominant pathway through which physically active toddlers align their looking behavior with a social partner.

Conflict of interest statement

Competing Interests: The authors have declared that no competing interests exist.

Figures

Figure 1. A dual eye tracking experimental paradigm.
Infants and parents played with a set of toys on a tabletop in a free-flowing way. Both participants wore a head-mounted eye tracker that recorded their moment-to-moment gaze direction from their egocentric views. The subject of the photograph has given written informed consent, as outlined in the PLOS consent form, to publication of their photograph.
Figure 2. Gaze data and joint attention measures.
(a) Examples of coupled ROI streams from one infant (first row) and parent (second row), with each color indicating a different object or the social partner's face. Coordinated attention is measured as synchronized joint attention (third row) and sustained joint attention (fourth row; see text for definitions). (b) Mean recurrence lag profiles. Cross-recurrence of parent-child gaze data at different time lags is compared with a randomized baseline.
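The paper's analysis code is not reproduced here; as a minimal sketch of the cross-recurrence lag profile described in the caption, the following assumes each gaze stream is a sequence of ROI codes sampled at a fixed rate, defines recurrence at lag τ as the proportion of samples where one partner's ROI at time t matches the other's at t + τ, and uses a permutation shuffle as the randomized baseline. Function names and the baseline procedure are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cross_recurrence(child_roi, parent_roi, lags):
    """Proportion of samples where the child's ROI at time t equals
    the parent's ROI at time t + lag, for each lag (in samples)."""
    child = np.asarray(child_roi)
    parent = np.asarray(parent_roi)
    n = len(child)
    profile = []
    for lag in lags:
        if lag >= 0:
            a, b = child[: n - lag], parent[lag:]
        else:
            a, b = child[-lag:], parent[: n + lag]
        profile.append(np.mean(a == b))
    return np.array(profile)

def shuffled_baseline(child_roi, parent_roi, lags, n_perm=200, seed=0):
    """Chance-level profile estimated by permuting one stream
    (an assumed baseline; the paper's randomization may differ)."""
    rng = np.random.default_rng(seed)
    parent = np.asarray(parent_roi)
    perms = [cross_recurrence(child_roi, rng.permutation(parent), lags)
             for _ in range(n_perm)]
    return np.mean(perms, axis=0)
```

A profile that peaks above the shuffled baseline at a positive (or negative) lag would indicate that one partner's gaze systematically leads the other's, as in Figure 2b.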
Figure 3. Eye-hand coordination in parent-child interaction.
(a) Recurrence lag profiles between eyes and hands within each participant (top two panels) and across the two social partners (bottom two panels) show close coupling both within and between individuals over the whole interaction. (b) Eye-hand coordination within each participant (top two panels) and across the two social partners (bottom two panels) during moments with and without joint attention. Eyes and hands are more closely coupled at joint attention moments.
Figure 4. Dynamic patterns of behaviors before joint attention.
Dynamic patterns of three behaviors during the 5 seconds before the onset of coordinated attention, in child-leading (left column) and parent-leading (right column) cases. Top: the proportion of time that the child or parent looked at the partner's face prior to joint attention. Only in the child-leading case did parents' looks to the child's face increase, beginning around 2000 ms before the onset of joint attention. Middle: the proportion of time that the child or parent was holding the to-be-jointly-attended object prior to joint attention. In the child-leading case, both children and parents increasingly held the target object; in the parent-leading case, only the probability that the parent held the target object increased dramatically before joint attention. Bottom: the proportion of time that child and parent looked at each other's faces shows little change in mutual gaze in either case.
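The onset-locked profiles in Figure 4 can be sketched as follows, assuming a binary behavior stream (e.g., "parent holds the target object") sampled at a fixed rate and a list of joint-attention onset times in samples; the function name, binning scheme, and parameters are illustrative assumptions rather than the authors' pipeline.

```python
import numpy as np

def onset_locked_profile(behavior, onsets, window, bin_size):
    """Mean proportion of 'on' samples in each bin of the `window`
    samples preceding each onset, averaged over events.

    behavior: 1-D 0/1 sequence (e.g., 'parent holds target object').
    onsets:   sample indices of joint-attention onsets.
    window:   number of samples before onset to analyze.
    bin_size: samples per bin (must divide `window` evenly)."""
    behavior = np.asarray(behavior)
    # Keep only events with a full pre-onset window of data.
    segs = np.array([behavior[t - window: t] for t in onsets if t >= window])
    binned = segs.reshape(len(segs), -1, bin_size)   # events x bins x samples
    return binned.mean(axis=(0, 2))                  # proportion per bin
```

Rising values toward the final bins would correspond to the pre-onset increases in holding and face-looking reported in the caption.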
Figure 5. Multiple pathways lead to joint attention.
(a) Joint attention achieved by following the other's gaze direction. (b) Joint attention achieved through hand-following pathways (dashed lines), because eye-hand coordination within an agent (solid lines) ensures that eye direction and hand activity indicate the same object.
Figure 6. An example recurrence plot from a pair of participants.
Eye-movement recurrence at lag 0 (black boxes) along the diagonal indicates that the two participants not only generated joint attention moments overall but also dynamically switched their attention together from one object to another over time.
