Review

Q J Exp Psychol (Hove). 2010 Nov;63(11):2081-105. doi: 10.1080/17470211003624002. Epub 2010 Aug 16.

From observation to action simulation: the role of attention, eye-gaze, emotion, and body state

Steven P Tipper
Free PMC article

Abstract

This paper reviews recent aspects of my research. It focuses, first, on the idea that during the perception of objects and people, action-based representations are automatically activated and, second, that such action representations can feed back and influence the perception of people and objects. For example, when one is merely viewing an object such as a coffee cup, the action it affords, such as a reach to grasp, is activated even though there is no intention to act on the object. Similarly, when one is observing a person's behaviour, their actions are automatically simulated, and such action simulation can influence our perception of the person and the object with which they interacted. The experiments to be described investigate the role of attention in such vision-to-action processes, the effects of such processes on emotion, and the role of a perceiver's body state in their interpretation of visual stimuli.


Figures

Figure 1.
Partly inflated cortical surface showing selective activation in left SII when viewing a hand approaching and grasping or withdrawing from a noxious pain-inducing object. Red is activation encoding noxious objects independent of whether they are grasped or not; yellow is the area encoding whether the object is grasped or not independent of the kind of object; and of most importance, green is a region activated when observing the specific action of grasping a noxious/painful object. Note also the activity in midcingulate cortex (MCC) in this last condition of grasping a noxious object, an area previously implicated in coding affective components of pain observation.
Figure 2.
These are some examples of the face stimuli presented in Bach and Tipper (2006). For this particular participant a finger key-press was required to identify the tennis player Greg Rusedski (response compatible) and the soccer player Michael Owen (response incompatible), while a foot response was necessary to identify the tennis player Tim Henman (response incompatible) and the soccer player Wayne Rooney (response compatible). The images of Rusedski, Owen, Henman and Rooney were courtesy of Les Miller, Michael Kjaer, Andrew Haywood and Gordon Flood respectively.
Figure 3.
The top panel shows the action instructions for participants. This participant was required to make a round-shaped action when the word described an object typically found outside, and a square-shaped action when the word described an object typically found in a house. The lower panels depict examples of path trajectories. In these examples participants made a round action to report that an object was found outside the house. The middle panel shows compatible trials, where the object (e.g., moon) has a round shape and a round action is produced. The bottom panel shows incompatible trials, where the object has a square shape (e.g., billboard) while the round hand action is produced. Notice that in the compatible condition the circle trajectories are more accurate (and reaction time, RT, to start the action and movement time, MT, to complete it are fast), whereas in the incompatible condition the shape of the object intrudes into the shape of the motor response to be produced (and RT and MT are slow). To view a colour version of this figure, please see the online issue of the Journal.
Figure 4.
Examples of the door handles used in Tipper et al. (2006). These handles evoke right-hand grasps; on other trials the handles were mirror reversed, evoking left-hand grasps. Exactly the same stimuli were presented to all participants: In one group they made left and right key-presses to report whether the shape was square (Panel A) or rounded (Panel B); another group made the same responses to report whether the handle was blue (Panel A) or green (Panel B). To view a colour version of this figure, please see the online issue of the Journal.
Figure 5.
Examples of the static action images employed by Bach et al. (2007). The left panel shows a kicking action where the target (red colour patch in this example) is presented on the non-action-related body site of the head. On other trials the blue or red colour target would be presented on the foot. The right panel shows the typing scene where the target colour (in this example blue) was presented on the hand. In other trials the target could be presented on the head in this display. To view a colour version of this figure, please see the online issue of the Journal.
Figure 6.
This figure depicts a typical trial in the mu experiment. There was a baseline display for 1 s, followed by the presentation of a cup; 2.5 s later the grasp started where the hand approached the cup from the right and grasped it either at the top or by the handle. At the point of grasp the grey X superimposed on the cup changed colour to either blue or green. Participants monitored either the colour change or the grasp action. To view a colour version of this figure, please see the online issue of the Journal.
Figure 7.
This figure shows the mu activity during action observation. At the top of the figure the sequence of visual stimuli is shown, and the mu suppression can clearly be seen as the action-evoking object becomes visible. After stimulus offset (time point 10) there is a rebound into the mu rhythm, and this is greater when participants were previously attending to action.
Figure 8.
Selective reaching task: Panel (a) represents the single person condition, illustrating the relationship between prime and subsequent probe responses in the ignored repetition condition (i) and the control condition (ii). Negative priming is revealed in longer reaction times in ignored repetition than in control trials. Panel (b) shows examples of ignored repetition trials in the dual person condition. The participant (P) observes the agent (A) perform the prime reach and then executes the probe reach. In (i), the salience of the prime distractor in the far-left location is high in terms of the agent's (allocentric) frame of reference, but low in terms of the participant's own (egocentric) frame of reference. In (ii), prime distractor salience (near-right location) is low in terms of the agent's frame of reference, but high in terms of the participant's egocentric frame of reference.
Figure 9.
Reaction time data representing the amount of negative priming (ignored repetition minus control; in ms) at each stimulus location in (a) the single person task and (b) the dual person task. Stimulus locations (near/far, left/right) are defined relative to the participant, whose hand is positioned at the start point at the front of the display.
Figure 10.
Patients with visual neglect were required to report whether objects in single- and dual-object displays appeared on the "left", "right", or "both" sides. Typically, detection of the cup on the left was poor when there was also a cup on the right. This was true except for the condition shown in Panel A, where left cup detection was significantly improved. In contrast, detection of the left cup remained poor in the stimulus control condition shown in Panel B.
Figure 11.
Panel A shows an example of a leftward gaze cue. In such studies gaze would also be oriented to the right on 50% of the trials, and there was no relationship between the direction of gaze and the position of the asterisk target to be localized/detected. Panel B demonstrates peripheral/exogenous cueing. The task requires participants to detect the target X as fast as possible while ignoring the brief flicker of the box, which is the peripheral cue. The cue automatically orients attention, facilitating target processing at the attended location. However, after 300 ms this facilitation effect reverts to inhibition, where target detection is impaired at the cued location. Panel C represents an example of a face display employed to investigate head-centred gaze cueing. Typically targets are detected faster in the left than in the right side of the display in this situation. In this example, the face is oriented 90° anticlockwise from the upright; in other displays the face was oriented 90° clockwise from the upright, and the eyes gazed up and down equally often. To view a colour version of this figure, please see the online issue of the Journal.
Figure 12.
Frames from video displays employed to study simulation of emotion emerging from action fluency. Panel A shows an easy reach where no obstacle has to be avoided. Panel B shows a difficult reach where the object has to be moved around and placed behind a fragile object. After viewing such displays participants tended to prefer the objects that had been grasped with easy reaches (Panel A), but only if the face gazing towards the action was visible. To view a colour version of this figure, please see the online issue of the Journal.
Figure 13.
A frame from a video showing reaches from an egocentric perspective. We found that objects were liked more when participants viewed actions from this egocentric perspective than from the allocentric third-person view described in Figure 12. To view a colour version of this figure, please see the online issue of the Journal.

References

    1. Bach P., Peatfield N., Tipper S. P. Focusing on body sites: The role of spatial attention in action perception. Experimental Brain Research. 2007;178:509–517.
    2. Bach P., Tipper S. P. Bend it like Beckham: Embodying the motor skills of famous athletes. Quarterly Journal of Experimental Psychology. 2006;59:2033–2039.
    3. Bach P., Tipper S. P. Implicit action encoding influences personal-trait judgments. Cognition. 2007;102:151–178.
    4. Barsalou L. W. Situated simulation in the human conceptual system. Amsterdam: Elsevier; 2003.
    5. Bayliss A. P., di Pellegrino G., Tipper S. P. Orienting of attention via observed eye-gaze is head-centred. Cognition. 2004;94:B1–B10.
