Proc Natl Acad Sci U S A. 2007 Oct 16;104(42):16598-603.
doi: 10.1073/pnas.0703913104. Epub 2007 Oct 1.

Category-specific attention for animals reflects ancestral priorities, not expertise


Joshua New et al. Proc Natl Acad Sci U S A.

Abstract

Visual attention mechanisms are known to select information to process based on current goals, personal relevance, and lower-level features. Here we present evidence that human visual attention also includes a high-level category-specialized system that monitors animals in an ongoing manner. Exposed to alternations between complex natural scenes and duplicates with a single change (a change-detection paradigm), subjects are substantially faster and more accurate at detecting changes in animals relative to changes in all tested categories of inanimate objects, even vehicles, which they have been trained for years to monitor for sudden life-or-death changes in trajectory. This animate monitoring bias could not be accounted for by differences in lower-level visual characteristics, how interesting the target objects were, experience, or expertise, implicating mechanisms that evolved to direct attention differentially to objects by virtue of their membership in ancestrally important categories, regardless of their current utility.
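The change-detection ("flicker") paradigm mentioned above can be pictured as a loop: an original scene alternates with a duplicate containing a single change, separated by blank masks, until the subject responds or the trial times out. The sketch below is a minimal illustration of that trial structure only; the durations, the cycle limit, and the function name `trial_frames` are illustrative assumptions, not the paper's actual parameters (those are given in the paper's Fig. 1).

```python
# Hedged sketch of a flicker-paradigm trial: original scene and changed
# duplicate alternate, separated by blanks, for a bounded number of cycles.
# SCENE_MS, BLANK_MS, and max_cycles are assumed values for illustration.

def trial_frames(original, changed, max_cycles=40):
    """Yield (image, duration_ms) pairs describing one trial's display sequence."""
    SCENE_MS = 250   # assumed on-screen duration of each scene
    BLANK_MS = 250   # assumed duration of the blank mask between scenes
    for _ in range(max_cycles):
        yield (original, SCENE_MS)
        yield ("blank", BLANK_MS)
        yield (changed, SCENE_MS)
        yield ("blank", BLANK_MS)

frames = list(trial_frames("scene_A", "scene_A_changed", max_cycles=2))
```

In a real experiment the loop would terminate early on a keypress; here the fixed `max_cycles` simply bounds the alternation.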


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Fig. 1. Diagram illustrating the sequence and timing of each trial in Exps. 1–5.
Fig. 2. Sample stimuli with targets circled. Although they are small (measured in pixels), peripheral, and blend into the background, the human (A) and elephant (E) were detected 100% of the time, and the hit rate for the tiny pigeon (B) was 91%. In contrast, average hit rates were 76% for the silo (C) and 67% for the high-contrast mug in the foreground (F), yet both are substantially larger in pixels than the elephant and pigeon. The simple comparison between the elephant and the minivan (D) is equally instructive. They occur in a similar visual background, yet changes to the high-contrast red minivan were detected only 72% of the time (compared with the smaller, low-contrast elephant's 100% detection rate).
Fig. 3. Changes to animals and people are detected faster and more accurately than changes to plants and artifacts. Graphs show proportion of changes detected as a function of time and semantic category. (Inset) Mean RT for each category (people, animals, plants, moveable/manipulable artifacts, and fixed artifacts). (A) Results for Exp. 1. Animate targets: RT M = 3,034 msec (SD, 882), hit rate M = 89.8% (SD, 7.4). Inanimate targets: RT M = 4,772 msec (SD, 1,404), hit rate M = 64.9% (SD, 15.7). (B) Results for Exp. 2. Animate targets: RT M = 3,346 (SD, 893), hit rate M = 88.7% (SD, 8.0). Inanimate targets: RT M = 4,996 (SD, 1,284), hit rate M = 67.5% (SD, 16.5). (C) Results for Exp. 5. RT: animate M = 2,661 msec (SD, 770). Hit rate, animate vs. vehicle: 90.6% (SD, 7.8) vs. 63.5% (SD, 18.8), P = 10⁻¹⁵.
Fig. 4. Disrupting recognition eliminates the animate advantage in change detection, showing that the advantage is driven by category, not by lower-level visual features. Graphs show proportion of changes detected as a function of time and category when recognition is disrupted. (Inset) Mean RT for each category. (A) Results for Exp. 3 using inverted stimuli. RT: animate M = 5,399 (SD, 2,139), inanimate M = 5,813 (SD, 2,405). (See SI Appendices 1.4 and 1.5.) (B) Results for Exp. 4 using blurred stimuli. RT: animate M = 5,792 (SD, 2,705), inanimate M = 5,337 (SD, 2,121). Accuracy: animate M = 45.2% (SD, 15.1), inanimate M = 56.7% (SD, 13.5); greater accuracy for inanimates, P = 0.0001, r = 0.67.

Comment in

  • Proc Natl Acad Sci U S A. 104:16396.


