Proc Natl Acad Sci U S A. 2009 May 26;106(21):8748-53.
doi: 10.1073/pnas.0811583106. Epub 2009 May 8.

A rodent model for the study of invariant visual object recognition


Davide Zoccolan et al. Proc Natl Acad Sci U S A.

Abstract

The human visual system is able to recognize objects despite tremendous variation in their appearance on the retina resulting from variation in view, size, lighting, etc. This ability, known as "invariant" object recognition, is central to visual perception, yet its computational underpinnings are poorly understood. Traditionally, nonhuman primates have been the animal model of choice for investigating the neuronal substrates of invariant recognition, because their visual systems closely mirror our own. Meanwhile, simpler and more accessible animal models such as rodents have been largely overlooked as possible models of higher-level visual functions, because their brains are often assumed to lack advanced visual processing machinery. As a result, little is known about rodents' ability to process complex visual stimuli in the face of real-world image variation. In the present work, we show that rats possess more advanced visual abilities than previously appreciated. Specifically, we trained pigmented rats to perform a visual task that required them to recognize objects despite substantial variation in their appearance due to changes in size, view, and lighting. Critically, rats were able to spontaneously generalize to previously unseen transformations of learned objects. These results provide the first systematic evidence for invariant object recognition in rats and argue for an increased focus on rodents as models for studying high-level visual processing.


Conflict of interest statement

The authors declare no conflict of interest.

Figures

Fig. 1.
Visual stimuli and behavioral task. (A) Default views (0° in-depth rotation) of the target objects that rats were trained to discriminate during phase I of the study (each object's default size was 40° visual angle). (B) Rats were trained in an operant box that was equipped with an LCD monitor, a central touch sensor, and 2 flanking feeding tubes (also functioning as touch sensors). Following initiation of a behavioral trial (triggered by the rat licking the central sensor), 1 of the 2 target objects was presented on the monitor, and the animal had to lick either the left or the right feeding tube (depending on the object identity) to receive reward.
Fig. 2.
Rats' group mean and individual performances across the range of object transformations tested during phase III of the study. (A) The set of object transformations used in phase III, consisting of all possible combinations of 6 sizes and 9 in-depth azimuth rotations of both target objects. Green frames show the default object views trained during phase I. Light blue frames show the subset of transformations (14 for each object) trained during phase II. All of the remaining transformations (40 for each object) were novel to the animals. (B) The left plot shows the animals' group mean performance (n = 6) for each of the tested object transformations depicted in A; the percentage of correct trials is both color-coded and reported as a numeric value, together with its significance according to a one-tailed t test (see key for significance levels). The right plots show the performance of each individual subject for each object condition, and its significance according to a one-tailed binomial test (see key for significance levels). Black frames show the quadrants of adjacent transformations for which feedback was not provided to the subjects, to better assess generalization (counterbalanced across animals).
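The caption above compares each rat's per-condition hit rate against the 50% chance level of the two-alternative task with a one-tailed binomial test. As a rough illustration of that kind of test (a minimal sketch; the function name and the trial counts below are hypothetical, not taken from the paper):

```python
# Hypothetical sketch: one-tailed binomial test of a per-condition hit
# rate against the 50% chance level of a two-alternative task.
# Function name and trial counts are illustrative, not from the paper.
from math import comb

def binom_p_one_tailed(n_correct, n_trials, chance=0.5):
    """Exact P(X >= n_correct) for X ~ Binomial(n_trials, chance)."""
    return sum(
        comb(n_trials, k) * chance**k * (1 - chance) ** (n_trials - k)
        for k in range(n_correct, n_trials + 1)
    )

# e.g., 70 correct responses in 100 trials:
print(binom_p_one_tailed(70, 100))  # well below the P < 0.001 threshold
```

The one-tailed form is appropriate here because the question is only whether performance exceeds chance, not whether it differs from chance in either direction.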
Fig. 3.
Generalization of recognition performance. (A) Mean performances obtained by pooling across different subsets of object conditions tested during phase III (error bars indicate SEM). The first grouping of bars (in gray) shows performance with previously trained object transformations (first bar), with the set of novel transformations for which feedback was withheld (second bar), and with a size-matched (i.e., acuity-matched) subset of novel transformations for which feedback was provided (third bar). Diagrams below each bar show which conditions were included in each subset according to the convention set forth in Fig. 2. Performances over these 3 groups of conditions were all significantly higher than chance (one-tailed t test; ***, P < 0.001) but not significantly different from each other. The white (fourth) bar shows the performance over the special case “no-feedback” condition that was always separated from the nearest “feedback” condition by at least 10° in size and 30° in azimuth. Such a condition existed only within the top-left and the top-right no-feedback quadrants (see diagram) and was tested for rats R2, R5, R3, and R6 (see Fig. 2B, Right). (B) Group mean performance (n = 6; black line) over the full set of novel object transformations tested during phase III, computed for the first, second, third, etc., presentation of each object in the set (shaded area shows the SEM). All performances along the curve were significantly above chance (one-tailed t test, P < 0.005) and were not significantly different from each other.
Fig. 4.
Generalization to novel lighting and elevation conditions. (A) Examples of lighting conditions tested during phase IVa of our study, with the first column showing default lighting conditions (i.e., same as during phases I–III) and the second column showing novel lighting conditions. The bottom examples show how manipulating lighting often produced a reversal in the relative luminance of different areas over the surface of an object. Under default lighting, the blue-framed image region was brighter than the red-framed region (top), but this relationship was reversed under the novel lighting condition (bottom). (B) Examples of elevation conditions tested during phase IVb of our study, with the first column showing default (0°) elevation conditions (i.e., same as during phases I–IVa) and the second column showing novel (±10° and ±20°) elevation conditions. Note the variation in the objects' silhouette produced by changing the elevation. (C) Rats' mean performance with the novel lighting conditions (ordinate) is plotted against performance with the matching "default" lighting conditions (abscissa). Performance on the novel lighting conditions was high overall, and in all but one condition was significantly above chance (one-tailed t test; see key for significance levels). The black arrow indicates the performance over the bottom example conditions shown in A. (D) Rats' mean performance with the novel elevation conditions (ordinate) is plotted against performance with the matching "default" elevation conditions (abscissa). Color convention as in C. In both C and D, error bars indicate standard errors of the means.

