Monkeys show recognition without priming in a classification task

Benjamin M Basile et al. Behav Processes. 2013 Feb;93:50-61. doi: 10.1016/j.beproc.2012.08.005. Epub 2012 Sep 1.

Abstract

Humans show visual perceptual priming by identifying degraded images faster and more accurately if they have seen the original images, while simultaneously failing to recognize the same images. Such priming is commonly thought, with little evidence, to be widely distributed phylogenetically. Following Brodbeck (1997), we trained rhesus monkeys (Macaca mulatta) to categorize photographs according to content (e.g., birds, fish, flowers, people). In probe trials, we tested whether monkeys were faster or more accurate at categorizing degraded versions of previously seen images (primed) than degraded versions of novel images (unprimed). Monkeys categorized reliably, but showed no benefit from having previously seen the images. This finding was robust across manipulations of image quality (color, grayscale, line drawings), type of image degradation (occlusion, blurring), levels of processing, and number of repetitions of the prime. By contrast, in probe matching-to-sample trials, monkeys recognized the primes, demonstrating that they remembered the primes and could discriminate them from other images in the same category under the conditions used to test for priming. Two experiments that replicated Brodbeck's (1997) procedures also produced no evidence of priming. This inability to find priming in monkeys under perceptual conditions sufficient for recognition presents a puzzle.


Figures

Figure 1
Diagram of a prime (left), and primed and unprimed classification trials (right). For primes at the beginning of the session, monkeys saw and touched (FR=10) a novel, unmasked photograph. On classification trials, monkeys touched a green start box (FR=2) to initiate the trial, saw and touched (FR=2) a masked sample image, and finally classified the masked image by touching (FR=2) one of the four colored symbols. The symbols used were: red triangle = birds, yellow star = fish, blue plus = people, green circle = flowers. On primed trials, the to-be-classified image had been seen once as one of the primes. On unprimed trials, the to-be-classified image was completely novel.
Figure 2
Examples of stimuli used in Experiments 1-11. A) Color photographs with black checkerboard masks, as used in Experiments 1, 2, and 10. B) Black & white photographs with black checkerboard masks, as used in Experiments 3 and 4. C) Black & white photographs with blur masks, as used in Experiments 5 and 6. D) Line drawings with white checkerboard masks, as used in Experiments 7 and 8. E) Line drawings with a white checkerboard mask in which the arrangement of un-occluded elements was either left intact or scrambled, as used in Experiment 9. F) Color photographs of cats or cars with black masks composed of randomly-placed black squares, as used in Experiment 11a. G) Color photographs of male or female rhesus monkeys with black masks composed of randomly-placed black squares, as used in Experiment 11b. For A-E, the same photograph is depicted to emphasize the various manipulations; however, primed and unprimed images were always novel for each experiment.
Figure 3
Classification accuracy and response latency for primed and unprimed images in Experiment 1. Left two bars: mean proportion (±SEM) of masked primed and unprimed images correctly classified. The dashed line represents the proportion correct that would be expected by chance. Right two bars: group mean of the individual median response latencies to correctly classify primed and unprimed images.
Figure 4
Classification accuracy and response latency for primed and unprimed images in Experiment 2. Left two bars: mean proportion (±SEM) of masked primed and unprimed images correctly classified. The dashed line represents the proportion correct that would be expected by chance. Right two bars: group mean of the individual median response latencies to correctly classify primed and unprimed images.
Figure 5
Classification accuracy and response latency for primed and unprimed images in Experiment 3. Left two bars: mean proportion (±SEM) of masked primed and unprimed images correctly classified. The dashed line represents the proportion correct that would be expected by chance. Right two bars: group mean of the individual median response latencies to correctly classify primed and unprimed images.
Figure 6
Classification accuracy and response latency for primed and unprimed images in Experiment 4. Left two bars: mean proportion (±SEM) of masked primed and unprimed images correctly classified. The dashed line represents the proportion correct that would be expected by chance. Right two bars: group mean of the individual median response latencies to correctly classify primed and unprimed images.
Figure 7
Classification accuracy and response latency for primed and unprimed images in Experiment 5. Left two bars: mean proportion (±SEM) of masked primed and unprimed images correctly classified. The dashed line represents the proportion correct that would be expected by chance. Right two bars: group mean of the individual median response latencies to correctly classify primed and unprimed images.
Figure 8
Classification accuracy and response latency for primes, primed images, and unprimed images in Experiment 6. Left three bars: mean proportion (±SEM) of unmasked primes, masked primed images, and masked unprimed images correctly classified. The dashed line represents the proportion correct that would be expected by chance. Right two bars: group mean of the individual median response latencies to correctly classify primed and unprimed images.
Figure 9
Classification accuracy and response latency for primes, primed images, and unprimed images in Experiment 7. Left three bars: mean proportion (±SEM) of unmasked primes, masked primed images, and masked unprimed images correctly classified. The dashed line represents the proportion correct that would be expected by chance. Right two bars: group mean of the individual median response latencies to correctly classify primed and unprimed images.
Figure 10
Classification accuracy and response latency for primed and unprimed images in Experiment 8. Left two bars: mean proportion (±SEM) of masked primed and unprimed images correctly classified. The dashed line represents the proportion correct that would be expected by chance. Right two bars: group mean of the individual median response latencies to correctly classify primed and unprimed images.
Figure 11
Classification accuracy for masked images that were intact or scrambled in Experiment 9. Bars depict mean proportion correct (±SEM). The dashed line represents the proportion correct that would be expected by chance.
Figure 12
Classification and recognition accuracy for primed and unprimed images in Experiment 10. Left two bars: mean proportion (±SEM) of masked primed and masked unprimed images correctly classified. Right two bars: mean proportion (±SEM) of unmasked primes and masked primes correctly recognized. The dashed line represents the proportion correct that would be expected by chance. Asterisks mark recognition accuracy that is significantly higher than chance.
Figure 13
Classification accuracy and response latency for primed and unprimed images in Experiment 11a. Left two bars: mean proportion (±SEM) of masked primed and unprimed images correctly classified. The dashed line represents the proportion correct that would be expected by chance. Right two bars: group mean of the individual median response latencies to correctly classify primed and unprimed images.
Figure 14
Classification accuracy and response latency for primed and unprimed images in Experiment 11b. Left two bars: mean proportion (±SEM) of masked primed and unprimed images correctly classified. The dashed line represents the proportion correct that would be expected by chance. Right two bars: group mean of the individual median response latencies to correctly classify primed and unprimed images.

References

1. Aron A, Aron E. Statistics for Psychology. Upper Saddle River, NJ: Prentice Hall; 1999.
2. Aust U, Huber L. The role of item- and category-specific information in the discrimination of people versus nonpeople images by pigeons. Animal Learning & Behavior. 2001;29(2):107–119.
3. Bar M, Biederman I. Subliminal visual priming. Psychological Science. 1998;9(6):464–469.
4. Basile BM, Hampton RR. Rhesus monkeys (Macaca mulatta) show robust primacy and recency in memory for lists from small, but not large, image sets. Behavioural Processes. 2010;83(2):183–190.
5. Basile BM, Hampton RR. Monkeys recall and reproduce simple shapes from memory. Current Biology. 2011;21(9):774–778.
