Incidental auditory category learning

Yafit Gabay et al. J Exp Psychol Hum Percept Perform. 2015 Aug;41(4):1124–1138. doi: 10.1037/xhp0000073. Epub 2015 May 25.

Abstract

Very little is known about how auditory categories are learned incidentally, without instructions to search for category-diagnostic dimensions, overt category decisions, or experimenter-provided feedback. This is an important gap because learning in the natural environment does not arise from explicit feedback and there is evidence that the learning systems engaged by traditional tasks are distinct from those recruited by incidental category learning. We examined incidental auditory category learning with a novel paradigm, the Systematic Multimodal Associations Reaction Time (SMART) task, in which participants rapidly detect and report the appearance of a visual target in 1 of 4 possible screen locations. Although the overt task is rapid visual detection, a brief sequence of sounds precedes each visual target. These sounds are drawn from 1 of 4 distinct sound categories that predict the location of the upcoming visual target. These many-to-one auditory-to-visuomotor correspondences support incidental auditory category learning. Participants incidentally learn categories of complex acoustic exemplars and generalize this learning to novel exemplars and tasks. Further, learning is facilitated when category exemplar variability is more tightly coupled to the visuomotor associations than when the same stimulus variability is experienced across trials. We relate these findings to phonetic category learning.
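The many-to-one auditory-to-visuomotor mapping at the heart of the SMART task can be sketched as a simple trial loop. The category labels, exemplar count, and mapping below are illustrative assumptions for the sketch, not the authors' actual stimulus parameters:

```python
import random

# Illustrative sketch of SMART-style trial structure (assumed names and
# counts, not the authors' actual experiment code). Four sound categories
# each map consistently to one of four screen locations; exemplars within
# a category vary acoustically from trial to trial.
CATEGORY_TO_LOCATION = {"cat1": 0, "cat2": 1, "cat3": 2, "cat4": 3}
EXEMPLARS_PER_CATEGORY = 6  # assumed number of variable exemplars per category

def run_trial(rng: random.Random) -> dict:
    """One trial: randomly select a category, draw an exemplar from it,
    then the visual target 'X' appears at the category-consistent location."""
    category = rng.choice(sorted(CATEGORY_TO_LOCATION))
    exemplar = rng.randrange(EXEMPLARS_PER_CATEGORY)
    target_location = CATEGORY_TO_LOCATION[category]  # many-to-one mapping
    return {"category": category,
            "exemplar": exemplar,
            "target_location": target_location}

rng = random.Random(0)
trials = [run_trial(rng) for _ in range(8)]
# The target location is fully predicted by the sound category, even though
# the particular exemplar varies -- this is what supports incidental learning.
assert all(t["target_location"] == CATEGORY_TO_LOCATION[t["category"]]
           for t in trials)
```

Because the overt task only requires detecting the 'X', any reaction-time benefit from the preceding sounds reflects incidental learning of the category-to-location regularity.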


Figures

Figure 1
Schematic spectrograms show the artificial nonspeech auditory category exemplars across time and frequency, for each uni-dimensional (UD1/UD2) and multidimensional (MD1/MD2) category. The dashed grey lines show the lower-frequency spectral peak that is common to all exemplars of a given category. Each colored line shows the higher-frequency spectral peak corresponding to a single category exemplar. See text for further details.
Figure 2
Overview of the Systematic Multimodal Associations Reaction Time (SMART) task. (a) There is a consistent mapping between auditory categories and screen locations, with acoustically variable sound exemplars associated with the category-consistent visual location. (b) The order of events in an example trial of the task. A sound category is randomly selected and an exemplar from it is chosen and presented. This is followed by the appearance of a red ‘X’ in the corresponding screen location. Participants then respond by pressing the key corresponding to the position of the ‘X’.
Figure 3
Reaction time (RT) to detect the visual target as a function of Block, presented across experiments. The RT Cost is the difference in average reaction time between Blocks 3 and 4 (and Blocks 8 and 9 in Experiment 4), summarized in the bottom panel.
Figure 4
Average accuracy in the post-training overt categorization task across experiments. Note that no overt categorization task was conducted in Experiment 2a. All sounds categorized in the overt categorization task were novel category exemplars not experienced in training. The dashed line represents chance-level performance.
