Front Neurosci. 2015 Jan 29;8:451. doi: 10.3389/fnins.2014.00451. eCollection 2014.

Perceptual factors contribute more than acoustical factors to sound localization abilities with virtual sources


Guillaume Andéol et al. Front Neurosci.

Erratum in

Abstract

Human sound localization abilities rely on binaural and spectral cues. Spectral cues arise from interactions between the sound wave and the listener's body (head-related transfer function, HRTF). Large individual differences have been reported in localization abilities, even in young normal-hearing adults. Several studies have attempted to determine whether localization abilities depend mostly on acoustical cues or on the perceptual processes involved in the analysis of these cues. These studies have yielded inconsistent findings, which could result from methodological issues. In this study, we measured sound localization performance with normal and modified acoustical cues (i.e., with individual and non-individual HRTFs, respectively) in 20 naïve listeners. Test conditions were chosen to address most methodological issues from past studies. Procedural training was provided prior to the sound localization tests. The results showed no direct relationship between behavioral results and an acoustical metric (the spectral-shape prominence of individual HRTFs). Despite uncertainties due to technical issues with the normalization of the HRTFs, large acoustical differences between individual and non-individual HRTFs appeared to be needed to produce behavioral effects. A subset of 15 listeners then trained in the sound localization task with individual HRTFs. Training included either visual correct-answer feedback (for the test group) or no feedback (for the control group), and was assumed to elicit perceptual learning for the test group only. Few listeners from the control group, but most listeners from the test group, showed significant training-induced learning. For the test group, learning was related to pre-training performance (i.e., the poorer the pre-training performance, the greater the amount of learning) and was retained after 1 month. The results are interpreted as favoring a larger contribution of perceptual factors than of acoustical factors to sound localization abilities with virtual sources.
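As background to the abstract's setup: a "virtual source" is rendered over headphones by filtering a mono signal with the listener's HRTFs, in the time domain, by convolving it with the left- and right-ear head-related impulse responses (HRIRs). The sketch below illustrates the principle only; the toy HRIRs and all names are illustrative stand-ins, not the measured, normalized filters used in the study.

```python
import numpy as np

def render_virtual_source(mono, hrir_left, hrir_right):
    """Convolve a mono signal with a pair of head-related impulse
    responses (HRIRs, the time-domain form of HRTFs) to produce a
    two-channel binaural signal simulating a virtual source."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Toy stand-ins (not measured HRIRs): a noise burst and synthetic HRIRs
# with an 8-sample interaural time difference and a level difference.
rng = np.random.default_rng(0)
mono = rng.standard_normal(1024)
hrir_left = np.zeros(64)
hrir_left[0] = 1.0      # near ear: early, louder
hrir_right = np.zeros(64)
hrir_right[8] = 0.5     # far ear: delayed, softer
binaural = render_virtual_source(mono, hrir_left, hrir_right)
print(binaural.shape)   # (2, 1087): two ears, len(mono) + len(hrir) - 1
```

With individual HRIRs the simulated source carries the listener's own spectral cues; substituting another listener's HRIRs (non-individual HRTFs) is exactly the cue modification the study manipulates.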

Keywords: head-related transfer function; individual differences; perceptual learning; procedural learning; sound localization.


Figures

Figure 1
Interior view (left) and exterior schematic view (right) of the experimental apparatus.
Figure 2
Individual judgment position against target position with individual and non-individual HRTFs (black and gray dots, respectively) at the pre-test in the up/down dimension. Each panel couple is for a different listener (N = 20).
Figure 3
Same as Figure 2 but for the front/back dimension. The front/back reversal rates for individual and non-individual HRTFs are indicated in each panel couple.
Figure 4
Same as Figure 2 but for the left/right dimension.
Figure 5
Individual localization scores at the pre-test against spectral strength with individual HRTFs. (A–C) Up/down errors (in °) for high, middle, and low target elevations. (D–F) Up → down, down → up, and front/back reversal rates (in %).
Figure 6
Individual localization scores with non-individual HRTFs against those with individual HRTFs at the pre-test. (A–C) Up/down errors (in °) for high, middle, and low target elevations. (D–F) Up → down, down → up, and front/back reversal rates (in %). Each symbol is for a different listener. Circles and bars represent the means and 95% confidence intervals averaged across about 30 (up/down error) to 96 (front/back reversals) target positions. Filled circles indicate the listeners with a significant difference between individual and non-individual HRTFs according to Wilcoxon tests.
Figure 7
Individual signed differences in localization score against ISD between non-individual and individual HRTFs. (A–C) Up/down errors (in °) for high, middle, and low target elevations. (D–F) Up → down, down → up, and front/back reversal rates (in %).
Figure 8
Individual judgment position against target position with individual HRTFs at the pre- and post-tests (black and gray dots, respectively) for the test and control listeners (left and right columns, respectively) in the up/down dimension. Each panel couple is for a different listener.
Figure 9
Same as Figure 8 but for the front/back dimension.
Figure 10
Individual learning amounts (pre-test minus post-test localization score) against pre-test scores for the test and control listeners (blue and pink symbols, respectively) with individual HRTFs. (A–C) Up/down errors (in °) for high, middle, and low target elevations. (D–F) Up → down, down → up, and front/back reversal rates (in %). Filled symbols indicate the listeners with a significant difference between pre- and post-tests according to Wilcoxon tests.
Figure A1
Individual judgment position against target position using correct and incorrect DTFs (black and gray dots, respectively) with individual HRTFs in the left/right, up/down, and front/back dimensions. Each panel couple is for a different listener (N = 5).
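Several of the captions above report a front/back reversal rate: the percentage of trials on which the judged position falls in the opposite front/back hemifield from the target. A minimal sketch of that score, under simplifying assumptions (azimuth-only scoring, a `guard` zone near the lateral plane where front/back is ill-defined); the function name, the guard criterion, and the toy data are illustrative, not the paper's exact procedure:

```python
import numpy as np

def front_back_reversal_rate(target_az, judged_az, guard=0.0):
    """Percentage of trials where the judged azimuth lies in the
    opposite front/back hemifield from the target. Azimuths in degrees:
    0 = front, +/-90 = sides, 180 = back. Targets within `guard`
    degrees of the lateral plane are excluded (a simplifying
    assumption, not necessarily the paper's criterion)."""
    t = np.asarray(target_az, dtype=float)
    j = np.asarray(judged_az, dtype=float)
    # Signed "frontness": positive in the front hemifield, negative in back.
    front_t = np.cos(np.radians(t))
    front_j = np.cos(np.radians(j))
    valid = np.abs(front_t) > np.sin(np.radians(guard))
    reversed_trials = (front_t * front_j) < 0
    return 100.0 * reversed_trials[valid].mean()

targets = [0, 30, 150, 180]
judged = [10, 160, 140, 170]  # the second trial is a front -> back reversal
print(front_back_reversal_rate(targets, judged))  # 25.0
```

The up/down error in the captions is, by contrast, an angular error in degrees within the vertical dimension, so the two scores separate reversal-type confusions from graded elevation errors.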
