Exp Brain Res. 2008 Sep;190(1):11-30. doi: 10.1007/s00221-008-1445-2. Epub 2008 Jun 25.

Human sound localization: measurements in untrained, head-unrestrained subjects using gaze as a pointer


Luis C Populin. Exp Brain Res. 2008 Sep.

Abstract

Studies of sound localization in humans have used various behavioral measures to quantify the observers’ perceptions; a non-comprehensive list includes verbal reports, head pointing, gun pointing, stylus pointing, and laser aiming. Comparison of localization performance reveals that in humans, just as in animals, different results are obtained with different experimental tasks. Accordingly, to circumvent problems associated with task selection and training, this study used gaze, an ethologically valid behavior for spatial pointing in species with a specialized retinal fovea, to measure the sound localization perception of human subjects. Orienting using gaze as a pointer does not require training, preserves the natural link between perception and action, and allows for direct behavioral comparisons across species. The results revealed, unexpectedly, a large degree of variability across subjects in both accuracy and precision. The magnitude of the average angular localization errors for the most eccentric horizontal targets, however, was very similar to that documented in studies that used head pointing, whereas the magnitude of the localization errors for the frontal targets was considerably larger. In addition, an overall improvement in sound localization in the context of the memory-saccade task and a lack of effect of initial eye and head position on perceived sound location were documented.

Figures

Fig. 1
Schematic representation of the experimental tasks. a Fixation task. Either an acoustic or a visual target was presented anywhere in the frontal hemifield without behavioral requirements. The subject was instructed to respond by looking at the perceived location of the source. b Memory-saccade task. This task started with a fixation LED presented straight ahead. During fixation, a target was presented elsewhere in the frontal hemifield. The subject’s task was to remember the position of the target while maintaining fixation and to look at it when the fixation LED was turned off. Trials in which the subject oriented to the target before the fixation LED was turned off were immediately terminated. c Dissociated saccade task. A red fixation LED flashing at 5 Hz signaled to the subject the presentation of a trial of this type. The subject was required to fixate the flashing LED at (0°, 0°) or (±16°, 0°) and to align the head with it. Next, a green LED was presented from the same position or from either the left or the right (±16°, 0°) of the subject. The subject was instructed to direct his gaze to the green LED without moving his head, which accomplished the dissociation of eye and head position. Coinciding with the offset of the green LED, an acoustic target was presented elsewhere in the frontal hemifield. In some trials the red flashing LED straight ahead turned solid green, effectively aligning the eyes and the head at the starting position. The subject was required to look at the perceived location of the target at the offset of the green LED
Fig. 2
Gaze shifts and head movements to broadband noise targets recorded in the context of the fixation task from three subjects. The data are plotted in spherical coordinates from the perspective of an observer outside the sphere. The parallels on the sphere, which is tilted 10° downward to facilitate visualization of the data, are plotted every 10° and the meridians every 20°. The positions of the acoustic targets are illustrated with round, hollow symbols on the gaze plots (a, c, e). All three subjects started their orienting movements from or near the straight-ahead position, despite the fact that no instructions were provided to that effect. Note the differences among the subjects in the trajectories of their gaze and head movements and in their accuracy and precision
Fig. 3
Summary of visual and sound localization performance from all nine subjects combined. (a–c) Spherical plots illustrating visual localization performance in the form of final gaze position from the perspective of an observer outside the sphere, a frontal view of which is shown in a. The parallels on the spheres are plotted 10° apart and the meridians 20° apart; the straight-ahead position corresponds to the intersection of the line representing the equator and the meridian at the center of the plot. Parts b and c are the lateral portions of the sphere plotted in a, rotated 45° toward the midline to depict more clearly the data corresponding to the most eccentric targets. The position of each target is represented with small filled symbols, and the corresponding final mean gaze positions, to which they are connected with thin broken lines, are represented with larger filled symbols. The ellipses/circles surrounding the final gaze position symbols represent one standard deviation. (d–f) Final mean gaze position plots illustrating auditory localization performance for all nine subjects
Fig. 4
Summary of sound localization performance from subject DJT. The data are plotted as described in Fig. 3
Fig. 5
Summary of sound localization performance from subject JMB. The data are plotted as described in Fig. 3
Fig. 6
Summary of sound localization performance from subject MPS. The data are plotted as described in Fig. 3
Fig. 7
Average gaze shift latency to horizontal and vertical acoustic targets recorded in the context of the fixation task. The equivalent measure from horizontal visual targets is included for comparison. Data from all nine subjects are included
Fig. 8
Summary of sound localization of remembered acoustic targets. The data were collected with the memory-saccade task and are presented separately for each of the two subjects tested. The data are plotted as described in Fig. 3
Fig. 9
Localization of acoustic targets presented while the positions of the eyes and head were misaligned. Data were collected from three subjects with the dissociated saccade task. (a, b; g, h; and m, n) Gaze and head horizontal position plotted as a function of time and synchronized to the onset of the acoustic target. Note that gaze position during the fixation period, before the onset of the acoustic target at time 0 ms, was (±16°, 0°) or (0°, 0°). (e, k, q) Corresponding summary plots of final gaze position for each of the three starting gaze position conditions. (c, d; i, j; and o, p) Gaze and head horizontal position plotted as a function of time and synchronized to the onset of the acoustic targets at time 0 ms. Note that in this condition the gaze of the subject was aligned straight ahead at the time of presentation of the acoustic stimuli, but the head was aligned with LEDs at (±16°, 0°) or (0°, 0°). (f, l, r) Corresponding summary plots of final gaze position for each of the three starting head position conditions. The hollow symbols represent the initial gaze and head positions as labeled (R right, C center, and L left)
Fig. 10
Comparison of sound localization performance in the azimuthal dimension across studies. Average angular error (a) and average Kappa-1 (b) from the condition without behavioral constraints or spatial references (i.e., the fixation task) are plotted with heavy open circles and a heavy solid line; data from individual subjects are plotted with small open circles. Data from Makous and Middlebrooks (1990) are plotted with upward-pointing triangles. Data from Wightman and Kistler (1990), as reported in Fig. 7 of Makous and Middlebrooks (1990), are plotted with open squares; the broken lines transecting the symbols represent the extent of azimuth over which the data were averaged. Angular error data from Carlile et al. (1997) are plotted with rhomboids. Data from the rhesus monkey, acquired under conditions similar to those of the present study by Populin (2006), are plotted with asterisks and broken lines
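The angular errors compared in Fig. 10 are angles between a target direction and the final gaze direction, both expressed in spherical coordinates. As an illustrative sketch only (the function names and the example trial are hypothetical, not taken from the study), the great-circle angle between two (azimuth, elevation) directions can be computed like so:

```python
import math

def to_unit_vector(azimuth_deg, elevation_deg):
    """Convert an (azimuth, elevation) direction in degrees to a 3-D unit vector."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    return (math.cos(el) * math.cos(az),
            math.cos(el) * math.sin(az),
            math.sin(el))

def angular_error_deg(target, response):
    """Great-circle angle (degrees) between a target direction and a gaze
    response, each given as an (azimuth, elevation) pair in degrees."""
    t = to_unit_vector(*target)
    r = to_unit_vector(*response)
    dot = sum(a * b for a, b in zip(t, r))
    dot = max(-1.0, min(1.0, dot))  # guard against rounding just outside [-1, 1]
    return math.degrees(math.acos(dot))

# Hypothetical trial: target at 16 deg azimuth, gaze landed at (12 deg, 3 deg)
print(round(angular_error_deg((16.0, 0.0), (12.0, 3.0)), 1))  # → 5.0
```

Averaging this quantity over trials gives a scalar accuracy measure comparable across pointing methods, which is the kind of summary plotted in panel a.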

References

    1. Albano JE, Mishkin M, Westbrook LE, Wurtz RH. Visuomotor deficits following ablation of monkey superior colliculus. J Neurophysiol. 1982;48:338–351. - PubMed
    2. Biesiadecki MG, Populin LC. Effects of target modality on primate gaze shifts. Soc Neurosci Abstr. 2005;35:858.15.
    3. Butler RA, Humanski RA, Musicant AD. Binaural and monaural localization of sound in two-dimensional space. Perception. 1990;19:241–256. - PubMed
    4. Carlile S, Leong P, Hyams S. The nature and distribution of errors in sound localization by human listeners. Hearing Res. 1997;114:179–196. - PubMed
    5. Fisher NI, Lewis T, Embleton BJJ. Statistical analysis of spherical data. Cambridge, UK: Cambridge University Press; 1987.
