Brain Res. 2015 Nov 11;1626:218-31.
doi: 10.1016/j.brainres.2015.06.001. Epub 2015 Jun 26.

Neural measures of a Japanese consonant length discrimination by Japanese and American English listeners: Effects of attention


Miwako Hisagi et al. Brain Res.

Abstract

This study examined automaticity of discrimination of a Japanese length contrast for consonants (miʃi vs. miʃʃi) in native (Japanese) and non-native (American-English) listeners using behavioral measures and the event-related potential (ERP) mismatch negativity (MMN). Attention was manipulated either away from the auditory input via a visual oddball task (Visual Attend), or toward the input by asking the listeners to count auditory deviants (Auditory Attend). Results showed a larger MMN when attention was focused on the consonant contrast than away from it for both groups. The MMN was larger for consonant duration increments than decrements. No difference in MMN between the language groups was observed, but the Japanese listeners did show better behavioral discrimination than the American English listeners. In addition, behavioral responses showed a weak, but significant, correlation with MMN amplitude. These findings suggest that both acoustic-phonetic properties and phonological experience affect automaticity of speech processing. This article is part of a Special Issue entitled SI: Prediction and Attention.

Keywords: Attention; Consonants; Cross-linguistic; Japanese temporal-cues; Mismatch Negativity (MMN); Speech perception.

Figures

Fig. 1
Visual and Auditory Attend Tasks (short vs. long). The ERP responses to the standard stimulus for the short (miʃi) and long (miʃʃi) stimuli for each group and each task are displayed at Fz. AE = American English and JP = Japanese.
Fig. 2
Visual and Auditory Attend Tasks (American English = AE vs. Japanese = JP). The ERP responses to the deviant stimulus for AE and JP for each stimulus (miʃi vs. miʃʃi) and each task are displayed at Fz.
Fig. 3
Frontal-Inferior Model. The Grand Means for each Group (American English (AE) and Japanese (JP)) are compared for each stimulus (short and long) in each task (Auditory Attend and Visual Attend). Error bars show the standard error of the mean. The model consists of the average of midline and right frontal sites 3, 4, 5, 54, 55 and 58 and the inferior posterior sites (sites 31, 35, 36, 39, 40, 44, multiplied by −1 and then averaged with frontal sites). The MMN peak is identified with arrows.
Fig. 4
The mean values of the short (left) and long (right) deviant between the Auditory and Visual tasks. Eleven 20-ms time intervals were examined, with the first (1) representing 220-240 ms and the final (11) representing 420-440 ms. Vertical bars denote 95% confidence intervals.
Fig. 5
Pz-Vertex Model. The Grand Means for each Group (American English (AE) and Japanese (JP)) are compared for each stimulus (short and long) in each task (Auditory Attend and Visual Attend). Error bars show the standard error of the mean. The model consists of the average of the following posterior sites: 30, 42, 43, 55, 65. P3b is identified with arrows.
Fig. 6
Anterior Model. The Grand Means for each Group (American English (AE) and Japanese (JP)) are compared for each stimulus (short and long) in each task (Auditory Attend and Visual Attend). Error bars show the standard error of the mean. The model consists of the average of the following anterior sites: 6, 10, 11 and 14.
Fig. 7
Locations of electrode sites identified in the Principal Components Analysis to be included in the statistical analysis. The anterior sites 6, 10, 11 and 14 were used to create an Anterior Model; the frontal sites 3, 4, 5, 54, 55 and 58 and the inferior sites 31, 35, 36, 39, 40, 44 were used to create a Frontal-Inferior Model; and the posterior sites 30, 42, 43, 55, 65 were used to create a Pz-Vertex Model to be used in the analyses.

