PLoS One. 2010 Mar 31;5(3):e9889. doi: 10.1371/journal.pone.0009889

Words and melody are intertwined in perception of sung words: EEG and behavioral evidence


Reyna L Gordon et al. PLoS One. 2010.

Abstract

Language and music, two uniquely human cognitive abilities, are combined in song, rendering it an ecological model for comparing speech and music cognition. The present study was designed to determine whether words and melodies in song are processed interactively or independently, and to examine the influence of attention on the processing of words and melodies in song. Event-related brain potentials (ERPs) and behavioral data were recorded while non-musicians listened to pairs of sung words (prime and target) presented in four experimental conditions: same word, same melody; same word, different melody; different word, same melody; different word, different melody. Participants were asked to attend to either the words or the melody, and to perform a same/different task. In both attentional tasks, different word targets elicited an N400 component, as predicted from previous results. Most interestingly, different melodies (sung with the same word) elicited an N400 component followed by a late positive component. Finally, ERP and behavioral data converged in showing interactions between the linguistic and melodic dimensions of sung words. The finding that the N400 effect, a well-established marker of semantic processing, was modulated by musical melody in song suggests that variations in musical features affect word processing in sung language. Implications of the interactions between words and melody are discussed in light of evidence for shared neural processing resources between the phonological/semantic aspects of language and the melodic/harmonic aspects of music.


Conflict of interest statement

Competing Interests: The authors have declared that no competing interests exist.

Figures

Figure 1
Figure 1. Stimuli examples.
Examples of stimuli in the four experimental conditions: same word, same melody (a); same word, different melody (b); different word, same melody (c); different word, different melody (d).
Figure 2
Figure 2. Word effect.
Grand average ERPs time-locked to the onset of targets with the same word as the prime (solid line) or a different word from the prime (dashed line), in the Linguistic Task (A) and Musical Task (B). Selected traces from 9 electrodes are presented. Amplitude (in microvolts) is plotted on the ordinate (negative up) and time (in milliseconds) on the abscissa.
Figure 3
Figure 3. Melody effect.
Grand average ERPs time-locked to the onset of targets with the same melody as the prime (solid line) or a different melody from the prime (dashed line), in the Linguistic Task (A) and Musical Task (B). Selected traces from 9 electrodes are presented. Amplitude (in microvolts) is plotted on the ordinate (negative up) and time (in milliseconds) on the abscissa.
Figure 4
Figure 4. Word by Melody interaction.
(A) For each of the 4 experimental conditions (averaged across both tasks because there was no Task × Word × Melody interaction): the reaction time in milliseconds (gray bars, left Y-axis) and the magnitude (µV) of the mean amplitude of the ERPs in the 300–500 ms latency range, averaged across all electrodes (black bars, right Y-axis). (B) ERPs associated with the 4 experimental conditions (averaged across both tasks because there was no Task × Word × Melody interaction) for electrodes Cz (top) and Pz (bottom). Solid line: same word, same melody; dotted line: same word, different melody; dashed line: different word, same melody; dash-dotted line: different word, different melody.
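For readers who want to reproduce this kind of measure, a minimal sketch (in Python with NumPy; not the authors' analysis code) of extracting the mean ERP amplitude in the 300–500 ms latency window, averaged across all electrodes, is given below. The array shapes, sampling rate, and function name are illustrative assumptions.

    import numpy as np

    def mean_amplitude(erp, times, t_min=0.300, t_max=0.500):
        # erp: (n_electrodes, n_samples) grand-average waveform in microvolts,
        # time-locked to target onset; times: (n_samples,) time axis in seconds.
        window = (times >= t_min) & (times <= t_max)
        # Average over the latency window first, then across all electrodes.
        return erp[:, window].mean(axis=1).mean()

    # Example with assumed values: 32 electrodes, a 1 s epoch sampled at 500 Hz.
    times = np.arange(0.0, 1.0, 1.0 / 500.0)
    erp = np.random.randn(32, times.size)  # placeholder data, not real ERPs
    print(f"Mean 300-500 ms amplitude: {mean_amplitude(erp, times):.2f} uV")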
Figure 5
Figure 5. Additive model test.
Mean amplitude (in µV) of ERP difference waves in the 300–500 ms latency band, for the observed double variation (W≠M≠ minus W=M=) and the modeled sum of the simple variations (W≠M= minus W=M=) + (W=M≠ minus W=M=), at midline electrodes (dark gray bars) and lateral electrodes (light gray bars).
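The additivity logic can be made concrete with a short sketch (Python/NumPy; an illustration under assumed inputs, not the published analysis): the observed double-variation difference wave is compared with the sum of the two simple-variation difference waves. If word and melody were processed independently, the two quantities should be approximately equal; a reliable discrepancy suggests interactive processing.

    import numpy as np

    def additivity_test(same_same, diff_same, same_diff, diff_diff):
        # Each argument: mean 300-500 ms amplitude (µV) per electrode for one
        # condition; same_same is W=M= (baseline), diff_same is W≠M=,
        # same_diff is W=M≠, and diff_diff is W≠M≠.
        observed = diff_diff - same_same                  # W≠M≠ minus W=M=
        modeled = (diff_same - same_same) + (same_diff - same_same)  # sum of simple variations
        return observed, modeled

    # Example with made-up per-electrode values (9 electrodes, assumed effects):
    rng = np.random.default_rng(0)
    base = rng.normal(0.0, 1.0, 9)
    obs, mod = additivity_test(base, base - 1.2, base - 0.9, base - 1.8)
    print(obs.mean(), mod.mean())  # observed vs. modeled mean amplitude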

References

    1. Besson M, Schön D. Comparison between language and music. Annals of the New York Academy of Sciences. 2001;930:232–258. - PubMed
    1. Patel AD, Peretz I. Is music autonomous from language? A neuropsychological appraisal. In: Deliege I, Sloboda J, editors. Perception and cognition of music. London: Erlbaum Psychology Press; 1997. pp. 191–215.
    1. Peretz I, Coltheart M. Modularity of music processing. Nature Neuroscience. 2003;6:688–691. - PubMed
    1. Koelsch S. Neural substrates of processing syntax and semantics in music. Current Opinion in Neurobiology. 2005;15:207–212. - PubMed
    1. Patel AD. New York: Oxford University Press; 2008. Music, Language, and the Brain.
