Review

Spatiotemporal dynamics of word processing in the human cortex

Ksenija Marinković. Neuroscientist. 2004 Apr;10(2):142-52. doi: 10.1177/1073858403261018.

Abstract

Understanding language relies on concurrent activation of multiple areas within a distributed neural network. Hemodynamic measures (fMRI and PET) indicate their location, and electromagnetic measures (magnetoencephalography and electroencephalography) reveal the timing of brain activity during language processing. Their combination can show the spatiotemporal characteristics (where and when) of the underlying neural network. Activity to written and spoken words starts in sensory-specific areas and progresses anteriorly via respective ventral ("what") processing streams toward the simultaneously active supramodal regions. The process of understanding a word in its current context peaks about 400 ms after word onset. It is carried out mainly through interactions of the temporal and inferior prefrontal areas on the left during word reading and bilateral temporo-prefrontal areas during speech processing. Neurophysiological evidence suggests that lexical access, semantic associations, and contextual integration may be simultaneous as the brain uses available information in a concurrent manner, with the final goal of rapidly comprehending verbal input. Because the same areas may participate in multiple stages of semantic or syntactic processing, it is crucial to consider both spatial and temporal aspects of their interactions to appreciate how the brain understands words.


Figures

Figure 1. Electromagnetic and hemodynamic methods
This figure illustrates responses to words measured with event-related potentials (ERP), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI). Sample waveforms from one ERP channel (a) and three complementary sensors at one MEG location (c) reflect neural activity in real time. The N400 deflection and its magnetic equivalent (N400m), thought to index semantic integration, are marked with blue arrows. These electromagnetic methods, however, cannot unambiguously localize the generators of the activity measured on the scalp. A topographic estimate of the ERP signal is illustrated in (b) and the anatomically constrained MEG (aMEG) method in (d); see Box 1 for explanation. The bottom row shows an example of fMRI activity to words as seen in axial slices (e) or on the cortical surface, which is inflated for better visibility (f). fMRI has excellent spatial resolution, revealing activations to words in the left inferotemporal and left prefrontal regions, but cannot accurately reflect the timing of their engagement (fMRI data from Oscar-Berman, with permission).
Figure 2. The basis of the anatomically constrained MEG analysis method (in Box 1)
MEG signals are recorded with a whole-head device and presented as waveforms (1a) or magnetic fields (1b) on the surface of the head. Based on high-resolution anatomical MRI (2a), the cortical surface of each subject is reconstructed (2b) and used to estimate signal generators. The activity is estimated as it unfolds in time, resulting in "brain movies" (3). Because most of the cortex is hidden in folds, the reconstructed surface is inflated for better visibility of the estimated activity. Dark gray denotes the folds and light gray the crowns of the cortical gyri.
Figure 3. Group average aMEG estimated activity to spoken and written words
Subjects were presented with spoken or written words denoting animals or objects and were asked to estimate their size. A comparison of the group average activation to spoken (auditory modality) and written (visual modality) words, obtained in the same group of subjects, is shown. Snapshots of "brain movies" at selected latencies illustrate how the activity starts in sensory-specific areas and spreads anteriorly via the ventral ("what") processing streams toward the highly overlapping, apparently supramodal temporal and prefrontal regions. The process of understanding a word peaks at about 400 ms after word onset (the N400) and results from interactive contributions of these areas. Whereas processing of written words was left-lateralized, understanding spoken words engaged bilateral regions with left-dominant prefrontal activity (adapted from Marinkovic et al., 2003, permission pending).
Figure 4. Intracranial ERPs from inferotemporal cortex during a word recognition task
An electrode was implanted in the inferotemporal area to direct surgical treatment of epilepsy. Intracranial ERPs can unambiguously identify the timing and location of task-related brain processes based on steep potential gradients and inversions. In this case, large, locally generated potentials were evoked during early (170 ms, marked with ▼), transitional (220 ms, marked with ●), and later integration (450 ms, marked with ■) processing stages. This evidence suggests that adjacent or overlapping regions in the inferotemporal area may play distinct roles in different aspects of verbal processing, with different timing and at different processing stages. Consequently, it is important to consider both spatial and temporal information to gain a realistic view of word processing in the brain (adapted from Halgren et al., 1994, permission pending).
Figure 5. Time-collapsed intracranial N400
Based on intracranial ERP recordings across many patients (Halgren and others 1994), the areas contributing to the N400 (pink) evoked by written words lie along the ventral visual stream, confirming the localization estimates obtained with noninvasive methods such as aMEG. With the exception of the temporopolar region, these observations are in general agreement with fMRI studies of language processing. Other colors denote areas that generate other intracranial ERP deflections.
Figure 6. A two-stage model of processing spoken and written words
aMEG and other evidence indicate that word-evoked activity starts in sensory-specific areas and progresses anteriorly toward sensory-nonspecific areas, primarily in the temporal and prefrontal regions. During the first ~200 ms, material-specific processing takes place in areas along the ventral processing streams; the output is then forwarded to distributed supramodal areas for further processing. The brain appears to use all relevant information concurrently in an effort to understand verbal input as rapidly and completely as possible. Sustained interactions among multiple areas allow for the semantic, mnemonic, emotional, and contextual integration of meaning.
Figure 7. Early left prefrontal activity to spoken words
Group average aMEG to spoken words during a semantic task at 240 ms after acoustic onset. Subjects heard a series of words denoting objects or animals and were asked to judge their size. Words that share their initial phoneme with many other words (a high-density neighborhood) evoke more left prefrontal activation at 240 ms than words with fewer competitors. The anterior left inferior prefrontal cortex (aLIPC) may provide top-down facilitation during understanding of spoken words, in accord with other evidence showing its contribution in ambiguous situations.

