Lang Linguist Compass. 2009 Jan 1;3(1):128-156. doi: 10.1111/j.1749-818X.2008.00121.x.

Watching the Word Go by: On the Time-course of Component Processes in Visual Word Recognition

Jonathan Grainger et al.
Abstract

We describe a functional architecture for word recognition that focuses on how orthographic and phonological information cooperates in initial form-based processing of printed word stimuli prior to accessing semantic information. Component processes of orthographic processing and orthography-to-phonology translation are described, and the behavioral evidence in favor of such mechanisms is briefly summarized. Our theoretical framework is then used to interpret the results of a large number of recent experiments that have combined the masked priming paradigm with electrophysiological recordings. These experiments revealed a series of components in the event-related potential (ERP), thought to reflect the cascade of underlying processes involved in the transition from visual feature extraction to semantic activation. We provide a tentative mapping of ERP components onto component processes in the model, hence specifying the relative time-course of these processes and their functional significance.


Figures

Fig. 1
Architecture of a bi-modal interactive activation model (BIAM) of word recognition in which a feature/sublexical/lexical division is imposed on orthographic (O) and phonological (P) representations. In this architecture, orthography and phonology communicate directly at the level of whole-word representations (O-words, P-words), and also via a sublexical interface (O ⇔ P). Semantic representations (S-units) receive activation from whole-word orthographic and phonological representations (the details of inhibitory within-level and between-level connections are not shown).
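The interactive-activation dynamics that the BIAM inherits from the original interactive activation model can be illustrated in a few lines of code. The following is a minimal sketch, not the authors' implementation: the lexicon size, connection scheme, and all parameter values are assumptions for illustration, and within-level inhibition is crudely approximated by subtracting the pool mean.

```python
import numpy as np

# Minimal interactive-activation sketch of the BIAM's lexical loop
# (illustrative assumptions throughout; not the authors' code).
N = 5                                  # toy lexicon of five words
ALPHA, DECAY, A_MAX = 0.2, 0.1, 1.0    # arbitrary rate, decay, and ceiling

o_words = np.zeros(N)                  # O-word (orthographic) activations
p_words = np.zeros(N)                  # P-word (phonological) activations
s_units = np.zeros(N)                  # semantic (S-unit) activations

bottom_up = np.array([1.0, 0.2, 0.1, 0.0, 0.0])   # toy orthographic input

def update(act, net):
    """One IA step: excitation scaled by remaining headroom, plus passive decay."""
    net = np.maximum(net, 0.0)
    return np.clip(act + ALPHA * net * (A_MAX - act) - DECAY * act, 0.0, A_MAX)

for _ in range(50):
    # O-words and P-words excite each other; both feed semantics.
    o_next = update(o_words, bottom_up + p_words - o_words.mean())
    p_next = update(p_words, o_words - p_words.mean())
    s_next = update(s_units, o_words + p_words)
    o_words, p_words, s_units = o_next, p_next, s_next

print(np.argmax(s_units))              # 0: the best-supported word dominates
```

Running the loop shows the hallmark of interactive activation: the word unit with the strongest bottom-up support suppresses its competitors and settles near ceiling, while feedback between the orthographic and phonological lexicons reinforces the winner.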
Fig. 2
Details of the orthographic and phonological pathways of the BIAM. Visual features extracted from a printed word feed activation into a bank of retinotopic letter detectors (1). Information from different processing slots in the alphabetic array provides input to a relative position code for letter identities (2) and a graphemic code (3). The relative-position coded letter identities control activation at the level of whole-word orthographic representations (5). The graphemic code enables activation of the corresponding phonemes (4), which in turn activate compatible whole-word phonological representations (6).
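In Grainger's work, the relative position code of step (2) is typically formalized as open bigrams: ordered letter pairs that preserve relative, but not absolute, letter position. The sketch below is a minimal illustration of that idea; the `max_gap` window of two intervening letters is one common parameter choice, assumed here for illustration.

```python
from itertools import combinations

def open_bigrams(word: str, max_gap: int = 2) -> set[str]:
    """Relative-position code: ordered letter pairs with at most
    `max_gap` intervening letters, keeping left-to-right order only."""
    return {
        word[i] + word[j]
        for i, j in combinations(range(len(word)), 2)
        if j - i - 1 <= max_gap
    }

# Transposed-letter strings share most of their open bigrams, which is why
# relative-position codes tolerate local transpositions (e.g., barin/brain):
print(sorted(open_bigrams("brain") & open_bigrams("barin")))   # 8 of 9 shared
```

Because this code discards absolute position, it sits naturally between the retinotopic letter detectors of step (1) and the whole-word orthographic representations of step (5).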
Fig. 3
Grand average ERPs at the right frontal and occipital electrode sites for English words (a), French words (b), and a scalp voltage map centered on the 150 ms epoch (c). Note that in this and all subsequent ERP figures, negative voltages are plotted upward. Target stimulus onset is indicated by the vertical calibration bar transecting the x-axis, and 100-millisecond increments are indicated by tick marks. Voltage maps are differences between two conditions and reflect interpolated voltages across the scalp at a particular point in time.
Fig. 4
Scalp map of ERPs from Chauncey et al. (2008) showing the effect of stimulus font on the N/P150 component. This map was formed from difference waves centered at 150 ms post-target onset, calculated by subtracting target ERPs in the font-change condition (prime and target in different fonts) from target ERPs in the same-font condition.
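The subtraction behind such maps is simple to state in code. Below is a minimal sketch with placeholder data; the array shapes, 250 Hz sampling rate, and 50 ms analysis window are assumptions for illustration, not parameters from Chauncey et al. (2008).

```python
import numpy as np

# Illustrative difference-wave computation (all numbers are assumptions).
SFREQ = 250                              # sampling rate in Hz (hypothetical)
N_ELECTRODES, N_SAMPLES = 32, 200        # epoch starts at target onset (t = 0)

# Grand-average target ERPs per condition: (electrodes, time samples)
erp_same_font = np.random.randn(N_ELECTRODES, N_SAMPLES)     # placeholder data
erp_font_change = np.random.randn(N_ELECTRODES, N_SAMPLES)   # placeholder data

# Difference wave: font-change ERPs subtracted from same-font ERPs
diff_wave = erp_same_font - erp_font_change

# Mean voltage per electrode in a ~50 ms window centered on 150 ms post-target;
# these per-electrode values are what get interpolated into a scalp map.
t0, t1 = int(0.125 * SFREQ), int(0.175 * SFREQ)
map_values = diff_wave[:, t0:t1].mean(axis=1)
print(map_values.shape)                  # (32,): one value per electrode
```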
Fig. 5
N/P150 repetition effect when primes are aligned with targets (b) or displaced by one letter space to the left (a) or to the right (c). Adapted from Dufau et al. (2008).
Fig. 6
ERPs from a masked word priming study (Kiyonaga et al., 2007): visual word targets show an N250 effect (a), visual nonword targets show a similar effect (b), and scalp maps of the N250 effect are shown in (c); no N250 was observed for auditory targets in cross-modal masked priming (d). A second study (Holcomb and Grainger, 2006) found N250 effects for both fully repeated words and partial word repetitions (e).
Fig. 7
(a) ERPs generated by target words following masked presentation of pseudohomophone primes (e.g., bakon-BACON) versus orthographic control primes (e.g., bafon-BACON). (b, c) Voltage maps calculated by subtracting pseudohomophone-condition target ERPs from their control ERPs (b), and transposed-letter ERPs from their controls (c). Adapted from Grainger et al. (2006a).
Fig. 8
ERPs to visual targets in a masked priming paradigm when the task was semantic categorization (a – from Holcomb and Grainger 2006) and lexical decision (b – from Kiyonaga et al. 2007).
Fig. 9
Scalp voltage maps for visual masked priming repetition effects in three different time epochs surrounding the three ERP components reported in Holcomb and Grainger (2006).
Fig. 10
Target ERPs from the SOA experiment of Holcomb and Grainger (2007).
Fig. 11
Target ERPs from the prime duration experiment of Holcomb and Grainger (2007).
Fig. 12
ERP masked repetition priming effects mapped onto the Bi-modal Interactive Activation Model (note that we have turned the model on its side to better accommodate the temporal correspondence between the model and the ERP effects). This version of the BIAM incorporates the breakdown of sublexical orthographic representations (O-units) into a location-specific, retinotopic (R) code and a location-invariant, word-centered (W) code, as described in Figure 2.
Fig. 13
Auditory target ERPs in a cross-modal masked priming paradigm at two prime durations (adapted from Kiyonaga et al., 2007).
Fig. 14
Supraliminal cross-modal repetition priming results of Holcomb et al. (2005, 200 ms SOA) showing asymmetry of effects as a function of modality (visual primes – auditory targets on the left, auditory primes – visual targets on the right).
Fig. 15
Masked repetition and semantic priming (from Holcomb and Grainger, in press).
Fig. 16
From Midgley et al. (in press). L1–L2 translation priming (top) and L2–L1 translation priming (bottom).
