Neuron. 2011 Sep 8;71(5):941-53. doi: 10.1016/j.neuron.2011.06.036.

Visual feature-tolerance in the reading network

Andreas M Rauschecker et al.

Abstract

A century of neurology and neuroscience shows that seeing words depends on ventral occipital-temporal (VOT) circuitry. Typically, reading is learned using high-contrast line-contour words. We explored whether a specific VOT region, the visual word form area (VWFA), learns to see only these words or recognizes words independent of the specific shape-defining visual features. Word forms were created using atypical features (motion-dots, luminance-dots) whose statistical properties control word-visibility. We measured fMRI responses as word form visibility varied, and we used TMS to interfere with neural processing in specific cortical circuits, while subjects performed a lexical decision task. For all features, VWFA responses increased with word-visibility and correlated with performance. TMS applied to motion-specialized area hMT+ disrupted reading performance for motion-dots, but not line-contours or luminance-dots. A quantitative model describes feature-convergence in the VWFA and relates VWFA responses to behavioral performance. These findings suggest how visual feature-tolerance in the reading network arises through signal convergence from feature-specialized cortical areas.


Figures

Figure 1
Figure 1. Alternative hypotheses of how information is communicated from V1 to language circuits
Different visual features are processed by functionally specialized regions in visual cortex. For example, words defined purely by motion cues may be processed by area hMT+. In hypothesis A, different cortical areas have separate access to the language network. In hypothesis B, all word stimuli, regardless of feature type, are converted to a common representation en route to the VWFA in VOT, which has unique access to the language network. Dotted connections represent communication between regions specifically for motion-defined stimuli, and solid connections represent communication for words defined by line contours. The response to different stimulus types in VWFA and hMT+, based on the difference in the black dotted connection, differentiates the two hypotheses. Schematic line contour and motion-dot stimuli are shown.
Figure 2
Figure 2. VWFA BOLD amplitude increases with visibility of words defined by different visual features
(Left column) Percent signal change for the stimulus events, as measured by the weight of the linear regressor (beta-weight), increases with word visibility. The three panels on the left show the VWFA response increase for words defined by motion-dots, luminance-dots, and line contours. (Right column) The response time course, averaged across all subjects, peaks at the same time and reaches a similar peak level for the three feature types. The colors of the bars and time course lines indicate corresponding conditions. The baseline (0% level) is defined by the average of the three values prior to stimulus onset. Error bars are +/− 1 SEM across subjects. See also Figure S1A for a related experiment and Figure S2 and Movies S1 and S2 for example stimuli.
Figure 3
Figure 3. BOLD response increases with lexical decision performance in VWFA but not V1
(A) The left panel shows percent correct in the lexical decision task and normalized BOLD signal amplitude for every subject, visibility level, and feature type (LC = line contour, Lum = luminance-dot, Mot = motion-dot, Mix = motion and luminance dots combined). The filled circles are the mean (+/− 1 SD) averaged across lexical performance bins (width = 6%). The BOLD signal is normalized by the maximum BOLD signal within that ROI for each subject across feature types and visibility. The right panels show the same points separated by feature type. The dashed lines are linear regression fits, and the insets show the regression coefficient (R) and significance levels (p). (B) The same analysis as in panel (A), but for a region of interest in left V1.
Figure 4
Figure 4. Human MT+ BOLD responses increase with visibility and lexical decision performance for motion-dot words
(A) Left hMT+ BOLD responses increase with visibility for motion-dot words. The plots in this figure follow the same conventions as the VWFA analysis in Figure 2. (B) The three plots show lexical decision performance (% correct) and normalized BOLD signal amplitude in left hMT+ separated by feature type. BOLD responses increase with lexical decision performance for motion- and luminance-dots, but not line contours. Other details as in Figure 3.
Figure 5
Figure 5. TMS to left hMT+ disrupts lexical decision performance only for motion-dot words
The average performance (% correct) is shown as a function of stimulus-pulse onset asynchrony (SOA). Subjects were consistently and significantly impaired at the lexical decision task for motion-dot words at an SOA of 87 ms (indicated by the arrow; 2nd pulse at 132 ms). There was no significant difference in performance for luminance-dot and line-contour words at any SOA (right panels). Chance performance is 50% (bottom dashed line), and the expected (no TMS effect) performance is 82% based on psychophysical visibility thresholds set prior to each subject’s TMS session (top dashed line).
Figure 6
Figure 6. BOLD response amplitudes for increasing levels of motion-dot word visibility in multiple visual field maps and regions of interest
The responses are shown for several left visual field maps (V1, the ventral portions of V2 and V3, hV4, VO-1/2), left hMT+, the VWFA and the right-hemisphere homologue of the VWFA (rVWFA). Responses for hMT+ and VWFA are as shown in Figures 2 and 4, respectively, and are included here for comparison. Response amplitude increases with motion coherence in hV4, hMT+, rVWFA, and VWFA. Other details as in Figure 2.
Figure 7
Figure 7. A model of responses to combinations of motion- and luminance-dot features
(A) Psychophysical thresholds on a lexical decision task to combinations of luminance- and motion-dot features (N=5). The dotted line is the predicted performance if features combine additively. The dashed curve is the predicted performance from a probability summation model with an exponent of n=3, which was the across-subject average value fit to the psychometric functions for motion-dot coherence and luminance-dot coherence separately. The outer boundary of the box is the predicted performance from a high-threshold model in which signals are completely independent. The features combine according to a rule that is sub-additive (n=1.7) but more effective than pure probability summation. The inset shows +/− 1 SEM across all subjects and mixture conditions. (B) VWFA BOLD response amplitudes with increasing motion-dot coherence at different fixed luminance-dot coherence levels. The curves are predictions from a probability summation model (see main text). The black, dark gray, and light gray are measured response levels (points) and model predictions (curves) for the three luminance-dot coherence levels. The normalized BOLD signal is the VWFA response divided by the response in left V1. The model parameters are shown in the inset; the exponent (n=1.7) is derived from the psychophysical data (panel A); the other parameters are fit to the data. See text for model details. Error bars are +/−1 SEM between subjects.
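The sub-additive combination rule described in the caption above (an exponent of n=1.7, between linear addition and pure probability summation) can be sketched as a Quick/Minkowski pooling of the two feature coherences. This is a minimal illustrative sketch, not the authors' actual model code; the function name and the assumption that coherences pool directly are our own simplifications:

```python
def quick_pooling(coh_motion, coh_luminance, n=1.7):
    """Combine motion- and luminance-dot coherences via Quick
    (Minkowski) pooling.

    n = 1   -> purely additive combination (the dotted line in panel A)
    n large -> winner-take-all, approximating fully independent
               signals (the outer box boundary in panel A)
    n = 1.7 -> the sub-additive rule reported in the paper
    """
    return (coh_motion**n + coh_luminance**n) ** (1.0 / n)

# With n > 1, the pooled coherence falls below the linear sum
# but above the stronger single feature:
combined = quick_pooling(0.3, 0.4)        # sub-additive: between 0.4 and 0.7
additive = quick_pooling(0.3, 0.4, n=1.0) # exactly 0.3 + 0.4 = 0.7
```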
Figure 8
Figure 8. Location of the VWFA and visual field maps
(A) In individual subjects, we performed retinotopic mapping to define the boundaries of multiple visual areas (V1, V2, V3, hV4, VO-1, VO-2). The map boundaries are shown by the blue lines. The VWFA localizer contrasted words with phase-scrambled words (p<0.001, uncorrected). All significantly responsive gray matter voxels on the ventral occipito-temporal cortex anterior to hV4 and falling outside of known retinotopic areas were included in the VWFA ROI (outlined in black). (B) Coronal slices showing the position of the VWFA ROI for each subject; the MNI y-coordinate is shown in the inset. VWFA activation is outlined by dotted circles. Left hemisphere cortical surface renderings adjacent to each slice show a ventral view with all identifiable retinotopic areas outlined in blue, contrast maps in orange, and the VWFA outlined in black. We could not identify retinotopic area VO-2 in S4 and S6. A parietal activation seen in several slices, which is not studied in this paper, was also present routinely. See also Figures S2 and S3 and Table S1.
