J Neurosci. 2020 Jan 22;40(4):917-931.
doi: 10.1523/JNEUROSCI.2700-19.2019. Epub 2019 Dec 20.

Categorical Biases in Human Occipitoparietal Cortex

Edward F Ester et al. J Neurosci.

Abstract

Categorization allows organisms to generalize existing knowledge to novel stimuli and to discriminate between physically similar yet conceptually different stimuli. Humans, nonhuman primates, and rodents can readily learn arbitrary categories defined by low-level visual features, and learning distorts perceptual sensitivity for category-defining features such that differences between physically similar yet categorically distinct exemplars are enhanced, whereas differences between equally similar but categorically identical stimuli are reduced. We report a possible basis for these distortions in human occipitoparietal cortex. In three experiments, we used an inverted encoding model to recover population-level representations of stimuli from multivoxel and multielectrode patterns of human brain activity while human participants (both sexes) classified continuous stimulus sets into discrete groups. In each experiment, reconstructed representations of to-be-categorized stimuli were systematically biased toward the center of the appropriate category. These biases were largest for exemplars near a category boundary, predicted participants' overt category judgments, emerged shortly after stimulus onset, and could not be explained by mechanisms of response selection or motor preparation. Collectively, our findings suggest that category learning can influence the earliest stages of cortical visual processing.

SIGNIFICANCE STATEMENT Category learning enhances perceptual sensitivity for physically similar yet categorically different stimuli. We report a possible mechanism for these changes in human occipitoparietal cortex. In three experiments, we used an inverted encoding model to recover population-level representations of stimuli from multivariate patterns in occipitoparietal cortex while participants categorized sets of continuous stimuli into discrete groups. The recovered representations were systematically biased by category membership, with larger biases for exemplars adjacent to a category boundary. These results suggest that mechanisms of categorization shape information processing at the earliest stages of the visual system.

Keywords: EEG; categorization; fMRI; human; occipital cortex.


Figures

Figure 1.
Overview of Experiment 1. A, Participants viewed displays containing a circular aperture of iso-oriented bars. On each trial, the bars were assigned 1 of 15 unique orientations from 0° to 168°. B, We randomly selected and designated one stimulus orientation as a category boundary (black dashed line) such that the seven orientations counterclockwise from this value were assigned to Category 1 (red lines) and the seven orientations clockwise from this value were assigned to Category 2 (blue lines). C, After training, participants rarely miscategorized orientations. D, Response latencies were significantly longer for exemplars near the category boundary. RT, Response time. C, D, Shaded regions represent ±1 within-participant SEM.
Figure 2.
Category decoding performance. A, We trained classifiers on activation patterns evoked by exemplars at the center of each category during the orientation mapping and category discrimination tasks (blue lines) and then used the trained classifier to predict the category membership of exemplars adjacent to the category boundary (red lines). B, Decoding accuracy was significantly higher during the category discrimination task relative to the orientation mapping task (p = 0.01), suggesting that activation patterns evoked by exemplars adjacent to the category boundary became more similar to activation patterns evoked by exemplars at the center of each category during the categorization task. The absence of robust decoding performance during the orientation mapping task cannot be attributed to poor signal or a uniform enhancement of orientation representations by attention, as a decoder trained and tested on activation patterns associated with exemplars at the center of each category (C) yielded above-chance decoding during both behavioral tasks (D). Decoding performance was computed from activation patterns in V1. Error bars indicate ±1 SEM.
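The decoding logic described above — train a classifier on category-center exemplars, then test its generalization to boundary exemplars — can be sketched with simulated activation patterns. Everything below (pattern dimensions, noise levels, the choice of logistic regression) is an illustrative assumption, not the study's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_vox = 50  # hypothetical voxel count

# Simulated category-center activation patterns: two separable means.
mu1, mu2 = rng.normal(size=n_vox), rng.normal(size=n_vox)
center = np.vstack([mu1 + 0.3 * rng.normal(size=(40, n_vox)),
                    mu2 + 0.3 * rng.normal(size=(40, n_vox))])
y_center = np.repeat([0, 1], 40)

# Boundary exemplars: weaker (attenuated) versions of the same patterns.
boundary = np.vstack([0.5 * mu1 + 0.3 * rng.normal(size=(40, n_vox)),
                      0.5 * mu2 + 0.3 * rng.normal(size=(40, n_vox))])
y_boundary = np.repeat([0, 1], 40)

# Train on center exemplars, test on boundary exemplars.
clf = LogisticRegression(max_iter=1000).fit(center, y_center)
acc = clf.score(boundary, y_boundary)
```

If boundary patterns retain the category-center structure (as the caption argues they did during the categorization task), cross-exemplar decoding succeeds; if they carry only exemplar-specific information, it falls to chance.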
Figure 3.
Inverted encoding model. A, In the first phase of the analysis, we estimated an orientation selectivity profile for each voxel in retinotopically organized visual areas V1-hV4/V3A using data from an independent orientation mapping task. Specifically, we modeled the response of each voxel as a weighted sum of 15 hypothetical orientation channels, each with an idealized response function. B, In the second phase of the analysis, we computed the response of each orientation channel from the estimated orientation weights and the pattern of responses across voxels measured during each trial of the category discrimination task. The resulting reconstructed channel response function (CRF) contains a representation of the stimulus orientation (example: 24°), which we quantified via a curve-fitting procedure.
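The two phases of an inverted encoding model reduce to two least-squares steps: estimate channel-to-voxel weights from training data, then invert those weights to recover channel responses on held-out trials. This is a minimal sketch on simulated data; the channel count and orientation range follow the caption, but the tuning exponent, noise level, and voxel count are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_train, n_test = 100, 120, 30
centers = np.arange(0, 180, 12)  # 15 channel centers, 0-168 deg

def channel_resp(theta):
    """Idealized tuning curves: raised cosines with 180 deg periodicity.
    The exponent (6) is a common modeling choice, assumed here."""
    return np.cos(np.deg2rad(theta[:, None] - centers[None, :])) ** 6

# Simulated data standing in for real BOLD activation patterns.
train_ori = rng.choice(centers, n_train)
test_ori = rng.choice(centers, n_test)
W_true = rng.normal(size=(len(centers), n_voxels))   # channel -> voxel weights
B_train = channel_resp(train_ori) @ W_true + 0.1 * rng.normal(size=(n_train, n_voxels))
B_test = channel_resp(test_ori) @ W_true + 0.1 * rng.normal(size=(n_test, n_voxels))

# Phase 1: estimate weights by least squares (B = C @ W).
C_train = channel_resp(train_ori)                    # trials x channels
W_hat = np.linalg.pinv(C_train) @ B_train            # channels x voxels

# Phase 2: invert the weights to recover single-trial channel responses.
C_hat = B_test @ np.linalg.pinv(W_hat)               # trials x channels
peak = centers[np.argmax(C_hat, axis=1)]             # crude peak estimate
```

In the real analysis each row of `C_hat` is a reconstructed CRF whose peak location (estimated by curve fitting rather than `argmax`) indexes the represented orientation.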
Figure 4.
Reconstructions of stimulus orientation during the orientation mapping task (blue) and the category discrimination task (red). Reconstructions were computed using a leave-one-run-out cross validation approach where data from N − 1 runs were used to estimate channel weights and data from the remaining run were used to estimate channel responses. This procedure was iterated until all runs had been used to estimate channel responses, and the results were averaged over permutations. No categorical biases were observed in any visual area for either task. Shaded regions represent ±1 within-participant SEM. a.u., Arbitrary units.
Figure 5.
Reconstructed representations of orientation in early visual cortex. The vertical bar at 0° indicates the actual stimulus orientation presented on each trial. CRFs from Category 1 and Category 2 trials have been arranged and averaged such that any categorical bias would manifest as a clockwise (rightward) shift in the orientation representation toward the center of Category 2. Shaded regions represent ±1 within-participant SEM (see Materials and Methods). Note the change in scale between visual areas V1-V3 and hV4-V3A. a.u., Arbitrary units.
Figure 6.
Stimulus reconstructions during Category 1 and Category 2 trials. Shaded regions represent ±1 within-participant SEM. a.u., Arbitrary units.
Figure 7.
Participant-level stimulus reconstructions. Each panel plots a reconstructed representation of stimulus orientation for a given participant (columns) and visual area (rows). Dashed blue lines indicate the estimated peak of each reconstruction (obtained via curve-fitting). Ordinate units are arbitrary.
Figure 8.
Categorical biases predict choice behavior. Each plot represents a logistic regression of each orientation channel's response onto trial-by-trial variability in category judgments. A positive coefficient indicates a positive relationship between an orientation channel's response and the correct category judgment (i.e., Category B), whereas a negative coefficient indicates a negative relationship between an orientation channel's response and correct category judgment (i.e., Category A). Red and blue horizontal lines at the top of each plot indicate orientation channels whose estimated β coefficients are significantly <0 or >0, respectively (FDR-corrected permutation test; p < 0.05). Shaded regions represent ±1 within-participant SEM.
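Regressing each channel's single-trial response onto the behavioral category judgment, as in this figure, yields one coefficient per channel whose sign indicates which category that channel's activity predicts. A minimal sketch on simulated channel responses (the trial counts, effect sizes, and which channels carry category information are all assumptions for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_trials, n_chan = 200, 15

# Simulated single-trial channel responses: hypothetical "Category B"
# channels respond more on B-choice trials, "Category A" channels on A-choice.
choice = rng.integers(0, 2, n_trials)   # 0 = Category A, 1 = Category B
C = rng.normal(size=(n_trials, n_chan))
C[choice == 1, 10:] += 0.5              # assumed B-side channels
C[choice == 0, :5] += 0.5               # assumed A-side channels

# One logistic regression per channel: response -> trial-by-trial judgment.
betas = np.empty(n_chan)
for ch in range(n_chan):
    clf = LogisticRegression().fit(C[:, [ch]], choice)
    betas[ch] = clf.coef_[0, 0]
# Positive beta: the channel's response predicts a Category B judgment;
# negative beta: it predicts a Category A judgment.
```

In the actual analysis, significance of each coefficient would then be assessed against a permutation null with FDR correction, as the caption states.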
Figure 9.
Category biases scale inversely with distance from the category boundary. A, The reconstructions shown in Figure 5, sorted by the absolute angular distance between each exemplar and the category boundary. In our case, the 15 orientations were bisected into two groups of 7, with the remaining orientation serving as the category boundary. Thus, the maximum absolute angular distance between each orientation category and the category boundary was 48°. Participant-level reconstructions were pooled and averaged across visual areas V1, V2, and V3, as no differences were observed across these regions. Shaded regions represent ±1 within-participant SEM. B, The amount of bias for exemplars located 1, 2, 3, or 4 steps from the category boundary (quantified via a curve-fitting analysis). Error bars indicate 95% CIs. a.u., Arbitrary units.
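The curve-fitting step used to quantify bias amounts to fitting a peaked function to the average CRF and reading off its center: a center displaced from 0° toward the category mean is the bias. A sketch with a synthetic CRF; the Gaussian form, bandwidth, and noise level are illustrative choices, not necessarily the paper's exact fitting function:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
chan = np.arange(-84, 96, 12)   # 15 channel offsets relative to the stimulus, deg

# Synthetic average CRF with an assumed 8 deg rightward (categorical) shift.
true_mu = 8.0
crf = 0.2 + np.exp(-0.5 * ((chan - true_mu) / 20.0) ** 2)
crf += 0.02 * rng.normal(size=chan.size)

def gauss(x, base, amp, mu, sd):
    """Baseline + Gaussian; mu is the fitted CRF center (the bias)."""
    return base + amp * np.exp(-0.5 * ((x - mu) / sd) ** 2)

popt, _ = curve_fit(gauss, chan, crf, p0=[0.0, 1.0, 0.0, 20.0])
bias = popt[2]   # displacement of the peak from the true orientation, deg
```

Repeating this fit separately for exemplars 1-4 steps from the boundary would produce the distance-dependent bias estimates plotted in panel B.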
Figure 10.
Cortical areas supporting robust decoding of category information. We trained a linear support vector machine to discriminate between activation patterns associated with Category A and Category B exemplars (see Searchlight classification analysis). The trained classifier revealed robust category information in multiple visual, parietal, temporal, and prefrontal cortical areas, including many regions previously associated with categorization (e.g., posterior parietal cortex and lateral PFC).
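A searchlight analysis slides a small sphere through the volume and trains a classifier on the voxels inside it at each location, producing a whole-brain map of decoding accuracy. A toy version on a synthetic volume; the grid size, searchlight radius, effect size, and the location of the informative cluster are all assumptions:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
shape = (8, 8, 8)               # toy voxel grid
n_trials = 160
y = np.repeat([0, 1], 80)       # Category A vs Category B labels
data = rng.normal(size=(n_trials,) + shape)
data[y == 1, 2:5, 2:5, 2:5] += 0.8   # assumed category-informative cluster

# Slide a radius-1 cubic searchlight and cross-validate a linear SVM in each.
radius = 1
acc_map = np.zeros(shape)
for x in range(radius, shape[0] - radius):
    for yv in range(radius, shape[1] - radius):
        for z in range(radius, shape[2] - radius):
            sphere = data[:, x - radius:x + radius + 1,
                             yv - radius:yv + radius + 1,
                             z - radius:z + radius + 1].reshape(n_trials, -1)
            acc_map[x, yv, z] = cross_val_score(LinearSVC(), sphere, y, cv=5).mean()
```

Searchlights overlapping the informative cluster decode well above chance, while the rest of the map hovers near 0.5, which is the pattern summarized in this figure.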
Figure 11.
Stimulus reconstructions in visual, parietal, and frontal cortical areas during the orientation mapping and categorization tasks. During the orientation mapping task, participants detected and reported the identity of a target presented in a stream of letters at fixation. During the categorization experiment, participants categorized stimulus orientation into two discrete groups. Shaded regions represent ±1 within-participant SEM. IPL, Inferior parietal lobule; IPS, intraparietal sulcus; sPCS, superior precentral sulcus; IT, inferotemporal cortex; IFG, inferior frontal gyrus; a.u., arbitrary units.
Figure 12.
Summary of Experiment 2. A, Participants viewed displays containing an aperture of iso-oriented bars flickering at 30 Hz. B, The 30 Hz flicker entrained a frequency-specific response known as the steady-state visually evoked potential (SSVEP). C, Evoked 30 Hz power was largest over occipitoparietal electrode sites. We computed stimulus reconstructions (Fig. 7) using the 32 scalp electrodes with the highest power. Scale bar: the proportion of participants (of 27) for whom each electrode site was ranked in the top 32 of all 128 scalp electrodes. D, E, Participants categorized stimuli with a high degree of accuracy; incorrect and slow responses were observed only for exemplars adjacent to the category boundary. Shaded regions represent ±1 within-participant SEM.
Figure 13.
Category biases emerge shortly after stimulus onset. A, Time-resolved reconstruction of stimulus orientation. Dashed vertical lines at time 0.0 and 3.0 s indicate stimulus onset and offset, respectively. B, Average CRF during the first 250 ms of each trial. The reconstructed representation exhibits a robust category bias (p < 0.01; bootstrap test). a.u., Arbitrary units.
Figure 14.
Stimulus and category information is absent in pretrial EEG activity. Time-averaged reconstruction computed over an interval spanning −250 to 0 ms relative to stimulus onset. The center of the reconstruction was statistically indistinguishable from 0° (p = 0.234; bootstrap test).
Figure 15.
Reconstructions of stimulus orientation during the orientation mapping task (A) and the category discrimination task (B) during Experiment 2. Vertical dashed lines at 0.0 and 3.0 s indicate the start and end of each trial, respectively. Reconstructions were computed using a leave-one-run-out cross validation approach where data from N − 1 runs were used to estimate channel weights and data from the remaining run were used to estimate channel responses. This procedure was iterated until all runs had been used to estimate channel responses, and the results were averaged over permutations. Units of response are arbitrary.
Figure 16.
No systematic biases in eye position during orientation categorization (Experiment 2). We regressed trial-by-trial records of stimulus orientation (A) or category (B) onto horizontal EOG activity. Thus, positive coefficients reflect a systematic relationship between stimulus orientation (or category) and eye position. No such biases were observed. Black vertical dashed lines at 0.0 and 3.0 s indicate the start and end of each trial, respectively. Shaded regions represent the 95% within-participant CI of the mean.
Figure 17.
Design and results of Experiment 3. A, Possible stimulus locations. The orientation of the category boundary (red dashed line) was randomly determined for each participant (example shown). B, DMC task. Participants remembered the position of a sample disc over a blank delay and then judged whether the location of a probe disc was drawn from the same location category or a different location category. In this example, the categories are defined by the boundary shown in A. C, Location-specific reconstructions computed during the DMC task. Vertical dashed lines at 0.0 and 2.0 s indicate the onset of the sample and probe epochs, respectively. Participants could not prepare a response until the onset of the probe display, yet a robust category bias was observed during the delay period. This suggests that category biases observed in Experiments 1 and 2 are not solely due to mechanisms of response selection.
Figure 18.
No systematic biases in eye position during location categorization (Experiment 3). We regressed trial-by-trial records of stimulus location (A) or category (B) onto horizontal EOG activity. Thus, positive coefficients reflect a systematic relationship between stimulus location (or category) and eye position. No such biases were observed. Black vertical dashed lines at 0.0 and 3.0 s indicate the start and end of each trial. Shaded regions represent the 95% within-participant CI of the mean.
