Nat Neurosci. 2019 Aug;22(8):1336-1344. doi: 10.1038/s41593-019-0428-x. Epub 2019 Jul 1.

Coexisting representations of sensory and mnemonic information in human visual cortex


Rosanne L Rademaker et al. Nat Neurosci. 2019 Aug.

Abstract

Traversing sensory environments requires keeping relevant information in mind while simultaneously processing new inputs. Visual information is kept in working memory via feature-selective responses in early visual cortex, but recent work has suggested that new sensory inputs obligatorily wipe out this information. Here we show region-wide multiplexing abilities in classic sensory areas, with population-level response patterns in early visual cortex representing the contents of working memory alongside new sensory inputs. In a second experiment, we show that when people get distracted, this leads to both disruptions of mnemonic information in early visual cortex and decrements in behavioral recall. Representations in the intraparietal sulcus reflect actively remembered information encoded in a transformed format, but not task-irrelevant sensory inputs. Together, these results suggest that early visual areas play a key role in supporting high-resolution working memory representations that can serve as a template for comparison with incoming sensory information.


Conflict of interest statement

Conflict of interest: The authors declare no competing interests.

Figures

Figure 1
Experiment 1 paradigm and results. (a) After a valid cue about the distractor condition (here, the blue fixation point cued a noise distractor), a target orientation was presented for 0.5 s and remembered for 13 seconds. During this delay, participants viewed a grey screen or an 11-second contrast-reversing distractor. Distractors could be a Fourier-filtered noise stimulus (depicted) or an oriented grating (its orientation pseudo-randomly selected on every trial). After the delay, participants had 3 seconds to rotate a recall probe to match the remembered orientation. (b) There were no differences in behavioral error between the three distractor conditions, as indicated by a non-parametric one-way repeated-measures within-subject ANOVA (F(2,10) = 0.044; p = 0.943). Grey lines indicate individual subjects. (c) Model-based reconstructions of the remembered orientation during the three different distractor conditions (left), and of the physically present orientation on trials with a grating distractor (right). Reconstructions were based on the average activation patterns 5.6–13.6 seconds after target onset. (d) The degree to which memory and sensory stimuli were represented during the delay was quantified by projecting the channel response at each degree onto a vector centered on the true orientation (i.e., zero), and taking the mean of all these projected vectors. On the left, a cartoon reconstruction is defined by 18 points/degrees (note: in reality there were 180 degrees). On the right, this cartoon reconstruction is wrapped onto a circle. We show for one point/degree how the channel response (h) is projected onto the true orientation (remembered or sensed), resulting in vector b. Knowing the angle (A) between the true orientation and the orientation at this particular point/degree, we solve for b using trigonometric ratios for right triangles (i.e., cos A = b/h).
The mean of all projected vectors (all b) indexes the amount of information at the true orientation, and is our metric for reconstruction fidelity. (e) Reconstruction fidelity for remembered (shades of teal) and sensed distractor (grey) orientations is significantly above chance in almost all ROIs (based on one-sided randomization tests comparing fidelity in each condition and ROI to zero; see Methods). Black asterisks next to ROI names (under the x-axis) indicate significant differences in memory fidelity between the three distractor conditions in that ROI, as determined by non-parametric one-way repeated-measures within-subject ANOVAs performed separately for each ROI (see Methods; for exact p-values and post-hoc tests see Supplementary Tables 1 and 2). One, two, or three asterisks indicate significance levels of p ≤ 0.05, p ≤ 0.01, or p ≤ 0.001, respectively (uncorrected for multiple comparisons). Dots indicate individual subject fidelities in each condition and ROI. For b, c, and e, error bars/areas represent ± 1 within-subject SEM around the average across n=6 independent subjects.
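The projection-and-average procedure in (d) reduces to taking, at each point of the reconstruction, the component of the channel response along the true orientation (b = h · cos A) and averaging. A minimal sketch, assuming a reconstruction sampled at 180 orientation points (the function name and sampling are illustrative, not the authors' code):

```python
import numpy as np

def reconstruction_fidelity(channel_response, true_deg):
    """Fidelity = mean projection of each point of the reconstruction
    onto a vector at the true orientation (b = h * cos A)."""
    n = len(channel_response)            # points spanning 0-179 deg of orientation
    degs = np.arange(n) * (180.0 / n)    # orientation of each point
    # the 180-deg orientation space is wrapped onto a full 360-deg circle,
    # so the circular angle A is twice the orientation difference
    A = np.deg2rad(2.0 * (degs - true_deg))
    return np.mean(channel_response * np.cos(A))
```

A reconstruction peaked at the true orientation yields a positive fidelity, while a flat (uninformative) reconstruction yields zero, which is why fidelity can be tested against zero with randomization tests.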
Figure 2
Model-based reconstructions of remembered orientations and sensed distractor orientations over time in V1. (a) The time axis starts at "0", which is trial onset, and each slice shows the mean reconstruction across participants at each 800ms TR (for a total of 21 TRs). Reconstructions for the remembered orientation are shown in the three left-most panels (shades of teal), and sensed distractor orientation reconstructions are shown in the right-most panel (grey). (b) The fidelity of timepoint-by-timepoint reconstructions in V1 (quantification of (a)), with time 0 representing target onset. The three gray background panels represent the target, distractor, and recall epochs of the working memory trial. Small, medium, and large dots at the bottom indicate significance at each time point at p ≤ 0.05, p ≤ 0.01, and p ≤ 0.001, respectively (based on one-sided randomization tests comparing fidelity in each condition and at each timepoint to zero, uncorrected for multiple comparisons; see Methods). Shaded error areas represent ± 1 within-subject SEM around the average across n=6 independent subjects.
Figure 3
Experiment 2 paradigm and results. (a) Irrelevant but fully predictable distractors were cued by a change in fixation color (here, blue indicated that picture distractors would be shown during the delay) prior to a 500ms target presentation. Participants remembered the target orientation for 12 seconds, while they either viewed a grey screen, or an 11-second on-off flickering distractor (a pseudo-randomly oriented grating, or pictures of faces or gazebos). After the memory delay participants rotated a dial to match the remembered orientation. Photo used with permission. (b) Distractor presence negatively impacted behavioral performance, as indicated by a non-parametric one-way repeated-measures within-subject ANOVA (F(2,12) = 10.154; p < 0.001). Errors were smaller when no distractor was shown during the delay, compared to when distractor gratings (t(6) = 6.272; p < 0.001) or pictures (t(6) = 3.375; p = 0.018) were shown. Performance did not differ between grating and picture distractors (t(6) = 1.184; p = 0.184). Post-hoc tests were non-parametric uncorrected paired-sample t-tests. Grey lines indicate individual subjects. (c) Model-based reconstructions of the remembered orientation during the three different distractor conditions (left), and of the sensed distractor orientation on trials with a grating distractor (right). These reconstructions were generated with an IEM trained on independent localizer data, and based on the average activation patterns 5.6–13.6 seconds after target onset. (d) Reconstruction fidelity for remembered orientations without distraction (dark teal) and for sensed distractor orientations (grey) is significantly above zero in all ROIs except IPS0 and IPS1 (based on one-sided randomization tests in each condition and ROI; see Methods). However, reconstruction fidelity is less robust when a distractor was presented throughout the delay (mid-teal and yellow for grating and picture distractors, respectively).
Black asterisks next to ROI names indicate significant differences in memory fidelity during the three distractor conditions in that ROI, as determined by non-parametric one-way repeated-measures within-subject ANOVAs performed separately for each ROI (see Methods; for exact p-values and post-hoc tests see Supplementary Tables 3 and 4). Dots indicate individual subject fidelities in each condition and ROI. (e) The fidelity of timepoint-by-timepoint reconstructions in V1. Time "0" represents target onset, and the three gray panels represent the target, distractor, and recall epochs of the working memory trial. One, two, or three asterisks in b and d (small, medium, or large dots at the bottom of e) indicate significance levels of p ≤ 0.05, p ≤ 0.01, or p ≤ 0.001, respectively (uncorrected). For b, c, d, and e, error bars/areas represent ± 1 within-subject SEM around the average across n=7 independent subjects.
Figure 4
Reconstruction fidelity when training and testing an IEM on data from the memory delay in Experiment 1 (a) and Experiment 2 (b). There are robust memory representations throughout the visual hierarchy, including retinotopic IPS. This implies that the IPS representation is not in a stimulus-driven format. The proposed transformed nature of the IPS code is also supported by the lack of information about the directly sensed grating distractor (grey bars). As before, differences in memory fidelity between the three distractor conditions (black asterisks next to ROI names) were virtually absent in Experiment 1 (a; for exact p-values and post-hoc tests see Supplementary Tables 5 and 6), while in Experiment 2 the presence of distractors was accompanied by a drop in memory fidelity in many ROIs (b; for exact p-values and post-hoc tests see Supplementary Tables 7 and 8). Note, however, that mnemonic representations in IPS were unaffected by visual distraction (see also Supplementary Fig. 13). One, two, or three asterisks indicate significance levels of p ≤ 0.05, p ≤ 0.01, or p ≤ 0.001, respectively. Dots indicate individual subject fidelities in each condition and ROI. Error bars represent ± 1 within-subject SEM (for n=6 and n=7 independent subjects in a and b, respectively). Statistical testing was identical to Figs. 1e and 3d. When ascending the visual hierarchy from V1 to V4, a weakening sensory representation paired with a strengthening mnemonic representation illustrates the top-down nature of VWM (compare grey and mid-teal bars). This signature interaction was present in both Experiment 1 (F(4,20) = 13.6, p < 0.001) and Experiment 2 (F(4,24) = 7.769, p < 0.001), as indicated by non-parametric two-way repeated-measures within-subject ANOVAs.
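For readers unfamiliar with the approach, a generic least-squares inverted encoding model (IEM) can be sketched as below: channel-to-voxel weights are estimated on training data and then inverted to recover channel responses from held-out data. The basis functions, channel count, and all names here are illustrative assumptions; the authors' exact implementation is described in their Methods.

```python
import numpy as np

def channel_basis(deg, centers):
    """Idealized orientation channels: cosine tuning over the 180-deg
    periodic orientation space, raised to a power to narrow the channels."""
    # wrap orientation differences into [-90, 90) deg
    d = (np.asarray(deg, float)[:, None] - centers[None, :] + 90.0) % 180.0 - 90.0
    return np.cos(np.pi * d / 180.0) ** 5

def iem(train_bold, train_deg, test_bold, n_chan=9):
    """Train channel weights on one data set, invert them on another.
    train_bold/test_bold: (trials, voxels); train_deg: orientations in deg."""
    centers = np.arange(n_chan) * (180.0 / n_chan)
    C_train = channel_basis(train_deg, centers)           # (trials, channels)
    # solve B = C @ W for the channel-to-voxel weight matrix W
    W = np.linalg.lstsq(C_train, train_bold, rcond=None)[0]
    # invert: solve B_test.T = W.T @ C_test.T for the test channel responses
    C_test = np.linalg.lstsq(W.T, test_bold.T, rcond=None)[0].T
    return centers, C_test
```

Reconstructions like those shown in the figures are then obtained by circularly shifting each test trial's channel response to a common center before averaging across trials.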
Figure 5
Decoding analyses yield highly comparable results to the IEM analyses. (a) In Experiments 1 and 2 we used random orientations (1°–180°), while relevant previous work has used orthogonal orientations. To closely mimic the two-way classification performed in previous work, we divided our random orientations into four bins, and performed two two-way classifications: the first classification determined whether orientations were around vertical (between 157.5° and 22.5°) or horizontal (between 67.5° and 112.5°) – shown schematically in the left diagram. The second classification determined whether orientations were around one or the other oblique (i.e., between 22.5°–67.5° or between 112.5°–157.5°) – shown schematically in the right diagram. Decoding performance was averaged across these two-way classifications to yield an overall classification accuracy for each ROI. For all decoding analyses we ensured balanced training sets. (b) We trained the SVM on independent data from the visual mapping tasks. Results mirrored those from the IEM analyses. In Experiment 1 (top) we found above-chance decoding in V1–V4 and LO1, but not IPS0 and IPS1. There were no differences between the three distractor conditions in any of the ROIs (all F(5,10) < 1.024, all p > 0.429). Similarly, in Experiment 2 (bottom) there was little above-chance decoding in IPS regions. In V1–V4 and LO1, memory decoding in Experiment 2 differed between the three distractor conditions (all F(5,10) > 10.419, all p < 0.004), and was generally better when no visual distraction was presented during the delay, compared to delays with a grating or a picture distractor. In both Experiments 1 and 2, the grating distractor condition revealed an interaction between remembered and sensed representations (compare mid-teal and grey bars), considered a signature of top-down processing (F(4,20) = 2.469, p = 0.046 and F(4,24) = 3.198, p = 0.024, respectively).
(c) We also trained the SVM on data from the memory delay via a leave-one-out cross-validation procedure. This led to robust decoding of mnemonic information in IPS0 and IPS1 for both Experiments 1 (top) and 2 (bottom), implying a non-stimulus-driven mnemonic code in these areas. Lack of information about the ignored sensory distractor orientation (grey bars) further corroborates that IPS uses non-stimulus-driven codes to represent task-relevant information. In Experiment 1 (top) the three distractor conditions differed in V1 and LO1 (F(2,10) = 3.517, p = 0.045 and F(2,10) = 12.723, p = 0.003, respectively) but not in any other ROIs (all F(2,10) < 1.062, all p > 0.386). In Experiment 2 (bottom) the three distractor conditions differed in almost all ROIs (V1–IPS0, all F(2,12) > 5.399, all p < 0.022). Again, both Experiments 1 and 2 revealed an interaction between remembered and sensed representations (compare mid-teal and grey bars) in the grating distractor condition (F(4,20) = 11.499, p < 0.001 and F(4,24) = 3.331, p = 0.029, respectively). For both b and c, statistical testing was identical to that in Figs. 1e, 3d, and 4 with the exception that randomization tests were against chance (0.5) instead of zero (see also Methods). One, two, or three asterisks indicate significance levels of p ≤ 0.05, p ≤ 0.01, or p ≤ 0.001, respectively. Dots indicate individual subject decoding in each condition and ROI. Error bars represent ± 1 within-subject SEM (for n=6 and n=7 independent subjects in Experiments 1 and 2, respectively).
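The four-bin scheme in (a) amounts to shifting the orientation axis by half a bin (22.5°) and dividing by the 45° bin width. A minimal sketch (helper names are illustrative, not the authors' code):

```python
import numpy as np

def bin_label(deg):
    """Map an orientation in [0, 180) deg to one of four 45-deg bins:
    0 = around vertical (157.5-22.5), 1 = first oblique (22.5-67.5),
    2 = around horizontal (67.5-112.5), 3 = second oblique (112.5-157.5)."""
    return int(((deg + 22.5) % 180.0) // 45.0)

def split_for_decoding(degs):
    """Trials with even bin labels enter the vertical-vs-horizontal
    classification; trials with odd labels enter the oblique-vs-oblique one."""
    labels = np.array([bin_label(d) for d in degs])
    return labels, labels % 2 == 0
```

Averaging the two resulting classification accuracies then gives one overall accuracy per ROI, with training sets balanced (equal trial counts per bin) before fitting the classifier, as described in (a).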

References

    1. Harrison SA & Tong F. Decoding reveals the contents of visual working memory in early visual areas. Nature 458, 632–635 (2009).
    2. Serences JT, Ester EF, Vogel EK & Awh E. Stimulus-specific delay activity in human primary visual cortex. Psychol. Sci. 20, 207–214 (2009).
    3. Riggall AC & Postle BR. The relationship between working memory storage and elevated activity as measured with functional magnetic resonance imaging. J. Neurosci. 32, 12990–12998 (2012).
    4. Christophel TB, Hebart MN & Haynes JD. Decoding the contents of visual short-term memory from human visual and parietal cortex. J. Neurosci. 32, 12983–12989 (2012).
    5. Ester EF, Anderson DE, Serences JT & Awh E. A neural measure of precision in visual working memory. J. Cogn. Neurosci. 25, 754–761 (2013).
