bioRxiv [Preprint]. 2023 Jun 7:2023.06.05.543698. doi: 10.1101/2023.06.05.543698.

Recurrent pattern completion drives the neocortical representation of sensory inference


Hyeyoung Shin et al. bioRxiv.

Update in

  • Shin H, Ogando MB, Abdeladim L, Jagadisan UK, Durand S, Hardcastle B, Belski H, Cabasco H, Loefler H, Bawany A, Wilkes J, Nguyen K, Suarez L, Johnson T, Han W, Ouellette B, Grasso C, Swapp J, Ha V, Young A, Caldejon S, Williford A, Groblewski PA, Olsen S, Kiselycznyk C, Lecoq J, Adesnik H. Recurrent pattern completion drives the neocortical representation of sensory inference. Nat Neurosci. 2025 Sep 15. doi: 10.1038/s41593-025-02055-5. Online ahead of print. PMID: 40954310

Abstract

When sensory information is incomplete or ambiguous, the brain relies on prior expectations to infer perceptual objects. Despite the centrality of this process to perception, the neural mechanism of sensory inference is not known. Illusory contours (ICs) are key tools to study sensory inference because they contain edges or objects that are implied only by their spatial context. Using cellular-resolution, mesoscale two-photon calcium imaging and multi-Neuropixels recordings in the mouse visual cortex, we identified a sparse subset of neurons in the primary visual cortex (V1) and higher visual areas that respond emergently to ICs. We found that these highly selective 'IC-encoders' mediate the neural representation of IC inference. Strikingly, selective activation of these neurons using two-photon holographic optogenetics was sufficient to recreate the IC representation in the rest of the V1 network, in the absence of any visual stimulus. This outlines a model in which primary sensory cortex facilitates sensory inference by selectively strengthening input patterns that match prior expectations through local, recurrent circuitry. Our data thus suggest a clear computational purpose for recurrence in the generation of holistic percepts under sensory ambiguity. More generally, selective reinforcement of top-down predictions by pattern-completing recurrent circuits in lower sensory cortices may constitute a key step in sensory inference.


Figures

Extended Data Figure 1. IC-encoders’ receptive fields are not biased towards the illusory gap region.
a, Experimental schematic. Throughout each IC visual block, the four white circles stayed in place. b, Visual stimuli used for defining segment responders. For example, bottom-right (BR) segment responders were defined as neurons that had significantly larger responses to BRin than to BRout. c, Size tuning of V1 L2/3 neurons (black) and IC-encoders (green) (mean ± SEM across neurons). Circular patches of drifting gratings were shown at various sizes, only in the center position; the size tuning analysis was therefore limited to neurons with RFs in the center. IC-encoders are surround suppressed to the same extent as the general population. d, Positions of the circular patches of drifting grating used for RF mapping (see also Fig. 1c). The center position coincides with the illusory gap region. e, RF position histograms of IC-encoders, showing that they are not biased towards the illusory gap region. f, RF position histograms of segment responders, showing that they are biased towards the image segments used to define them.
Extended Data Figure 2. Images with illusory bars (IC) do not evoke greater responses than images without (LC).
a, Proportion of IC-encoders, out of all neurons, in each area. b, Proportion of IC-encoders, out of visually responsive neurons, in each area. Visually responsive neurons are defined as neurons that respond to at least one of the IC/LC images (p<0.05, Kruskal-Wallis test across responses to ‘blank’ (four white circles) vs IC1 vs LC1 vs LC2 vs IC2). c-d, Responses on IC vs LC trials, averaged across all neurons (c) or visually responsive neurons (d) for each session (mean ± SEM across sessions, *p<0.05, right-tailed Wilcoxon signed-rank test). IC images do not evoke greater responses than LC images.
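The responsiveness criterion in panel b can be sketched as follows. This is an illustrative example, not the authors' code: the data are synthetic, and the variable names are assumptions; it only shows the statistical logic of declaring a neuron visually responsive when a Kruskal-Wallis test across the five stimulus conditions rejects the null at p<0.05.

```python
# Hedged sketch of the visual-responsiveness test (Extended Data Fig. 2b).
# Synthetic single-neuron trial responses (e.g., mean dF/F per trial);
# all numbers below are made up for illustration.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
n_trials = 50

blank = rng.normal(0.0, 1.0, n_trials)   # four white circles only
ic1   = rng.normal(2.0, 1.0, n_trials)   # this synthetic neuron prefers IC1
lc1   = rng.normal(0.1, 1.0, n_trials)
lc2   = rng.normal(0.0, 1.0, n_trials)
ic2   = rng.normal(0.2, 1.0, n_trials)

# Kruskal-Wallis test across the five conditions, as in the legend.
stat, p = kruskal(blank, ic1, lc1, lc2, ic2)
visually_responsive = p < 0.05
print(visually_responsive)  # → True (the IC1 responses are clearly shifted)
```

The non-parametric test makes no normality assumption about trial-by-trial responses, which is why it is a common choice for calcium-imaging data.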
Extended Data Figure 3. Decoder prediction is consistent across decoder types.
a, Cross-validation accuracy comparison across sessions, between an SVM with a linear kernel (SVM-Linear, x-axis) and an SVM with a quadratic polynomial kernel (SVM-Poly2, y-axis, left panel), an SVM with a radial basis function kernel (SVM-RBF, y-axis, center panel), or a fully connected artificial neural network (ANN, y-axis, right panel). b, Proportion of trials in the held-out test set with matching decoder predictions, between the SVM-Linear decoder and each of the other three decoder types. For comparison, the match proportion was also calculated with the trial order shuffled; for each session, match proportions were averaged across 1,000 shuffles. c, Inference performance comparison across sessions, where inference performance is defined as P(TRE→IC) − P(TRE→LC). d, Proportion of TRE trials with matching decoder predictions.
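The inference-performance metric in panel c is simple to state in code. The sketch below uses hypothetical decoder labels (not the authors' data) to show the computation: the fraction of ambiguous TRE trials decoded as the corresponding IC image, minus the fraction decoded as the corresponding LC image.

```python
# Hedged sketch of P(TRE→IC) − P(TRE→LC) from Extended Data Fig. 3c.
# The label strings and example predictions are assumptions for illustration.
import numpy as np

def inference_performance(predictions):
    """predictions: decoder labels ('IC' or 'LC') assigned to TRE trials."""
    predictions = np.asarray(predictions)
    p_ic = np.mean(predictions == "IC")   # P(TRE→IC)
    p_lc = np.mean(predictions == "LC")   # P(TRE→LC)
    return p_ic - p_lc

# Hypothetical decoder output on 10 TRE trials:
preds = ["IC", "IC", "LC", "IC", "IC", "LC", "IC", "IC", "IC", "LC"]
print(inference_performance(preds))  # ≈ 0.4 (0.7 − 0.3)
```

A positive value indicates that ambiguous stimuli are preferentially read out as the inferred illusory contour rather than the pixel-matched control.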
Extended Data Figure 4. Holography-evoked effects.
a, Schematic of how IC representation arises in the visual cortical hierarchy. Black arrows show connectivity posited in prior literature. Red arrows are supported by our finding that photoactivation of IC-encoders recreates the activity patterns visually evoked by the IC images in the rest of the network (Fig. 4c–d). b, The physiological point spread function (PPSF) is measured by parametrically moving the holographic target position away from the center of the soma and measuring the targeted neurons’ z-dF/F. For the axial PPSF, the + direction indicates increasing depth. c, Relationship between the cross-validation accuracy of decoders trained to discriminate visual trial types (IC1 vs LC1 vs LC2 vs IC2) and the proportion of holography trials decoded as the corresponding visual trials. The decoder was trained on non-stimulated neurons, as described in Fig. 4. Each data point represents one session. d, Relationship between the number of targets in each hologram and the proportion of holography trials decoded as the corresponding visual trials. e, Holography-evoked responses in non-stimulated neurons, plotted separately for neurons that are visually responsive to the IC1 images (red) vs non-responsive neurons (black). Both groups contain a small number of driven neurons and a larger number of suppressed neurons. On holography trials in which IC-encoders were stimulated, IC-responsive neurons appear more driven than non-responsive neurons, consistent with the notion that IC-encoders drive neural pattern completion. While this effect is significant when pooling across neurons, it is not significant across sessions, suggesting that multivariate decoder analysis is a more robust indicator of neural pattern completion.
Figure 1. Mouse V1 neurons respond to illusory contours despite the lack of visual information within their receptive fields.
a, Kanizsa triangle. b, A subset of the visual stimuli used in this experiment. c, The RF position of each neuron was mapped using 16-degree circular grating patches appearing in one of the 9 positions depicted. The circular patch in position 1 corresponds to the gap region between the four white circles in b. d, Schematic of the insertion of 6 Neuropixels probes into V1, LM, RL, AL, PM and AM. e, Evoked activity at each position for V1 neurons that responded exclusively to grating patches in position 1, corresponding to the illusory gap region, and not to any of the other positions shown in c (n=24 exclusively center-responsive neurons out of 2,395 V1 neurons; 14 sessions from 14 mice). f, Peri-stimulus time histogram (PSTH) of exclusively center-responsive V1 neurons on IC and IRE trials (top: averaged, bottom: individual units). g, For the subset of units in f that significantly respond to IC images, preferred orientation was compared between ICs (IC stimuli) and real edges (IRE stimuli). h, Schematic of the brain with areas demarcated. Inset: example 2p image of GCaMP6-expressing neurons in V1. i-k, Same as e-g, but for the 2p imaging dataset (n=298 exclusively center-responsive neurons out of 18,576 V1 layer 2/3 neurons; 29 sessions from 5 mice).
Figure 2. Illusory contour inference is represented in layer 2/3 of V1 and LM.
a, Visual stimuli used for decoding analyses. LC stimuli were designed such that the sum of parts for the IC image pair would be equivalent to that of the LC image pair: the LC1 image is constructed from the bottom half of IC1 and the top half of IC2, and LC2 vice versa. b, IC1-encoders and IC2-encoders together constitute IC-encoders. The figure shows average evoked responses of these neurons. c, Orientation tuning curves, measured with standard static gratings, averaged over each IC-encoder subgroup (mean ± SEM across neurons). d, 10-fold cross-validation performance of a linear SVM trained on the 4 trial types shown in a. The confusion matrix shows the average decoding performance of V1 neurons in the Neuropixels dataset (n=14 sessions; on average 180 V1 neurons per session, 400 repetitions per trial type). e, TRE stimuli, which have equivalent pixel overlap with an IC and an LC. f, Using the decoder described in d, neural activity evoked by TRE images was classified into one of the following 4 labels: IC1, LC1, LC2, IC2 (inference decoding). g, Inference decoding for each visual cortical area in the Neuropixels dataset (mean ± SEM across sessions, *p<0.05, right-tailed Wilcoxon signed-rank test; n=14 sessions). h, Inference decoding for the 2p dataset. The proportions of TRE trials decoded as the corresponding IC (green) and LC (orange) images were averaged across the two image sets (mean ± SEM across sessions, *p<0.05, right-tailed Wilcoxon signed-rank test; n=8/20/19 sessions for V1 L4 FOV, V1 L2/3 FOV, and mesoscope, respectively. For each visual area, sessions with fewer than 100 neurons in that area were discarded).
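The inference-decoding scheme (train a classifier on the four unambiguous trial types, then ask how it labels ambiguous TRE trials) can be sketched with synthetic data. The sketch below substitutes a nearest-centroid classifier for the paper's linear SVM to stay dependency-free; all population patterns, dimensions, and the bias towards IC1 are assumptions made for illustration.

```python
# Hedged sketch of inference decoding (Fig. 2d-f), with a nearest-centroid
# classifier standing in for the linear SVM. Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_train = 100, 80
labels = ["IC1", "LC1", "LC2", "IC2"]

# Synthetic mean population patterns for the four visual trial types.
patterns = {lab: rng.normal(0, 1, n_neurons) for lab in labels}

def trials(lab, n):
    """Noisy single-trial population responses around a mean pattern."""
    return patterns[lab] + rng.normal(0, 0.5, (n, n_neurons))

# "Training": store a class centroid per visual trial type.
centroids = {lab: trials(lab, n_train).mean(axis=0) for lab in labels}

def decode(x):
    """Assign the label whose centroid is nearest in Euclidean distance."""
    return min(labels, key=lambda lab: np.linalg.norm(x - centroids[lab]))

# Ambiguous TRE trials: simulated as a mixture of the IC1 and LC1 patterns,
# biased towards IC1 (mimicking the paper's inference result, by construction).
tre = (0.6 * patterns["IC1"] + 0.4 * patterns["LC1"]
       + rng.normal(0, 0.5, (200, n_neurons)))
preds = [decode(x) for x in tre]
p_ic = preds.count("IC1") / len(preds)
p_lc = preds.count("LC1") / len(preds)
print(p_ic > p_lc)  # → True: the biased TRE trials decode mostly as IC1
```

The nearest-centroid rule is a convenient stand-in because, like a linear SVM, it partitions the population activity space with linear boundaries; only the decoded labels, not the classifier internals, enter the inference-performance analysis.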
Figure 3. IC-encoders mediate the representation of illusory contour inference in V1 layer 2/3.
a, Top: IC and LC stimuli. Bottom left: average response of neurons defined as ‘IC-encoders’ to the IC and LC stimuli (1.6% of ROIs in the V1 L2/3 FOV, n=24 sessions from 4 mice). Bottom right: average response of neurons defined as segment responders (3.5% of ROIs). b, Decoder performance when zeroing out subsets of neurons, IC-encoders or segment responders, in the input to the decoder (mean ± SEM across sessions, *p<0.05, right-tailed Wilcoxon signed-rank test). c, Decoder performance when decoding from only these subsets of neurons (same subsets as in b; mean ± SEM across sessions, *p<0.05, right-tailed Wilcoxon signed-rank test).
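The zero-out manipulation in panel b can be illustrated with a toy linear decoder. This is a sketch under assumed details (a fixed weight vector and random activity), not the authors' pipeline: the chosen neuron subset is silenced in the decoder input without retraining, so any drop in performance reflects that subset's contribution to the readout.

```python
# Hedged sketch of the "zero-out" control (Fig. 3b), synthetic throughout.
import numpy as np

rng = np.random.default_rng(2)
n_neurons = 50
w = rng.normal(0, 1, n_neurons)          # decoder weights (trained elsewhere)
x = rng.normal(0, 1, (10, n_neurons))    # population activity, 10 trials

subset = np.arange(5)                    # e.g., indices of the IC-encoders
x_zeroed = x.copy()
x_zeroed[:, subset] = 0.0                # zero out the subset's activity

scores_full = x @ w                      # decoder scores with all neurons
scores_zeroed = x_zeroed @ w             # scores with the subset silenced

# The score change equals exactly the subset's contribution to the readout.
print(np.allclose(scores_full - scores_zeroed, x[:, subset] @ w[subset]))
# → True
```

Panel c is the complementary manipulation: keep only the subset's columns and zero everything else, asking whether the subset alone carries the inference signal.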
Figure 4. Two-photon holographic optogenetic stimulation of IC-encoders is sufficient for recurrent pattern completion of the illusory contour representation in V1 layer 2/3.
a, Experimental pipeline, consisting of three stages: first, visual responses are imaged; second, visual response properties are analyzed; third, functional subsets of neurons are stimulated via 2p holographic optogenetics (n=24 sessions from 4 mice). b, Holography-evoked activity of targeted neurons on distinct holography trials in stage 3. Four distinct subsets of neurons were targeted: IC1-encoders, IC2-encoders, BR- and TL-segment responders, and BL- and TR-segment responders. c, A decoder trained on visual trials was used to classify holography-evoked activity in non-stimulated neurons (neurons >50μm from all holography targets). d, Replotting of the data in c, with a significance test against the chance performance of 0.25 (*p<0.05, right-tailed Wilcoxon signed-rank test across n=24 sessions).

References

    1. Kanizsa G. Subjective contours. Sci. Am. 234, 48–52 (1976).
    2. Nieder A. Seeing more than meets the eye: processing of illusory contours in animals. J. Comp. Physiol. A 188, 249–260 (2002).
    3. Nieder A. & Wagner H. Perception and neuronal coding of subjective contours in the owl. Nat. Neurosci. 2, 660–663 (1999).
    4. Fuss T., Bleckmann H. & Schluessel V. The brain creates illusions not just for us: sharks (Chiloscyllium griseum) can ‘see the magic’ as well. Front. Neural Circuits 8, 1–17 (2014).
    5. Okuyama-Uchimura F. & Komai S. Mouse ability to perceive subjective contours. Perception 45, 315–327 (2016).
