J Neurosci. 2023 Oct 25;43(43):7119-7129. doi: 10.1523/JNEUROSCI.0512-23.2023. Epub 2023 Sep 12.

Stimulus-Specific Prediction Error Neurons in Mouse Auditory Cortex

Nicholas J Audette et al.

Abstract

Comparing expectation with experience is an important neural computation performed throughout the brain and is a hallmark of predictive processing. Experiments that alter the sensory outcome of an animal's behavior reveal enhanced neural responses to unexpected self-generated stimuli, indicating that populations of neurons in sensory cortex may reflect prediction errors (PEs), mismatches between expectation and experience. However, enhanced neural responses to self-generated stimuli could also arise through nonpredictive mechanisms, such as the movement-based facilitation of a neuron's inherent sound responses. If sensory prediction error neurons exist in sensory cortex, it is unknown whether they manifest as general error responses, or respond with specificity to errors in distinct stimulus dimensions. To answer these questions, we trained mice of either sex to expect the outcome of a simple sound-generating behavior and recorded auditory cortex activity as mice heard either the expected sound or sounds that deviated from expectation in one of multiple distinct dimensions. Our data reveal that the auditory cortex learns to suppress responses to self-generated sounds along multiple acoustic dimensions simultaneously. We identify a distinct population of auditory cortex neurons that are not responsive to passive sounds or to the expected sound but that encode prediction errors. These prediction error neurons are abundant only in animals with a learned motor-sensory expectation, and encode one or two specific violations rather than a generic error signal. Together, these findings reveal that cortical predictions about self-generated sounds have specificity in multiple simultaneous dimensions and that cortical prediction error neurons encode specific violations from expectation.

SIGNIFICANCE STATEMENT Audette et al. record neural activity in the auditory cortex while mice perform a sound-generating forelimb movement and measure neural responses to sounds that violate an animal's expectation in different ways. They find that predictions about self-generated sounds are highly specific across multiple stimulus dimensions and that a population of typically non-sound-responsive neurons responds to sounds that violate an animal's expectation in a specific way. These results identify specific prediction error (PE) signals in the mouse auditory cortex and suggest that errors may be calculated early in sensory processing.

Keywords: behavior; cortex; expectation; hearing; mouse; prediction.


Figures

Figure 1.
Specific suppression of expected sounds across multiple acoustic dimensions. A, Schematic of head-fixed lever press training paradigm (left) and stimulus and reward timing for lever movements (right). Gray area indicates home position. B, Global average lever movement trace (black) and individual animal average lever movement traces (gray), with position measured as a fraction of the reward threshold. All movements were included, even those that did not reach the reward threshold. C, Histogram of global lever movement durations (black; mean 0.77 s, median 0.274 s) averaged across individual animal histograms (gray). D, Histogram of global intermovement intervals (black; mean 2.9 s, median 0.86 s) averaged across individual animal histograms (gray). E, Schematic of multiarray recording sessions in trained mice (left) and aggregate neural responses to expected and multiple unexpected sounds in the passive (darker) and movement-evoked (lighter) contexts (right). Of the 1016 regular-spiking neurons we recorded (N = 5 animals), a subset is analyzed for each sound type: those that respond to that sound in either context (p < 0.01, 0–60 ms after sound onset). Cell counts are listed below each PSTH. Color differences represent sound frequency, and the likelihood of each lever press producing a given sound type during the recording session is displayed as a black bar.
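As a point of reference, the responsiveness test described above (p < 0.01, spike counts 0–60 ms after sound onset compared with a pre-onset baseline) can be sketched roughly as below. This is not the authors' code: the Wilcoxon signed-rank test and the variable names are illustrative assumptions, and the paper's Materials and Methods give the actual statistic.

import numpy as np
from scipy.stats import wilcoxon

def is_sound_responsive(spike_times_per_trial, onset_times, alpha=0.01):
    """Flag a neuron as responsive if spike counts 0-60 ms after sound onset
    differ from the 60 ms pre-onset baseline across trials (test is assumed)."""
    pre, post = [], []
    for spikes, t0 in zip(spike_times_per_trial, onset_times):
        spikes = np.asarray(spikes)
        pre.append(np.sum((spikes >= t0 - 0.060) & (spikes < t0)))
        post.append(np.sum((spikes >= t0) & (spikes < t0 + 0.060)))
    pre, post = np.array(pre), np.array(post)
    if np.all(pre == post):  # signed-rank test is undefined when all differences are zero
        return False
    _, p = wilcoxon(post, pre)
    return p < alpha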
Figure 2.
Precise suppression of expected sound responses in individual neurons. A, Modulation (see Materials and Methods) of individual neurons, comparing responses to each tone type heard in the active and passive conditions. Negative values indicate weaker responses in the active condition, i.e., suppression. A one-way ANOVA detected differences among the groups (F statistic p = 2 × 10^-17), with Exp and Freq being significantly different from all other groups (Exp, p < 0.005; Freq, p < 0.005). Neuron values and inclusion criteria are the same as in Figure 1E. B, Identical experimental setup and analysis as in A, but performed in a subset of mice trained to perform the lever task in the absence of sound (F statistic p = 0.01). The expected sound was assigned as the sound heard on 90% of trials, although mice had no prior experience with the sound. C, Matrix (left) representing the absolute difference in neural modulation for stimuli heard in sound-trained (top) and silent-trained (right) mice. Comparison (bottom) of the absolute difference between the modulation of probe sounds and the expected sound (corresponding to the top row in the heatmaps) in sound-trained and silent-trained mice. Error bars represent standard error in each dimension. D, Average responses across trials of three individual neurons to each tone type, showing suppression that is specific to the expected sound at the individual neuron level. E, Confusion matrix showing how accurately sounds could be decoded from auditory cortex neural responses on individual trials. Matrix shows decoding performance averaged across four animals.
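The modulation index in A–C is defined in the paper's Materials and Methods; as a rough sketch only, the snippet below assumes the common (active - passive) / (active + passive) normalization, which is an assumption rather than the authors' stated formula.

import numpy as np

def modulation_index(active_rate, passive_rate, eps=1e-9):
    """Assumed (A - P) / (A + P) form; negative values indicate suppression
    of the response in the active (self-generated) condition."""
    a = np.asarray(active_rate, dtype=float)
    p = np.asarray(passive_rate, dtype=float)
    return (a - p) / (a + p + eps)

# Hypothetical example: 5 Hz to the expected self-generated sound vs. 20 Hz
# to the same sound heard passively gives a strongly negative modulation (-0.6).
print(modulation_index(5.0, 20.0))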
Figure 3.
Abundant prediction error neurons in mouse auditory cortex. A, Number of neurons responsive (p < 0.01) to a given sound in the active context (light), passive context (dark), or both (white). B, Example neuron depicting the identification of putative prediction error neurons, defined as neurons that respond to a given stimulus type in the active context, but not in the passive context, not at the time of the expected sound on omission trials, and not to the expected self-generated sound. The stimulus window of 0–60 ms after sound onset is compared with the 60 ms before sound onset. C, Number of neurons that fulfill our putative prediction error criteria for each unexpected trial type. D, Visual representation of each prediction error neuron's responsiveness (white) to task tones heard in the active condition (left), responsiveness in the passive condition (middle), and whether a neuron obeyed our prediction error criteria for a given stimulus (right; see B). To match our strict prediction error criteria, a probability value of 0.01 was used for actively heard unexpected sounds, while a cutoff of p = 0.1 was used for the expected, self-generated sound and all passive sounds. Rows with color represent the example neurons in E. E, Responses of two example neurons to sounds heard actively (top) and passively (bottom). Black PSTHs show significant responses using the p values described in A. Scale bar: 25 ms, 50 Sp/s. F, Histogram representing the number of passively heard stimuli to which each neuron responded (p < 0.1) for prediction error neurons (green) and non-prediction error neurons (black). This includes all task sounds heard passively as well as passively heard pure tones at half-octave intervals between 4 and 32 kHz. G, Histogram representing the neural response to passive tones, averaged across all passive sounds, including the half-octave-separated pure tones described in F. H, Quantification of the number of different stimuli for which a neuron signals a prediction error. I, Color-coded matrix showing the number of prediction error neurons that are shared across pairs of stimuli.
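The putative prediction error criteria in B and D reduce to a simple conjunction of response tests. The sketch below restates them with hypothetical per-condition p values for one neuron and one unexpected stimulus; it is an illustration, not the authors' analysis code.

def is_prediction_error_neuron(p_active_unexpected,
                               p_passive,
                               p_omission,
                               p_active_expected):
    """A neuron qualifies if it responds to the unexpected sound in the active
    context (strict p < 0.01) but shows no response passively, at the expected
    sound time on omission trials, or to the expected self-generated sound
    (lenient p = 0.1 cutoff, as stated in the legend)."""
    responds_to_unexpected = p_active_unexpected < 0.01
    unresponsive_otherwise = (p_passive > 0.1
                              and p_omission > 0.1
                              and p_active_expected > 0.1)
    return responds_to_unexpected and unresponsive_otherwise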
Figure 4.
Prediction error responses in auditory cortex are short-latency. A, Raster of an example neuron showing action potential timing following frequency probe sounds, with the first spike on a given trial (orange) used to calculate an average onset latency. B, Histogram of average onset latency on frequency probe trials for prediction error neurons (green), for all neurons responsive to the frequency probe (orange), and for neurons responsive to the frequency probe following passive presentation. Prediction error neuron latencies did not differ from general latencies in the active condition (p = 0.97) or from passive sound response latencies (p = 0.97, p = 0.87; KS tests).
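The first-spike onset-latency measure in A can be sketched as follows; the 60 ms search window and the handling of spike-free trials are illustrative assumptions.

import numpy as np

def mean_onset_latency(spike_times_per_trial, onset_times, window=0.060):
    """Average, across trials, the time from sound onset to the first spike
    within an assumed post-onset search window; trials with no spike are skipped."""
    latencies = []
    for spikes, t0 in zip(spike_times_per_trial, onset_times):
        spikes = np.sort(np.asarray(spikes))
        post = spikes[(spikes >= t0) & (spikes < t0 + window)]
        if post.size:
            latencies.append(post[0] - t0)
    return np.mean(latencies) if latencies else np.nan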
Figure 5.
Prediction error neurons reflect the violation of a learned expectation. A, Quantification of the number of putative prediction error neurons in trained animals and in an identical experiment and analysis in animals trained to make lever presses in silence. Each dot represents the fraction of neurons responsive in any context to a given sound in a recording session that met the criteria for prediction error neurons. B, Comparison between the number of prediction error neurons for a stimulus (as in A) and how "different" that stimulus was from the expected sound. Differences were quantified between neural responses to each probe sound and the expected sound in the passive condition (see Materials and Methods). Each dot represents one unexpected stimulus in a sound-trained animal (left; N = 4), and difference values were mean-normalized within each animal to enable comparison across animals. The linear regression is shown with shaded standard error; p values and correlation coefficients are listed. Right, identical analysis, but comparing the fraction of PE neurons for a stimulus with the absolute magnitude of an animal's population response to that stimulus in the passive condition. C, Same as B, but for mice trained on a silent version of the lever task (N = 3).
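A rough sketch of the panel B comparison, with hypothetical input numbers: the fraction of prediction error neurons per unexpected stimulus is regressed against the passive response difference from the expected sound, after mean-normalizing the difference values within each animal as described in the legend.

import numpy as np
from scipy.stats import linregress

def normalize_within_animal(values_by_animal):
    """Divide each animal's difference values by that animal's mean, then pool."""
    return np.concatenate([np.asarray(v, dtype=float) / np.mean(v)
                           for v in values_by_animal])

# Hypothetical values: per-stimulus difference from the expected sound, grouped
# by animal, and the matching fraction of prediction error neurons per stimulus.
diff_by_animal = [[0.8, 1.1, 1.4], [0.5, 0.9, 1.6]]
pe_fraction = np.array([0.02, 0.04, 0.06, 0.01, 0.03, 0.07])

fit = linregress(normalize_within_animal(diff_by_animal), pe_fraction)
print(fit.slope, fit.rvalue, fit.pvalue)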

