Curr Biol. 2022 Nov 21;32(22):4925-4940.e6. doi: 10.1016/j.cub.2022.09.064. Epub 2022 Oct 24.

Precise movement-based predictions in the mouse auditory cortex


Nicholas J Audette et al. Curr Biol.

Abstract

Many of the sensations experienced by an organism are caused by its own actions, and accurately anticipating both the sensory features and timing of self-generated stimuli is crucial to a variety of behaviors. In the auditory cortex, neural responses to self-generated sounds exhibit frequency-specific suppression, suggesting that movement-based predictions may be implemented early in sensory processing. However, it remains unknown whether this modulation results from a behaviorally specific and temporally precise prediction, and whether corresponding expectation signals are present locally in the auditory cortex. To address these questions, we trained mice to expect the precise acoustic outcome of a forelimb movement using a closed-loop sound-generating lever. Dense neuronal recordings in the auditory cortex revealed suppression of responses to self-generated sounds that was specific to the expected acoustic features, to a precise position within the movement, and to the movement that was coupled to sound during training. Prediction-based suppression was concentrated in L2/3 and L5, where deviations from expectation also recruited a population of prediction-error neurons that was otherwise unresponsive. Recording in the absence of sound revealed abundant movement signals in deep layers that were biased toward neurons tuned to the expected sound, as well as expectation signals that were present throughout the cortex and peaked at the time of expected auditory feedback. Together, these findings identify distinct populations of auditory cortical neurons with movement, expectation, and error signals consistent with a learned internal model linking an action to its specific acoustic outcome.

Keywords: cortex; expectation; hearing; learning; mouse; movement; prediction.


Conflict of interest statement

Declaration of interests The authors declare no competing interests.

Figures

Figure 1. Frequency-specific suppression of self-generated sounds in the mouse auditory cortex.
(A) Schematic of multi-array recordings during the head-fixed lever press paradigm. (B) Stimulus and reward timing for lever movements (top). Gray box indicates home position. Example individual mouse average lever trajectory (bottom, black) and individual trials (green), measured in absolute lever displacement and position relative to reward threshold. (C) Histogram of reach distances normalized to reward threshold across training for an example mouse (top) and all mice (bottom). (D) Sound delay (top), measured as time of sound from movement onset, and lever velocity (bottom), measured as the speed from movement onset to movement peak. (E) Movement, sound, and reward coupling, measured as the probability of hearing a sound during a movement and the probability of reward given a sound-generating movement (N=10 mice). (F) Schematic (top) of deviant auditory feedback, and measurement of the lever trajectory RMSE difference between the 30 trials prior to the onset of deviants and after the onset of deviants, regardless of trial type. Quantification (bottom) of lever trajectory RMSE differences before and after the onset of deviant trials, compared to the average RMSE difference between groups comprised of the same trials but assigned random identities (mean of 1,000 permutations). (G) Schematic of the frequency-probe session. (H) Heat maps (left) of all neural responses (n=928, N=10) to the lever-associated sound (top, cyan) and frequency probe sound (bottom, orange) in the active condition (light, left) and passive condition (dark, right). PSTHs (right) of neural responses across tone type and movement condition. (I) Scatter of neural responses to the expected or probe tone in the active vs. passive condition, with correlation value. (J) Schematic of the radial modulation index calculation using the scattered data from (I) (top), with individual neuron examples (bottom) showing the neural response to the expected-frequency tone. The modulation index (MI) value shown for each neuron is linked by color to a corresponding datapoint on the scatter. (K) Modulation index of individual neurons comparing active and passive responses to the expected and probe frequencies. (L) Connected points display the average neural modulation of expected and probe frequencies for an individual animal. See also Figures S1 and S2 and Video S1.
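The exact radial modulation index formula is defined in the paper's STAR Methods, not reproduced here. As a hedged sketch of one plausible angle-based convention (the function name, normalization, and sign convention below are assumptions, not the authors' method): each neuron's (passive, active) response pair is treated as a point in the scatter of (I), and the angle of that point relative to the axes is rescaled so that equal active and passive responses map to 0, pure suppression to -1, and pure enhancement to +1.

```python
import math

def radial_modulation_index(active: float, passive: float) -> float:
    """Map a neuron's (passive, active) response pair to an angle-based
    modulation index in [-1, 1].

    -1 -> response present only in the passive condition (full suppression)
     0 -> equal active and passive responses (no modulation)
    +1 -> response present only in the active condition (full enhancement)

    This is an illustrative convention; the paper's exact formula is in
    its STAR Methods.
    """
    # Angle of the point (passive, active) above the passive axis,
    # in [0, pi/2] for non-negative responses.
    theta = math.atan2(active, passive)
    # Rescale so theta = pi/4 (unity line) maps to 0.
    return 4.0 * theta / math.pi - 1.0

# Equal responses in both conditions -> no modulation
print(radial_modulation_index(2.0, 2.0))
```

A radial (angle-based) index has the convenient property that it depends only on the ratio of active to passive responses, not their absolute magnitudes, so strongly and weakly responsive neurons are compared on the same scale.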
Figure 2. Frequency-specific suppression is present for self-generated sounds that are important to ongoing behavior.
(A) Schematic where mice trained to expect a lever-generated sound at reward threshold experience auditory feedback shifted by one octave on 50% of trials, randomly interleaved. (B) Movement, sound, and reward coupling, measured as in Figure 1E (N=7 mice). (C) Average lever trajectory ± standard error across mice for interleaved expected-sound trials and frequency-deviant trials (top). Quantification (bottom) of the change in movement duration and maximum lever position across interleaved expected and probe trials. (D) Global average lick rate ± standard error across mice for expected and frequency-deviant trials (top). Quantification (bottom) of the change in lick onset and lick rate for expected and probe trials. (E) Heat maps (left) of all neural responses (n=770, N=7) to the lever-associated sound (top, cyan) and frequency probe sound (bottom, orange) in the active condition (light, left) and passive condition (dark, right). PSTHs (right) of neural responses to the lever-associated and probe tones across conditions. (F) Modulation index of individual neurons comparing active and passive responses to the expected and probe frequencies. (G) Connected points display the average neural modulation of expected and probe frequencies for an individual animal. (H) Quantification of the average difference between the modulation of expected and probe frequencies for individual animals trained on the standard behavior (black, data from Figure 1J) or the reward threshold behavior (gray). See also Figure S1.
Figure 3. Frequency-specific suppression is confined to a specific position within the sound-associated movement.
(A) Schematic with mice experiencing background tones in addition to lever-generated sounds early in movement (N=4). (B) Average modulation, relative to passive tones, of neural responses to lever-associated (cyan) and probe (orange) tones heard during movement, or during 200 ms bins prior to lever movement onset, starting at the time listed. (C) Schematic of the within-movement probe experiment (bottom), where expected and unexpected frequency tones are played at three different positions (N=5). Arrows (above) show the time difference between sound positions, averaged across animals. (D) Raster (bottom) and PSTH (top) of the neural response to the expected-frequency tone heard at three positions for an example neuron. (E) Average modulation of neural responses to the expected (cyan) and probe (orange) frequency tones at three different lever positions. Neurons are included at each position if they have a significant response to the corresponding tone at that position or in the passive condition. (F) Schematic of the movement-specificity experiment where mouse licking triggers closed-loop sounds, randomly interleaved (N=6). (G) Average modulation of expected and probe sounds in these animals when heard at the expected lever position (left) or when triggered by licking (right). (H) Average modulation of the expected and frequency probe sounds heard in animals trained with a sound produced by licking (N=2). See also Figure S3.
Figure 4. Movement signals in the auditory cortex display an expectation for the timing and frequency of self-generated sounds.
(A) Schematic of omission trials on the standard lever task (N=5). (B) Average population PSTH of all neurons responsive at any of 6 positions throughout movement (see STAR Methods), aligned to the time of expected sound. (C) Schematic showing the 6 positions used to identify movement-responsive neurons. (D) PSTHs of movement-responsive neurons aligned to each of the positions shown with the corresponding color in (C). (E) Z-scored response of a movement-responsive neuron over a raster plot of spike times across trials, aligned to the time of expected sound onset. (F) Histogram of movement-signal onset times on omission trials for neurons in standard-trained mice (gray) and silent-trained mice (blue). (G) Same as (F), but for the peak time of the movement signal. (H) Intersection of peak time and strength of movement signals for standard-trained (gray) and silent-trained (blue) mice, with histogram of movement signal strength (right). (I) Venn diagram showing intersections of movement responses (gray) and sound responses to the expected (cyan) and probe (orange) frequency tones across the omission dataset population. (J) PSTH (top) showing the movement activity on omission trials of neurons responsive to the expected sound (cyan), the probe sound (orange), or both sounds (dark cyan). X-axis notch indicates the time of expected sound. Quantification (bottom) of neural response rate in the 100 ms prior to the expected tone time for each population. (K,L) Same as (I,J), but for neurons in the standard dataset that lacked omission trials. See also Figure S4.
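Several of the panels above (and in the following figures) are built from event-aligned, trial-averaged PSTHs z-scored against a pre-event baseline. As a rough illustration only, a minimal sketch of that analysis, where the function name, bin size, windows, and baseline period are assumptions rather than the paper's STAR Methods parameters, could look like:

```python
import numpy as np

def psth_zscore(spike_times, align_times, bin_ms=10.0,
                window_ms=(-500.0, 500.0), baseline_ms=(-500.0, -100.0)):
    """Trial-averaged PSTH aligned to an event (e.g. expected sound onset),
    z-scored against a pre-event baseline window. All times are in ms.
    Illustrative parameters, not the paper's.
    """
    # Bin edges spanning the analysis window around each alignment event.
    edges = np.arange(window_ms[0], window_ms[1] + bin_ms, bin_ms)
    counts = np.zeros(len(edges) - 1)
    for t0 in align_times:
        # Spike times relative to this trial's alignment event.
        rel = np.asarray(spike_times) - t0
        counts += np.histogram(rel, bins=edges)[0]
    # Convert summed counts to firing rate (spikes/s) per bin.
    rate = counts / (len(align_times) * bin_ms / 1000.0)
    centers = edges[:-1] + bin_ms / 2.0
    # Z-score against the pre-event baseline bins.
    base = rate[(centers >= baseline_ms[0]) & (centers < baseline_ms[1])]
    sd = base.std()
    return centers, (rate - base.mean()) / (sd if sd > 0 else 1.0)
```

Z-scoring against the pre-movement baseline puts neurons with very different firing rates on a common scale, which is what makes population averages like the one in (B) interpretable.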
Figure 5. Laminar pattern of movement, sound, and expectation signals.
(A) Schematic of laminar and depth assignment based on current-source-density features (see Figure S5A and STAR Methods). (B) Moving-average histogram showing the number (gray) and fraction (black) of neurons responsive during lever movement when sound is omitted (see Figure 4A,B). Dotted line shows the fraction of neurons responsive prior to sound onset in the core dataset. (C) Onset time of movement-responsive neurons plotted against cortical depth, with moving-average line (averaged across 50 µm). (D) Same as (C), but for movement response peak. (E) Intersection of peak time and z-score strength of movement signals by cortical layer. (F,G) Moving average (across 50 µm) of individual neuron responses to the expected frequency sound (F, cyan) or probe frequency sound (G, orange) heard in the active (light) or passive (dark) context, by neural depth. (H) Moving average of neural modulation to expected and probe sounds across cortical depth (overlapping 50 µm bins). (I,J) Modulation of neural responses in the active and passive conditions for expected (cyan) or probe (orange) sounds in individual layers (I), and corresponding cumulative distribution histograms of modulation by layer (J). See also Figure S5.
Figure 6. Divergent processing of expected and unexpected self-generated sounds across individual auditory cortex neurons.
(A-D) Schematics and activity of sample neurons exemplifying cells responsive (p<0.05) to all sounds across all conditions (A), responsive to both sounds only during movement (B), responsive to a specific frequency in both contexts (C), and responsive only to a specific frequency during movement (D). Raster plots (bottom right) show action potentials across trials for expected (cyan) and probe (orange) frequencies heard in the active (light) or passive (dark) condition, and PSTHs (top right) show the average neural response across trials. (E,F) Fraction of neurons with the described response pattern (i.e., 'Specific' or 'Frequency') relative to the total number of neurons responsive to the expected (E, n=377) or unexpected (F, n=530) tone.
Figure 7. Movement, expectation, sound, and error signals are distributed across distinct functional groups.
(A) Schematic (left) where individual neurons are characterized by their responses to different task variables and then clustered (core dataset, N=10; see STAR Methods). Silhouette test (top right) showing clustering quality for different cluster numbers (k-values), and elbow test (bottom right) showing the clustering error by cluster number. (B) Visualization of cluster groups (k=7) by plotting neurons in a low-dimensional projection of feature space generated by the UMAP algorithm (see STAR Methods), color-coded by cluster identity. (C) Neuron responses sorted by cluster group, shown as z-scored activity over time for each sound type, and as response p-values following sound onset, or prior to sound onset for movement signals. (D) Functional characterization of select cluster groups. From left to right: PSTHs of average neural responses to expected (cyan) and probe frequency (orange) sounds in the active (light) and passive (dark) conditions (note the different y-axis scales); quantification of average response strength across all neurons for each sound type and for movement signals; quantification of neural modulation to expected and probe frequency sounds (neurons were included if they have a significant response to the given tone in either the active or passive condition); and, at right, quantification of neuron counts by depth (top, black) and fraction of all neurons in each layer (bottom, blue). See also Figure S6.

