Animals (Basel). 2024 May 26;14(11):1577.
doi: 10.3390/ani14111577.

Dynamic Nonlinear Spatial Integrations on Encoding Contrasting Stimuli of Tectal Neurons


Shuman Huang et al. Animals (Basel). 2024.

Abstract

Animals detect targets using a variety of visual cues, and the visual salience of these cues determines which environmental features receive priority attention and further processing. Surround modulation plays a crucial role in generating visual salience and has been studied extensively in avian tectal neurons. Recent work has reported that the suppression of tectal neurons induced by a motion contrasting stimulus is stronger than that induced by a luminance contrasting stimulus; however, the underlying mechanism remains poorly understood. In this study, we built a computational model (called Generalized Linear-Dynamic Modulation, GL_DM) that incorporates independent nonlinear tuning mechanisms for excitatory and inhibitory inputs in order to describe how tectal neurons encode contrasting stimuli. The results showed that: (1) the dynamic nonlinear integration structure substantially improved the accuracy of the predicted responses to contrasting stimuli (the goodness of fit of the two models differed significantly; p < 0.001, paired t-test), verifying the nonlinear processing performed by tectal neurons; and (2) the modulation difference between luminance and motion contrasting stimuli emerged from the responses predicted by the full model but not from those predicted with only the excitatory synaptic input (spatial luminance: 89 ± 2.8% (GL_DM) vs. 87 ± 2.1% (GL_DMexc); motion contrasting stimuli: 87 ± 1.7% (GL_DM) vs. 83 ± 2.2% (GL_DMexc)). These results validate the proposed model and further suggest the role of dynamic nonlinear spatial integrations in contextual visual information processing, especially in spatial integration, which is important for object detection performed by birds.
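
As a rough illustration of the idea behind GL_DM (not the authors' implementation; the kernels, gains, rectification, and exponential output nonlinearity below are assumptions made only for this sketch), a minimal Python model might filter the stimulus with separate excitatory and inhibitory pathways, pass each through its own nonlinearity, and map the net drive to a Poisson firing rate:

    import numpy as np

    rng = np.random.default_rng(0)

    def rectify(x, gain=1.0):
        # Half-wave rectification standing in for a pathway-specific nonlinearity.
        return gain * np.maximum(x, 0.0)

    def gl_dm_rate(stimulus, k_exc, k_inh, dt=0.01):
        # Excitation and inhibition are tuned independently (own kernel and gain),
        # then combined before an exponential spiking nonlinearity.
        g_exc = rectify(stimulus @ k_exc, gain=1.2)   # excitatory synaptic drive
        g_inh = rectify(stimulus @ k_inh, gain=0.8)   # inhibitory synaptic drive
        return np.exp(g_exc - g_inh) * dt             # expected spike count per bin

    # Toy stimulus: 500 time bins x 40 spatial positions; narrow excitatory center,
    # broader inhibitory surround (illustrative values only).
    stim = rng.standard_normal((500, 40))
    x = np.arange(40) - 20
    k_exc = np.exp(-0.5 * (x / 3.0) ** 2)
    k_inh = 0.5 * np.exp(-0.5 * (x / 9.0) ** 2)
    spikes = rng.poisson(gl_dm_rate(stim, k_exc, k_inh))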

Keywords: contrasting stimuli; dynamic nonlinear spatial integrations; optic tectum; surround modulation.

Conflict of interest statement

The authors declare that they have no competing interests.

Figures

Figure 1
Schematic of the generalized linear model combined with the difference of Gaussians model (DoG), hereinafter referred to as the GLM.
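
For reference, a difference-of-Gaussians spatial kernel of the kind named in this caption can be written in a few lines; the widths and surround weight below are illustrative assumptions, not the fitted values from the paper.

    import numpy as np

    def dog_kernel(size=41, sigma_center=2.0, sigma_surround=6.0, w_surround=0.8):
        # Narrow excitatory center minus a broader, down-weighted surround.
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        r2 = xx ** 2 + yy ** 2
        center = np.exp(-r2 / (2 * sigma_center ** 2)) / (2 * np.pi * sigma_center ** 2)
        surround = np.exp(-r2 / (2 * sigma_surround ** 2)) / (2 * np.pi * sigma_surround ** 2)
        return center - w_surround * surround

    # Convolving each stimulus frame with dog_kernel() gives the linear spatial
    # stage of the GLM; its output is then passed to the spiking nonlinearity.
    kernel = dog_kernel()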
Figure 2
(a) Schematic of the luminance contrasting encoding sub-model. The contrasting encoding model (referred to as GL_DM) mainly consists of two parts: the photoelectric transduction model and the conductance-based encoding model (CBEM). (b) Schematic of the motion direction contrasting encoding sub-model. The arrow direction indicates the direction of movement; the green and red arrows indicate opposite directions of movement.
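
A minimal sketch of a conductance-based encoding stage of the kind the caption labels CBEM is given below; the reversal potentials, leak conductance, time step, and softplus output nonlinearity are assumptions for illustration, not the paper's parameters.

    import numpy as np

    def cbem_rate(g_exc, g_inh, dt=0.001, g_leak=1.0,
                  e_exc=1.0, e_inh=-1.0, e_leak=0.0):
        # Integrate a leaky membrane driven by excitatory and inhibitory
        # conductances, then map voltage to a non-negative firing rate.
        v = np.empty_like(g_exc)
        v_now = e_leak
        for t in range(len(g_exc)):
            dv = (g_exc[t] * (e_exc - v_now)
                  + g_inh[t] * (e_inh - v_now)
                  + g_leak * (e_leak - v_now))
            v_now += dt * dv
            v[t] = v_now
        return np.log1p(np.exp(10.0 * v))  # softplus output nonlinearity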
Figure 3
Neuronal responses predicted by the computational model. (a) Luminance contrasting stimulus at three contrast levels; the red circle in each subfigure indicates the receptive field area for the example recording site. (b) Motion contrasting stimuli at two contrast levels; the red circle in each subfigure indicates the receptive field area for the example recording site, and the arrow direction indicates the direction of movement. (c) Peri-stimulus time histogram (PSTH) for the luminance contrasting stimulus from the raw data, the GL_DM, and the GLM. (d) PSTH for the motion contrasting stimuli from the raw data, the GL_DM, and the GLM. (e) Statistics of the mean firing rate for each stimulus shown in (a). (f) Statistics of the mean firing rate for each stimulus shown in (b).
Figure 4
Residuals and goodness of fit for the computational models. (a) Residual plots for the GL_DM and the GLM under the luminance contrasting stimulus; (b) residual plots for the GL_DM and the GLM under the motion contrasting stimuli; (c) goodness-of-fit plots for the GL_DM and the GLM under the luminance contrasting stimulus; (d) goodness of fit for the GL_DM and the GLM under the motion contrasting stimuli.
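
One way to obtain residual traces and a goodness-of-fit value like those plotted here is a fraction-of-variance-explained (R²-style) comparison between the recorded and predicted PSTHs; whether the paper uses exactly this metric is an assumption.

    import numpy as np

    def residuals_and_goodness(psth_observed, psth_predicted):
        # Residual trace and fraction of PSTH variance explained by the model.
        res = psth_observed - psth_predicted
        ss_res = np.sum(res ** 2)
        ss_tot = np.sum((psth_observed - psth_observed.mean()) ** 2)
        return res, 1.0 - ss_res / ss_tot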
Figure 5
Comparison of the luminance modulation index between static and motion modes. (a) Luminance modulation index between static and motion modes calculated from the GL_DM; (b) luminance modulation index between static and motion modes calculated from the GLM.
Figure 6
Probing the suppression modulation under spatial luminance and motion direction contrasting stimuli. (a) The excitatory synaptic input, the inhibitory synaptic input, and the corresponding total synaptic input for the luminance contrasting stimulus at three contrast levels. (b) As in (a), but for the motion contrasting stimulus at two contrast levels. The red circle in each subfigure indicates the receptive field area for the example recording site, and the arrow direction indicates the direction of movement. (c) Statistics of the prediction performance of GL_DMexc and GL_DM for the different levels of contrasting stimuli. The horizontal line indicates the median of each group, and the whiskers indicate the lowest and highest points within 1.5× the interquartile range of the lower or upper quartile, respectively. "***" indicates a significant difference between the two groups (Wilcoxon signed-rank test, p < 0.001).
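
The population comparison in panel (c) can be reproduced in outline with SciPy's Wilcoxon signed-rank test; the per-site goodness-of-fit values below are simulated placeholders, not the paper's data.

    import numpy as np
    from scipy.stats import wilcoxon

    rng = np.random.default_rng(1)
    fit_full = rng.uniform(0.80, 0.92, size=30)                 # GL_DM, per recording site (placeholder)
    fit_exc_only = fit_full - rng.uniform(0.01, 0.06, size=30)  # GL_DMexc (placeholder)
    stat, p_value = wilcoxon(fit_full, fit_exc_only)
    print(f"Wilcoxon signed-rank: W={stat:.1f}, p={p_value:.3g}")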
