2023 Jan 23;2023:7037124. doi: 10.1155/2023/7037124. eCollection 2023.

An Extension Network of Dendritic Neurons


Qianyi Peng et al. Comput Intell Neurosci.

Abstract

Deep learning (DL) has achieved breakthrough successes in various tasks, owing to its layer-by-layer information processing and sufficient model complexity. However, DL suffers from both redundant model complexity and low interpretability, which stem mainly from its oversimplified basic McCulloch-Pitts neuron unit. A widely recognized, biologically plausible dendritic neuron model (DNM) has demonstrated its effectiveness in alleviating these issues, but it can only solve binary classification tasks, which significantly limits its applicability. In this study, a novel extended network based on the dendritic structure is proposed, enabling it to solve multiclass classification problems. In addition, an efficient error-back-propagation learning algorithm is derived for the first time. Extensive experiments on ten datasets, including a real-world quality-of-web-service (QWS) application, demonstrate the effectiveness and superiority of the proposed method in comparison with nine other state-of-the-art classifiers. The results suggest that the proposed learning algorithm is competent and reliable in terms of classification performance and stability and has a notable advantage on small-scale imbalanced data. Additionally, aspects of the network structure constrained by scale are examined.
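The architecture described above can be illustrated with a minimal sketch. This is not the authors' implementation; it follows the standard DNM formulation (synaptic sigmoid, multiplicative dendritic branches, summing membrane, sigmoid soma), and the multiclass extension is assumed here to be one dendritic neuron per class with a softmax over the soma outputs. All parameter names (`k`, `ks`, `theta`) and values are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class DendriticNeuron:
    """One dendritic neuron: synapses -> dendrites (product) -> membrane (sum) -> soma."""

    def __init__(self, n_inputs, n_dendrites, k=5.0, ks=5.0, theta=0.5, rng=None):
        rng = rng or np.random.default_rng(0)
        self.w = rng.uniform(-1, 1, (n_dendrites, n_inputs))  # synaptic weights
        self.q = rng.uniform(-1, 1, (n_dendrites, n_inputs))  # synaptic thresholds
        self.k, self.ks, self.theta = k, ks, theta

    def forward(self, x):
        # Synaptic layer: elementwise sigmoid(k * (w*x - q)) per synapse
        Y = sigmoid(self.k * (self.w * x - self.q))
        # Dendritic layer: multiplicative interaction along each branch
        Z = np.prod(Y, axis=1)
        # Membrane layer: sum the branch outputs
        V = np.sum(Z)
        # Soma: final sigmoid squashing
        return sigmoid(self.ks * (V - self.theta))

def softmax(v):
    e = np.exp(v - np.max(v))
    return e / e.sum()

# Assumed multiclass extension: one dendritic neuron per class, softmax over somas
neurons = [DendriticNeuron(4, 3, rng=np.random.default_rng(c)) for c in range(3)]
x = np.array([0.2, 0.7, 0.1, 0.9])
probs = softmax(np.array([n.forward(x) for n in neurons]))
```

Because every stage is differentiable, error back-propagation through the product and sigmoid layers is mechanical (e.g., via any autodiff framework), which is what makes a gradient-based learning rule for the extended network feasible.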


Conflict of interest statement

The authors declare that they have no conflicts of interest.

Figures

Figure 1
(a) The function of a biological neuron differs completely depending on the shape of its dendrites and the locations of its synapses. (b) McCulloch-Pitts neuron model: no representation of dendrite morphology or dendritic interactions. (c) Single dendritic neuron model: a faithful representation of dendrite morphology and dendrites, but limited to binary classification.
Figure 2
(a) The network is disordered without QoS rules. (b) The network is in order with QoS rules.
Figure 3
Figure 3
Multiple dendritic neural networks: applied to comprehensive applications based on the dendritic structure.
Figure 4
The structure of the proposed MDNN. The framed rectangle represents the structure of a single neuron.
Figure 5
Figure 5
The relationships among the layers.
Figure 6
Figure 6
Comparison of receiver operating characteristic curves on the QWS dataset for the (a) platinum, (b) gold, (c) silver, and (d) bronze classes.
Figure 7
Figure 7
Changes in dendritic structure for randomly selected samples before and after network training on the QWS dataset. The Z-axis represents the neuron; the X-axis and Y-axis represent attributes and hidden layers, respectively.
Figure 8
Figure 8
(a) The final dendritic states of the MDNN on the QWS dataset. Circle 0 and Circle 1 stand for constant-0 and constant-1 connections, respectively; the black-filled square and circle stand for inhibitory and excitatory states, respectively. (b) Logic circuit realization of the MDNN on the QWS dataset, where λ = qj,i,m/wj,i,m.
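The four synapse states in the caption can be recovered from the trained parameters. A plausible sketch, assuming the common DNM pruning convention with inputs normalized to [0, 1] and synapse output sigmoid(k·(w·x − q)): the output crosses 0.5 at x = λ = q/w, so a λ outside [0, 1] makes the synapse effectively constant, and the sign of w decides excitatory versus inhibitory. The mapping below is an assumption based on that convention, not the paper's exact rule.

```python
def synapse_state(w, q):
    """Classify one trained synapse by the sign of w and the ratio
    lambda = q / w (inputs assumed normalized to [0, 1]).

    - 'constant-1': output ~1 for all inputs (droppable from the branch product)
    - 'constant-0': output ~0 for all inputs (the whole branch is pruned)
    - 'excitatory' / 'inhibitory': a direct or inverted comparator at x = lambda
    """
    lam = q / w
    if w > 0:
        if lam < 0:
            return "constant-1"   # w*x - q > 0 over the whole input range
        if lam > 1:
            return "constant-0"   # w*x - q < 0 over the whole input range
        return "excitatory"       # threshold lies inside [0, 1]
    else:
        if lam < 0:
            return "constant-0"   # w*(x - lam) < 0 for x in [0, 1]
        if lam > 1:
            return "constant-1"   # w*(x - lam) > 0 for x in [0, 1]
        return "inhibitory"       # inverted comparator at x = lambda
```

Replacing each synapse with its state is what turns the trained network into the comparator-and-AND-gate logic circuit of panel (b): constant-1 synapses vanish from the branch product, and a single constant-0 synapse removes its branch entirely.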
