Entropy (Basel). 2022 Nov 10;24(11):1635. doi: 10.3390/e24111635.

Rosenblatt's First Theorem and Frugality of Deep Learning

Alexander Kirdin et al. Entropy (Basel). 2022.

Abstract

Rosenblatt's first theorem about the omnipotence of shallow networks states that elementary perceptrons can solve any classification problem if there are no discrepancies in the training set. Minsky and Papert considered elementary perceptrons with restrictions on the neural inputs: a bounded number of connections or a relatively small diameter of the receptive field for each neuron in the hidden layer. They proved that under these constraints, an elementary perceptron cannot solve some problems, such as the connectivity of input images or the parity of pixels in them. In this note, we demonstrated Rosenblatt's first theorem at work, showed how an elementary perceptron can solve a version of the travel maze problem, and analysed the complexity of that solution. We also constructed a deep network algorithm for the same problem; it is much more efficient. The shallow network uses an exponentially large number of neurons in the hidden layer (Rosenblatt's A-elements), whereas for the deep network, second-order polynomial complexity is sufficient. We demonstrated that, for the same complex problem, the deep network can be much smaller, and we revealed a heuristic behind this effect.
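As a rough, hypothetical illustration of the scaling contrast claimed above (not the construction from the paper itself): if the shallow perceptron needs on the order of n^L A-elements (say, one per possible path in a maze with n guests and L stages) while the deep network needs on the order of L·n² units, the gap grows very quickly with L. A minimal Python sketch under these assumed counts:

```python
# Back-of-the-envelope comparison of hidden-unit counts for the travel maze
# problem with n guests and L stages. The formulas below are ASSUMPTIONS chosen
# only to match the abstract's qualitative claims (exponential for the shallow
# perceptron, second-order polynomial for the deep network); they are not the
# exact counts derived in the paper.

def shallow_units(n: int, L: int) -> int:
    """Assumed exponential count of A-elements: roughly one per path, ~n**L."""
    return n ** L

def deep_units(n: int, L: int) -> int:
    """Assumed second-order polynomial count: ~n**2 units at each of L stages."""
    return L * n ** 2

if __name__ == "__main__":
    # Cases matching Figures 5 and 6, plus a larger instance to show the gap.
    for n, L in [(2, 3), (3, 2), (10, 10)]:
        print(f"n={n}, L={L}: shallow ~{shallow_units(n, L)}, "
              f"deep ~{deep_units(n, L)}")
```

For n = 10, L = 10 this gives ~10¹⁰ shallow units against ~10³ deep units, which is the kind of frugality the title refers to.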

Keywords: classification; complexity; deep network; elementary perceptron; shallow network; travel maze problem.

Conflict of interest statement

The authors declare no conflict of interest.

Figures

Figure 1
Rosenblatt's elementary perceptron (redrawn from Rosenblatt's book [1]).

Figure 2
Have we chosen the right delicacies (right) for our guests (left)? (a) A prototype travel maze problem. (b) A simplified form of the problem with piecewise linear paths for further formal description (Section 2). Complexity depends on the number of guests and the number of links in a path.

Figure 3
Game diagram with L stages (a formalized and simplified version of the travel maze problem).

Figure 4
A shallow (fully connected) neural network for the travel maze problem (Figure 3). It differs from the classical elementary perceptron (Figure 1) by having n² output neurons instead of one, and it can be considered a union of n² elementary perceptrons with a joint retina and a shared hidden layer of A-elements.

Figure 5
The case n = 2, L = 3.

Figure 6
The case n = 3, L = 2.

Figure 7
A deep neural network diagram for the simplified travel maze problem.

References

    1. Rosenblatt F. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Spartan Books; Washington, DC, USA: 1962.
    2. Venkatesh B., Anuradha J. A review of feature selection and its methods. Cybern. Inf. Technol. 2019;19:3–26. doi: 10.2478/cait-2019-0001.
    3. Al-Tashi Q., Abdulkadir S.J., Rais H.M., Mirjalili S., Alhussian H. Approaches to multi-objective feature selection: A systematic literature review. IEEE Access. 2020;8:125076–125096. doi: 10.1109/ACCESS.2020.3007291.
    4. Rong M., Gong D., Gao X. Feature selection and its use in big data: Challenges, methods, and trends. IEEE Access. 2019;7:19709–19725. doi: 10.1109/ACCESS.2019.2894366.
    5. Minsky M., Papert S. Perceptrons. MIT Press; Cambridge, MA, USA: 1988.