Review

Simple Recurrent Networks are Interactive

James S Magnuson et al. Psychon Bull Rev. 2025 Jun;32(3):1032-1040. doi: 10.3758/s13423-024-02608-y. Epub 2024 Nov 13.

Abstract

There is disagreement among cognitive scientists as to whether a key computational framework - the Simple Recurrent Network (SRN; Elman, Machine Learning, 7(2), 195-225, 1991; Elman, Cognitive Science, 14(2), 179-211, 1990) - is a feedforward system. SRNs have been essential tools in advancing theories of learning, development, and processing in cognitive science for more than three decades. If SRNs were feedforward systems, there would be pervasive theoretical implications: Anything an SRN can do would therefore be explainable without interaction (feedback). However, despite claims that SRNs (and by extension recurrent neural networks more generally) are feedforward (Norris, 1993), this is not the case. Feedforward networks by definition are acyclic graphs - they contain no loops. SRNs contain loops - from hidden units back to hidden units with a time delay - and are therefore cyclic graphs. As we demonstrate, they are interactive in the sense normally implied for networks with feedback connections between layers: In an SRN, bottom-up inputs are inextricably mixed with previous model-internal computations. Inputs are transmitted to hidden units by multiplying them by input-to-hidden weights. However, hidden units simultaneously receive their own previous activations as input via hidden-to-hidden connections with a one-step time delay (typically via context units). These are added to the input-to-hidden values, and the sums are transformed by an activation function. Thus, bottom-up inputs are mixed with the products of potentially many preceding transformations of inputs and model-internal states. We discuss theoretical implications through a key example from psycholinguistics where the status of SRNs as feedforward or interactive has crucial ramifications.
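To make the update rule described above concrete, the following is a minimal sketch (not the authors' implementation) of one SRN time step, assuming hypothetical layer sizes, random weights, and a tanh activation purely for illustration. The recurrent term W_hh @ h_prev is the loop the abstract refers to: each new hidden state sums the weighted current input with the previous hidden state before the activation function, so the result mixes the bottom-up input with every preceding internal computation.

# Minimal SRN (Elman network) step sketch; dimensions and weights are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8                                  # hypothetical layer sizes

W_ih = rng.normal(scale=0.5, size=(n_hid, n_in))    # input-to-hidden weights
W_hh = rng.normal(scale=0.5, size=(n_hid, n_hid))   # hidden-to-hidden (context) weights
b = np.zeros(n_hid)

def srn_step(x_t, h_prev):
    """One SRN time step: the weighted input is added to the previous
    hidden activations (the context units, one-step time delay), and the
    sum is passed through the activation function."""
    return np.tanh(W_ih @ x_t + W_hh @ h_prev + b)

# Run a short input sequence; each hidden state depends on all earlier inputs
# and internal states, which is why the unrolled graph is cyclic, not feedforward.
h = np.zeros(n_hid)
for t, x in enumerate(rng.normal(size=(3, n_in))):
    h = srn_step(x, h)
    print(f"t={t}: hidden state reflects inputs 0..{t}")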

Keywords: Interaction; Neural networks.

Conflict of interest statement

Declarations. Conflicts of interest/Competing interests: None. Ethics approval: Not applicable. Consent to participate: Not applicable. Consent for publication: Not applicable.
