Letting structure emerge: connectionist and dynamical systems approaches to cognition

James L McClelland et al. Trends Cogn Sci. 2010 Aug;14(8):348-56.
doi: 10.1016/j.tics.2010.06.002. Epub 2010 Jul 2.

Abstract

Connectionist and dynamical systems approaches explain human thought, language and behavior in terms of the emergent consequences of a large number of simple noncognitive processes. We view the entities that serve as the basis for structured probabilistic approaches as abstractions that are occasionally useful but often misleading: they have no real basis in the actual processes that give rise to linguistic and cognitive abilities or to the development of these abilities. Although structured probabilistic approaches can be useful in determining what would be optimal under certain assumptions, we propose that connectionist, dynamical systems, and related approaches, which focus on explaining the mechanisms that give rise to cognition, will be essential in achieving a full understanding of cognition and development.


Figures

Figure 1
Top: The A not-B task. On the A trials, an experimenter repeatedly hides an object in one location (A), for example under a lid. The infant watches the hiding, a delay of several seconds is imposed, and then the hiding box is pushed close to the infant, who is allowed to reach to the hiding location and retrieve the object. This is repeated several times: hiding in location A, delay, infant retrieval of the object. On the critical B trial, the experimenter hides the object in a new, adjacent location (B), under a second lid. After the delay, the infant is allowed to reach. Bottom left: A DFT simulation of activation in the dynamic field on a B trial. Activation rises at the B location during the hiding event, but then, owing to the cooperativity in the field and memory for previous reaches, activation begins to rise at A during the delay and the start of the reach, inhibiting the activation at B and resulting in a simulated reach to A. Bottom right: A baby in a posture-shift A not-B task.
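For readers who want a concrete feel for the field dynamics the caption describes, below is a minimal Python sketch of a one-dimensional dynamic field with local excitation, broad inhibition, a resting level, and a weak memory trace at the A location. All sizes, kernel shapes, and parameter values are illustrative assumptions chosen for this sketch, not those of the published DFT model of the A not-B task; with suitable settings, the transient input at B decays during the delay while the memory-supported activation at A can come to dominate.

```python
import numpy as np

# Illustrative one-dimensional dynamic field (Amari-style) sketch.
# All parameters are assumptions for demonstration, not the published model's.

N = 101                       # number of field sites (reaching directions)
x = np.arange(N)
dt, tau = 1.0, 20.0           # Euler time step and field time constant
h = -5.0                      # resting level (field sits below threshold at rest)

def gaussian(center, amp, width=3.0):
    """Localized input or memory trace centered on one field site."""
    return amp * np.exp(-(x - center) ** 2 / (2 * width ** 2))

# Interaction kernel: local excitation minus broad (here, constant) inhibition.
d = x[:, None] - x[None, :]
W = 1.5 * np.exp(-d ** 2 / (2 * 3.0 ** 2)) - 0.5

u = np.full(N, h)             # field activation
memory = gaussian(30, 2.0)    # trace of repeated reaches to location A (site 30)

def step(u, inp):
    f = 1.0 / (1.0 + np.exp(-4.0 * u))          # sigmoid output of each site
    du = -u + h + inp + memory + (W @ f) / N    # field dynamics
    return u + (dt / tau) * du

# Hiding event at location B (site 70): transient input, then a delay.
for t in range(200):
    inp = gaussian(70, 6.0) if t < 50 else np.zeros(N)
    u = step(u, inp)

# Where is activation strongest after the delay? With settings of this kind,
# the peak can drift back toward A, mirroring the perseverative reach.
print("strongest site after delay:", int(x[np.argmax(u)]))
```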
Figure 2
Top left: The connectionist network used by Rogers and McClelland [20], first used by Rumelhart and Todd [70], to explore the emergence of structure from experience. The network is trained by presenting item-context input pairs (e.g. robin can) and then propagating activation forward (to the right) to activate units standing for possible completions of simple three-term propositions. Learning occurs by comparing the output to a pattern representing the valid completions (in this case, move/grow/fly), then adjusting connection weights throughout the network to reduce the discrepancy between the network’s output and the valid completions. Learning occurs gradually, affecting how different items are represented at the Representation layer and at the subsequent Hidden layer, where the representations are shaded by context, and produces progressive differentiation. Bottom left: At first the network treats all items similarly, as shown in the hierarchical clustering analysis of the patterns of activation at the Representation layer. As learning progresses over successive sweeps through the set of item-context-output training patterns, the network first differentiates the plants from the animals and later differentiates the different types of animals and the different types of plants. Upper right: The middle panel shows the similarity structure of the learned Representation-layer patterns in a different way for a larger set of items, while the flanking panels show how this similarity structure is reorganized in different contexts. Note that in the can context, the plants are all represented as similar, because they all do the same thing (they just grow). Bottom right: Naming response of the network when the input is ‘goat’ at different points in training. Note the transient tendency to activate ‘dog’ before the correct response ‘goat’ is acquired. In this instance, the network was trained in an environment where dogs were more frequent than any other type of animal. Before the dog is differentiated from other animal types, the network treats all animals the same, naming them all with the most common animal name, dog. As differentiation occurs, the correct name for the goat is finally learned. All panels reproduced with permission from [20].
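A compact way to see the training regime described above is a small feedforward model with an item input, a context input, a Representation layer, a context-shaded Hidden layer, and attribute outputs, trained by gradual error-driven weight adjustment. The PyTorch sketch below is only an illustration under assumed layer sizes and a single toy ‘robin can’ pattern with arbitrarily chosen attribute indices; it is not the corpus or the parameters used by Rogers and McClelland [20].

```python
import torch
import torch.nn as nn

# Minimal sketch of the item/context -> Representation -> Hidden -> attributes
# architecture. Layer sizes and the toy data are illustrative assumptions.

n_items, n_contexts, n_attrs = 8, 4, 32

class SemanticNet(nn.Module):
    def __init__(self, rep=8, hid=16):
        super().__init__()
        self.item_to_rep = nn.Linear(n_items, rep)          # Representation layer
        self.to_hidden = nn.Linear(rep + n_contexts, hid)   # context-shaded Hidden layer
        self.to_attrs = nn.Linear(hid, n_attrs)             # possible completions
        self.act = nn.Sigmoid()

    def forward(self, item, context):
        rep = self.act(self.item_to_rep(item))
        hid = self.act(self.to_hidden(torch.cat([rep, context], dim=-1)))
        return self.act(self.to_attrs(hid))

# One toy training pair: item 'robin' in the 'can' context, with target
# attributes move/grow/fly switched on (indices chosen arbitrarily here).
item = torch.zeros(1, n_items);       item[0, 0] = 1.0
context = torch.zeros(1, n_contexts); context[0, 1] = 1.0
target = torch.zeros(1, n_attrs);     target[0, [2, 5, 9]] = 1.0

net = SemanticNet()
opt = torch.optim.SGD(net.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

# Gradual learning: small weight changes after each pattern presentation,
# reducing the discrepancy between output and the valid completions.
for epoch in range(500):
    opt.zero_grad()
    loss = loss_fn(net(item, context), target)
    loss.backward()
    opt.step()
```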
Box Figure
Elman’s Simple Recurrent Network. Each rectangle represents a pool of simple processing units, and each dashed arrow represents a set of learnable connections from the units in one pool to the units in another. A stream of items is presented to the input layer of the network, one after another. For each item, the task is to predict the next item. The pattern on the hidden layer from processing the previous item is copied back to the context layer, thereby allowing context to influence the processing of the next incoming item. Reproduced with permission from [21].
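The predict-then-copy-back scheme in the caption can be sketched in the same way. The fragment below is a minimal Elman-style simple recurrent network written for illustration: one-hot items are presented one at a time, the hidden pattern from the previous item is copied (without gradient, as in the original copy-back scheme) into a context layer, and the output is a prediction of the next item. The vocabulary size and layer width are arbitrary assumptions.

```python
import torch
import torch.nn as nn

# Minimal Elman-style simple recurrent network for next-item prediction.
# Vocabulary size and hidden-layer width are illustrative assumptions.

vocab, hidden = 10, 20

class SRN(nn.Module):
    def __init__(self):
        super().__init__()
        self.in_and_context_to_hidden = nn.Linear(vocab + hidden, hidden)
        self.hidden_to_output = nn.Linear(hidden, vocab)

    def forward(self, sequence):
        context = torch.zeros(1, hidden)           # context layer starts empty
        predictions = []
        for item in sequence:                      # items presented one at a time
            x = torch.cat([item, context], dim=-1)
            h = torch.sigmoid(self.in_and_context_to_hidden(x))
            predictions.append(self.hidden_to_output(h))
            context = h.detach()                   # copy hidden pattern back to context
        return torch.stack(predictions)

# A toy sequence of one-hot items; the target at each step is the next item.
seq = [torch.eye(vocab)[i].unsqueeze(0) for i in [1, 4, 2, 7]]
net = SRN()
logits = net(seq[:-1])
loss = nn.CrossEntropyLoss()(logits.squeeze(1), torch.tensor([4, 2, 7]))
```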


References

    1. Johnson SB. Emergence: The connected lives of ants, brains, cities, and software. New York: Scribner’s; 2001.
    2. Griffiths TL, Chater N, Kemp C, Perfors A, Tenenbaum J. Probabilistic models of cognition: Exploring the laws of thought. Trends in Cognitive Sciences. 2010;XX:xxx–xxx.
    3. Marr D. Vision. San Francisco, CA: W. H. Freeman; 1982.
    4. Chomsky N. Aspects of the theory of syntax. Cambridge, MA: MIT Press; 1965.
    5. Sternberg D, McClelland JL. When should we expect indirect effects in human contingency learning? In: Taatgen NA, van Rijn H, editors. Proceedings of the 31st Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society; 2009. pp. 206–211.