Review

Behav Brain Res. 2010 Jan 20;206(2):157-65. doi: 10.1016/j.bbr.2009.08.031. Epub 2009 Aug 29.

Structure learning in action

Daniel A Braun et al.

Abstract

'Learning to learn' phenomena have been widely investigated in cognition, perception, and more recently in action. During concept learning tasks, for example, it has been suggested that characteristic features are abstracted from a set of examples with the consequence that learning of similar tasks is facilitated, a process termed 'learning to learn'. From a computational point of view such an extraction of invariants can be regarded as learning of an underlying structure. Here we review the evidence for structure learning as a 'learning to learn' mechanism, especially in sensorimotor control where the motor system has to adapt to variable environments. We review studies demonstrating that common features of variable environments are extracted during sensorimotor learning and exploited for efficient adaptation in novel tasks. We conclude that structure learning plays a fundamental role in skill learning and may underlie the unsurpassed flexibility and adaptability of the motor system.


Figures

Fig. 1
Schematic diagram of structural learning. (A) The task space is defined by two parameters, but for the given task only certain parameter combinations occur (black line). This relationship is indicated by the curved structure which can be parameterized by a one-dimensional meta-parameter μ. However, a parametric learner that is ignorant of the structure has to explore the full two-dimensional space when re-adjusting the parameter settings. (B) A structural learner, in contrast, takes the relationship between the parameters into account. By adjusting only the meta-parameter μ the learning problem is effectively one-dimensional. Reprinted with permission from .
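The dimensionality argument in Fig. 1 can be made concrete with a small numerical sketch. The example below is illustrative and not from the paper: the valid parameter combinations are assumed to lie on a circle, so a parametric learner searching the full 2D space by grid search needs far more evaluations than a structural learner searching only along the 1D meta-parameter μ.

```python
import numpy as np

# Hypothetical task: the true parameter setting lies on a one-dimensional
# structure (here a circle) embedded in a two-dimensional task space.
def structure(mu):
    return np.array([np.cos(mu), np.sin(mu)])

target = structure(1.3)                      # unknown to the learner
error = lambda p: np.linalg.norm(p - target)

# Parametric learner: exhaustive grid search over the full 2D space.
grid = np.linspace(-1, 1, 50)
candidates_2d = [np.array([a, b]) for a in grid for b in grid]
best_2d = min(candidates_2d, key=error)

# Structural learner: search only along the 1D meta-parameter mu.
mus = np.linspace(0, 2 * np.pi, 50)
best_1d = min((structure(m) for m in mus), key=error)

print(len(candidates_2d), len(mus))          # 2500 vs. 50 evaluations
print(error(best_2d) < 0.05, error(best_1d) < 0.05)
```

Both learners end up close to the target, but the structural learner needs only a one-dimensional sweep, which is the point of the meta-parameter μ in the figure.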
Fig. 2
Example of a causal Bayesian network. (A) Four possible structures. The arrows represent the causal structure of the three variables pressure, barometer and storm, which are represented by nodes. Structural learning consists of determining which of the possible structures is the best model of the data. In this case the readings of the barometer and the probability of a storm occurring are correlated, but independent when conditioned on the variable pressure, suggesting structure 2. (B) Parametric learning involves specifying the probability distribution that quantifies the strength of the causal connections given a particular structure. In this case there is a 0.01 probability of the barometer being broken and giving a false reading.
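The conditional-independence signature that identifies structure 2 can be checked numerically. The sketch below uses illustrative probabilities (not taken from the paper) for a common-cause network in which pressure drives both the barometer reading and the storm: marginally, barometer and storm are correlated, but conditioned on pressure they become independent.

```python
import itertools

# Toy version of structure 2 in Fig. 2: pressure is a common cause of both
# the barometer reading and the storm. All numbers are illustrative.
P_low = 0.3                                   # P(pressure = low)
P_baro_low = {True: 0.9, False: 0.1}          # P(barometer low | pressure low?)
P_storm = {True: 0.6, False: 0.05}            # P(storm | pressure low?)

def joint(pressure_low, baro_low, storm):
    p = P_low if pressure_low else 1 - P_low
    p *= P_baro_low[pressure_low] if baro_low else 1 - P_baro_low[pressure_low]
    p *= P_storm[pressure_low] if storm else 1 - P_storm[pressure_low]
    return p

def prob(**fixed):
    """Marginal probability of an assignment over a subset of variables."""
    total = 0.0
    for pl, bl, st in itertools.product([True, False], repeat=3):
        assign = {"pressure_low": pl, "baro_low": bl, "storm": st}
        if all(assign[k] == v for k, v in fixed.items()):
            total += joint(pl, bl, st)
    return total

# Marginally, barometer and storm are correlated ...
p_storm = prob(storm=True)
p_storm_given_baro = prob(storm=True, baro_low=True) / prob(baro_low=True)
print(round(p_storm, 3), round(p_storm_given_baro, 3))   # 0.215 vs. 0.487

# ... but conditioned on pressure they are independent.
p1 = prob(storm=True, pressure_low=True) / prob(pressure_low=True)
p2 = (prob(storm=True, baro_low=True, pressure_low=True)
      / prob(baro_low=True, pressure_low=True))
print(round(p1, 3), round(p2, 3))                        # both 0.6
```

A structure-learning algorithm applied to data from this system would use exactly this pattern of dependencies to prefer the common-cause structure over the alternatives in panel A.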
Fig. 3
Structural learning of visuomotor rotations. (A) Learning curves for a block of +60° rotation trials performed by a group that had experienced random rotations before (R-learner, red), a control group that had only experienced movements with veridical feedback (blue), and a group that had experienced random linear transforms and ±60° rotations (green). The rotation group shows strong facilitation. (B) Learning curves for a subsequent block of −60° rotation trials performed by the same groups. The interference effect seen in the control group is strongly reduced in the rotation group. (C) Learning curves for a subsequent block of +60° rotation trials performed by the same groups. Again the random rotation group shows a performance advantage in the first 10 trials. Shown are the median error over all subjects and the corresponding interquartile range. Reprinted with permission from (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of the article.).
Fig. 4
Structural learning of 3D rotations. (A) Angular error in probe blocks of horizontal (red) and vertical (blue) 45° rotations for a group that had experienced random horizontal rotations before. There is a clear facilitation for learning the horizontal rotation. The black line indicates performance in the block of null-rotation (washout) trials preceding the probe block. (B) Performance error in the same probe blocks for a group that had experienced random vertical rotations before. The facilitation pattern is reversed. (C and D) Movement variance shortly before trial end for both kinds of probe blocks. The variance in the task-irrelevant direction, perpendicular to the displacement direction, is significantly reduced for isostructural probe blocks (ellipses show standard deviation). This suggests that subjects explored less outside the structure they had learned during the random rotation blocks. (E and F) Circular histograms of initial movement adaptation from the 1st trial of the probe block to the 2nd trial. Subjects responded to probe blocks from the same structure in a consistent way, correcting towards the required target. In contrast, for probe trials from a different structure, subjects also showed components of learning in the direction of the previously learned structure. Reprinted with permission from (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of the article.).
Fig. 5
Evolution of within-trial adaptive behaviour for random rotation trials. (A) Mean hand trajectories for ±90° rotation trials in the first 10 batches, averaged over trials and subjects (each batch consisted of 200 trials, approximately 5% of which were ±90° rotation trials). The −90° rotation trials have been mirrored about the y axis to allow averaging. Dark blue colours indicate early batches, green colours intermediate batches, and red colours later batches. (B) The minimum distance to the target averaged over the same trials as in A (error bars indicate standard deviation over all trajectories and all subjects). This shows that subjects' performance improves over batches. (C) Mean speed profiles for ±90° rotations of the same batches. In early batches, movements are comparatively slow and online adaptation is reflected in a second peak of the speed profile, which is initially noisy and unstructured. (D) The magnitude of the second peak increases over batches (same format as B). (E) Standard deviation profiles for ±90° rotation trajectories computed for each trial batch. (F) Standard deviation of the last 500 ms of movement. Over consecutive batches the variability is reduced in the second part of the movement. Reprinted with permission from (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of the article.).
Fig. 6
Structural learning in Bayesian networks. (A) The nodes of the Bayesian network represent random variables such as sensory inputs Rj and motor outputs Uk. The arrows indicate causal dependencies that are usually expressed via parameterized probability density functions. Learning the parameters of the full joint probability distribution in this network requires substantial computation. (B) In this network there is a hidden variable μ that corresponds to what we have called a 'meta-parameter'. The joint probability distribution over all variables splits up into a product of conditional distributions with respect to μ. This substantially reduces the dimensionality of the parameter space. In our experiments μ corresponds, for instance, to internal variables specific to rotations. Reprinted with permission from .
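The dimensionality reduction described in panel B can be quantified with a simple parameter count. As an illustration (assuming, hypothetically, n binary observed variables and a hidden meta-parameter μ taking k discrete values), an unrestricted joint distribution needs exponentially many parameters, while the factored model P(μ) Π P(x_i | μ) needs only linearly many:

```python
# Parameter counting for the argument in Fig. 6 (illustrative assumptions:
# n binary observed variables, hidden meta-parameter mu with k values).
def full_joint_params(n):
    # An unrestricted joint over n binary variables.
    return 2 ** n - 1

def factored_params(n, k):
    # P(mu) needs k - 1 parameters; each conditional P(x_i | mu) needs k
    # parameters (one Bernoulli probability per value of mu).
    return (k - 1) + n * k

print(full_joint_params(20))        # 1048575
print(factored_params(20, 3))       # 62
```

With 20 variables and a three-valued μ, the factored model has 62 free parameters instead of over a million, which is the sense in which conditioning on the meta-parameter makes learning tractable.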

References

    1. Ashby W.R. Design for a brain: the origin of adaptive behavior. 2nd ed. London: Chapman & Hall; 1960.
    2. Pearl J. Probabilistic reasoning in intelligent systems: networks of plausible inference. San Mateo, CA: Morgan Kaufmann; 1988.
    3. Boyen X., Friedman N., Koller D. Discovering the hidden structure of complex dynamic systems. In: Proceedings of the 15th annual conference on uncertainty in artificial intelligence (UAI-99). Morgan Kaufmann; 1999. pp. 91–100.
    4. Åström K.J., Wittenmark B. Adaptive control. 2nd ed. Reading, MA: Addison-Wesley; 1995.
    5. Friedman N. The Bayesian structural EM algorithm. In: Proceedings of the 14th annual conference on uncertainty in artificial intelligence (UAI-98). Morgan Kaufmann; 1998. pp. 129–139.
