R Soc Open Sci. 2025 Mar 26;12(3):241907.
doi: 10.1098/rsos.241907. eCollection 2025 Mar.

An associative account of collective learning


Matthew Gildea et al. R Soc Open Sci. 2025.

Abstract

Associative learning is an important adaptive mechanism that is well conserved across a broad range of species. Although it is typically studied in isolated animals, associative learning also occurs in the presence of conspecifics in nature. While many social aspects of individual learning have received considerable attention, the study of collective learning, the acquisition of knowledge in groups of animals through shared experience, has a much shorter history. Consequently, the conditions under which collective learning emerges and the mechanisms that underlie such emergence remain largely unexplored. Here, we develop a parsimonious model of collective learning based on the complementary integration of associative learning and collective intelligence. The model assumes (i) a simple associative learning rule, based on the Rescorla-Wagner model, in which the actions of conspecifics serve as cues, and (ii) a horse-race action selection rule. Simulations of this model show no benefit of group training over individual training in a simple discrimination task (A+/B-). However, a group-training advantage emerges after the discrimination is reversed (A-/B+). Model predictions suggest that, in a dynamic environment, tracking the actions of conspecifics that are solving the same problem can yield superior learning for individual animals and enhanced performance for the group.
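
As a concrete illustration of the learning rule described above, the following minimal Python sketch applies a Rescorla-Wagner-style update in which conspecific choices are treated as additional cues on the chosen option. The function name, the dictionary representation of cue strengths and the use of the summed strength of all cues on the option in the error term are illustrative assumptions, not the paper's own code; the paper's rule is its equation (3.1).

    def rw_update(V, cues_present, reinforced, beta_r, beta_e, lam=1.0):
        """Rescorla-Wagner-style update over the cues on the chosen option.

        V:             dict mapping cue name -> associative strength
        cues_present:  non-social cue of the chosen option plus one social cue
                       per conspecific that chose the same option (assumption)
        reinforced:    True if the chosen option was baited with reward
        beta_r/beta_e: learning rates for reinforced / non-reinforced trials
        lam:           asymptote of associative strength (lambda)
        """
        total = sum(V[cue] for cue in cues_present)   # summed strength of cues present
        target = lam if reinforced else 0.0           # drive toward lambda or toward zero
        beta = beta_r if reinforced else beta_e
        for cue in cues_present:
            V[cue] += beta * (target - total)         # shared prediction error
        return V

On a reinforced trial all cues present on the chosen option, social and non-social alike, are driven toward lambda; on a non-reinforced trial they are driven toward zero, matching the qualitative description in figures 1 and 2.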

Keywords: associative learning; collective intelligence; collective learning; groups; mathematical modelling; reversal learning.


Conflict of interest statement

We declare we have no competing interests.

Figures

Figure 1.
Schematic representation of the conspecific cue model (CCM) of collective learning. Left panel: three rats are presented with two options, square versus star. Centre panel: following an action-selection rule (equation (4.1)), two subjects, rats A and C, choose the square and one subject, rat B, chooses the star. Right panel: only the square was baited with reward. A learning rule (equation (3.1)) strengthens the association between the square and the reward for rat A (and C), and between the choices of rat C (and A) and the reward; for rat B, the rule weakens the association between the star and the reward and leaves the association between rats A or C and the reward unchanged.
Figure 2.
Algorithmic representation of the CCM. Choice component: actions are selected from n options with probability p(j). Selection of option j depends on the probability of non-associative exploration (m), the sum of the corresponding non-social and social cues [ΣV(j)], a scale constant (c) and a binary control variable that indicates whether j has already been rejected [q(j)]. The selection process is repeated until a single option is selected and then executed. Learning component: once all agents execute an option, for each agent that executed option i, the associative strengths of the non-social and social cues [V(Ni) and V(Si)] of that option increase toward λ if it was reinforced; otherwise, they decrease toward zero. Agents are then removed from every option, so every ΣV(j) becomes simply ΣV(Nj), and the binary control variable q is reset, before advancing to the next trial.
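
The choice component described in this caption can be sketched as a loop over candidate options. The probability expression below is only a placeholder for equation (4.1), whose exact form is given in the paper; the control flow (sample each non-rejected option, mark rejections via q(j), repeat until exactly one option remains selected) follows the caption, while the function name and the 0.95 cap are illustrative assumptions.

    import random

    def choose_option(V_sum, m, c, rng=random):
        """One trial of the choice component (illustrative reconstruction).

        V_sum: summed non-social plus social cue strengths per option, one ΣV(j) per option
        m:     probability of non-associative exploration
        c:     scale constant
        """
        n = len(V_sum)
        q = [0] * n                                   # q(j) = 1 once option j is rejected
        while True:
            selected = []
            for j in range(n):
                if q[j]:
                    continue                          # skip options already rejected
                p_j = min(0.95, m + c * V_sum[j])     # placeholder for equation (4.1)
                if rng.random() < p_j:
                    selected.append(j)                # option j stays in the race
                else:
                    q[j] = 1                          # option j is rejected
            if len(selected) == 1:
                return selected[0]                    # a single option remains: execute it
            if all(q):                                # everything rejected: reset q and retry
                q = [0] * n

After all agents have chosen, the learning component applies an update like the one sketched after the abstract to V(Ni) and V(Si) of each executed option, and the social cues and q(j) are reset before the next trial, as the caption describes.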
Figure 3.
Simulations of the CCM under various scenarios. Curves trace the mean of 1000 runs of 10 simulated agents, based on the algorithm in figure 2. The dashed vertical line separates two training phases: cue discrimination (acquisition) and its reversal. The dotted horizontal line indicates chance performance. Each curve was generated using different βsoc-r and βsoc-e values (βsoc-e = βsoc-r/2). Other model parameters were fixed: starting associative strengths (V) of social and non-social cues = 0; βnon-r (learning rate for reinforced selection of non-social cues) = 0.001; βnon-e (learning rate for extinguished selection of non-social cues) = βnon-r/2 = 0.0005; λ = 1; m = 0.05; c = 0.2. Note that, for the βsoc-r = 0.001 condition (dark red curve), the learning rates for social and non-social cues are the same. The two βsoc-r = 0.000 curves differ only in the number of acquisition trials (600 versus 1000). The numbers in brackets indicate cue reliability, [p(r|corr), p(r|incr)]. See text for more details. See electronic supplementary material for simulation code.
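
For convenience, the fixed parameter values listed in this caption can be gathered in one place; the layout and variable names below are ours, and only the numerical values are taken from the caption.

    # Parameter values reported in the figure 3 caption (variable names are illustrative).
    FIG3_DEFAULTS = {
        "n_runs": 1000,        # simulation runs averaged per curve
        "n_agents": 10,        # simulated agents per run
        "V_init": 0.0,         # starting associative strength, social and non-social cues
        "beta_non_r": 0.001,   # learning rate, reinforced selection of non-social cues
        "beta_non_e": 0.0005,  # = beta_non_r / 2, extinguished selection of non-social cues
        "lam": 1.0,            # lambda, asymptote of associative strength
        "m": 0.05,             # probability of non-associative exploration
        "c": 0.2,              # scale constant in the action-selection rule
    }

    def social_rates(beta_soc_r):
        """Each curve varies beta_soc_r; beta_soc_e is tied to it as in the caption."""
        return {"beta_soc_r": beta_soc_r, "beta_soc_e": beta_soc_r / 2}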
Figure 4.
Simulation of CCM with fixed and random dummies. Left panels illustrate two alternative training conditions for an individual agent. In the Fixed Dummies condition (top), there are five redundant non-social cues in each option (shapes and dummy rats). In the Random Dummies condition (bottom), two non-social cues are always placed in the same option (shapes), whereas the other nine non-social cues are placed randomly with equal probability across options on each trial (dummy rats). The right panel shows simulated performance under these conditions. Simulations were conducted as in figure 3, left panel, except where indicated otherwise. The βsoc-r = 0.001 and 0.010 conditions are included again for reference. In the Fixed Dummies condition, βnon-r = 0.005, βnon-e = 0.0025, λ = 0.2 and c = 0.2 (see equation (4.2) and text for rationale). In the Random Dummies condition, default non-social parameters were reinstated. In both conditions, performance of a single agent was tracked over 10 000 runs of the simulation.
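
A minimal sketch of how the two cue arrangements could be generated, assuming two options and reading the caption's counts as one shape plus four dummy rats per option in the Fixed Dummies condition, and one fixed shape per option plus nine randomly placed dummy rats in the Random Dummies condition; the function and cue names are illustrative, not the paper's code.

    import random

    def place_cues(condition, n_options=2, rng=random):
        """Illustrative cue placement for the two figure 4 conditions.

        Returns one list of cue names per option for a single trial.
        """
        if condition == "fixed":
            # Five redundant non-social cues in each option (shape + dummy rats).
            return [[f"shape_{j}"] + [f"dummy_rat_{j}_{k}" for k in range(4)]
                    for j in range(n_options)]
        # Random Dummies: shapes never move, dummy rats are re-placed each trial.
        options = [[f"shape_{j}"] for j in range(n_options)]
        for d in range(9):
            options[rng.randrange(n_options)].append(f"dummy_rat_{d}")
        return options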
Figure 5.
Individual performance and learning following reversal training. Curves trace the mean individual proportion of correct choices [p(correct), left y-axis] and associative strengths of non-social cues [V(correct) and V(incorrect), right y-axis], obtained with various social learning rates (βsoc-r) after reversal training of 10 agents. To avoid differences in performance at the onset of the reversal phase, the acquisition phase was terminated once mean p(correct) exceeded 0.75 over a moving window of 50 trials. Other simulation parameters were the same as in figure 3, left panel.
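
The stopping criterion for the acquisition phase can be expressed as a small helper; this is an illustrative reconstruction of the moving-window rule stated in the caption, not the paper's code.

    from collections import deque

    def acquisition_finished(correct_history, window=50, criterion=0.75):
        """True once mean p(correct) over the last `window` trials exceeds the criterion."""
        recent = deque(correct_history, maxlen=window)   # keep only the last `window` outcomes
        return len(recent) == window and sum(recent) / window > criterion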

