PLoS One. 2022 Aug 30;17(8):e0273608.
doi: 10.1371/journal.pone.0273608. eCollection 2022.

On the evolutionary language game in structured and adaptive populations

Kaloyan Danovski et al. PLoS One. 2022.

Abstract

We propose an evolutionary model for the emergence of a shared linguistic convention in a population of agents whose social structure is modelled by complex networks. Through agent-based simulations, we show a process of convergence towards a common language and explore how the topology of the underlying networks affects its dynamics. We find that small-world effects speed up convergence, but observe no effect of topology on the communicative efficiency of common languages. We further explore differences in agent learning, distinguishing scenarios in which new agents learn from their parents (vertical transmission) from scenarios in which they learn from their neighbors (oblique transmission), and find that vertical transmission results in faster convergence and generally higher communicability. Optimal languages can be formed when parental learning is dominant but a small amount of neighbor learning is included. Finally, we illustrate an exclusion effect leading to core-periphery networks in an adaptive-networks setting, in which agents attempt to reconnect towards better communicators in the population.
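The payoff structure of such a language game can be sketched in NumPy in the style of classic evolutionary language games (the matrix representation, normalization scheme, and the convention Fmax = n are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_language(n=5, rng=rng):
    """A language as a speaking matrix P: P[i, j] is the probability
    of using signal j to refer to object i (rows sum to 1)."""
    P = rng.random((n, n))
    return P / P.sum(axis=1, keepdims=True)

def listening_matrix(P):
    """Listening matrix Q: Q[j, i] is the probability of interpreting
    signal j as object i (column-normalize P, then transpose)."""
    return (P / P.sum(axis=0, keepdims=True)).T

def payoff(PA, PB):
    """Symmetric communicative payoff between agents A and B:
    F = 0.5 * sum_ij (P_A[i,j] * Q_B[j,i] + P_B[i,j] * Q_A[j,i])."""
    QA, QB = listening_matrix(PA), listening_matrix(PB)
    return 0.5 * (np.sum(PA * QB.T) + np.sum(PB * QA.T))

# A perfect shared language (one unambiguous signal per object)
# attains the maximum payoff F_max = n:
perfect = np.eye(5)
assert np.isclose(payoff(perfect, perfect), 5.0)
```

Two independently drawn random languages typically score well below Fmax; convergence in the simulations corresponds to the population climbing towards this maximum.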


Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1
Fig 1. Illustration of the convergence dynamics towards a common language.
Each node represents a single agent, and is colored (a) based on the agent’s payoff, with a lighter color implying higher payoff, and (b) based on agents’ languages, with each color representing a distinct language. In the initial generation, all agents are assigned different, randomly generated languages (b1) that are not well-suited for collective communication (a1). Correspondingly, payoffs are similar and low. As the simulation progresses, some languages are adopted by multiple agents (b2), and all languages become more alike, yielding higher payoffs (a2). By the end, all agents adopt the same language (b3), and the payoff of communication is the maximum possible given that language (a3). (Colors between (a) and (b) are not related).
Fig 2
Fig 2. Example of evolutionary dynamics for various realizations of the Monte Carlo simulation and their average over 30 runs.
The average payoff FN is shown for both individual runs (blue lines) and their average (orange line). This example is for N = 400 run on a scale-free network, with Fmax = 5 and tmax = 2 × 106.
Fig 3
Fig 3. Differences in mean convergence time to a common language tconv on different network topologies (left, plot) and average shortest path length L for all networks (right, table).
Convergence times are roughly correlated with average shortest path lengths, with the exception of even-sized lattices. Results are for N = 500 and bars indicate standard errors.
Fig 4
Fig 4. Scaling of convergence time tconv (left) and network properties (right, average clustering C and average shortest path length L) for small-world networks, generated using the Watts-Strogatz model [44].
Average clustering C is defined as the fraction of possible triangles through a node that actually exist, averaged over all nodes. Average shortest path length L is defined as the length of the shortest path connecting any two nodes on the network, averaged over all node pairs. The convergence times of ring graphs and random networks are given for comparison, showing that small-world graphs approach the behavior of random networks as p increases, as expected. This is most likely a result of shorter average paths, which decrease sharply with an increase in p, while average clustering changes much more slowly over the same range. Results are for N = 400. Convergence times are averaged over 30 simulation runs each. Network properties are averaged over 50 network realizations.
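The two network statistics in this caption can be computed directly with networkx; a short sketch (graph size and seed are arbitrary choices for illustration, smaller than the paper's N = 400):

```python
import networkx as nx

n, k = 100, 4  # illustrative size and ring degree

for p in (0.0, 0.1, 0.5):
    # connected_watts_strogatz_graph retries until the rewired graph is connected
    G = nx.connected_watts_strogatz_graph(n, k, p, seed=1)
    C = nx.average_clustering(G)            # fraction of closed triangles per node
    L = nx.average_shortest_path_length(G)  # mean pairwise graph distance
    print(f"p={p:.1f}  C={C:.3f}  L={L:.2f}")
```

At p = 0 the ring lattice with k = 4 has C = 0.5 and long paths; small rewiring probabilities collapse L while C stays high, which is the small-world regime the caption refers to.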
Fig 5
Fig 5. Differences in final payoffs of languages Fconv after convergence on different network topologies.
There are no significant differences in final payoffs for different network topologies, except for even-sized lattices. Results are for N = 500 and bars indicate standard errors.
Fig 6
Fig 6. Demonstration of a gridlock pattern on 2D regular lattices.
The pattern can either occur as two languages in a checkered pattern on the lattice (left), or as one dominant language distributed in a pattern, and multiple different languages in between (middle). Adding a single edge between any two nodes (right) disturbs the pattern and leads to a convergence similar to that of odd-sized lattices. A lattice with static boundaries is shown here for visualization purposes—periodic boundaries were used in simulations.
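The checkered two-language gridlock requires a proper two-coloring of the lattice, i.e. the periodic grid must be bipartite; even-sized periodic lattices are, odd-sized ones are not (they contain odd cycles). A quick networkx check of this graph-theoretic fact (this is our illustration, not the paper's code):

```python
import networkx as nx

even = nx.grid_2d_graph(4, 4, periodic=True)
odd = nx.grid_2d_graph(5, 5, periodic=True)

# A perfect checkerboard of two languages is a proper 2-coloring.
print(nx.is_bipartite(even))  # True: gridlock can form
print(nx.is_bipartite(odd))   # False: no perfect checkerboard exists
```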
Fig 7
Fig 7. Scaling of convergence time tconv with population size N on different networks.
Sharper increases in tconv correspond to larger average path lengths, although high clustering could also have an effect on ring graphs. Bars indicate standard errors.
Fig 8
Fig 8. The effect of neighbor influence δ on the language dynamics of the model.
Larger δ results in slower convergence and less optimal languages, except for the range δ ∈ (0.1, 0.2), where Fconv is maximized. Results are for N = 400, averaged over 24 runs each.
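One simple way to interpolate between vertical and oblique transmission is to blend the parent's speaking matrix with the neighborhood average and renormalize (an illustrative scheme of ours; the paper's learning rule may differ in detail):

```python
import numpy as np

def learn(parent, neighbors, delta):
    """Child language: weight (1 - delta) on the parent's speaking matrix
    (vertical transmission) and delta on the neighbors' mean (oblique)."""
    blend = (1 - delta) * parent + delta * np.mean(neighbors, axis=0)
    return blend / blend.sum(axis=1, keepdims=True)
```

With delta = 0 the child copies the parent exactly; the figure suggests that a small admixture of neighbor learning, around delta in (0.1, 0.2), yields the best final payoffs.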
Fig 9
Fig 9. Effects of different rewiring rules and rewire probability λ on average payoffs FN.
We show (a) uniform disconnection and uniform reconnection, as a base case; (b) fitness-inverse disconnection and uniform reconnection; (c) uniform disconnection and fitness-proportional reconnection; and (d) fitness-inverse disconnection and fitness-proportional reconnection (see the previous section for definitions of these rewiring rules). Three values of λ are shown for each configuration of rewiring rules: 0.1, 0.5, and 0.9. The x-axis is normalized by the number of reproduction events; simulation complexity and limits on computational resources did not allow us to simulate all cases for an equal number of reproduction steps. Results are for random networks and N = 400, with neighbor influence δ = 0.5, averaged over 16 runs.
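The fitness-dependent disconnect/reconnect rules amount to weighted sampling over neighbors and candidates; a hedged sketch of case (d) above (function names and the payoff array are ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

def pick(nodes, weights, rng=rng):
    """Choose one node with probability proportional to the given weights."""
    w = np.asarray(weights, dtype=float)
    return int(rng.choice(nodes, p=w / w.sum()))

def rewire(agent_neighbors, candidates, fitness, rng=rng):
    """Fitness-inverse disconnection + fitness-proportional reconnection:
    drop a low-payoff neighbor, attach to a high-payoff non-neighbor."""
    old = pick(agent_neighbors, 1.0 / fitness[agent_neighbors], rng)
    new = pick(candidates, fitness[candidates], rng)
    return old, new
```

Under this rule, high-payoff agents accumulate connections while low-payoff agents are progressively excluded, which is consistent with the core-periphery structure illustrated in Fig 10.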
Fig 10
Fig 10. Illustration of core-periphery structure as a result of reconnecting with fitness-proportional probability.
The central cluster has a degree distribution identical to that of a random network.

Cited by

  • Accelerating language emergence by functional pressures.
    Vithanage K, Wijesinghe R, Xavier A, Tissera D, Jayasena S, Fernando S. PLoS One. 2023 Dec 14;18(12):e0295748. doi: 10.1371/journal.pone.0295748. PMID: 38096195. Free PMC article.

References

    1. Fitch WT. The Evolution of Language. Cambridge: Cambridge University Press; 2010.
    2. Steels L. The Synthetic Modeling of Language Origins. Evolution of Communication. 1997;1(1):1–34. doi: 10.1075/eoc.1.1.02ste
    3. Kretzschmar J. Language and Complex Systems. Cambridge: Cambridge University Press; 2015.
    4. Dediu D, de Boer B. Language Evolution Needs Its Own Journal. Journal of Language Evolution. 2016;1(1):1–6. doi: 10.1093/jole/lzv001
    5. Patriarca M, Heinsalu E, Leonard JL. Languages in Space and Time: Models and Methods from Complex Systems Theory. 1st ed. Cambridge University Press; 2020.
