Review

Neuroevolution insights into biological neural computation

Risto Miikkulainen

Science. 2025 Jan 2;387(6735):eadp7478. doi: 10.1126/science.adp7478. Epub 2025 Feb 14.

Abstract

This article reviews existing work and future opportunities in neuroevolution, an area of machine learning in which evolutionary optimization methods such as genetic algorithms are used to construct neural networks to achieve desired behavior. The article takes a neuroscience perspective, identifying where neuroevolution can lead to insights about the structure, function, and developmental and evolutionary origins of biological neural circuitry that can be studied in further neuroscience experiments. It proposes optimization under environmental constraints as a unifying theme and suggests the evolution of language as a grand challenge whose time may have come.


Figures

Fig. 1. A general framework for neuroevolution.
Neuroevolution is a method for optimizing the design of neural networks through a biologically inspired, population-based search. The process starts with a population of neural networks, encoded, e.g., as a set of weights in a fixed network topology, concatenated into a string, and initialized randomly. Each encoding is decoded into a network, which is then evaluated in the task to estimate its fitness, i.e., how well it performs. The encodings of networks that perform well become parents for the next generation: They are mutated and recombined with other good encodings to form offspring networks, which replace those that performed poorly in the original population. Some of these offspring are likely to combine good parts of both parents and therefore perform better than either parent. This process repeats until networks are eventually created that solve the task. Note that gradient information is not necessary; only high-level fitness information is needed. Neuroevolution is thus a population-based search that combines random exploration with the discovery and reuse of building blocks, resulting in network designs that perform well in the desired task. It can therefore be used to evaluate hypotheses not only about what a circuit does and how it does it, but also about why that particular circuitry exists, what the alternatives are, and how the circuits could possibly be repaired.
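To make the loop above concrete, here is a minimal, illustrative sketch of the neuroevolution cycle in Fig. 1. It is not the method of any particular study: the fixed 2-2-1 topology, the XOR task standing in for "the task," and all hyperparameters are assumptions chosen only to keep the example self-contained.

```python
# Minimal sketch of the neuroevolution loop in Fig. 1 (illustrative only).
# Assumptions: a fixed feedforward 2-2-1 topology, weights concatenated into a
# flat vector, and XOR as a stand-in task providing the high-level fitness signal.
import math
import random

N_WEIGHTS = 9   # 2x2 input-to-hidden weights, 2 hidden biases, 2 output weights, 1 output bias
POP_SIZE = 50
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def decode_and_run(w, x):
    """Decode the flat weight string into a 2-2-1 network and run it on input x."""
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

def fitness(w):
    """High-level fitness only: no gradients, just how well the network performs."""
    return -sum((decode_and_run(w, x) - y) ** 2 for x, y in XOR)

def crossover(p1, p2):
    cut = random.randrange(1, N_WEIGHTS)
    return p1[:cut] + p2[cut:]

def mutate(w, rate=0.1, scale=0.5):
    return [wi + random.gauss(0, scale) if random.random() < rate else wi for wi in w]

population = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)] for _ in range(POP_SIZE)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]                 # well-performing encodings become parents
    offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                 for _ in range(POP_SIZE - len(parents))]
    population = parents + offspring                      # offspring replace the poor performers

print("best fitness:", fitness(max(population, key=fitness)))
```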
Fig. 2. Evolution of command neurons in a navigation and foraging task.
In the simulated grid world, there are a number of food and poison items. The agent needs first to navigate to the bottom-left area where the food items are, eat as many of them as possible, and avoid poison items at all times. The agent's behavior was controlled by neural networks evolved with genetic algorithms. These networks were then analyzed using simulated neuroscience techniques such as lesions and receptive-field analysis. The most successful networks (i.e., those that ate the most food and no poison) had evolved command neurons. As soon as the first food item was consumed, these neurons turned on, switching the behavior from navigation to foraging. Networks with command neurons separated the two behaviors better, arrived in the food zone faster, avoided poison better, and foraged more effectively than networks in which the two behaviors were coactivated and mixed. Similar command neurons have been observed in biology; the experiment demonstrates how they may arise as an advantage in evolving effective behavior in the domain. [Copyright © 2001, Massachusetts Institute of Technology, reprinted with permission from (35)]
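The "simulated lesion" analysis mentioned above can be sketched generically as follows. This is a hedged illustration, not the original study's code: the `network.hidden` attribute and the `evaluate` callback are hypothetical stand-ins for an evolved network and its task fitness.

```python
# Hedged sketch of a simulated lesion analysis (hypothetical interface, not from (35)).
import copy

def lesion_analysis(network, evaluate):
    """Silence one hidden unit at a time and record the resulting fitness drop.
    A large drop for a single unit (e.g., one that switches navigation to
    foraging) is evidence of a command-neuron-like role."""
    baseline = evaluate(network)
    drops = []
    for i in range(len(network.hidden)):
        lesioned = copy.deepcopy(network)
        lesioned.hidden[i] = [0.0] * len(lesioned.hidden[i])  # remove the unit's influence
        drops.append((i, baseline - evaluate(lesioned)))
    return sorted(drops, key=lambda d: d[1], reverse=True)    # most critical units first
```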
Fig. 3. Synergy of evolution and learning through evolved pattern generators.
In this experiment, the task was to recognize handwritten digits on a 10×10 simulated retina. The recognition system consisted of 10 neurons that adapted through competitive Hebbian learning: Each neuron responded according to how close its 10×10 weight vector was to the 10×10 pattern on the retina. (A) The weight vectors of each neuron (Unit) were initialized randomly. (B) During learning, input samples were chosen randomly and compared with the weight vectors. The neuron whose weight vector was closest to the input was adapted toward the input through Hebbian learning, so the final weight vectors come to resemble the inputs. However, such learning is often ineffective: Some neurons do not learn at all; for example, 7, 8, and 9 are mapped to the same neuron and thus cannot be distinguished. (C) The prenatal, internally generated patterns were Gaussians with evolved location, size, elongation, and orientation. The most successful pattern generators emphasized locations mostly along the horizontal midline. (D) Prenatal training with such patterns takes place in only two units, but it is enough to separate 7, 8, and 9 in postnatal learning. (E) After postnatal learning with actual handwritten digit patterns, most examples are categorized correctly. Thus, evolution was able to discover appropriate pretraining that made postnatal learning work, demonstrating the synergy of evolution and learning. For a detailed demo with animations, see https://nn.cs.utexas.edu/demos/ne-review. [Copyright © 2007, IEEE, reprinted with permission from (77)]
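The competitive (winner-take-all) Hebbian rule described here can be sketched in a few lines. This is an illustrative approximation under stated assumptions: random vectors stand in for the retinal and digit patterns, and the learning rate, epoch count, and data are placeholders rather than values from (77).

```python
# Minimal sketch of competitive Hebbian learning as described for Fig. 3 (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_units, dim = 10, 100                              # 10 neurons, 10x10 retina flattened to 100
weights = rng.uniform(0, 1, size=(n_units, dim))    # (A) random initial weight vectors

def train(patterns, lr=0.1, epochs=5):
    """Winner-take-all Hebbian adaptation: the unit whose weight vector is
    closest to the input moves toward that input; the others stay unchanged."""
    for _ in range(epochs):
        for p in patterns:
            winner = np.argmin(np.linalg.norm(weights - p, axis=1))
            weights[winner] += lr * (p - weights[winner])

# In the experiment, evolved "prenatal" patterns (Gaussian blobs with evolved
# location, size, elongation, and orientation) would be passed to train() first,
# followed by the actual digit patterns; random data stands in for both here.
train(rng.uniform(0, 1, size=(200, dim)))
```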
Fig. 4. Complex coordinated behavior emerging from constrained species interactions.
In this behavior, hyenas form a mob that attacks a group of lions, gaining possession of their kill. (A) Screen capture of a video documenting a mobbing event. Lions are much stronger and can easily kill any hyena that approaches them. Successful mobbing requires forming a large cooperating team, building up coherence and boldness among its members, and coordinating the attack precisely in time and space so that the lions are left a way out. This behavior is more complex than other behaviors that hyenas exhibit, is largely hereditary, and may represent an evolutionary breakthrough. (B) Simulation of mobbing. A lion and several hyenas are placed in a 100×100 grid world. If four or more hyenas enter the interaction circle simultaneously, they receive a high reward; if fewer than four do, they are killed. In the experiment, the neural networks controlling the hyenas were evolved with NEAT. Successful mobbing behavior emerged through stepping stones: At first, there are diverse behaviors, including risk takers, which approach the lions despite the danger, and risk evaders, which stay away from them. Their combination allows a behavior to emerge in which a hyena approaches the lion up to the closest safe distance and stays there. Mobbing then emerges when a sufficient number of hyenas attack from that distance at the same time. Over prolonged evolution, such mobbing becomes more robust, versatile, and effective; it is easier to rediscover if it is ever lost, and it can form a foundation for other complex coordinated behaviors as well. Neuroevolution simulations thus provide crucial insight into the diversity of behaviors and how complex behaviors originate from them. For the video and animations of these behaviors, see https://nn.cs.utexas.edu/demos/ne-review. [Panel (A) image: Mara Hyena Project; panel (B): Copyright © 2020, IEEE, reprinted with permission from (111)]
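The payoff structure of the simulation in panel (B) can be written down compactly. The sketch below is a hedged illustration only: the interaction radius, reward, and penalty values, and the function name `mobbing_outcome`, are assumptions, not the parameters used in (111).

```python
# Hedged sketch of the mobbing payoff described for Fig. 4, panel (B) (illustrative values).
import math

def mobbing_outcome(hyena_positions, lion_position, radius=3.0,
                    reward=100.0, death_penalty=-100.0):
    """Hyenas inside the lion's interaction circle succeed only if at least
    four of them are there simultaneously; otherwise those hyenas are killed."""
    inside = [p for p in hyena_positions if math.dist(p, lion_position) <= radius]
    if len(inside) >= 4:
        return {p: reward for p in inside}        # coordinated mob: high reward
    return {p: death_penalty for p in inside}     # too few attackers: killed

print(mobbing_outcome([(1, 1), (2, 2), (50, 50)], (0, 0)))
```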
Fig. 5. Taking advantage of symmetry in four-legged walking.
In this experiment, neuroevolution was extended to take advantage of symmetry in the four-legged robot (116). (A) Each leg was controlled by a separate neural network that received input from all the other such networks. (B) Evolution started with a fully symmetric interaction between the four networks and broke the symmetry as needed, i.e., by allowing the weights on the different connections to diverge (as indicated by the colors). The most symmetric designs were capable of all the major gaits on the flat ground (bound, trot, pace, pronk) and were able to switch between them to get over obstacles. The discovered asymmetries then made more challenging behaviors possible. (C) For instance, a controller evolved to cross a slippery incline requires a less symmetric solution than a straightforward walk on flat ground. It uses the front downslope leg primarily to push up so that the robot can walk straight. Similarly, an asymmetric solution evolved to compensate for a missing leg (145). In this manner, neuroevolution can demonstrate how principles such as symmetry help to construct robust behavior. For animations of these behaviors, see https://nn.cs.utexas.edu/demos/ne-review. [Panels (A) and (B): Copyright © 2011, IEEE, reprinted with permission from (116)]
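The symmetric coupling between the four leg controllers can be illustrated as below. This is a sketch under assumptions: the state dimension, the tied linear coupling, and the `break_symmetry` mutation are placeholders for the actual evolved controllers in (116), intended only to show how identical couplings can start symmetric and then diverge.

```python
# Illustrative sketch of the symmetric leg coupling in Fig. 5 (not the original implementation).
import numpy as np

rng = np.random.default_rng(0)
N_LEGS, STATE = 4, 3
shared = rng.normal(size=(STATE, N_LEGS * STATE))   # one tied coupling matrix
coupling = [shared.copy() for _ in range(N_LEGS)]   # fully symmetric start: all legs identical

def step(leg_states):
    """One update: every leg maps the concatenated states of all legs to its next state."""
    joint = np.concatenate(leg_states)
    return [np.tanh(coupling[i] @ joint) for i in range(N_LEGS)]

def break_symmetry(i, scale=0.1):
    """Mutation that lets one leg's coupling weights diverge from the shared pattern."""
    coupling[i] = coupling[i] + rng.normal(scale=scale, size=coupling[i].shape)

states = [rng.normal(size=STATE) for _ in range(N_LEGS)]
states = step(states)      # symmetric dynamics, as at the start of evolution
break_symmetry(0)          # e.g., adapting one leg for a slippery incline or a missing limb
states = step(states)
```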
Fig. 6. Evolution of communication code for mate selection and hunting.
The agents were able to move in a simulated one-dimensional world in which their fitness depended on successful mating and hunting (138). (A) Each agent in the population was controlled by an evolved neural network that received the current task (either mate selection or hunting), the distance to the prey, and the message from the other agent as its input. At its output, it decided whether to mate or move and generated a message that the other agents could use to make the same decision. For mating to be successful, both agents needed to want to mate and to be compatible. Each agent in the original population was assigned a 2-bit trait that determined this compatibility, and each offspring inherited it from a parent. For prey capture to be successful, the agents needed to move onto the prey's location at the same time. (B) Over evolution, the agents discovered a messaging code that allowed them to communicate their trait, their intention to mate, and their readiness to capture the prey effectively to other agents. It turned out that if mate selection was evolved first, instead of evolving prey capture first or both at the same time, successful behaviors evolved faster. Moreover, the agents developed a more effective and parsimonious code for both tasks: The mate-selection code was simpler, and it was possible to complexify it to serve hunting as well. The final code used fewer symbols, and the code for readiness to mate was reused for readiness to capture the prey. This result thus suggests that communication may have originally evolved for mate selection and later adapted to other uses.
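The agent interface described in panel (A) can be sketched as follows. This is a hedged illustration: the linear policy, message length, and class layout are assumptions standing in for the evolved networks of (138); only the input/output structure (task, prey distance, incoming message in; action and outgoing message out; inherited 2-bit trait) follows the caption.

```python
# Hedged sketch of the communicating-agent interface for Fig. 6 (placeholder policy, not from (138)).
import numpy as np

rng = np.random.default_rng(0)
MSG_BITS = 3   # assumed message length

class Agent:
    def __init__(self, trait_bits=(0, 1)):
        self.trait = trait_bits                            # inherited 2-bit compatibility trait
        n_in = 1 + 1 + MSG_BITS                            # task flag, prey distance, incoming message
        self.w_act = rng.normal(size=n_in)                 # stand-in for the evolved network
        self.w_msg = rng.normal(size=(MSG_BITS, n_in))

    def act(self, task, prey_distance, incoming_msg):
        x = np.concatenate(([task, prey_distance], incoming_msg))
        action = "mate" if task == 0 and self.w_act @ x > 0 else "move"
        outgoing_msg = (self.w_msg @ x > 0).astype(float)  # message other agents can read
        return action, outgoing_msg

a, b = Agent(), Agent()
act_a, msg_a = a.act(task=0, prey_distance=0.4, incoming_msg=np.zeros(MSG_BITS))
act_b, _ = b.act(task=0, prey_distance=0.4, incoming_msg=msg_a)
print(act_a, act_b)
```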

References

    1. Hasson U, Nastase SA, Goldstein A. Direct fit to nature: An evolutionary perspective on biological and artificial neural networks. Neuron 105, 416–434 (2020). doi: 10.1016/j.neuron.2019.12.002
    2. Miikkulainen R. Creative AI through evolutionary computation: Principles and examples. SN Comput. Sci. 2, 163 (2021). doi: 10.1007/s42979-021-00540-9
    3. Miikkulainen R, Forrest S. A biological perspective on evolutionary computation. Nat. Mach. Intell. 3, 9–15 (2021). doi: 10.1038/s42256-020-00278-8
    4. Gras R, Golestani A, Hendry AP, Cristescu ME. Speciation without pre-defined fitness functions. PLOS ONE 10, e0137838 (2015). doi: 10.1371/journal.pone.0137838
    5. Lehman J, et al. The surprising creativity of digital evolution: A collection of anecdotes from the evolutionary computation and artificial life research communities. Artif. Life 26, 274–306 (2020). doi: 10.1162/artl_a_00319
