Review

The Cognitive Lens: a primer on conceptual tools for analysing information processing in developmental and regenerative morphogenesis

Santosh Manicka et al. Philos Trans R Soc Lond B Biol Sci. 2019 Jun 10;374(1774):20180369. doi: 10.1098/rstb.2018.0369

Abstract

Brains exhibit plasticity, multi-scale integration of information, computation and memory, having evolved by specialization of non-neural cells that already possessed many of the same molecular components and functions. The emerging field of basal cognition provides many examples of decision-making throughout a wide range of non-neural systems. How can biological information processing across scales of size and complexity be quantitatively characterized and exploited in biomedical settings? We use pattern regulation as a context in which to introduce the Cognitive Lens, a strategy using well-established concepts from cognitive and computer science to complement mechanistic investigation in biology. To facilitate the assimilation and application of these approaches across biology, we review tools from various quantitative disciplines, including dynamical systems, information theory and least-action principles. We propose that these tools can be extended beyond neural settings to predict and control systems-level outcomes, and to understand biological patterning as a form of primitive cognition. We hypothesize that a cognitive-level information-processing view of the functions of living systems can complement reductive perspectives, improving efficient top-down control of organism-level outcomes. Exploration of the deep parallels across diverse quantitative paradigms will drive integrative advances in evolutionary biology, regenerative medicine, synthetic bioengineering, cognitive neuroscience and artificial intelligence. This article is part of the theme issue 'Liquid brains, solid brains: How distributed cognitive architectures process information'.

Keywords: cognition; computation; dynamical systems; information theory; patterning; regeneration.


Conflict of interest statement

We declare we have no competing interests.

Figures

Figure 1.
Illustrations of cognitive processes in embryogenesis and regeneration. Figure modified with permission after [1]. (a) An egg will reliably give rise to a species-specific anatomical outcome. (b) This process is usually described as a feed-forward system where the activity of gene-regulatory networks (GRNs) within cells results in the expression of effector proteins that, via structural properties of proteins and physical forces, lead to the emergence of complex shape. This class of models (bottom-up process driven by self-organization and parallel activity of large numbers of local agents) is difficult to apply to several biological phenomena. Regulative development can alter subsequent steps to reach the correct anatomical goal state despite drastic deviations of the starting state. (c) For example, mammalian embryos can be divided in half, giving rise to perfectly normal monozygotic twins, each of which has regenerated the missing cell mass. (d) Mammalian embryos can also be combined, giving rise to a normal embryo in which no parts are duplicated. (e) Such capabilities suggest that pattern control is fundamentally a homeostatic process—a closed-loop system using feedback to minimize the error (distance) between a current shape and a target morphology. Although these kinds of decision-making models are commonplace in engineering, they are only recently beginning to be employed in biology [2,3]. This kind of pattern-homeostatic process must store a setpoint that serves as a stop condition; however, as with most types of memory, it can be specifically modified by experience. In the phenomenon of trophic memory (f), damage created at a specific point on the branched structure of deer antlers is recalled as ectopic branch points in subsequent years' antler regeneration. This reveals the ability of cells at the scalp to remember the spatial location of specific damage events and alter cell behaviour to adjust the resulting pattern appropriately—a pattern memory that stretches across months of time and considerable spatial distance and is able to modify low-level (cellular) growth rules to construct a pre-determined stored pattern that differs from the genome-default for this species. (g) A similar capability was recently shown in a molecularly tractable model system [4,5], in which genetically normal planarian flatworms were bioelectrically reprogrammed to regenerate two-headed animals when cut in subsequent rounds of asexual reproduction in plain water. (h) The decision-making revealed by the cells, tissues and organs in these examples of dynamic remodelling toward specific target states could be implemented by cybernetic processes at various positions along a scale of proto-cognitive complexity [6]. Panels (a,c,d) were created by Jeremy Guay of Peregrine Creative. Panel (c) contains a photo by Oudeschool via Wikimedia Commons. Panels (f) and (g) are reprinted with permission from [7] and [8] respectively. Panel (h) is modified after [6]. (Online version in colour.)
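
To make the closed-loop picture in (e) concrete, here is a minimal sketch (Python; the one-dimensional 'morphology' vector, the proportional correction rule and all parameter values are our illustrative assumptions, not a model from the paper) of a homeostatic loop that stores a target pattern as a setpoint and remodels a perturbed state until the error falls below a stop condition:

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative setpoint: the stored target morphology (a toy 1-D pattern).
    target = np.array([1.0, 0.5, 0.0, 0.5, 1.0])
    state = rng.uniform(0.0, 1.0, size=target.shape)  # perturbed starting pattern
    gain = 0.2         # strength of corrective remodelling per step
    tolerance = 0.05   # stop condition: error this small counts as "pattern restored"

    for step in range(500):
        error = target - state                  # mismatch between current and target shape
        if np.linalg.norm(error) < tolerance:   # setpoint reached: the loop halts
            break
        state += gain * error                   # feedback: remodel toward the setpoint
        state += rng.normal(0.0, 0.005, size=state.shape)  # developmental noise

    print(f"stopped after {step} steps; residual error {np.linalg.norm(target - state):.4f}")

The point of the sketch is only the control structure: the same loop corrects any starting state (halved, doubled or noisy), which is what distinguishes the closed-loop view in (e) from the feed-forward view in (b).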
Figure 2.
Mapping between various tools and the most related cognitive concepts. A taxonomy mind-map of tools to analyse cognitive phenomena, broadly decomposed into deterministic and statistical. The deterministic toolset further consists of dynamical and algorithmic sub-categories, while the statistical set consists of the information-theoretic and least-action principles sub-categories (see §3a–e for detailed explanation). The mapping between the tools and the cognitive phenomena is not necessarily one-to-one. For example, the dynamical concept of ‘attractor’ can be used to study both the cognitive concepts of ‘decision-making’ and ‘memory’. We explain some of the cognitive phenomena and the tools mentioned here in §§2 and 3; for definitions of those not described in those sections we refer the reader to the Glossary (box 1). Moreover, the list of tools and phenomena shown here is not exhaustive. For example, the well-known ‘Hamilton's principle’ (Glossary) is a type of least-action principle that belongs to the dynamical systems toolset (not shown here). Finally, the mappings between the tools and the phenomena that could be studied with them are proposals, some of which are described in §3. (Online version in colour.)
Figure 3.
Cognitive systems. A schematic of an analysis approach for cognitive systems: what to analyse (Marr's three levels of analysis) and how to analyse (proposed tools of analysis spanning across the levels). Any tool can in principle be used to study any level (figure 2). Here is an example of how to relate dynamical systems tools with the three levels, in the context of the problem of associative learning (§3f). At the computational level, the associative learning problem may be specified as ‘associate two stimuli, natural and neutral, of which the natural stimulus evokes a response while the neutral one does not’. In dynamical systems (DS) language, this may be translated as ‘a system with two attractors, each associated with a stimulus, corresponding to low-response and high-response’ (please see figure 6 for details). At the algorithmic level of analysis, the problem may be solved as ‘every time both stimuli are supplied, let the ability of the natural stimulus to evoke a response also strengthen the ability of the neutral stimulus to evoke the response such that over time the two stimuli become equivalent’. In DS terms, this may be translated as ‘let the internal state associated with the natural stimulus steer that associated with the neutral stimulus to the high-response attractor’. Finally, at the implementation level, the problem becomes ‘design a network with three nodes consisting of two stimuli and one response, where there is a connection between each stimulus and the response, such that the strength of the connection between the neutral stimulus and the response increases over time with the joint application of the two stimuli’. In DS terms, this is equivalent to ‘design a 3-variable (two weights and one response) coupled DS with a positive feedback loop such that the weight-state of the natural stimulus steers the weight-state of the neutral stimulus from the low-response basin of attraction through the basin-boundary to the high-response basin of attraction’. (Online version in colour.)
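
As a concrete companion to the DS phrasing above, here is a minimal sketch of such a 3-variable coupled system (Python; the saturating self-activation term, the coupling terms and all constants are our illustrative choices, not the GRN analysed in §3f): w1 tracks the natural stimulus (US), w2 is bistable thanks to a positive feedback loop, and w1 steers w2 across the basin boundary during paired stimulation.

    def self_activation(x, K=1.0, n=2):
        # Saturating positive feedback (illustrative Hill-like term).
        return x**n / (K**n + x**n)

    def step(w1, w2, p, US, CS, dt=0.01):
        dw1 = US - w1                                    # weight of the natural stimulus tracks US
        dw2 = -w2 + 2.5 * self_activation(w2) + w1 * CS  # bistable weight, steered by w1 during pairing
        dp  = -p + w1 * US + w2 * CS                     # response driven by both weighted stimuli
        return w1 + dt * dw1, w2 + dt * dw2, p + dt * dp

    w1 = w2 = p = 0.0
    for label, US, CS in [("CS alone, before pairing", 0.0, 1.0),
                          ("US + CS pairing         ", 1.0, 1.0),
                          ("CS alone, after pairing ", 0.0, 1.0)]:
        for _ in range(2000):                            # 20 time units per phase
            w1, w2, p = step(w1, w2, p, US, CS)
        print(f"{label}  w2 = {w2:.2f}  response p = {p:.2f}")
    # Before pairing, CS alone leaves p near 0; after pairing, w2 has crossed the
    # basin boundary (at w2 = 0.5 for these constants) and CS alone evokes a high p.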
Figure 4.
The two main types of artificial neural networks (ANNs). Schematics of ANNs, with the arrows representing connections between neurons, and the numbers 1 and 0 representing the possible binary states of the neurons (processing units). The grey level of the edges represents the associated weights. Panels show schematics of possible mechanisms by which: (a) a feed-forward ANN might distinguish huskies from wolves; and (b) a recurrent ANN might predict meaningful sequences of words from an input sequence. (Online version in colour.)
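
A minimal sketch of the two data flows in (a) and (b) (Python/NumPy; the layer sizes, random weights and input data are placeholders for illustration, not the trained networks depicted in the figure):

    import numpy as np

    rng = np.random.default_rng(0)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    # (a) Feed-forward ANN: activity flows strictly input -> hidden -> output, with no cycles.
    x = rng.random(8)                      # e.g. image features of a husky/wolf photo
    W1 = rng.standard_normal((4, 8))       # input-to-hidden weights
    W2 = rng.standard_normal((2, 4))       # hidden-to-output weights
    output = sigmoid(W2 @ sigmoid(W1 @ x)) # two scores, e.g. "husky" vs "wolf"
    print("feed-forward output:", output)

    # (b) Recurrent ANN: the hidden state feeds back on itself, so earlier inputs in a
    # sequence influence later outputs (the basis for predicting a meaningful next word).
    Wx = rng.standard_normal((4, 3))       # input-to-hidden weights
    Wh = rng.standard_normal((4, 4))       # hidden-to-hidden (recurrent) weights
    Wo = rng.standard_normal((2, 4))       # hidden-to-output weights
    h = np.zeros(4)
    for x_t in rng.random((5, 3)):         # a toy sequence of 5 input vectors
        h = np.tanh(Wx @ x_t + Wh @ h)     # recurrence: h carries memory of past inputs
        y_t = sigmoid(Wo @ h)              # prediction at each step
    print("recurrent output at final step:", y_t)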
Figure 5.
A GRN model of associative learning. A GRN model adapted from [141]. The dynamics of w1, w2 and p follow the ‘Hill’ function, which is traditionally used to model gene activation and repression behaviour through a binding process. An application of conditioned stimulus (CS) alone initially does not evoke a response (concentration of p is close to zero). However, an application of CS following a joint application of US–CS stimuli manages to evoke a response. This is because CS is ‘associated’ with US (unconditioned stimulus) during the joint application, in the sense that the GRN learns to ‘think’ that CS is equivalent to US during subsequent applications of CS alone. (Online version in colour.)
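
For readers less familiar with the 'Hill' kinetics mentioned above, here is a minimal sketch of the activation and repression forms traditionally used in GRN models (Python; the threshold K and cooperativity n are illustrative values, not the parameters of the model in [141]):

    def hill_activation(x, K=1.0, n=2):
        # Fraction of maximal expression driven by an activator at concentration x.
        return x**n / (K**n + x**n)

    def hill_repression(x, K=1.0, n=2):
        # Fraction of maximal expression remaining under a repressor at concentration x.
        return K**n / (K**n + x**n)

    for x in (0.1, 1.0, 10.0):
        print(f"x = {x:5.1f}   activation = {hill_activation(x):.3f}   repression = {hill_repression(x):.3f}")

The switch-like, sigmoidal shape of these terms is what allows small changes in a regulator concentration near the threshold K to flip expression between low and high levels, which is the ingredient that supports the attractor structure discussed in figure 6.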
Figure 6.
A cognitive view of associative learning as offered by the tools of dynamical systems. Each panel illustrates the flow together with the phase portrait of the GRN in the space of p and w2 (the w1 axis is ignored for conciseness, since it is not informative). Here, ‘response’ represents the concentration levels of p. The red and green curves in the top and bottom panels, respectively, depict representative trajectories. The red and green trajectories are each split over time across the horizontal panels in their respective rows, as depicted by grey dashed lines connecting the consecutive pieces whose endpoints are marked by colour-filled circles. Note that the endpoint of one piece and the starting point of the following piece are of the same colour since they represent the same states. The overall initial state of the two trajectories (green filled circle) is the same. Also shown in each panel are the stable equilibrium and saddle points. The top panels show that CS alone cannot evoke a response (the red trajectory eventually reaches a low-response state in panel (c)). The bottom panels show that following an association of CS with US, CS alone can evoke a response (the green trajectory eventually reaches a high-response state in panel (f)). Notice that there are two attractors (hence two basins of attraction) when CS alone is applied (right panels). In the dynamical systems view, associative learning is about steering the internal state associated with CS (w2) into the basin of attraction associated with a high value of p, with the help of the application of US. More specifically, a minimum value of w2 is necessary and sufficient to evoke a high response; this is termed the ‘learning threshold’ (the black dashed line in panels (a,c,f)). Here, associative learning is accomplished by w1 ‘shepherding’ w2 above the learning threshold.
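
The 'learning threshold' can be located numerically by a sweep of initial conditions; here is a minimal sketch using the illustrative bistable system introduced under figure 3 (our toy equations and constants, not the GRN of figure 5): each initial value of w2 is relaxed under CS alone and classified by the basin in which it settles.

    def self_activation(x, K=1.0, n=2):
        # Saturating positive feedback (illustrative Hill-like term).
        return x**n / (K**n + x**n)

    def settle_w2(w2, steps=20000, dt=0.01):
        # Relax w2 under CS alone (no US, hence no steering input from w1).
        for _ in range(steps):
            w2 += dt * (-w2 + 2.5 * self_activation(w2))
        return w2

    for i in range(11):
        w2_init = i / 10.0
        final = settle_w2(w2_init)
        basin = "high-response" if final > 1.0 else "low-response"
        print(f"initial w2 = {w2_init:.1f} -> final w2 = {final:.2f}  ({basin})")
    # Initial values below the unstable fixed point at w2 = 0.5 decay to the low-response
    # attractor; values above it reach the high-response attractor, so in this toy system
    # w2 = 0.5 plays the role of the learning threshold.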
Figure 7.
A neural network (NN) model of associative learning. A NN model that performs the same task as the GRN in figure 5. This NN was adapted from [139], but we supplied the appropriate model parameters (electronic supplementary material, Supplement 1). This model consists of the two stimuli, US and CS, and the response (p) just as described for the GRN model above. The main difference is that w1 and w2 in this case are not response neurons (they are molecules in the GRN) but the synaptic weights between US-p and CS-p respectively. Furthermore, this NN follows the Hebbian rewiring principle of ‘neurons that fire together wire together’. The dynamical portrait of the behaviour is very similar to the one for the GRN (electronic supplementary material, Supplement 1, figure S2). Finally, we show an example of how information theory can be used to quantify cognition. We show the normalized MI between the behaviours of w1 and w2 (both change with time, as described above), showing that it significantly increases during the learning step, and it remains higher after learning compared with the MI before learning. (a) Schematic of the NN. US, CS and p (response) represent the same as in the GRN above. The difference is that (1) the dynamics of p follow the sigmoidal activation which is traditionally used to model the integrate-and-fire behaviour of neurons and (2) the synaptic weights are influenced by the activities of the pre-synaptic and post-synaptic neurons following the Hebbian principle. (b) The behaviour (response p) of the NN before, during and after the association step (middle box). The normalized mutual information (MI) between the behaviours of w1 and w2 is also shown during the three phases. Clearly, the MI increases during and after learning, even though there is no direct connection between w1 and w2, thus demonstrating the power of information theoretic tools. (Online version in colour.)
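
The normalized mutual information (MI) reported in panel (b) can be estimated from two time series with a simple histogram-based computation; here is a minimal sketch (Python/NumPy; the binning and the normalization by the smaller marginal entropy are our assumptions rather than the paper's exact estimator, and the two toy series merely stand in for the w1 and w2 trajectories):

    import numpy as np

    def normalized_mi(x, y, bins=8):
        # Histogram estimate of I(X;Y), normalized by the smaller marginal entropy,
        # so 0 means (estimated) independence and values near 1 mean strong dependence.
        joint, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = joint / joint.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        nz = pxy > 0
        mi = np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))
        hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
        hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
        return mi / min(hx, hy)

    # Toy check: two independently fluctuating series vs. two series driven by a shared
    # trend, mimicking w1 and w2 before vs. during the association step.
    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 1.0, 500)
    a, b = rng.standard_normal(500), rng.standard_normal(500)                        # independent
    c, d = t + 0.05 * rng.standard_normal(500), t + 0.05 * rng.standard_normal(500)  # co-varying
    print(f"normalized MI, independent series: {normalized_mi(a, b):.3f}")
    print(f"normalized MI, co-varying series:  {normalized_mi(c, d):.3f}")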

References

    1. Levin M, Martyniuk CJ. 2017. The bioelectric code: an ancient computational medium for dynamic control of growth and form. Biosystems 164, 76–93. (10.1016/j.biosystems.2017.08.009)
    2. Pezzulo G, Levin M. 2016. Top-down models in biology: explanation and control of complex living systems above the molecular level. J. R. Soc. Interface 13, 20160555. (10.1098/rsif.2016.0555)
    3. Barkai N, Ben-Zvi D. 2009. 'Big frog, small frog'—maintaining proportions in embryonic development. FEBS J. 276, 1196–1207. (10.1111/j.1742-4658.2008.06854.x)
    4. Durant F, Morokuma J, Fields C, Williams K, Adams DS, Levin M. 2017. Long-term, stochastic editing of regenerative anatomy via targeting endogenous bioelectric gradients. Biophys. J. 112, 2231–2243. (10.1016/j.bpj.2017.04.011)
    5. Oviedo NJ, Morokuma J, Walentek P, Kema IP, Gu MB, Ahn JM, Hwang JS, Gojobori T, Levin M. 2010. Long-range neural and gap junction protein-mediated cues control polarity during planarian regeneration. Dev. Biol. 339, 188–199. (10.1016/j.ydbio.2009.12.012)
