Resonant learning in scale-free networks

Samuel Goldman et al. PLoS Comput Biol. 2023 Feb 21;19(2):e1010894. doi: 10.1371/journal.pcbi.1010894. eCollection 2023 Feb.
Abstract

Large networks of interconnected components, such as genes or machines, can coordinate complex behavioral dynamics. One outstanding question has been to identify the design principles that allow such networks to learn new behaviors. Here, we use Boolean networks as prototypes to demonstrate how periodic activation of network hubs provides a network-level advantage in evolutionary learning. Surprisingly, we find that a network can simultaneously learn distinct target functions upon distinct hub oscillations. We term this emergent property resonant learning, as the new selected dynamical behaviors depend on the choice of the period of the hub oscillations. Furthermore, this procedure accelerates the learning of new behaviors by an order of magnitude relative to learning without oscillations. While it is well established that modular network architecture can be selected through evolutionary learning to produce different network behaviors, forced hub oscillations emerge as an alternative evolutionary learning strategy for which network modularity is not necessarily required.
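
To make the setup concrete, the sketch below simulates a small random Boolean network whose hub node is clamped to a square wave of period T. It is a minimal illustration of the kind of model described above, not the authors’ code: the wiring is uniform random rather than scale-free, the sizes are illustrative, and all names are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    N, K, T, hub = 50, 2, 10, 0    # nodes, inputs per node, hub period, hub index (illustrative)

    # Random Boolean network: each node reads K randomly chosen nodes through a random truth table.
    # Uniform random wiring is used here for brevity; the paper studies scale-free topologies.
    inputs = np.array([rng.choice(N, size=K, replace=False) for _ in range(N)])
    tables = rng.integers(0, 2, size=(N, 2 ** K))

    def step(state, t):
        """One synchronous update; the hub node is overwritten with a square wave of period T."""
        idx = (state[inputs] * (2 ** np.arange(K))).sum(axis=1)   # encode each node's K inputs as an integer
        new = tables[np.arange(N), idx]
        new[hub] = 1 if (t % T) < T // 2 else 0                   # forced periodic hub activation
        return new

    state = rng.integers(0, 2, size=N)
    trajectory = [state.copy()]
    for t in range(200):
        state = step(state, t)
        trajectory.append(state.copy())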


Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1. Effect of hub oscillations on downstream nodes.
(A) Illustrative example of network topology in which downstream nodes (red) respond to a given oscillatory input hub behavior (blue). (B) Single-node frequency response. Power spectrum (bottom) calculated from a randomly selected downstream node’s time series when the hub node oscillates at specific frequencies (top). The plotted frequency is directly related to the period (T = 1/f). The power spectra of output nodes exhibit both higher and lower frequencies than the input signal. The corresponding time-domain sequences can be found in the SI (Fig 3 in S1 Text). (C) Input-output network response. The frequency response of this same network is shown for all non-frozen nodes (rows) when the input hub oscillates with a frequency of f = 0.1, equivalent to T = 10. The input square-wave time series and its associated power spectrum are shown above the output power spectrum for clarity. The frequency axis is normalized by the input frequency of the hub, ν0 = f = 0.1, showing that harmonics dominate the behavior of downstream nodes. This procedure is repeated for input period T = 8 in the SI (S4 Fig).
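
The frequency-response analysis in panels (B) and (C) amounts to taking a power spectrum of a node’s binary time series and rescaling the frequency axis by the hub frequency. The sketch below reproduces only that bookkeeping, under stated assumptions: the downstream trace is a stand-in (an XOR of delayed copies of the hub signal) rather than a simulated network node.

    import numpy as np

    # Stand-in downstream-node time series driven by a square-wave hub of period T = 10 (f = 0.1);
    # in the paper this trace would come from the Boolean network simulation itself.
    T, steps = 10, 1024
    hub = np.array([1 if (t % T) < T // 2 else 0 for t in range(steps)])
    node = np.roll(hub, 3) ^ np.roll(hub, 7)            # hypothetical response: XOR of delayed hub copies

    freqs = np.fft.rfftfreq(steps, d=1.0)               # frequencies in cycles per update step
    power = np.abs(np.fft.rfft(node - node.mean())) ** 2

    normalized = freqs / (1.0 / T)                      # normalize the axis by the input frequency f = 1/T
    peak = normalized[1:][np.argmax(power[1:])]
    print(f"dominant spectral peak at {peak:.1f} x the hub frequency")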
Fig 2. Creating new attractor landscapes.
(A) Oscillating hub creates new attractor cycles. In a non-oscillating setting, network states (blue) will converge on deterministic attractor cycles (bottom of funnel). However, by oscillating σhub, new attractor cycles (red) are created. New cycles are separated from the ground-state cycle by a non-zero height, which represents the number of states required for the new attractor to relax to the ground state once the hub stops oscillating. (B) Visualizing a real attractor landscape. A sample network with N = 1000 and γ = 1.9 is probed from various input conditions when the hub state is fixed to either the ON or OFF state. Arrows represent network state transitions, and attractor cycle states for this network are shown in blue when the input is blocked at either an ON or OFF state. New attractor states that appear when the input node oscillates are also shown in red, with an example attractor cycle for input oscillation T = 4 highlighted with a circle. (C) Relaxation time of the network. The average relaxation time, t, is plotted against the scale-free parameter γ for networks of 1000 nodes. The relaxation time is the time it takes for the network to settle onto a new ground-state cycle when the hub is switched from σhub = 1 to 0 (and vice versa). (D) Heights of new attractors as a function of the scale-free parameter and the period of hub oscillations. For each of T = 4, T = 8, T = 20, and T = 40, the average height of the new attractors from 10 distinct networks is plotted (1,000 random initial conditions per network). New attractor cycles for more chaotic networks (γ > 1.9) yield larger heights, i.e., new cycle states lie farther out in the reference basins.
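
One rough way to probe the quantities in panels (C) and (D) is to drive a random Boolean network with a square-wave hub, then freeze the hub and count the transient steps before the trajectory revisits a state, i.e. before it settles onto a ground-state cycle. The sketch below does this with illustrative sizes and uniform random wiring rather than the paper’s scale-free networks of 1000 nodes, and it approximates relaxation time and attractor height by the transient length; it is not the authors’ procedure.

    import numpy as np

    rng = np.random.default_rng(1)
    N, K, T, hub = 40, 2, 4, 0    # illustrative sizes; the paper's networks have N = 1000
    inputs = np.array([rng.choice(N, size=K, replace=False) for _ in range(N)])
    tables = rng.integers(0, 2, size=(N, 2 ** K))

    def update(state, hub_value):
        """One synchronous update with the hub clamped to hub_value."""
        idx = (state[inputs] * (2 ** np.arange(K))).sum(axis=1)
        new = tables[np.arange(N), idx]
        new[hub] = hub_value
        return new

    def transient_length(state, hub_value=0, max_steps=10_000):
        """Freeze the hub and count steps until the trajectory first revisits a state,
        i.e. until it has settled onto a ground-state attractor cycle."""
        seen, s = {}, state.copy()
        for t in range(max_steps):
            key = s.tobytes()
            if key in seen:
                return seen[key]          # steps spent outside the cycle
            seen[key] = t
            s = update(s, hub_value)
        return max_steps

    # Drive the network with a square-wave hub of period T, then measure how long the
    # reached state takes to relax once the hub is frozen in the OFF state.
    state = rng.integers(0, 2, size=N)
    for t in range(200):
        state = update(state, 1 if (t % T) < T // 2 else 0)
    print("relaxation steps after freezing the hub:", transient_length(state, hub_value=0))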
Fig 3. Network evolution in the presence of oscillating hub.
(A) Fitness function for learning multiple targets. An arbitrarily selected output node downstream in the network is scored by its similarity to a randomly generated target function (see SI for more details). (B) Learning multiple functions. The network learns multiple random target functions, each matched to a corresponding input period. The hub node oscillates successively at all pre-selected input periods, T1, T2, …, TM, in separate runs beginning from random conditions. In each learning cycle, we score how closely the attractor cycle at the output node matches the corresponding target function f1(t), f2(t), …, fM(t) by its mean squared fitness function across all trials. (C) Scale-free networks with an oscillating hub can learn multiple target functions. 10 different simulations with N = 500, γ = 1.9 show that the network can learn a maximum of 7 different target functions associated with 7 input frequencies before a loss in performance. To decrease variance emerging from trial-to-trial variability, behavior is averaged over 30 trials for 1–3 targets, 20 trials for 4–5 targets, and 10 trials for 6–7 targets. (D) Networks with a hub that has a fixed state learn much more slowly. Rather than allowing the network to learn in the presence of an oscillating hub with a pre-selected period, a single fixed state for the hub and one target cycle are provided, as in the previous learning procedure but with the hub node forced to stay constant. Interestingly, while we might expect the network to learn faster when it must only be concerned with as few as 3 of the 2^500 total input states, the network learns far more slowly under this alternative scheme than in the presence of an oscillatory hub. (E) Summary plots of results from (C) and (D) at learning cycles 10^3 and 10^5. Networks with the resonant learning scheme are more fit than networks that must learn with a non-oscillating hub, regardless of whether the hub node is fixed or allowed to update freely. Bars are plotted with standard error.
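
The scoring step in panel (A) can be illustrated with a simple cycle-matching fitness based on one minus a mean squared error; the exact fitness used in the paper is defined in its SI, so the function below (cycle_fitness, including the maximization over cyclic shifts) is a hypothetical stand-in.

    import numpy as np

    def cycle_fitness(output_cycle, target):
        """Hypothetical fitness: 1 minus the mean squared error between the output node's
        attractor trace and the target cycle, maximized over cyclic shifts because an
        attractor cycle has no preferred phase. The paper's exact definition is in its SI."""
        out = np.asarray(output_cycle, dtype=float)
        reps = int(np.ceil(len(out) / len(target)))
        tiled = np.tile(np.asarray(target, dtype=float), reps)[: len(out)]
        return max(1.0 - np.mean((np.roll(out, s) - tiled) ** 2) for s in range(len(out)))

    # Toy usage: an output trace of period 4 scored against a period-4 target.
    target = [1, 0, 0, 1]
    print(cycle_fitness([1, 0, 0, 1, 1, 0, 0, 1], target))   # perfect match -> 1.0
    print(cycle_fitness([0, 0, 0, 0, 0, 0, 0, 0], target))   # constant output -> 0.5

In an evolutionary run of the kind described in (B), a score like this would be evaluated for each (input period, target) pair and used to accept or reject candidate changes to the network’s Boolean rules.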
Fig 4. Learning is solely dependent on the timescale of oscillations.
(A) Different periodic hub input time series. Three different input types are defined: square-wave inputs, which we have used previously; a white-noise pattern with a set sequence and no characteristic timescale; and a repeated-pattern input with a fixed period. (B) Input period oscillation is more important than input pattern. Multiplexed learning of three different target functions of lengths 8, 10, and 12 for networks with N = 500, γ = 1.9 is shown for the three types of input sequences. Repeated-pattern inputs and square-wave inputs yield faster learning than white-noise inputs, showing that the timescale associated with hub oscillations matters more than the specific repeated pattern.
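
The three hub input types in panel (A) are easy to generate as binary sequences; the snippet below uses an illustrative length and period.

    import numpy as np

    rng = np.random.default_rng(3)
    steps, T = 120, 10    # illustrative length and period

    # Three hub input types compared in Fig 4:
    square = np.array([(t % T) < T // 2 for t in range(steps)], dtype=int)   # square wave of period T
    white = rng.integers(0, 2, size=steps)                                   # fixed white-noise sequence, no timescale
    pattern = np.tile(rng.integers(0, 2, size=T), steps // T)                # arbitrary binary pattern repeated with period T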


