Adv Neural Inf Process Syst. 2021 Dec;34:6062–6074.

Bubblewrap: Online tiling and real-time flow prediction on neural manifolds

Anne Draelos et al. Adv Neural Inf Process Syst. 2021 Dec.

Abstract

While most classic studies of function in experimental neuroscience have focused on the coding properties of individual neurons, recent developments in recording technologies have resulted in an increasing emphasis on the dynamics of neural populations. This has given rise to a wide variety of models for analyzing population activity in relation to experimental variables, but direct testing of many neural population hypotheses requires intervening in the system based on current neural state, necessitating models capable of inferring neural state online. Existing approaches, primarily based on dynamical systems, require strong parametric assumptions that are easily violated in the noise-dominated regime and do not scale well to the thousands of data channels in modern experiments. To address this problem, we propose a method that combines fast, stable dimensionality reduction with a soft tiling of the resulting neural manifold, allowing dynamics to be approximated as a probability flow between tiles. This method can be fit efficiently using online expectation maximization, scales to tens of thousands of tiles, and outperforms existing methods when dynamics are noise-dominated or feature multi-modal transition probabilities. The resulting model can be trained at kilohertz data rates, produces accurate approximations of neural dynamics within minutes, and generates predictions on submillisecond time scales. It retains predictive performance over many time steps into the future and is fast enough to serve as a component of closed-loop causal experiments.
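The core prediction step described in the abstract — soft-assigning the current neural state to Gaussian tiles and pushing that assignment through a learned transition matrix — can be sketched in a few lines. Everything below (tile count, shared isotropic covariance, the random transition matrix) is a toy assumption for illustration, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiling: N Gaussian "bubbles" with centers mu and a shared
# isotropic covariance, plus a row-stochastic tile-transition matrix A.
N, d = 5, 2
mu = rng.normal(size=(N, d))            # tile centers (toy values)
sigma2 = 0.5                            # shared tile variance (assumption)
A = rng.random((N, N))
A /= A.sum(axis=1, keepdims=True)       # each row is a probability distribution

def responsibilities(x):
    """Soft-assign a data point to tiles via (unnormalized) Gaussian likelihoods."""
    logp = -np.sum((x - mu) ** 2, axis=1) / (2 * sigma2)
    w = np.exp(logp - logp.max())       # stabilize before normalizing
    return w / w.sum()

def predict_next(x):
    """Predicted distribution over tiles one step ahead: probability flow through A."""
    return responsibilities(x) @ A

x_t = rng.normal(size=d)
p_next = predict_next(x_t)              # a valid distribution over the N tiles
```

The one-step prediction is just a vector-matrix product, which is why prediction can run on submillisecond time scales even for large tilings.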


Figures

Figure 1: Timing and stability of two-stage dimension reduction.
a) Distortion (ε) as a function of number of dimensions retained (n) for both sparse random projections and proSVD on random Gaussian data with batch size b = 1000. b) Time required for the dimensionality reduction in (a), amortized for batch size. While random projections are extremely efficient, proSVD time costs grow with the number of dimensions retained. c) Pareto front for the time-distortion tradeoff of random projections followed by proSVD. Color indicates n, the number of dimensions retained by random projections. Black arrow indicates the particular tradeoff we chose of n = 200. d–f) Embedding of a single trial (green line) into the basis defined by streaming SVD for different amounts of data seen. Dotted line indicates the same trial embedded using SVD on the full data set. Rapid changes in estimates of singular vectors early on lead to an unstable representation. g–i) Same trial and conventions as (d–f) for the proSVD embedding. Dotted lines in the two rows are the same curve in different projections.
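The two-stage reduction in this figure (sparse random projection to n = 200, followed by an SVD-based projection to a final low dimension) can be sketched as below. A plain batch SVD stands in for proSVD, which additionally keeps the basis stable across streaming updates; the sizes and the Achlioptas-style sparse projection are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data stream: T samples in D channels; reduce D -> n via a sparse
# random projection, then n -> k via SVD (standing in for proSVD here).
T, D, n, k = 1000, 2000, 200, 10

# Achlioptas-style sparse random projection: entries in {-1, 0, +1},
# nonzero with probability 1/s, scaled so distances are preserved in expectation.
s = 3
R = rng.choice([-1.0, 0.0, 1.0], size=(D, n),
               p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])
R *= np.sqrt(s / n)

X = rng.normal(size=(T, D))
Y = X @ R                               # stage 1: cheap, distortion-bounded

# Stage 2: orthonormal basis from the top-k right singular vectors.
_, _, Vt = np.linalg.svd(Y, full_matrices=False)
Z = Y @ Vt[:k].T                        # final k-dimensional embedding
```

This matches the tradeoff in panel (c): the random projection does the expensive D-to-n step in O(Dn) per sample, leaving the SVD to operate on only n dimensions.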
Figure 2: Modeling of low-dimensional dynamical systems.
a) Bubblewrap end tiling of a 2D Van der Pol oscillator (data in gray; 5% noise case corresponding to line 1 of Table 1). Tile center locations are in black with covariance ‘bubbles’ for 3 sigma in orange. b) Bubblewrap end tiling of a 3D Lorenz attractor (5% noise), where tiles are plotted similarly to (a). c) Log predictive probability across all timepoints for each comparative model for the 2D Van der Pol, 0.05 case (top) and for the 3D Lorenz, 0.05 case (bottom).
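A minimal way to generate noisy Van der Pol data like the gray points in panel (a) is Euler integration with additive Gaussian noise. The parameters below (mu, dt, noise scale, initial condition) are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def van_der_pol(T=5000, mu=1.5, dt=0.01, noise=0.05, seed=0):
    """Euler-integrated 2D Van der Pol oscillator with additive Gaussian noise."""
    rng = np.random.default_rng(seed)
    x = np.empty((T, 2))
    x[0] = [0.1, 0.0]
    for t in range(1, T):
        px, py = x[t - 1]
        dx = py                              # position derivative
        dy = mu * (1 - px ** 2) * py - px    # velocity derivative
        x[t] = x[t - 1] + dt * np.array([dx, dy]) + noise * rng.normal(size=2)
    return x

data = van_der_pol()                         # (T, 2) noisy trajectory
```

At higher noise levels the trajectory smears out around the limit cycle, which is the noise-dominated regime where tile-based probability flow is argued to outperform parametric dynamical-systems models.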
Figure 3: Bubblewrap results on experimental datasets.
a) Bubblewrap results for example trials (blue) from the monkey reach dataset [48, 49], projected onto the first jPCA plane. All trials are shown in gray. The tile center locations which were closest to the trajectories are plotted along with their covariance “bubbles.” Additionally, large transition probabilities from each tile center are plotted as black lines connecting the nodes. Bubblewrap learns both within-trial and across-trial transitions, as shown by the probability weights. b) Bubblewrap results on widefield calcium imaging from [50, 51], visualized with UMAP. A single trajectory comprising ≈ 1.5s of data is shown in blue. Covariance “bubbles” and transition probabilities omitted for clarity. c) Bubblewrap results when applied to videos of mouse behavior [50, 51], visualized by projection onto the first SVD plane. Blue line: 3.3s of data. d, e, f) Log predictive probability (blue) and entropy (green) over time for the respective datasets in (a,b,c). Black lines are exponential weighted moving averages of the data. Dashed green line indicates maximum entropy (log2(N)).
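The log predictive probability traced in panels (d–f) is, in a tiled Gaussian model, the log of a mixture density weighted by the predicted tile distribution. A hedged sketch, where all quantities (centers mu, shared isotropic variance sigma2, tile weights p_tiles) are toy assumptions rather than the paper's exact likelihood:

```python
import numpy as np

def log_pred_prob(x_next, p_tiles, mu, sigma2):
    """Log density of x_next under a Gaussian mixture with weights p_tiles,
    centers mu (N x d), and shared isotropic variance sigma2."""
    d = mu.shape[1]
    logp = (-np.sum((x_next - mu) ** 2, axis=1) / (2 * sigma2)
            - 0.5 * d * np.log(2 * np.pi * sigma2))
    m = logp.max()                      # log-sum-exp for numerical stability
    return m + np.log(np.sum(p_tiles * np.exp(logp - m)))

# Example: a single tile at the origin, unit variance, in 2D.
val = log_pred_prob(np.zeros(2), np.array([1.0]), np.zeros((1, 2)), 1.0)
# val = -log(2*pi), the log density of a standard 2D Gaussian at its mean
```

Averaging this quantity over observed points gives the running curves in (d–f); the entropy curves come from the predicted tile distribution itself.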
Figure 4: High-throughput data & benchmarking.
a) Bubblewrap results for example trajectories (blue) in the Neuropixels dataset [52, 53] (data in gray), visualized with UMAP. b) Log predictive probability (blue) and entropy (green) over time. Black lines are exponential weighted moving averages of the data. Dashed green line indicates maximum entropy. c) Average cycle time (log scale) during learning or prediction (last bar) for each timepoint. Neuropixels (NP) is run as in (a,b) with no optimization and all heuristics, and Bubblewrap easily learns at rates much faster than acquisition (30 ms). By turning off the global mean, covariance, and prior updates and only taking a gradient step for L every 30 timepoints, we are able to run at close to 1 kHz (NPb). All other bars show example timings from Van der Pol synthetic datasets optimized for speed: 10⁴ dim, where we randomly projected down to 200 dimensions and used proSVD to project to 10 dimensions for subsequent Bubblewrap learning; N = 20k, 10k, and 1k nodes, showing how the algorithm scales with the number of tiles; and Prediction, showing the time cost of predicting one step ahead for the N = 1k case.
Figure 5: Multi-step ahead predictive performance.
(top) Mean log predictive probability as a function of the number of steps ahead used for prediction for each of the four experimental datasets studied. Colors indicate model. (bottom) Bubblewrap entropy as a function of the number of steps ahead used for prediction. Higher entropy indicates more uncertainty about future states. Dashed lines denote maximum entropy for each dataset (log of the number of tiles).
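Under a tile-transition model, a k-step-ahead prediction can be formed by pushing the current tile distribution through the transition matrix k additional times, and the entropy of that distribution quantifies the growing uncertainty shown in the bottom panels. A toy sketch with a random transition matrix (an assumption, not one learned from data):

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100
A = rng.random((N, N))
A /= A.sum(axis=1, keepdims=True)       # row-stochastic transition matrix
alpha = np.zeros(N)
alpha[0] = 1.0                          # start fully concentrated on one tile

def entropy(p):
    """Shannon entropy in bits; maximum is log2(N) for a uniform distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

ents = []
for k in range(10):
    ents.append(entropy(alpha))
    alpha = alpha @ A                   # one more step of probability flow
```

With each application of A the distribution spreads over more tiles, so entropy rises from 0 toward the log2(N) ceiling marked by the dashed lines in the figure.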

References

    1. Ahrens Misha B, Orger Michael B, Robson Drew N, Li Jennifer M, and Keller Philipp J. Whole-brain functional imaging at cellular resolution using light-sheet microscopy. Nature Methods, 10(5):413–420, 2013. - PubMed
    2. Emiliani Valentina, Cohen Adam E, Deisseroth Karl, and Häusser Michael. All-optical interrogation of neural circuits. Journal of Neuroscience, 35(41):13917–13926, 2015. - PMC - PubMed
    3. Stevenson Ian H and Kording Konrad P. How advances in neural recording affect data analysis. Nature Neuroscience, 14(2):139–142, 2011. - PMC - PubMed
    4. Steinmetz Nicholas A, Koch Christof, Harris Kenneth D, and Carandini Matteo. Challenges and opportunities for large-scale electrophysiology with Neuropixels probes. Current Opinion in Neurobiology, 50:92–100, 2018. - PMC - PubMed
    5. Steinmetz Nicholas A, Aydin Cagatay, Lebedeva Anna, Okun Michael, Pachitariu Marius, Bauza Marius, Beau Maxime, Bhagat Jai, Böhm Claudia, Broux Martijn, et al. Neuropixels 2.0: A miniaturized high-density probe for stable, long-term brain recordings. bioRxiv, 2020. - PMC - PubMed
