Neural algorithmic reasoning

Petar Veličković et al. Patterns (N Y). 2021 Jul 9;2(7):100273.
doi: 10.1016/j.patter.2021.100273

Abstract

We present neural algorithmic reasoning, the art of building neural networks that are able to execute algorithmic computation, and provide our opinion on its transformative potential for running classical algorithms on inputs previously considered inaccessible to them.


Figures

Figure 1. The blueprint of neural algorithmic reasoning.

We assume that our real-world problem requires learning a mapping from natural inputs, x, to natural outputs, y—for example, fastest-time routing based on real-time traffic information. Note that natural inputs are often likely high-dimensional, noisy, and prone to changing rapidly—as is often the case in the traffic example. Further, we assume that solving our problem would benefit from applying an algorithm, A—however, A only operates over abstract inputs, x¯. In this case, A could be Dijkstra’s algorithm for shortest paths, which operates over weighted graphs with exactly one scalar weight per node, producing the shortest path tree. First, an algorithmic reasoner is trained to imitate A, learning a function g(P(f(x¯))), optimizing it to be close to ground-truth abstract outputs, A(x¯). P is a processor network operating in a high-dimensional latent space, which, if trained correctly, will be able to imitate the individual steps of A. f and g are encoder and decoder networks, respectively, designed to carry abstract data to and from P’s latent input space. Once trained, we can replace f and g with f˜ and g˜—encoders and decoders designed to process natural inputs into the latent space of P and decode P’s representations into natural outputs, respectively. Keeping P’s parameters fixed, we can then learn a function g˜(P(f˜(x))), allowing us an end-to-end differentiable function from x to y, without any low-dimensional bottlenecks—hence it is a great target for neural network optimization.
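The two-stage pipeline described in this caption can be made concrete with a short sketch. Below is a minimal, illustrative PyTorch version in which the encoder f, processor P, and decoder g are plain MLPs, and toy tensors stand in for the abstract algorithm A and the natural task. The module names, dimensions, and training loops are assumptions made for illustration only; they are not the authors' implementation, which would typically use a graph neural network as the processor.

# Minimal sketch of the encode-process-decode blueprint in Figure 1.
# All names, sizes, and the use of MLPs are illustrative assumptions.
import torch
import torch.nn as nn

LATENT = 128  # dimensionality of the processor's latent space (assumed)

class Encoder(nn.Module):
    """f (or f~): maps inputs into the processor's latent space."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Linear(in_dim, LATENT)
    def forward(self, x):
        return torch.relu(self.net(x))

class Processor(nn.Module):
    """P: operates entirely in the latent space, imitating a step of A."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT, LATENT), nn.ReLU(),
                                 nn.Linear(LATENT, LATENT))
    def forward(self, z):
        return self.net(z)

class Decoder(nn.Module):
    """g (or g~): maps latent representations back to outputs."""
    def __init__(self, out_dim):
        super().__init__()
        self.net = nn.Linear(LATENT, out_dim)
    def forward(self, z):
        return self.net(z)

# Stage 1: train f, P, g so that g(P(f(x_bar))) imitates A(x_bar).
f, P, g = Encoder(in_dim=8), Processor(), Decoder(out_dim=8)
opt = torch.optim.Adam(list(f.parameters()) + list(P.parameters())
                       + list(g.parameters()), lr=1e-3)
x_abs = torch.randn(32, 8)          # abstract inputs x_bar (toy data)
y_abs = x_abs.sort(dim=-1).values   # toy stand-in for A(x_bar)
for _ in range(100):
    loss = nn.functional.mse_loss(g(P(f(x_abs))), y_abs)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: freeze P, then learn new encoder/decoder on natural data,
# giving the end-to-end differentiable function g~(P(f~(x))).
for p in P.parameters():
    p.requires_grad_(False)         # keep the processor's parameters fixed
f_nat, g_nat = Encoder(in_dim=64), Decoder(out_dim=1)
opt2 = torch.optim.Adam(list(f_nat.parameters()) + list(g_nat.parameters()),
                        lr=1e-3)
x_nat = torch.randn(32, 64)         # high-dimensional natural inputs x
y_nat = torch.randn(32, 1)          # natural outputs y (toy targets)
for _ in range(100):
    loss = nn.functional.mse_loss(g_nat(P(f_nat(x_nat))), y_nat)
    opt2.zero_grad(); loss.backward(); opt2.step()

The design point the sketch captures is the one stressed in the caption: P is trained once against the abstract algorithm and then kept frozen, so the natural-input model remains end-to-end differentiable without forcing data through a low-dimensional abstract bottleneck.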

