Brain-like Flexible Visual Inference by Harnessing Feedback-Feedforward Alignment

Tahereh Toosi et al. Adv Neural Inf Process Syst. 2023 Dec;37:56979-56997. Epub 2024 May 30.

Abstract

In natural vision, feedback connections support versatile visual inference capabilities, such as making sense of occluded or noisy bottom-up sensory information and mediating purely top-down processes such as imagination. However, the mechanisms by which the feedback pathway learns to flexibly give rise to these capabilities remain unclear. We propose that top-down effects emerge through alignment between the feedforward and feedback pathways, each optimizing its own objective. To achieve this co-optimization, we introduce Feedback-Feedforward Alignment (FFA), a learning algorithm that leverages the feedback and feedforward pathways as mutual credit-assignment computational graphs, enabling alignment. We demonstrate the effectiveness of FFA in co-optimizing classification and reconstruction tasks on the widely used MNIST and CIFAR10 datasets. Notably, the alignment mechanism in FFA endows the feedback connections with emergent visual inference functions, including denoising, resolving occlusions, hallucination, and imagination. Moreover, FFA is more biologically plausible to implement than traditional back-propagation (BP). By repurposing the computational graph of credit assignment into a goal-driven feedback pathway, FFA alleviates the weight transport problem encountered in BP, enhancing the bio-plausibility of the learning algorithm. Our study presents FFA as a promising proof of concept for how feedback connections in the visual cortex support flexible visual functions. This work also contributes to the broader study of visual inference underlying perceptual phenomena and has implications for developing more biologically inspired learning algorithms.
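
As a rough illustration of the learning rule the abstract describes, the sketch below trains a one-hidden-layer feedforward pathway for classification and a one-hidden-layer feedback pathway for reconstruction, with each pathway delivering error signals to the other's hidden layer in place of the weight transposes that backpropagation would require. The layer sizes, learning rate, nonlinearity, and the choice of feeding the softmaxed latent into the feedback pathway are illustrative assumptions, not the paper's exact architecture or hyperparameters.

import torch
import torch.nn.functional as F

# Illustrative sizes for flattened MNIST; not the paper's exact architecture.
d_in, d_hid, d_lat = 784, 256, 10
W1 = torch.randn(d_hid, d_in) * 0.01    # feedforward pathway: x -> h -> y
W2 = torch.randn(d_lat, d_hid) * 0.01
B1 = torch.randn(d_hid, d_lat) * 0.01   # feedback pathway:    y -> g -> x_hat
B2 = torch.randn(d_in, d_hid) * 0.01
lr = 1e-2                               # assumed learning rate

def ffa_step(x, labels):
    """One FFA update on a batch x [N, d_in] with integer class labels [N]."""
    N = x.shape[0]
    # Feedforward pass, trained for classification.
    h = torch.relu(x @ W1.T)
    y = h @ W2.T
    e_y = F.softmax(y, dim=1) - F.one_hot(labels, d_lat).float()  # output error
    e_h = (e_y @ B1.T) * (h > 0).float()   # hidden error routed through FEEDBACK weights
    # Feedback pass, trained for reconstruction (fed the forward latent; an assumption).
    z = F.softmax(y, dim=1)
    g = torch.relu(z @ B1.T)
    x_hat = g @ B2.T
    e_x = x_hat - x                        # reconstruction error
    e_g = (e_x @ W1.T) * (g > 0).float()   # hidden error routed through FEEDFORWARD weights
    # Each pathway updates only its own weights, toward its own objective.
    W2.sub_(lr * e_y.T @ h / N);  W1.sub_(lr * e_h.T @ x / N)
    B2.sub_(lr * e_x.T @ g / N);  B1.sub_(lr * e_g.T @ z / N)
    return y, x_hat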

Figures

Figure 1: Feedback-Feedforward Alignment. Learning: backpropagation and feedback alignment train a discriminator with symmetric W_f^T or fixed random R_i weights, respectively. FFA maps input x to latents y as in a discriminator, but also reconstructs the input x̂ from the latent. The forward and backward pathways also pass gradients back for their counterpart performing inference in the opposite direction. Inference: we run the forward and feedback connections trained under FFA in a loop to update the activations (x) for each of the inference tasks, e.g. mental imagery. Δ denotes the difference between the input signal and the reconstructed output. * marks the imposed upper bound. See Algorithm 2 in Section 8.4.
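
To make the inference loop in this caption concrete, here is a rough sketch: the activations x are repeatedly passed through the feedforward pathway, reconstructed by the feedback pathway, and nudged toward the reconstruction, with optional inference noise and a clamp standing in for the imposed upper bound (*). The weights are those of the illustrative sketch above; the mixing coefficient alpha, the noise model, and the clamp range are assumptions standing in for Algorithm 2 in Section 8.4.

import torch
import torch.nn.functional as F

def closed_loop_inference(x0, W1, W2, B1, B2, n_iters=10, alpha=0.5,
                          noise_std=0.0, x_max=1.0):
    """Iteratively refine activations x by mixing them with the feedback
    pathway's reconstruction; a sketch standing in for Algorithm 2 (Sec. 8.4)."""
    x = x0.clone()
    for _ in range(n_iters):
        h = torch.relu(x @ W1.T)                 # feedforward pass
        y = F.softmax(h @ W2.T, dim=1)           # latent / class beliefs
        g = torch.relu(y @ B1.T)                 # feedback pass
        x_hat = g @ B2.T                         # reconstruction
        x = (1 - alpha) * x + alpha * x_hat      # move activations toward x_hat
        x = x + noise_std * torch.randn_like(x)  # inference noise (high vs. low)
        x = x.clamp(0.0, x_max)                  # imposed upper bound (the * above)
    return x, y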
Figure 2: Co-optimization in FFA. A) Accuracy and reconstruction performance for FFA and control algorithms as a function of epochs. B) Dual-task performance for a variety of feedforward discriminative and autoencoder architectures trained under BP or FA compared to FFA training (details for architecture in Suppl. 8.1). The shaded area represents the desired corner. C) Robustness to input Gaussian noise (μ = 0 and varying σ² between 0 and 1) as measured by test accuracy on the noisy input.
Figure 3: Denoising in FFA. Closed-loop inference on noisy inputs (σ² = 0.4) performed by FFA and control algorithms, assuming a static read-out for discrimination set at iteration 0. Shown at right are sample reconstructions recovered by FFA and control autoencoders over 4 iterations (no clipping or other processing was performed on these images).
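
For reference, this denoising setting could be exercised with the illustrative inference loop sketched under Figure 1: the noise variance σ² = 0.4 and the 4-iteration budget come from the caption, while x_clean (an assumed batch of clean test images), the weights, and the helper come from the earlier sketches.

sigma2 = 0.4                                   # noise variance from the caption
x_noisy = (x_clean + sigma2 ** 0.5 * torch.randn_like(x_clean)).clamp(0.0, 1.0)
x_denoised, y = closed_loop_inference(x_noisy, W1, W2, B1, B2, n_iters=4)
pred = y.argmax(dim=1)                         # read-out held static across iterations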
Figure 4: Resolving occlusion. A 15×15 black square occludes the digits in the first column, as shown in the second column. Each row shows a sample occluded digit (5, 8, and 9) and the corresponding resolved images under high-noise and low-noise conditions: the resolved digit is depicted in the fifth and last columns, respectively. For sample intermediate iterations, refer to Suppl. Figure 10.
Figure 5: Hallucination. Without external input, we let the inference algorithm run on the FFA-trained network until convergence (last column) for high-noise (upper) and low-noise (lower) inference. The sample iterations are linearly spaced; for high noise, roughly twice as many iterations are typically needed. Refer to Section 8.8 for iteration values.
Figure 6: Visual imagery. Generated samples (upper panels) from the inference algorithm on the FFA-trained network when the top-down signal ‘5’ (left) or ‘3’ (right) was activated. Equally spaced sample iterations for the generations are shown in the lower panels. Each row corresponds to an inference noise level. Refer to Section 8.7 for iteration and β values.
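
As a loose illustration of this imagery setting, the sketch below (reusing the illustrative weights defined earlier) starts from noise in place of any external input and mixes a one-hot top-down class signal into the latent on every iteration, letting the feedback pathway generate an image. The starting point, mixing coefficient β, noise level, and iteration count are assumptions; Section 8.7 gives the values actually used.

import torch
import torch.nn.functional as F

def imagine(class_idx, W1, W2, B1, B2, d_in=784, d_lat=10,
            n_iters=50, beta=0.8, noise_std=0.1):
    """Generate an image of class `class_idx` with no external input by mixing
    a one-hot top-down signal into the latent on every iteration (a sketch;
    see Section 8.7 for the iteration counts and beta values actually used)."""
    x = torch.rand(1, d_in)                                  # start from noise, not data
    target = F.one_hot(torch.tensor([class_idx]), d_lat).float()
    for _ in range(n_iters):
        y = F.softmax(torch.relu(x @ W1.T) @ W2.T, dim=1)    # current class beliefs
        y = (1 - beta) * y + beta * target                   # clamp the top-down signal
        x = torch.relu(y @ B1.T) @ B2.T                      # feedback generates the image
        x = (x + noise_std * torch.randn_like(x)).clamp(0.0, 1.0)
    return x

imagined_five = imagine(5, W1, W2, B1, B2)                   # cf. the ‘5’ panel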

