(1/6) Excited to share a new preprint from our lab! Can large, deep nonlinear neural networks trained with indirect, low-dimensional error signals compete with full-fledged backpropagation? Tl;dr: Yes! https://arxiv.org/abs/2502.20580.
(2/6) Backpropagation, the gold standard for training neural networks, relies on precise, high-dimensional error signals. Proposed alternatives, such as feedback alignment, do not challenge this assumption. Could simpler, low-dimensional signals achieve similar results?
(3/6) We developed a local learning rule that leverages low-dimensional error feedback and decouples the forward and backward passes. Surprisingly, performance matches that of traditional backpropagation across deep nonlinear networks, including convolutional nets and even transformers!
(4/6) The key insight: the dimensionality of the error signal doesn't need to scale with network size, only with task complexity. Low-dimensional feedback can effectively guide learning even in very large nonlinear networks. The trick is to align the feedback weights with the error (the gradient of the loss).
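To make the idea concrete, here is a minimal NumPy sketch in the spirit of the approach, not the exact rule from the preprint: every hidden layer receives only the n_out-dimensional output error through its own feedback matrix, so the feedback dimension tracks the task (number of classes), not the layer width. The network sizes, learning rate, and random data are illustrative assumptions, and the feedback matrices are kept fixed and random here for brevity rather than aligned to the error as in the paper.

```python
# Sketch of low-dimensional error feedback (DFA-style), NOT the preprint's exact rule.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_h, n_out = 64, 256, 10            # error dimension = n_out, not the hidden width
W1 = rng.normal(0, 1 / np.sqrt(n_in), (n_h, n_in))
W2 = rng.normal(0, 1 / np.sqrt(n_h), (n_h, n_h))
W3 = rng.normal(0, 1 / np.sqrt(n_h), (n_out, n_h))

# Each hidden layer gets its own feedback matrix projecting the n_out-dim error.
# (Fixed random here; the paper's key ingredient of aligning them to the error is omitted.)
B1 = rng.normal(0, 1 / np.sqrt(n_out), (n_h, n_out))
B2 = rng.normal(0, 1 / np.sqrt(n_out), (n_h, n_out))

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.05
for step in range(200):
    # Toy batch: random inputs and random one-hot targets (placeholder data).
    x = rng.normal(size=(32, n_in))
    y = np.eye(n_out)[rng.integers(0, n_out, size=32)]

    # Forward pass.
    h1 = relu(x @ W1.T)
    h2 = relu(h1 @ W2.T)
    p = softmax(h2 @ W3.T)

    # Low-dimensional error: only n_out numbers per example.
    e = p - y                               # dL/dlogits for cross-entropy

    # Local updates: each hidden layer receives the error through its feedback
    # matrix instead of the transposed forward weights used by backprop.
    d2 = (e @ B2.T) * (h2 > 0)
    d1 = (e @ B1.T) * (h1 > 0)
    W3 -= lr * e.T @ h2 / len(x)
    W2 -= lr * d2.T @ h1 / len(x)
    W1 -= lr * d1.T @ x / len(x)
```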
(5/6) Applying our method to a simple ventral visual stream model replicates the results of Lindsey, @suryaganguli.bsky.social and @stphtphsn.bsky.social and shows that the bottleneck in the error signal—not the feedforward pass—shapes the receptive fields and representations of the lower layers.
(6/6) Our findings challenge some prevailing conceptions about gradient-based learning, opening new avenues for understanding efficient neural learning in both artificial systems and the brain.
Read our full preprint here: https://arxiv.org/abs/2502.20580 #Neuroscience #MachineLearning #DeepLearning