Traces Propagation: Memory-efficient and scalable forward-only learning in spiking neural networks

Spiking Neural Networks (SNNs) provide an efficient framework for processing dynamic spatio-temporal signals and for investigating the learning principles underlying biological neural systems. A key challenge in training SNNs is solving both spatial and temporal credit assignment. The dominant approach for training SNNs is Backpropagation Through Time (BPTT) with surrogate gradients. However, BPTT stands in stark contrast to the spatial and temporal locality observed in biological neural systems, and it leads to high computational and memory demands, limiting efficient training strategies and on-device learning. Although existing local learning rules achieve local temporal credit assignment by leveraging eligibility traces, they fail to address spatial credit assignment without resorting to auxiliary layer-wise matrices, which increase memory overhead and hinder scalability, especially on embedded devices. In this work, we propose Traces Propagation (TP), a forward-only, memory-efficient, scalable, and fully local learning rule that combines eligibility traces with a layer-wise contrastive loss without requiring auxiliary layer-wise matrices. TP outperforms other fully local learning rules on the NMNIST and SHD datasets. On more complex datasets such as DVS-GESTURE and DVS-CIFAR10, TP achieves competitive performance and scales effectively to deeper SNN architectures such as VGG-9, while providing favorable memory scaling compared to prior fully local scalable rules for datasets with a large number of classes. Finally, we show that TP is well suited for practical fine-tuning tasks, such as keyword spotting on the Google Speech Commands dataset, paving the way for efficient learning at the edge.
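To make the abstract's core idea concrete, the sketch below illustrates, in general terms, how a forward-only local update can combine per-synapse eligibility traces with a layer-wise contrastive learning signal and no auxiliary layer-wise matrices. This is a minimal illustrative toy, not the authors' Traces Propagation implementation: the LIF dynamics, the fast-sigmoid surrogate, the specific contrastive loss, and all constants and names are assumptions made purely for illustration.

```python
# Minimal, illustrative sketch (NOT the authors' TP implementation): a single
# LIF layer accumulates eligibility traces during the forward pass, and a
# layer-wise contrastive signal on the layer's own spike-trace representation
# gates the weight update locally, with no backpropagation across layers.
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: one fully connected spiking (LIF) layer.
n_in, n_out, T = 20, 10, 50          # inputs, neurons, time steps (assumed)
tau_mem, tau_trace = 10.0, 20.0      # membrane / trace time constants (assumed)
thr, lr = 1.0, 1e-2

W = 0.3 * rng.standard_normal((n_out, n_in))

def surrogate_grad(v):
    """Fast-sigmoid surrogate derivative of the spike nonlinearity."""
    return 1.0 / (1.0 + 10.0 * np.abs(v - thr)) ** 2

def forward_with_traces(x_spikes, W):
    """Run the layer forward and accumulate per-synapse eligibility traces."""
    v = np.zeros(n_out)                      # membrane potentials
    pre_trace = np.zeros(n_in)               # low-pass filtered input spikes
    elig = np.zeros((n_out, n_in))           # eligibility traces
    out_trace = np.zeros(n_out)              # layer representation (spike trace)
    for t in range(T):
        pre_trace = pre_trace * np.exp(-1.0 / tau_trace) + x_spikes[t]
        v = v * np.exp(-1.0 / tau_mem) + W @ x_spikes[t]
        spikes = (v >= thr).astype(float)
        v -= spikes * thr                    # soft reset
        # Eligibility: postsynaptic surrogate sensitivity x presynaptic trace.
        elig = elig * np.exp(-1.0 / tau_trace) + np.outer(surrogate_grad(v), pre_trace)
        out_trace = out_trace * np.exp(-1.0 / tau_trace) + spikes
    return out_trace, elig

def contrastive_signal(reps, labels):
    """Per-neuron learning signal from a simple supervised contrastive loss on
    the layer's spike-trace representations: pull toward the same-class mean,
    push away from other-class means (an assumed, simplified loss form)."""
    grads = np.zeros_like(reps)
    for i in range(len(reps)):
        same = labels == labels[i]
        same[i] = False
        diff = labels != labels[i]
        if same.any():
            grads[i] += reps[i] - reps[same].mean(0)          # attraction term
        if diff.any():
            grads[i] -= 0.5 * (reps[i] - reps[diff].mean(0))  # repulsion term
    return grads

# One forward-only update on a toy batch of Poisson-like spike trains.
batch = 8
X = (rng.random((batch, T, n_in)) < 0.1).astype(float)
y = rng.integers(0, 2, size=batch)

reps, eligs = [], []
for b in range(batch):
    r, e = forward_with_traces(X[b], W)
    reps.append(r)
    eligs.append(e)
reps = np.stack(reps)

signals = contrastive_signal(reps, y)           # (batch, n_out) learning signal
for b in range(batch):
    W -= lr * signals[b][:, None] * eligs[b]    # local, trace-gated update

print("updated W norm:", np.linalg.norm(W))
```

Because the learning signal is computed from the layer's own representations rather than from a backpropagated error or a fixed random feedback matrix, the update remains local in space, while the eligibility traces keep credit assignment local in time; this is the general pattern the abstract describes, not a reproduction of the paper's exact rule.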