More often than not, the large majority of inputs to cortical neurons are recurrent and come from other cortical neurons. In fact, many cortical neurons may lack direct synaptic connections to both input (the senses) and output (the muscles) entirely. These “loneliest neurons”, as Mark Humphries calls them on his blog, have a problem: they do not know their contribution to the function that the entire network computes.
This raises the important question of how these neurons learn useful representations that support the network's function and neural processing as a whole. The challenge the “loneliest neurons” face is a form of the credit assignment problem, and the typical remedy is to introduce some form of feedback into the learning algorithm.
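To make the feedback idea concrete, here is a minimal sketch of random feedback alignment in the spirit of Lillicrap and colleagues, on a toy linear regression task. Everything in it (the network sizes, learning rate, and the target map `W_true`) is an illustrative assumption, not code from any of the papers below:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer linear network trained to match a random target map.
# Exact backpropagation would send the hidden-layer error back as W2.T @ e;
# feedback alignment replaces W2.T with a FIXED random matrix B, so no
# hidden ("lonely") neuron ever needs to know the downstream weights.
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(0.0, 0.1, (n_hid, n_in))
W2 = rng.normal(0.0, 0.1, (n_out, n_hid))
B = rng.normal(0.0, 0.1, (n_hid, n_out))   # fixed random feedback weights

W_true = rng.normal(size=(n_out, n_in))    # hypothetical target linear map
lr = 0.02

losses = []
for step in range(500):
    x = rng.normal(size=(n_in,))
    h = W1 @ x                    # hidden activity
    y = W2 @ h                    # network output
    e = y - W_true @ x            # output error
    W2 -= lr * np.outer(e, h)     # delta rule at the output layer
    dh = B @ e                    # random feedback instead of W2.T @ e
    W1 -= lr * np.outer(dh, x)    # hidden update driven by random feedback
    losses.append(float(e @ e))
```

Over training, the squared error shrinks even though the hidden layer never sees the true gradient; the forward weights gradually align with the random feedback, which is the core observation of the feedback-alignment line of work.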
Triggered by the work of Tim Lillicrap and colleagues, there has been a recent surge of interest in identifying viable credit assignment strategies in biological neural networks. In exploratory work with Surya Ganguli, we recently extended some of these ideas to the domain of spiking neural networks with the help of surrogate gradients. This puts us in the unique and exciting position of training spiking neural networks to solve complex spatiotemporal problems. While these results are promising, much work remains to improve our understanding of plausible spatiotemporal credit assignment in biologically inspired neural networks. Consequently, biologically plausible learning in the “loneliest neurons” remains one of the primary research interests of the lab.
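The surrogate-gradient trick itself fits in a few lines. A spike is a hard threshold whose true derivative is zero almost everywhere, so in the backward pass one substitutes a smooth surrogate; the fast-sigmoid shape below follows the SuperSpike paper, while the function names, threshold, and steepness parameter are illustrative choices, not the lab's actual implementation:

```python
import numpy as np

def spike_forward(u, threshold=1.0):
    """Forward pass: Heaviside step. A neuron emits a spike (1.0) when its
    membrane potential u crosses threshold. The true derivative of this
    function is zero almost everywhere, so gradients cannot flow through it."""
    return (np.asarray(u) >= threshold).astype(float)

def surrogate_grad(u, threshold=1.0, beta=10.0):
    """Backward pass: surrogate derivative used IN PLACE of the true one.
    Fast-sigmoid shape as in SuperSpike: 1 / (beta * |u - threshold| + 1)^2.
    It peaks at the threshold and decays smoothly away from it."""
    return 1.0 / (beta * np.abs(np.asarray(u) - threshold) + 1.0) ** 2
```

Because the surrogate is largest near the threshold, error signals mostly flow through neurons that were close to spiking, which is what makes gradient-based training of multi-layer spiking networks feasible in practice.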
- Neftci, E.O., Mostafa, H., and Zenke, F. (2019).
Surrogate Gradient Learning in Spiking Neural Networks.
arXiv:1901.09948 [cs, q-bio].
- Zenke, F. and Ganguli, S. (2018).
SuperSpike: Supervised learning in multi-layer spiking neural networks.
Neural Comput 30, 1514–1541. doi: 10.1162/neco_a_01086