Functional spiking neural networks and neuromorphic learning algorithms through surrogate gradients

More often than not, the large majority of inputs to cortical neurons are recurrent and come from other cortical neurons. Many neurons in the cortex may entirely lack direct synaptic connections to both input (senses) and output (muscles). These “loneliest neurons”, as Mark Humphries calls them on his blog, have a problem: they do not know their contribution to the function that the entire network is computing.

This raises the important question of how these neurons learn useful representations that support the network function and neural processing as a whole. The challenge the “loneliest neurons” face is a form of the credit assignment problem. The typical remedy for credit assignment is to introduce some form of feedback into the learning algorithm.

Triggered by the work of Tim Lillicrap and colleagues, there has been a recent surge of interest in identifying viable credit assignment strategies in biological neural networks. In exploratory work with Surya Ganguli, we have extended some of these ideas to the domain of spiking neural networks with the help of surrogate gradients. This puts us in the unique and exciting position of training spiking neural networks to solve complex spatiotemporal problems. While these results are promising, much work remains to be done to improve our understanding of plausible spatiotemporal credit assignment in biologically inspired neural networks. Consequently, biologically plausible learning in the “loneliest neurons” remains one of the primary research interests in the lab.
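The core surrogate-gradient trick can be sketched in a few lines of NumPy. The illustration below is our own minimal sketch, not code from any of the papers listed under further reading: it simulates a single leaky integrate-and-fire neuron and, when computing a gradient, replaces the derivative of the spike threshold function, which is zero almost everywhere, with a smooth fast-sigmoid surrogate in the style of SuperSpike. For brevity it ignores the gradient path through the membrane recurrence and the reset; all parameter values are illustrative.

```python
import numpy as np

def surrogate_grad(v, threshold=1.0, beta=10.0):
    # Fast-sigmoid surrogate for d(spike)/d(v): smooth, peaked at the
    # threshold, used in place of the Heaviside derivative (zero a.e.).
    return 1.0 / (1.0 + beta * np.abs(v - threshold)) ** 2

def lif_forward(inputs, w, tau=0.9, threshold=1.0):
    # Leaky integrate-and-fire membrane with hard reset after a spike.
    v = 0.0
    spikes, membrane = [], []
    for x in inputs:
        v = tau * v + w * x          # leak plus weighted input current
        membrane.append(v)
        s = float(v >= threshold)    # non-differentiable spike
        spikes.append(s)
        v = v * (1.0 - s)            # reset membrane on spike
    return np.array(spikes), np.array(membrane)

# Toy input spike train and a single input weight.
inputs = np.array([0.5, 0.6, 0.7, 0.2])
w = 1.0
spikes, membrane = lif_forward(inputs, w)

# Surrogate gradient of the total spike count with respect to w:
# chain rule per time step, with the surrogate standing in for the
# spike derivative (recurrence through v is ignored for brevity).
grad_w = np.sum(surrogate_grad(membrane) * inputs)
```

Because the surrogate is nonzero below and above threshold, `grad_w` carries a useful learning signal even on time steps where the neuron stayed silent; with the true Heaviside derivative it would vanish everywhere.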

Further reading

  • Cramer, B., Billaudelle, S., Kanya, S., Leibfried, A., Grübl, A., Karasenko, V., Pehle, C., Schreiber, K., Stradmann, Y., Weis, J., et al. (2021). Surrogate gradients for analog neuromorphic computing. arXiv:2006.07239 [cs, q-bio, stat].
  • Zenke, F., and Neftci, E.O. (2021). Brain-Inspired Learning on Neuromorphic Substrates. Proceedings of the IEEE 109, 935–950.
    fulltext | preprint
  • Zenke, F., and Vogels, T.P. (2021). The Remarkable Robustness of Surrogate Gradient Learning for Instilling Complex Function in Spiking Neural Networks. Neural Computation 33, 899–925.
    fulltext | preprint
  • Cramer, B., Stradmann, Y., Schemmel, J., and Zenke, F. (2020). The Heidelberg Spiking Data Sets for the Systematic Evaluation of Spiking Neural Networks. IEEE Transactions on Neural Networks and Learning Systems 1–14.
    fulltext | preprint | data | code
  • Neftci, E.O., Mostafa, H., and Zenke, F. (2019). Surrogate Gradient Learning in Spiking Neural Networks: Bringing the Power of Gradient-Based Optimization to Spiking Neural Networks. IEEE Signal Processing Magazine 36, 51–63.
    fulltext | preprint | code
  • Zenke, F., and Ganguli, S. (2018). SuperSpike: Supervised Learning in Multi-Layer Spiking Neural Networks. Neural Computation 30, 1514–1541. doi: 10.1162/neco_a_01086
    fulltext | preprint