I am very much looking forward to presenting some recent work with Surya on learning in spiking neural networks at the CoSyNe workshop “Deep learning and the brain” (6:20–6:50 pm on Monday, 27 February 2017, in “Wasatch”).
Update: Preprint available.
In my talk I will revisit the problem of training multi-layer spiking neural networks using an objective function approach. Because spiking neurons are non-differentiable, and because the spike reset induces a non-trivial history dependence, gradient-based learning methods like those used to train deep neural networks in machine learning generally cannot be applied directly.
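To make the non-differentiability problem concrete, here is a minimal sketch of the usual workaround in SuperSpike-style methods: keep the hard threshold in the forward pass, but substitute a smooth surrogate for its derivative in the backward pass. The threshold value and the fast-sigmoid surrogate with steepness `beta` are illustrative choices, not necessarily the exact ones from the talk.

```python
import numpy as np

def spike(v, threshold=1.0):
    """Hard spike nonlinearity: a step function of the membrane potential.
    Its derivative is zero almost everywhere, so gradients cannot flow."""
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, beta=10.0):
    """Smooth pseudo-derivative (a fast-sigmoid derivative) used in place
    of the step's true derivative during the backward pass."""
    return 1.0 / (1.0 + beta * np.abs(v - threshold)) ** 2

v = np.linspace(-1.0, 3.0, 5)
print(spike(v))           # hard 0/1 output used in the forward pass
print(surrogate_grad(v))  # smooth surrogate used for credit assignment
```

The surrogate peaks at the firing threshold, so synapses driving a neuron close to threshold receive the largest updates even when no spike is emitted.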
During my presentation, I will address one by one the core problems typically encountered when trying to train spiking neural networks, and introduce SuperSpike, a new approach to training deterministic spiking neural networks on complex, non-linearly separable temporal tasks.
Importantly, SuperSpike has a direct interpretation as a Hebbian three-factor learning rule. Moreover, I am going to share some of my ideas on how similar algorithms could be implemented in neurobiology. For instance, when combined with feedback alignment (Lillicrap et al. 2016), the weight transport problem can be alleviated (see the figure below for a simple example). With all that said, it would be great if you could join me for my talk. I am looking forward to fruitful discussions during the workshop and to your feedback.
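The feedback-alignment idea can be sketched in a few lines on a toy (non-spiking) two-layer network: instead of propagating the output error through the transpose of the readout weights, it is projected through a fixed random matrix `B`, so the feedback pathway never needs access to the forward weights. The task, network sizes, and learning rate below are illustrative assumptions, not the setup from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network on a small linear regression task.
n_in, n_hid, n_out = 3, 8, 2
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))
# Fixed random feedback matrix: stands in for W2.T in the backward
# pass, so no weight transport between pathways is required.
B = rng.normal(0.0, 0.5, (n_hid, n_out))

X = rng.normal(size=(100, n_in))
W_target = rng.normal(size=(n_out, n_in))  # hypothetical target mapping
Y = X @ W_target.T

lr = 0.02
for _ in range(500):
    h = np.tanh(X @ W1.T)                  # hidden activity
    y = h @ W2.T                           # linear readout
    e = y - Y                              # output error
    # Feedback alignment: project the error with B, not W2.T.
    delta_h = (e @ B.T) * (1.0 - h ** 2)
    W2 -= lr * e.T @ h / len(X)
    W1 -= lr * delta_h.T @ X / len(X)

print(np.mean(e ** 2))  # final mean-squared error
```

Because the forward weights gradually align with the fixed feedback matrix, the random projection delivers a usable teaching signal, which is what makes this plausible as a biological substitute for exact backpropagation.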