Tag: preprint
Preprint: The remarkable robustness of surrogate gradient learning for instilling complex function in spiking neural networks
We just put up a new preprint https://www.biorxiv.org/content/10.1101/2020.06.29.176925v1 in which we take a careful look at what makes surrogate gradients work. Spiking neural networks are notoriously hard to train using gradient-based methods due to their… Continue reading
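For readers unfamiliar with the trick the preprint studies, here is a minimal sketch of the idea, not the preprint's exact formulation: the hard spike threshold is kept in the forward pass, while the backward pass swaps its ill-defined derivative for a smooth surrogate, here the derivative of a fast sigmoid as used in the SpyTorch tutorials. The class name, the threshold at zero, and the scale value are illustrative assumptions.

```python
import torch

class SurrGradSpike(torch.autograd.Function):
    """Heaviside spike nonlinearity with a fast-sigmoid surrogate gradient."""

    scale = 10.0  # steepness of the surrogate; an assumed, tunable value

    @staticmethod
    def forward(ctx, x):
        # Forward pass: emit a binary spike whenever the input
        # (e.g. membrane potential minus threshold) crosses zero.
        ctx.save_for_backward(x)
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        # Backward pass: replace the Dirac delta of the step function
        # with the derivative of a fast sigmoid, which is nonzero
        # everywhere and lets gradients flow through the network.
        (x,) = ctx.saved_tensors
        surrogate = 1.0 / (SurrGradSpike.scale * x.abs() + 1.0) ** 2
        return grad_output * surrogate

# Usage: apply to the centered membrane potential inside the network loop.
spike_fn = SurrGradSpike.apply
mem = torch.randn(4, requires_grad=True)
spikes = spike_fn(mem)          # binary spikes in the forward pass
spikes.sum().backward()         # smooth surrogate gradients in mem.grad
```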
Preprint: The Heidelberg spiking datasets for the systematic evaluation of spiking neural networks
Update (2020-12-30) Now published: Cramer, B., Stradmann, Y., Schemmel, J., and Zenke, F. (2020). The Heidelberg Spiking Data Sets for the Systematic Evaluation of Spiking Neural Networks. IEEE Transactions on Neural Networks and Learning Systems… Continue reading
The temporal paradox of Hebbian learning and homeostatic plasticity
I am happy that our article on “The temporal paradox of Hebbian learning and homeostatic plasticity” was just published in Current Opinion in Neurobiology (full text). This article concisely presents the main arguments for… Continue reading