In the 11th episode of the Theoretical Neuroscience Podcast, Gaute and Friedemann discuss end-to-end learning with spiking neurons and self-supervised learning.
Author: fzenke
Elucidating the theoretical underpinnings of surrogate gradient learning in spiking neural networks
Surrogate gradients (SGs) are empirically successful at training spiking neural networks (SNNs). But why do they work so well, and what is their theoretical basis? In our new preprint, led by Julia, we answer these questions. Continue reading
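The core trick behind surrogate gradient learning can be sketched in a few lines: the forward pass keeps the non-differentiable spike (Heaviside) nonlinearity, while the backward pass swaps in a smooth pseudo-derivative. A minimal sketch, assuming the fast-sigmoid surrogate of SuperSpike (Zenke & Ganguli, 2018); the function names and the `beta` value are illustrative choices, not the preprint's exact setup:

```python
import numpy as np

def spike(v, threshold=1.0):
    """Forward pass: emit a spike wherever the membrane potential
    crosses the firing threshold (Heaviside step)."""
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, beta=10.0):
    """Backward pass: replace the step function's derivative, which is
    zero almost everywhere, with a smooth fast-sigmoid pseudo-derivative
    peaked at the threshold."""
    return 1.0 / (beta * np.abs(v - threshold) + 1.0) ** 2

v = np.array([0.2, 0.9, 1.1, 2.0])
print(spike(v))           # → [0. 0. 1. 1.]
print(surrogate_grad(v))  # nonzero everywhere, largest near threshold
```

In a full SNN training loop this pair would typically be wrapped in a custom autograd function so the surrogate is only used when backpropagating through spike times.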
Hiring: Elucidating the circuit mechanisms for strategic planning using deep reinforcement learning
We are looking for a fearless new lab member to investigate the neuronal circuit mechanisms for strategic planning. Strategic planning is a hallmark of intelligent behavior. Squirrels bury nuts to prepare for winter, crows store… Continue reading
Improving equilibrium propagation without weight symmetry through Jacobian homeostasis
We are happy that our new paper “Improving equilibrium propagation (EP) without weight symmetry through Jacobian homeostasis,” led by Axel, was accepted at ICLR 2024. Preprint: https://arxiv.org/abs/2309.02214 Code: https://github.com/Laborieux-Axel/generalized-holo-ep EP prescribes a local learning rule and uses recurrent… Continue reading
The lab at NeurIPS 2023
We’re at NeurIPS with two papers this year. If you are in New Orleans, come see us! Dis-inhibitory neuronal circuits can control the sign of synaptic plasticity. Rossbroich, J. and Zenke, F. (2023) doi: Continue reading
Paper: Disinhibitory neuronal circuits are ideally poised to control the sign of synaptic plasticity
In Julian’s new paper “Dis-inhibitory neuronal circuits can control the sign of synaptic plasticity,” accepted at NeurIPS, we look at how to reconcile normative theories of gradient-based learning in the brain with phenomenological models of… Continue reading
Paper: Implicit variance regularization in non-contrastive SSL
New paper from the lab accepted at NeurIPS: “Implicit variance regularization in non-contrastive SSL.” In our article, first-authored by Manu and Axel, we add further understanding to how non-contrastive self-supervised learning (SSL) methods avoid collapse. Continue reading