Complex synapses for continual learning

Most of our existing plasticity models are essentially mathematical descriptions of how to move a single parameter w, the synaptic weight of a synapse, up and down. We are good at casting this into a differential equation framework which works for both rate-based and spiking neurons and can be fitted to data from STDP experiments.
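
To make this concrete, here is a minimal sketch of such a rule: a pair-based STDP model in which each spike leaves an exponentially decaying trace and the weight is nudged up or down at spike times. All parameter values are illustrative assumptions, not fits to any particular dataset.

```python
# Minimal pair-based STDP sketch with exponentially decaying spike traces.
tau_pre, tau_post = 20e-3, 20e-3   # trace time constants (s)
A_plus, A_minus = 0.01, 0.012      # potentiation / depression amplitudes
dt = 1e-3                          # Euler time step (s)

w = 0.5          # the single synaptic state variable
x_pre = 0.0      # presynaptic spike trace
x_post = 0.0     # postsynaptic spike trace

def step(pre_spike: bool, post_spike: bool) -> None:
    """Advance traces and weight by one time step."""
    global w, x_pre, x_post
    # Euler integration of dx/dt = -x / tau for both traces
    x_pre -= dt / tau_pre * x_pre
    x_post -= dt / tau_post * x_post
    if pre_spike:
        x_pre += 1.0
        w -= A_minus * x_post   # post-before-pre pairing: depression
    if post_spike:
        x_post += 1.0
        w += A_plus * x_pre     # pre-before-post pairing: potentiation
    w = min(max(w, 0.0), 1.0)   # keep the weight in a bounded range
```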

Illustration of the three synaptic state variables in the Ziegler model evolving in double-well potentials. Adapted from Ziegler et al. (2015).
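
The basic ingredient of such models can be sketched in a few lines. The snippet below simulates a single generic bistable variable in a quartic double-well potential U(x) = x^4/4 - x^2/2 driven by noise; this is a hypothetical toy example, not the Ziegler model itself, whose state variables and their couplings are more elaborate.

```python
import numpy as np

def simulate_double_well(x0=1.0, T=10.0, dt=1e-3, sigma=0.3, seed=0):
    """Langevin dynamics of one bistable variable in U(x) = x**4/4 - x**2/2.

    The two minima at x = -1 and x = +1 act as stable low/high states
    between which noise can occasionally drive a transition.
    """
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        drift = -(x[t - 1] ** 3 - x[t - 1])   # -dU/dx
        x[t] = x[t - 1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x
```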

However, chemical synapses are complicated dynamical systems in their own right with a potentially high-dimensional state space and possibly complex latent dynamics. But what is the role of this synaptic complexity? Is it just an epiphenomenon of how biology implements LTP and LTD, or do the complicated temporal dynamics do something essential for learning (Redondo and Morris, 2011; Ziegler et al., 2015; Zenke et al., 2015)? The former seems unlikely. In fact, a body of theoretical work suggests that synaptic complexity may be crucial for memory retention (Fusi et al., 2005; Lahiri and Ganguli, 2013; Benna and Fusi, 2016).
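
A recurring idea in this line of work is to couple the visible weight to a chain of hidden variables that integrate changes on increasingly slow timescales. The sketch below is loosely inspired by the beaker-chain picture of Benna and Fusi (2016); the power-of-two scaling of capacities and couplings is an illustrative assumption, not the published parameterization.

```python
import numpy as np

def chain_step(u, drive, dt=1e-3, g0=1.0, c0=1.0):
    """One Euler step of a chain of coupled synaptic variables.

    u[0] is the visible weight; deeper variables u[1:] track it on
    geometrically longer timescales.
    """
    n = len(u)
    c = c0 * 2.0 ** np.arange(n)       # "beaker" sizes grow along the chain
    g = g0 * 2.0 ** -np.arange(n)      # couplings weaken along the chain
    du = np.zeros(n)
    du[0] += drive                     # plasticity events drive the first variable
    for k in range(n):
        if k > 0:
            du[k] += g[k - 1] * (u[k - 1] - u[k])
        if k < n - 1:
            du[k] += g[k] * (u[k + 1] - u[k])
    return u + dt * du / c
```

Starting from `u = np.zeros(5)` and applying a brief pulse of `drive`, the perturbation propagates down the chain, so the deep variables retain a slowly decaying memory trace long after the visible weight has relaxed.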

Until recently, most theoretical work focused on memory retention at the single-synapse level. Recently, however, we have shown that "synaptic intelligence", i.e. specific complex synaptic dynamics, can in fact be sufficient to avoid catastrophic forgetting in deep neural networks. This only gives us a glimpse of what complex synaptic dynamics could in principle be capable of, and it opens the door to many exciting future research directions.
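
At its core, synaptic intelligence endows each parameter with two extra state variables: a running path integral of gradient times parameter change, and a consolidated importance that penalizes movement away from weights that mattered for previous tasks. The following is a minimal NumPy sketch of this bookkeeping; the class and method names and the hyperparameter values are our own choices for illustration.

```python
import numpy as np

class SynapticIntelligence:
    """Per-parameter consolidation in the spirit of Zenke et al. (ICML 2017).

    omega accumulates the path integral of -gradient * parameter update for
    the current task; at a task boundary it is normalized and added to the
    importance Omega, which then anchors the weights via a quadratic penalty.
    """

    def __init__(self, n_params, c=0.1, xi=1e-3):
        self.c, self.xi = c, xi             # penalty strength, damping term
        self.omega = np.zeros(n_params)     # running path integral
        self.Omega = np.zeros(n_params)     # consolidated importance
        self.w_ref = np.zeros(n_params)     # weights at the last task boundary

    def accumulate(self, grad, dw):
        # Contribution of one update step to the decrease of the task loss
        self.omega += -grad * dw

    def penalty_grad(self, w):
        # Gradient of the surrogate loss c * sum(Omega * (w - w_ref)**2)
        return 2.0 * self.c * self.Omega * (w - self.w_ref)

    def consolidate(self, w):
        # Task boundary: normalize by the total parameter displacement
        delta = w - self.w_ref
        self.Omega += self.omega / (delta ** 2 + self.xi)
        self.omega[:] = 0.0
        self.w_ref = w.copy()
```

In a training loop, one would call accumulate(grad, dw) after every optimizer step, add penalty_grad(w) to the task gradient, and call consolidate(w) whenever the task switches.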

Validation accuracy of a deep CNN on a combined CIFAR10/100 continual learning problem with complex synaptic dynamics (green) and with standard simple synapses (blue). The network with consolidation consistently outperforms the network without consolidation on all tasks but the most recently learned one. Moreover, the network trained with consolidation performs consistently better on tasks with limited data (tasks > 1) than a network which has only been trained on a single task (gray). This illustrates the benefit of complex synapses over simple ones also for transfer learning. The figure was adapted from our ICML 2017 paper [7].


Bibliography