I am happy to announce that the SuperSpike paper and code are finally published. Here is an example of a network with one hidden layer learning to produce a Radcliffe Camera spike train from frozen Poisson input spike trains. The animation is a bit slow initially, but after some time you will see how the hidden-layer activity starts to align in a meaningful way to produce the desired output. Also check out this video of a spiking autoencoder which learns to compress a Brandenburg Gate spike train through a bottleneck of only 32 hidden units.

Of course these are only toy examples, but since virtually any cost function and input/output combination can be cast into the SuperSpike formalism, it has several immediate future uses: i) testing and developing analysis methods for spiking data, ii) building hypotheses about how spiking networks solve specific tasks (e.g. in the early sensory systems), and finally iii) engineering spiking networks to solve complex spatiotemporal tasks. Cool beanz.
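For readers curious what makes the formalism so flexible: the hard spike nonlinearity has a derivative that is zero almost everywhere, so SuperSpike replaces it with a smooth surrogate when computing gradients. A minimal NumPy sketch of such a fast-sigmoid-shaped surrogate derivative is below; the function name, the default threshold, and the steepness value `beta` are illustrative choices here, not the exact constants from the published code.

```python
import numpy as np

def superspike_surrogate(u, threshold=1.0, beta=10.0):
    """Fast-sigmoid-shaped surrogate derivative,
    h(U) = 1 / (beta * |U - threshold| + 1)**2.

    It peaks at 1 when the membrane potential U sits exactly at the
    firing threshold and decays smoothly on either side, providing a
    usable gradient signal where the true derivative of the spiking
    nonlinearity would be zero almost everywhere."""
    return 1.0 / (beta * np.abs(u - threshold) + 1.0) ** 2

# The surrogate is largest near the threshold and fades away from it:
u = np.linspace(-1.0, 3.0, 9)
print(np.round(superspike_surrogate(u), 4))
```

Because the surrogate is just a pointwise function of the membrane potential, it composes with any differentiable cost on the output spike trains, which is why the same machinery covers the pattern-matching and autoencoder demos above alike.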