Here we collect typos and errors in our paper that attentive readers have reported to us:
Zenke, F., and Vogels, T.P. (2021). The Remarkable Robustness of Surrogate Gradient Learning for Instilling Complex Function in Spiking Neural Networks. Neural Computation 33, 899–925.
https://direct.mit.edu/neco/article/33/4/899/97482/The-Remarkable-Robustness-of-Surrogate-Gradient
Thanks for reporting these problems, and our apologies that they made it into the final manuscript.
Updates May 22, 2021
- Typo on page 910 in the sentence: “Indeed, we found that the best-performing models in this case were recurrent and achieving state-of-the-art classification accuracy of (0.82 ± 0.02) % (see Figure 6c).” — What we meant to say was: (82 ± 2) %.
- Missing parameter values in Table 1: In addition to the networks of size 100 for Randman and MNIST indicated in the table, our parameter sweeps also included networks with 100, 200, 512, and 800 units, each with and without recurrent connections.
- Specifically, the quoted MNIST test accuracy of (98.3 ± 0.9) % was achieved with a single hidden layer of 512 recurrently connected units, not with 100 units.