Seminars

Here we list computational neuroscience talks, including seminars organized by the Computational Neuroscience Initiative Basel (CNIB) that target a wider audience with diverse backgrounds.

Upcoming Seminars

ICS calendar link

Richard Naud, University of Ottawa (web)
Tue 12 July 2022, 14:00 — TBA
Neurotheory Seminar

TBA

About the speaker: Dr. Richard Naud received his B.Sc. and M.Sc. degrees in Physics from McGill University. During his doctoral studies at EPFL, he designed statistical methods for the automatic characterization of neuronal dynamics. In parallel, he co-organized a Spike-Timing Prediction Challenge for the comparison and benchmarking of mathematical neuron models. After obtaining his Ph.D. in 2011, he developed quasi-renewal theory, which makes it possible to decode neural responses in the presence of adaptation. He then obtained a post-doctoral scholarship from the FRQNT to work with André Longtin at uOttawa on the computational roles of active dendrites and inhibition in sensory systems. Afterwards, he became a research assistant at the Technische Universität Berlin, where he established computational roles of active dendrites and inhibition in the neocortex. He started at the University of Ottawa in 2016.



Andrew Saxe, Gatsby Computational Neuroscience Unit, UCL (web)
Tue 19 July 2022 — Hybrid (FMI 5.30 + Zoom)
CNIB

TBD




Past Seminars

Years | 2022 | 2021 | 2020 | 2019

2022

Tue 07 June 2022
Mark Goldman, UC Davis
Integrators in short- and long-term memory


Tue 26 April 2022
Joel Zylberberg, York University
AI for Neuroscience for AI


Tue 08 March 2022
Eleni Vasilaki, University of Sheffield, currently visiting professor at INI Zurich
Signal neutrality, scalar property, and collapsing boundaries as consequences of a learned multi-time scale strategy
This event was canceled due to illness.

We postulate that three fundamental elements underlie a decision-making process: perception of time passing, information processing on multiple time scales, and reward maximisation. We build a simple reinforcement learning agent on these principles and train it on a Shadlen-like experimental setup. Our results, consistent with the experimental data, show three emerging signatures. (1) Signal neutrality: insensitivity to the signal coherence in the interval preceding the decision. (2) Scalar property: the mean of the response times varies markedly for different signal coherences, yet the shape of the distributions stays almost unchanged. (3) Collapsing boundaries: the "effective" decision-making boundary changes over time in a manner reminiscent of the theoretical optimum. Removing either the perception of time or the multiple timescales from the model destroys these distinguishing signatures. Our results suggest an alternative explanation for signal neutrality: we propose that it is not part of motor planning but of the decision-making process itself, emerging from information processing on multiple time scales.

About the speaker: Professor Eleni Vasilaki is a visiting professor at the Institute of Neuroinformatics (UZH and ETHZ). She is Chair of Bioinspired Machine Learning and head of the Machine Learning Group in the Department of Computer Science at the University of Sheffield, UK. Inspired by biology, Prof. Vasilaki and her team design novel machine learning techniques with a focus on reinforcement learning and reservoir computing. She also works closely with materials scientists and engineers to design hardware that computes in a brain-like manner. More at https://www.gleichstellung.uzh.ch/de/projekte/gastprofessur_inge_strauch/eleni_vasilaki.html



Tue 01 March 2022
Adrienne Fairhall, University of Washington
Rich representations in dopamine


Tue 15 February 2022
Laureline Logiaco, Columbia University
Neural network mechanisms of flexible autonomous motor sequencing

One of the fundamental functions of the brain is to flexibly plan and control movement production at different timescales in order to efficiently shape structured behaviors. I will present research elucidating how these complex computations are performed in the mammalian brain, with an emphasis on autonomous motor control. After briefly mentioning research on the mechanisms underlying high-level planning, I will focus on the efficient interface of these high-level control commands with motor cortical dynamics to drive muscles. I will notably take advantage of the fact that the anatomy of the circuits underlying the latter computation is better known. Specifically, I will show how these architectural constraints lead to a principled understanding of how the combination of hardwired circuits and strategically positioned plastic connections located within loops can create a form of efficient modularity. I will show that this modular architecture can balance two different objectives: first, supporting the flexible recombination of an extensible library of re-usable motor primitives; and second, promoting the efficient use of neural resources by taking advantage of shared connections between modules. I will finally show that these insights are relevant for designing artificial neural networks able to flexibly and robustly compose hierarchical continuous behaviors from a library of motor primitives.

About the speaker: Laureline's research uses a multidisciplinary approach to investigate the network mechanisms of neural computations. Her Ph.D. was co-advised by Angelo Arleo at University Pierre and Marie Curie (France) and Wulfram Gerstner at Ecole Polytechnique Federale de Lausanne (Switzerland), focusing on both model-driven data analysis and theory of neural network dynamics. She is now a senior post-doc at the Center for Theoretical Neuroscience at Columbia University, working with Sean Escola and Larry Abbott on principles of computation in neural networks.





2021

Wed 13 October 2021
Sara Solla, Northwestern University
Stability of neural dynamics underlies stereotyped learned behavior


Tue 25 May 2021
Misha Tsodyks, Weizmann
Mathematical models of human memory


Thu 11 March 2021
Nicole Rust, UPenn
Single-trial image memory

Humans have a remarkable ability to remember the images that they have seen, even after seeing thousands of images, each only once and only for a few seconds. In this talk, I will describe our recent work on the neural mechanisms that support visual familiarity memory. In the first part of the talk, I will describe the correlates of the natural variation with which some images are inherently more memorable than others, both in the brain and in deep neural networks trained to categorize objects. In the second part of the talk, I will describe how these results challenge current proposals about how visual familiarity is signaled in the brain, as well as evidence in support of a novel theory about how familiarity is decoded to drive behavior.



Tue 16 February 2021
Rava da Silveira, ENS Paris and IOB Basel
Efficient Random Codes in a Shallow Neural Network


Tue 19 January 2021
SueYeon Chung, Columbia University, New York
Neural manifolds in deep networks and the brain




2020

Tue 15 December 2020
Robert Rosenbaum, University of Notre Dame, Indiana
Universal Properties of Neuronal Networks with Excitatory-Inhibitory Balance


Tue 24 November 2020
Yoram Burak, Hebrew University, Jerusalem
Linking neural representations of space by grid cells and place cells in the hippocampal formation


Thu 20 August 2020
Nicole Rust, UPenn, USA
Note: Joint event with Basel Seminar in Neuroscience
This event was canceled due to COVID-19.


Thu 16 July 2020
Mehrdad Jazayeri, MIT, USA
Note: Joint event with Basel Seminar in Neuroscience
This event was canceled due to COVID-19.


Mon 06 July 2020
Richard Naud, University of Ottawa, Canada
Note: This event will be held at a later date.
This event was canceled due to COVID-19.


Mon 22 June 2020
Viola Priesemann, MPI for Dynamics and Self-Organization, Germany
Information flow and spreading dynamics in neural networks and beyond
Note: This is a virtual talk on Zoom.

Biological as well as artificial networks show amazing information processing properties. A popular hypothesis is that neural networks profit from operating close to a continuous phase transition, because at a phase transition several computational properties are maximized. We show that maximizing these properties is advantageous for some tasks, but not for others. We then show how homeostatic plasticity enables us to tune networks towards or away from a phase transition, and thereby adapt the network to task requirements. This sheds light on the operation of biological neural networks and informs the design and self-organization of artificial ones. In the second part of the talk, we address the spread of SARS-CoV-2 in Germany. We quantify how governmental policies and the concurrent behavioral changes led to a transition from exponential growth to decline in novel case numbers. We conclude by discussing potential scenarios of the SARS-CoV-2 dynamics for the months to come.



Tue 09 June 2020
Everton Agnes, University of Oxford, UK
Flexible, robust, and stable learning with interacting synapses


Fri 05 June 2020
Larry Abbott, Columbia University, USA
Note: Joint event with Basel Seminar in Neuroscience
This event was canceled due to COVID-19.


Wed 18 March 2020
Peter Dayan, MPI for Biological Cybernetics, Germany
Replay and Preplay in Human Planning
This event was canceled due to COVID-19.


Thu 20 February 2020
Nao Uchida, Harvard, USA
A normative perspective on the diversity of dopamine signals




2019

Thu 05 December 2019
Elad Schneidman, Weizmann, Israel
Learning the code of large neural populations using random projections


Wed 27 November 2019
Wulfram Gerstner, EPFL, Switzerland
Eligibility traces and three-factor learning rules


Wed 13 November 2019
Emre Neftci, UC Irvine, USA
Data and power efficient intelligence with neuromorphic hardware