Seminars

Here we list Neurotheory seminars and Computational Neuroscience Initiative Basel (CNIB) seminars; the latter are targeted at a broader audience with diverse backgrounds.

Upcoming Seminars

ICS calendar link


Past Seminars

Years: 2022 | 2021 | 2020 | 2019

2022

Mon 05 September 2022 (Neurotheory Seminar)
David Kastner, UCSF
Differences in the evolution of learning due to a high-risk autism gene in rats

Tremendous strides have been made in identifying genes that substantially increase risk for autism spectrum disorders (ASD). However, we still struggle to understand how specific genes lead to variations in behavior, let alone to the complex changes seen in ASD. A major impasse for our understanding has been inconsistent and subtle behavioral phenotypes in a variety of rodent models of ASD. I will present a different approach for behavioral phenotyping. Using a high-throughput and automated behavioral system, we have developed a novel data-driven method to determine differences in the way groups of animals learn. Instead of applying analyses based on strong assumptions about the causes of behavior, we take a more agnostic approach, allowing the data to inform how groups of animals differ. Applying this method to Scn2a-heterozygous rats, we find multiple consistent differences in the way they learn compared to wild-type littermates. Scn2a, a high-risk ASD gene, encodes a sodium channel. This richer behavioral phenotyping provides a far better substrate for determining how a specific gene leads to neuropsychiatric disease.

About the speaker: David Kastner is an Instructor and a Physician Scientist Scholar Program Fellow in the Department of Psychiatry and Behavioral Sciences at the University of California, San Francisco. He earned his MD-PhD from Stanford University, where he studied neural computations performed by the retina. He spent a year in Switzerland at EPFL as a Fulbright Scholar, modeling synaptic-level memory consolidation and reconsolidation. He completed his psychiatry residency and postdoctoral training at UCSF, studying inter-animal variability in spatial learning. His laboratory studies how animals learn, and how that learning changes in the context of neuropsychiatric disorders, including autism spectrum disorders. The laboratory employs a variety of techniques, including high-throughput automated behavior, computational modeling, large-scale electrophysiology, and brain lesioning, to determine the computational principles that transform neural activity into behavior.



Tue 16 August 2022 (Neurotheory Seminar) organized by Keller Lab
Matthew Cook, INI, Zurich
Barefoot on hierarchies


Thu 11 August 2022 (CNIB Seminar)
Everton Joao Agnes, Biozentrum
Linking accessibility, allocation, and inhibitory gating in a model of context-dependent associative memory


Tue 19 July 2022 (CNIB Seminar)
Andrew Saxe, Gatsby Computational Neuroscience Unit, UCL
The Neural Race Reduction: Dynamics of nonlinear representation learning


Tue 12 July 2022 (Neurotheory Seminar)
Richard Naud, University of Ottawa
Learning cortical representations proceeds in two stages


Tue 07 June 2022 (CNIB Seminar)
Mark Goldman, UC Davis
Integrators in short- and long-term memory


Tue 26 April 2022 (CNIB Seminar)
Joel Zylberberg, York University
AI for Neuroscience for AI


Tue 08 March 2022 (Neurotheory Seminar)
Eleni Vasilaki, University of Sheffield, currently a visiting professor at INI Zurich
Signal neutrality, scalar property, and collapsing boundaries as consequences of a learned multi-time scale strategy
This event was canceled due to illness.

We postulate that three fundamental elements underlie a decision-making process: perception of time passing, information processing on multiple time scales, and reward maximisation. We build a simple reinforcement learning agent upon these principles and train it on a Shadlen-like experimental setup. Our results, similar to the experimental data, demonstrate three emerging signatures. (1) Signal neutrality: insensitivity to the signal coherence in the interval preceding the decision. (2) Scalar property: the mean of the response times varies markedly across signal coherences, yet the shape of the distributions stays almost unchanged. (3) Collapsing boundaries: the "effective" decision-making boundary changes over time in a manner reminiscent of the theoretical optimum. Removing either the perception of time or the multiple time scales from the model destroys these distinguishing signatures. Our results suggest an alternative explanation for signal neutrality: we propose that it is not part of motor planning but part of the decision-making process itself, emerging from information processing on multiple time scales.
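For readers unfamiliar with the terminology, the sketch below illustrates two of these signatures in a generic evidence-accumulation (drift-diffusion) model with an exponentially collapsing boundary. This is a textbook toy model, not the agent from the talk, and all parameter values (b0, tau, sigma) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def trial(coherence, b0=1.0, tau=200.0, sigma=0.1, dt=1.0, t_max=2000.0):
    """One evidence-accumulation trial with a collapsing decision boundary."""
    x, t = 0.0, 0.0
    while t < t_max:
        bound = b0 * np.exp(-t / tau)  # the boundary collapses over time
        x += coherence * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        if abs(x) >= bound:
            return t, np.sign(x)       # response time and choice
        t += dt
    return t_max, np.sign(x)           # lapse trial: no boundary crossing

for coh in (0.001, 0.005, 0.02):
    rts = np.array([trial(coh)[0] for _ in range(500)])
    # the scalar property predicts a roughly constant coefficient of
    # variation (sd/mean) even as mean response times shift with coherence
    print(f"coherence={coh:.3f}  mean RT={rts.mean():7.1f}  cv={rts.std()/rts.mean():.2f}")
```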

About the speaker: Professor Eleni Vasilaki is a visiting professor at the Institute of Neuroinformatics (UZH and ETHZ). She is Chair of Bioinspired Machine Learning and head of the Machine Learning Group in the Department of Computer Science at the University of Sheffield, UK. Inspired by biology, Prof. Vasilaki and her team design novel machine learning techniques with a focus on reinforcement learning and reservoir computing. She also works closely with material scientists and engineers to design hardware that computes in a brain-like manner. More at https://www.gleichstellung.uzh.ch/de/projekte/gastprofessur_inge_strauch/eleni_vasilaki.html



Tue 01 March 2022 (CNIB Seminar)
Adrienne Fairhall, University of Washington
Rich representations in dopamine


Tue 15 February 2022 (Neurotheory Seminar)
Laureline Logiaco, Columbia University
Neural network mechanisms of flexible autonomous motor sequencing

One of the fundamental functions of the brain is to flexibly plan and control movement production at different timescales in order to efficiently shape structured behaviors. I will present research elucidating how these complex computations are performed in the mammalian brain, with an emphasis on autonomous motor control. After briefly mentioning research on the mechanisms underlying high-level planning, I will focus on how these high-level control commands efficiently interface with motor cortical dynamics to drive muscles. In particular, I will take advantage of the fact that the anatomy of the circuits underlying the latter computation is relatively well characterized. Specifically, I will show how these architectural constraints lead to a principled understanding of how the combination of hardwired circuits and strategically positioned plastic connections within loops can create a form of efficient modularity. I will show that this modular architecture can balance two different objectives: first, supporting the flexible recombination of an extensible library of re-usable motor primitives; and second, promoting the efficient use of neural resources by taking advantage of shared connections between modules. Finally, I will show that these insights are relevant for designing artificial neural networks able to flexibly and robustly compose hierarchical continuous behaviors from a library of motor primitives.
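As a cartoon of the loop-based modularity described above (a minimal sketch under assumptions of our own, not the architecture from the talk), consider a fixed recurrent network onto which each motor primitive grafts its own low-rank feedback loop; engaging a different loop reconfigures the dynamics of the same shared circuit.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200                                        # shared recurrent units
J = rng.standard_normal((n, n)) / np.sqrt(n)   # hardwired recurrent weights

# each "primitive" is a rank-1 loop (plastic connections through a loop)
loops = [np.outer(rng.standard_normal(n), rng.standard_normal(n)) / n
         for _ in range(3)]

def run(loop, steps=300, dt=0.1):
    """Leaky rate dynamics: x' = -x + (J + loop) @ tanh(x)."""
    x = 0.1 * rng.standard_normal(n)
    for _ in range(steps):
        x = x + dt * (-x + (J + loop) @ np.tanh(x))
    return x

# the same hardwired circuit settles into different activity patterns
# depending on which loop is engaged
for i, loop in enumerate(loops):
    x = run(loop)
    print(f"primitive {i}: final activity norm = {np.linalg.norm(x):.2f}")
```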

About the speaker: Laureline's research uses a multidisciplinary approach to investigate the network mechanisms of neural computations. Her Ph.D. was co-advised by Angelo Arleo at Université Pierre et Marie Curie (France) and Wulfram Gerstner at École Polytechnique Fédérale de Lausanne (Switzerland), focusing on both model-driven data analysis and the theory of neural network dynamics. She is now a senior postdoc at the Center for Theoretical Neuroscience at Columbia University, working with Sean Escola and Larry Abbott on principles of computation in neural networks.





2021

Wed 13 October 2021 (CNIB Seminar)
Sara Solla, Northwestern University
Stability of neural dynamics underlies stereotyped learned behavior


Tue 25 May 2021 (CNIB Seminar)
Misha Tsodyks, Weizmann
Mathematical models of human memory


Thu 11 March 2021 (Basel Neuroscience Seminars) organized by Zenke Lab
Nicole Rust, UPenn
Single-trial image memory

Humans have a remarkable ability to remember the images that they have seen, even after seeing thousands, each only once and only for a few seconds. In this talk, I will describe our recent work on the neural mechanisms that support visual familiarity memory. In the first part of the talk, I will describe the correlates of the natural variation by which some images are inherently more memorable than others, both in the brain and in deep neural networks trained to categorize objects. In the second part of the talk, I will describe how these results challenge current proposals about how visual familiarity is signaled in the brain, and present evidence in support of a novel theory about how familiarity is decoded to drive behavior.



Tue 16 February 2021 (CNIB Seminar)
Rava da Silveira, ENS Paris and IOB Basel
Efficient Random Codes in a Shallow Neural Network


Tue 19 January 2021 (Neurotheory Seminar)
SueYeon Chung, Columbia University, New York
Neural manifolds in deep networks and the brain




2020

Tue 15 December 2020 (CNIB Seminar)
Robert Rosenbaum, University of Notre Dame, Indiana
Universal Properties of Neuronal Networks with Excitatory-Inhibitory Balance


Tue 24 November 2020 (CNIB Seminar)
Yoram Burak, Hebrew University, Jerusalem
Linking neural representations of space by grid cells and place cells in the hippocampal formation


Thu 20 August 2020 (Basel Neuroscience Seminar)
Nicole Rust, UPenn, USA
This event was canceled due to COVID-19.


Thu 16 July 2020 organized by R.A. da Silveira
Mehrdad Jazayeri, MIT, USA
This event was canceled due to COVID-19.


Mon 06 July 2020 (Neurotheory Seminar)
Richard Naud, University of Ottawa, Canada
This event was canceled due to COVID-19.


Mon 22 June 2020 (Neurotheory Seminar)
Viola Priesemann, MPI for Dynamics and Self-Organization, Germany
Information flow and spreading dynamics in neural networks and beyond

Biological as well as artificial networks show remarkable information processing properties. A popular hypothesis is that neural networks profit from operating close to a continuous phase transition, because several computational properties are maximized at such a transition. We show that maximizing these properties is advantageous for some tasks, but not for others. We then show how homeostatic plasticity enables us to tune networks towards or away from a phase transition, and thereby adapt the network to task requirements. In doing so, we shed light on the operation of biological neural networks and inform the design and self-organization of artificial ones. In the second part of the talk, we address the spread of SARS-CoV-2 in Germany. We quantify how governmental policies and the concurrent behavioral changes led to a transition from exponential growth to decline in new case numbers. We conclude by discussing potential scenarios of the SARS-CoV-2 dynamics for the months to come.
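To make the notion of operating near a phase transition concrete, here is a minimal sketch of a driven branching process, a standard toy model in this literature (not the speaker's code; the branching ratio m, drive h, and simulation length are illustrative assumptions). As m approaches the critical value of 1, both the mean activity and the intrinsic timescale grow without bound.

```python
import numpy as np

rng = np.random.default_rng(2)

def branching_process(m, h=1.0, steps=20_000):
    """Driven branching process: each active unit triggers on average m
    successors per step, plus Poisson external drive with rate h."""
    a = np.zeros(steps, dtype=int)
    for t in range(1, steps):
        a[t] = rng.poisson(m * a[t - 1]) + rng.poisson(h)
    return a

for m in (0.8, 0.9, 0.99):  # approaching the critical point m = 1
    a = branching_process(m)
    # stationary mean activity is h / (1 - m); intrinsic timescale is
    # -1 / log(m); both diverge as m -> 1
    print(f"m={m:.2f}  mean activity={a.mean():7.2f}  timescale={-1/np.log(m):6.1f}")
```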



Tue 09 June 2020 organized by Alex Schier (Biozentrum)
Everton Agnes, University of Oxford, UK
Flexible, robust, and stable learning with interacting synapses


Fri 05 June 2020 organized by R.A. da Silveira
Larry Abbott, Columbia University, USA
This event was canceled due to COVID-19.


Wed 18 March 2020 (CNIB Seminar)
Peter Dayan, MPI for Biological Cybernetics, Germany
Replay and Preplay in Human Planning
This event was canceled due to COVID-19.


Thu 20 February 2020 (FMI students and post-doc seminars)
Nao Uchida, Harvard, USA
A normative perspective on the diversity of dopamine signals




2019

Thu 05 December 2019 organized by R. Friedrich
Elad Schneidman, Weizmann, Israel
Learning the code of large neural populations using random projections


Wed 27 November 2019 (Neurotheory Seminar)
Wulfram Gerstner, EPFL, Switzerland
Eligibility traces and three-factor learning rules


Wed 13 November 2019 (Neurotheory Seminar)
Emre Neftci, UC Irvine, USA
Data and power efficient intelligence with neuromorphic hardware