Seminars

Upcoming


Alexander Mathis, EPFL
Wed 27 November 2024, 14:00 | FMI, 0.60/0.61
Neurotheory Seminar

Leveraging biomechanics to gain insights into the sensorimotor system

Abstract: Efficient musculoskeletal simulators and machine learning algorithms provide new computational approaches to tackle the grand challenge of understanding the sensorimotor system. First, I’ll talk about theory-driven approaches to test what the goal of the proprioceptive pathway is. Second, I’ll show that, taking inspiration from sport science, we can train reinforcement learning algorithms to carry out skilled object-manipulation tasks with a 39-muscle model of the hand. Interestingly, these models have a number of emergent properties that compare favorably to humans and challenge the notion of muscle synergies as a simplifying control principle.

About the speaker: Alexander studied pure mathematics with a minor in logic and theory of science at Ludwig Maximilian University of Munich (LMU). For his PhD, also at LMU, he worked on optimal coding approaches to elucidate the properties of grid cells. As a postdoctoral fellow with Prof. Venkatesh N. Murthy at Harvard University and Prof. Matthias Bethge at the Tübingen AI Center, he decided to study olfactory behaviors such as odor-guided navigation, social behaviors, and the cocktail party problem in mice. During this time, he became increasingly interested in sensorimotor behaviors beyond olfaction and started working on proprioception, motor adaptation, and computer vision tools for measuring animal behavior. His group works at the intersection of computational neuroscience and machine learning and aims to elucidate how the brain gives rise to adaptive behavior. Ultimately, he is interested in reverse engineering the algorithms of the brain, both to figure out how the brain works and to build better artificial intelligence systems. His group develops machine learning tools for behavioral analysis (e.g., DeepLabCut, AmadeusGPT, DLC2action) and neural data analysis, and conversely tries to learn from the brain to solve challenging machine learning problems such as learning motor skills.



Mackenzie Mathis, EPFL
Thu 30 January 2025, 16:00 | Biozentrum
Basel Neuroscience Seminar hosted by Andreas Keller

About the speaker: Mackenzie Mathis's long-term research goal is to understand the neural circuits and computations underlying adaptive behavior in intelligent systems, focusing on motor learning and control. To achieve this, Mathis has embarked on studying motor circuits at multiple levels, from the molecular developmental programs of motor neurons to the systems level of skilled motor behaviors in mice, and to deep neural networks. Mathis has a multidisciplinary background in neuroscience, encompassing molecules, circuits, and deep learning approaches. As a technician in the co-directed laboratory of Dr. Christopher Henderson and Dr. Hynek Wichterle, and advised by Dr. Thomas Jessell, Mathis contributed to multiple teams aiming to understand the developmental regulation of spinal motor neurons. This knowledge was utilized to build one of the first in vitro models of ALS from stem cells for high throughput drug screening. During graduate training in Dr. Naoshige Uchida’s laboratory at Harvard University, Mathis developed the first behavioral mouse model of motor adaptation and contributed to the first single-unit recordings and optogenetic "tagging" of serotonin neurons in vivo, alongside dopamine neurons. Mathis received the Harvard Rowland Fellowship in the fall of 2016, and after defending their PhD, became a postdoctoral fellow with Prof. Matthias Bethge at the University of Tübingen. There, Mathis led the development of deep learning-based methods to track poses of animals, a project completed and extended in their independent lab.



Tatjana Tchumatchenko, University of Bonn
Thu 05 June 2025, 16:00 | Biozentrum
Basel Neuroscience Seminar hosted by Everton Agnes

About the speaker: Tatjana Tchumatchenko is a physicist by training, specializing in computational neuroscience. She is a professor at the Institute for Physiological Chemistry at the University of Mainz Medical Center and serves as a group leader at the Institute of Experimental Epileptology and Cognition Research at the University of Bonn Medical Center. Her research group analyzes data at the molecular level, formulating partial differential equations (PDEs) and ordinary differential equations (ODEs) to describe protein motion in dendrites. They also simulate and analyze neural circuit activity and develop machine learning prediction models for behavioral and activity data. By integrating mathematics, physics, and computer science, her team aims to enhance the understanding of how neurons encode incoming information and perform computations.



See upcoming seminars directly in your calendar by adding the following ICS calendar URL.


Past Seminars


2024

Thu 07 November 2024 | Basel Neuroscience Seminar hosted by Anissa Kempf
Pavan Ramdya, EPFL
Reverse-engineering Drosophila action selection and motor control


Mon 21 October 2024 | Neurotheory Seminar
Fabian Mikulasch, MPI for Dynamics and Self-organization, Göttingen
Modeling mismatch responses with predictive spiking neurons

Abstract: Prediction mismatch responses in cortex seem to signal the difference between the animal's internal model and sensory observations. These responses are often interpreted as evidence for the existence of error neurons, which guide inference in models of hierarchical predictive coding. Here we show that prediction mismatch responses also arise naturally in a spiking encoding of sensory signals, in which spikes predict the future signal. We argue that prediction mismatch responses might not reflect the computation of errors per se, but rather the reorganization of the neural code as new information is incorporated.
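
The core encoding idea can be made concrete: a spike is emitted only when it improves the prediction of the signal. The sketch below is purely illustrative, not the model from the talk; the decoding kernel w, time constants, and sinusoidal input are all assumptions.

    import numpy as np

    # Toy predictive spiking encoder: a neuron spikes only if adding its
    # decoding kernel w reduces the squared error between the signal and
    # a leaky readout x_hat. All constants are illustrative.
    T, dt, tau, w = 1000, 1e-3, 0.05, 0.5
    signal = np.sin(2 * np.pi * 2 * np.arange(T) * dt) + 1.0
    x_hat, spikes = 0.0, np.zeros(T)

    for t in range(T):
        x_hat *= 1 - dt / tau                                # prediction decays
        if (signal[t] - (x_hat + w)) ** 2 < (signal[t] - x_hat) ** 2:
            spikes[t] = 1.0                                  # spike only when it improves the prediction
            x_hat += w

    print(f"{int(spikes.sum())} spikes; final readout error {signal[-1] - x_hat:.3f}")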

About the speaker: Mikulasch studied Cognitive Science at the University of Tübingen and Complex Systems at Chalmers University of Technology in Gothenburg, and did his PhD on computation and learning in spiking neurons in the lab of Viola Priesemann at the Max Planck Institute for Dynamics and Self-Organization, Göttingen. He is especially interested in the principles by which neural networks can learn a useful model of sensory data, and in how this could be achieved in cortex.



Tue 17 September 2024 | Basel Neuroscience Seminar hosted by Zenke Lab
Stefano Fusi, Columbia University
The geometry of abstraction in human and non-human primates

Abstract: Neurons in the mammalian brain often exhibit complex, non-linear responses to multiple task variables (mixed selectivity). Despite the diversity of these responses, which are seemingly disorganized, it is often possible to observe an interesting structure in the representational geometry: task-relevant variables are encoded in approximately orthogonal subspaces in the neural activity space. This encoding is a signature of low-dimensional, disentangled representations; it is typically the result of a process of abstraction, and it allows linear readouts to generalize readily to novel situations. We show that these representations are observed in cognitive areas of the brain of human and non-human primates performing complex context-dependent tasks. In humans, it is possible to observe the formation of these representations in the hippocampus. These studies are the result of a collaboration with the groups of Daniel Salzman (Columbia) and Ueli Rutishauser (Cedars-Sinai/Caltech).
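
The generalization claim can be illustrated with a cross-condition decoding test of the kind used in this line of work: train a linear readout on one half of the conditions and test it on the held-out half. The toy geometry below (two random coding axes, Gaussian noise) is a hypothetical stand-in for neural data, not an analysis from the talk.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Cross-condition generalization: decode variable A after training only on
    # conditions with B = 0 and testing on B = 1. If A and B are encoded in
    # roughly orthogonal subspaces, the readout transfers to the new context.
    rng = np.random.default_rng(1)
    n, dim, noise = 200, 50, 0.5
    axis_a, axis_b = rng.standard_normal((2, dim))

    def population(a, b):                      # toy population response
        return a * axis_a + b * axis_b + noise * rng.standard_normal((n, dim))

    X_train = np.vstack([population(0, 0), population(1, 0)])   # B = 0 only
    X_test = np.vstack([population(0, 1), population(1, 1)])    # held-out context
    y = np.r_[np.zeros(n), np.ones(n)]

    clf = LogisticRegression(max_iter=1000).fit(X_train, y)
    print(f"cross-condition decoding accuracy: {clf.score(X_test, y):.2f}")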

About the speaker: Stefano Fusi was born in Florence, Italy, and graduated in 1992 from the Sapienza University of Rome with a degree in physics. After his degree, he obtained a researcher position at the Italian National Institute for Nuclear Physics in Rome and started to work in the field of theoretical neuroscience. In 1999, he received a Ph.D. in physics from the Hebrew University of Jerusalem, Israel, and moved to the University of Bern, Switzerland, as a postdoctoral fellow. After visiting Brandeis University as a postdoctoral fellow in 2003, in 2005 he was awarded a professorial fellowship by the Swiss National Science Foundation and became an assistant professor at the Swiss Federal Institute of Technology in Zurich (ETHZ), Switzerland. In 2009, he joined the Department of Neuroscience at Columbia University where he is a Professor of Neuroscience at the Center for Theoretical Neuroscience.



Mon 15 July 2024 | Internal Seminar hosted by Zenke Lab
Agnes Korcsak-Gorzo, FZ Jülich
Function in spiking networks: event-based e-prop plasticity & spike-based tempering

Abstract: Recent advances in machine learning raise the question of how these algorithms differ from or resemble brain mechanisms solving the same task. One approach to addressing this question is incrementally incorporating biological constraints into these algorithms. This talk features insights from two projects following this approach for memory encoding and retrieval in spiking neural networks. In the first project, we ported the three-factor synaptic plasticity rule eligibility propagation (e-prop) from TensorFlow to the spiking neural network simulator NEST, transitioning from a time-driven update scheme to an event-driven one. The implementation proved to scale well to networks of several million neurons. Adding more biological constraints improved computational efficiency while maintaining learning performance. In the second project, we improved the ability of spiking restricted Boltzmann machines to spontaneously visit all memorized classes after training on high-dimensional data. In the energy landscape, training encodes each class as a deep local mode, but deeper modes hinder switching. Establishing a mathematical link between the abstract system temperature and the background rate enabled us to control the energy barriers. Inspired by the idea of "simulated tempering" to flatten the energy landscape periodically, we showed that changing the background Poisson rate, akin to cortical oscillations, improves mixing between classes while maintaining high accuracy of retrieved representations. These projects offer a glimpse of how exploring possible implementations of machine learning principles in the biological substrate helps develop powerful and efficient spike-based algorithms and deepen our understanding of brain phenomena.
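
For orientation, e-prop combines a locally computable eligibility trace with a top-down learning signal in a three-factor update. The fragment below is a heavily simplified single-neuron sketch, not the NEST implementation from the talk; the surrogate derivative, input statistics, and the toy rate-regularizing learning signal are assumptions.

    import numpy as np

    # Simplified e-prop-style update for one leaky integrate-and-fire neuron:
    # eligibility trace = surrogate spike derivative x filtered presynaptic
    # input; weights change only in combination with a learning signal.
    rng = np.random.default_rng(2)
    steps, n_in, alpha, thr, lr = 500, 10, 0.9, 1.0, 1e-3
    w = 0.1 * rng.standard_normal(n_in)
    v, z_bar = 0.0, np.zeros(n_in)

    for t in range(steps):
        x = (rng.random(n_in) < 0.1).astype(float)       # input spikes
        z_bar = alpha * z_bar + x                        # filtered presynaptic activity
        v = alpha * v * (v < thr) + w @ x                # membrane potential with reset
        psi = 0.3 * max(0.0, 1 - abs((v - thr) / thr))   # surrogate spike derivative
        elig = psi * z_bar                               # eligibility trace
        L = float(v >= thr) - 0.05                       # toy learning signal (target rate)
        w -= lr * L * elig                               # three-factor weight update

    print(f"mean weight after learning: {w.mean():.4f}")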



Thu 13 June 2024 | Neurotheory Seminar hosted by Zenke Lab
Martin Schrimpf, EPFL
Vision and Language in Brains and Machines

Abstract: Research in the brain and cognitive sciences attempts to uncover the neural mechanisms underlying intelligent behavior. Due to the complexities of brain processing, studies necessarily had to start with a narrow scope of experimental investigation and computational modeling. I will argue that it is time for our field to take the next step: build system models that capture neural mechanisms and supported behaviors in entire domains of intelligence. To make progress on system models, we are developing the Brain-Score platform which, to date, hosts over 50 benchmarks of neural and behavioral experiments that models can be tested on. By systematically evaluating a wide variety of model candidates, we not only identify models beginning to match a range of brain data (~50% explained variance), but also discover key relationships: models’ brain scores are predicted by their object categorization performance in vision and their next-word prediction performance in language. The better models predict internal neural activity, the better they match human behavioral outputs, with architecture substantially contributing to brain-like representations. Using the integrative benchmarks, we develop improved state-of-the-art system models that more closely match shallow recurrent neuroanatomy, predict primate temporal processing, and are more robust to image corruptions. Finally, I will argue that the newest generation of models can be used to predict the behavioral effects of neural interventions, and to drive new experiments.

About the speaker: Martin’s research focuses on a computational understanding of the neural mechanisms underlying natural intelligence in vision and language. To achieve this goal, he bridges Deep Learning, Neuroscience, and Cognitive Science, building artificial neural network models that match the brain’s neural representations in their internal processing and are aligned to human behavior in their outputs. He completed his PhD at the MIT Brain and Cognitive Sciences department, advised by Jim DiCarlo with collaborations with Ev Fedorenko and Josh Tenenbaum, following Bachelor’s and Master’s degrees in computer science at TUM, LMU, and UNA. He previously worked at Harvard and Salesforce, and held other industry positions. Martin also co-founded two startups. His work has been covered in the news by Science magazine, MIT News, and Scientific American, and recognized with awards such as the Neuro-Irv and Helga Cooper Open Science Prize, the McGovern fellowship, and the Takeda fellowship in AI + Health.



Thu 16 May 2024 | Basel Neuroscience Seminar hosted by Rava Azeredo da Silveira
Sam Gershman, Harvard University
The emergence of belief

Abstract: Reinforcement learning models have successfully accounted for many aspects of dopamine activity, but they typically assume a fully observable environment state, leading to discrepancies with experimental data. In the real world, stimuli often provide ambiguous information about the underlying state. It turns out that the same reinforcement learning machinery can be applied to this "partially observable" setting by operating on a "belief state" (the conditional distribution over states given the observed stimuli). I will present several experimental studies and computational analyses that provide support for this model. However, I will then show how the brain could potentially solve this problem another way, without a belief state, by training a recurrent neural network end-to-end to predict reward. In this new model, beliefs are an emergent property of a more general learning system. This model is also able to explain the relevant experimental data, while also making some new predictions that we are beginning to test.
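
The belief-state computation referred to here is the standard Bayesian filter for a partially observable environment: the prior is propagated through the transition model and reweighted by the observation likelihood. A minimal sketch with made-up two-state matrices (not data or a model from the talk):

    import numpy as np

    # Belief-state update: b'(s') is proportional to O(o | s') * sum_s T(s, s') * b(s).
    T = np.array([[0.9, 0.1],    # state-transition probabilities T[s, s']
                  [0.2, 0.8]])
    O = np.array([[0.7, 0.3],    # observation likelihoods O[s', o]
                  [0.2, 0.8]])

    def update_belief(b, o):
        b_pred = b @ T                 # propagate through the transition model
        b_post = O[:, o] * b_pred      # reweight by the observation likelihood
        return b_post / b_post.sum()   # normalize

    b = np.array([0.5, 0.5])
    for obs in [0, 0, 1]:
        b = update_belief(b, obs)
    print(b)   # conditional distribution over hidden states given the stimuli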

About the speaker: Samuel Gershman received his B.A. in Neuroscience and Behavior from Columbia University in 2007 and his Ph.D. in Psychology and Neuroscience from Princeton University in 2013. From 2013 to 2015 he was a postdoctoral fellow in the Department of Brain and Cognitive Sciences at MIT.



Thu 01 February 2024 | Basel Neuroscience Seminar hosted by Zenke Lab
Tim Behrens, University of Oxford and Sainsbury Wellcome Centre
Representing the structure of problems


Thu 18 January 2024 | Basel Neuroscience Seminar hosted by Zenke Lab
Julijana Gjorgjieva, TU Munich
Shaping cortical computations via short- and long-term synaptic plasticity

Abstract: Synapses and neuronal properties in the brain are constantly modified by various plasticity mechanisms operating at different timescales. While extensive experimental studies have characterized these plasticity mechanisms, understanding their functional implications purely experimentally is challenging due to their interplay and the complexity induced by the diversity of cell types. Combining theoretical analysis and computational modeling, I will show how different plasticity mechanisms refine connectivity and shape cortical computations. I will present our work on synaptic plasticity organizing circuits at the subcellular and cellular levels, and on homeostatic mechanisms that maintain stability in neural circuits, enabling them to perform different computations.

About the speaker: Professor Gjorgjieva (b. 1983) conducts research in the fields of computational and theoretical neuroscience. She is interested in how brain circuits become tuned to maintain a balance between constant change, as we learn new things, and robustness, to produce reliable behavior. In particular, she concentrates on two aspects of neural circuit organization, looking at how it emerges from the interaction of neuronal and synaptic properties during development, and from optimality and energy-conservation principles that operate over the longer timescales of evolution. Professor Gjorgjieva studied mathematics at Harvey Mudd College in California, USA. After obtaining a PhD in Applied Mathematics at the University of Cambridge in 2011, she spent five years in the USA as a postdoctoral research fellow at Harvard University and Brandeis University, supported by grants from the Swartz Foundation and the Burroughs Wellcome Fund. In 2016, she set up an independent research group at the Max Planck Institute for Brain Research in Frankfurt and joined TUM as an assistant professor shortly after as part of the MaxPlanck@TUM program. She received tenure and became a full professor at TUM in 2022. She is also a member of the Bernstein Center for Computational Neuroscience in Munich.





2023

Thu 14 December 2023 | CNIB Seminar hosted by Flavio Donato and CNIB
Juan Alvaro Gallego, Imperial College London
Understanding how the brain learns and controls movement through neural manifolds
Note: Workshop "Revealing and studying neural manifolds", 10:30-12:30, Biozentrum Room U1.191


Mon 27 November 2023 | Neurotheory Seminar hosted by Everton Agnes
Valerio Mante, INI, University of Zurich
Modular and distributed neural computations of decisions and actions


Tue 17 October 2023 | Bernoulli Lecture hosted by Bernoulli Network for the Behavioral Sciences
Peter Dayan, Max Planck Institute for Biological Cybernetics
Risk in Sequential Decisions


Wed 19 July 2023 | Neurotheory Seminar
Rafal Bogacz, University of Oxford
Modelling learning in the brain with predictive coding

Abstract: Predictive coding is an influential framework for modelling information processing and learning in cortical circuits. This talk will give an overview of recent work on analysing the properties of predictive coding networks and relating them to biological circuits. The first part will focus on their relationship with the backpropagation algorithm, which is often employed to train artificial deep networks and to model learning in the brain. It will be demonstrated that predictive coding networks employ a fundamentally different principle, and learn more effectively than backpropagation in many tasks animals and humans need to solve. Next, it will be discussed how predictive coding can be extended to capture the variability of neural responses, temporal prediction, and memory functions of the hippocampal circuit.
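
As background, predictive coding networks in the Rao-Ballard tradition infer latent activity by relaxing on prediction errors and then update weights with purely local rules. The two-layer sketch below is a generic illustration under assumed sizes, learning rates, and a zero-mean prior on the latents; it is not the specific networks analysed in the talk.

    import numpy as np

    # Two-layer predictive coding: latent activity x1 relaxes by descending the
    # squared prediction error (plus a zero-mean prior), then the weights are
    # updated with a local, Hebbian-like rule on the residual error.
    rng = np.random.default_rng(3)
    f, df = np.tanh, lambda u: 1 - np.tanh(u) ** 2
    W = 0.1 * rng.standard_normal((8, 4))
    x0 = rng.standard_normal(8)                        # observed (clamped) layer

    for epoch in range(100):
        x1 = np.zeros(4)                               # latent layer
        for _ in range(50):                            # inference phase
            eps0 = x0 - W @ f(x1)                      # bottom-up prediction error
            x1 += 0.1 * (df(x1) * (W.T @ eps0) - x1)   # error-driven relaxation
        W += 0.01 * np.outer(x0 - W @ f(x1), f(x1))    # local weight update
    print(np.round(x0 - W @ f(x1), 3))                 # residual prediction error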

About the speaker: Rafal Bogacz is a Professor of Computational Neuroscience at the University of Oxford. His work focusses on modelling learning processes in the brain. He graduated in computer science from Wroclaw University of Technology in Poland, did a PhD in computational neuroscience at the University of Bristol, and then worked as a postdoctoral researcher at Princeton University, jointly in the Departments of Applied Mathematics and Psychology.



Fri 16 June 2023 | Neurotheory Seminar
Eleni Vasilaki, University of Sheffield
Signal Neutrality, Scalar Property, and Collapsing Boundaries: Consequences of a Learned Multi-Time Scale Strategy

Abstract: We propose that effective decision-making is grounded on three fundamental elements: perception of time passing, information processing at multiple time scales, and reward maximisation. Based on these principles, we construct a simple reinforcement learning agent and train it on a Shadlen-like experimental setup.
Our results, in alignment with experimental data, reveal three emergent characteristics. (1) Signal neutrality: the agent demonstrates insensitivity to the signal coherence in the interval preceding the decision. (2) Scalar property: while the mean response times vary noticeably across different signal coherences, the shape of the distributions remains remarkably consistent. (3) Collapsing boundaries: the "effective" decision-making boundary shifts over time, paralleling the theoretical optimal. Eliminating either the perception of time or the multiple time scales from the model results in the loss of these distinctive signatures. Our findings suggest an alternative interpretation for signal neutrality. We argue that rather than being an aspect of motor planning, it emerges as part of the decision-making process from information processing across multiple time scales.

About the speaker: Professor Eleni Vasilaki is the Chair of Bioinspired Machine Learning and leads the Machine Learning Group at the Department of Computer Science, University of Sheffield, UK. Drawing inspiration from biological principles, her team aims to explore and advance the field of machine learning, with a particular emphasis on reinforcement learning and reservoir computing. Additionally, they actively collaborate with material scientists and engineers to develop innovative hardware designs that emulate the computational processes observed in the brain. Eleni completed her undergraduate degree in Informatics and Telecommunications and later earned a Master's in Microelectronics from the University of Athens. She then pursued a DPhil in Computer Science and Artificial Intelligence at the University of Sussex. Her postdoctoral positions took her to the University of Bern from 2004 to 2006 and the Swiss Federal Institute of Technology Lausanne (EPFL) from 2007 to 2009. In 2009, she joined the University of Sheffield as a lecturer and was promoted to professor in 2016. In 2021, in recognition of her significant contributions to the field, Eleni was appointed the Inge Strauch Visiting Professor at the Institute of Neuroinformatics, a collaboration between the University of Zurich and ETH Zurich. This appointment underscores Eleni's ongoing dedication to exploring the intersection of machine learning and neuroscience.



Mon 15 May 2023 | Neurotheory Seminar
Davide Zoccolan, SISSA
Slowness as the guiding principle to learn invariance: causal and functional evidence from rat visual cortex

Abstract: Object processing pathways, such as the primate ventral stream, face the challenge of building representations that are increasingly selective for the identity of visual objects while becoming gradually more tolerant (or invariant) to changes in their appearance (resulting, e.g., from translation, rotation, scaling). A key question in vision sciences is to understand how such transformation-tolerant object representations are formed. A leading hypothesis is that invariance is learned in an unsupervised way by exploiting the spatiotemporal continuity of visual experience, i.e., the natural tendency of different object views to occur nearby in time. This would allow visual neurons to become tuned to the content of dynamic visual scenes that varies more slowly over time (i.e., object identity) while discarding other faster-varying, lower-level visual attributes (e.g., local features). In this seminar, I will present both causal and functional evidence, based on recordings from rat visual cortex, supporting this hypothesis.

In a first study, we focused on the properties of two classes of neurons in primary visual cortex (V1): simple cells, which encode edge orientation in a position-sensitive manner, and complex cells, which also encode orientation but in a position-invariant way. We found that degrading the temporal continuity of visual experience during early postnatal life leads to a sizable reduction of the number of complex cells and to an impairment of their functional properties, while fully sparing the development of simple cells. This causally implicates adaptation to the temporal structure of the visual input in the development of transformation tolerance but not of shape tuning, thus tightly constraining computational models of unsupervised cortical learning.

In a second study, we tested the hypothesis that the temporal persistence (i.e., slowness) of neuronal responses to temporally structured dynamic stimuli increases along the visual cortical hierarchy. By probing the rat analog of the ventral stream with movies, we uncovered a hierarchy of temporal scales, with deeper areas encoding visual information more persistently. Furthermore, the impact of intrinsic dynamics on the stability of stimulus representations grew gradually along the hierarchy. This suggests that building representations that become progressively slower along the cortical processing hierarchy is indeed an objective that could guide the learning of invariance.
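
One standard formalization of the slowness principle discussed here is slow feature analysis: after whitening, the slowest features are the directions along which the temporal derivative has the least variance. The sketch below recovers a slow component from an artificial two-channel signal; the mixing and noise levels are assumptions, unrelated to the recordings in the talk.

    import numpy as np

    # Linear slow feature analysis on a toy slow/fast mixture.
    rng = np.random.default_rng(4)
    t = np.linspace(0, 10, 2000)
    slow, fast = np.sin(t), np.sin(25 * t)
    X = np.c_[slow + 0.1 * fast, fast + 0.1 * slow] + 0.01 * rng.standard_normal((2000, 2))

    X -= X.mean(axis=0)
    d, E = np.linalg.eigh(np.cov(X.T))
    Xw = (X @ E) / np.sqrt(d)                 # whiten: unit variance, decorrelated
    dXw = np.diff(Xw, axis=0)                 # temporal derivative
    d2, E2 = np.linalg.eigh(np.cov(dXw.T))
    y = Xw @ E2[:, 0]                         # slowest feature (smallest derivative variance)
    print(np.corrcoef(y, slow)[0, 1])         # close to +/- 1: the slow source is recovered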

About the speaker: Davide Zoccolan is a professor of neurophysiology at the Scuola Internazionale Superiore di Studi Avanzati (SISSA) of Trieste. He received his M.S. in physics at the University of Turin in 1997 and his Ph.D. in biophysics at SISSA in 2002. He worked as a postdoctoral associate at the Massachusetts Institute of Technology with James DiCarlo and Tomaso Poggio and as a postdoctoral fellow at Harvard University with David Cox. In 2009, he established the SISSA Visual Neuroscience Lab, where he studies the neuronal basis of high-level visual functions using a combination of psychophysics and electrophysiology in rats, as well as computational modeling.



Wed 26 April 2023 | Neurotheory Seminar hosted by A. Vasilevskaya, Keller Lab
Katharina Wilmes, University of Bern
Uncertainty-modulated prediction errors in cortical microcircuits

Abstract: To make contextually appropriate predictions in a stochastic environment, the brain needs to take uncertainty into account. Prediction error neurons have been identified in layer 2/3 of diverse brain areas. How uncertainty modulates prediction error activity, and hence learning, is, however, unclear. Here, we use a normative approach to derive how prediction errors should be modulated by uncertainty and postulate that such uncertainty-weighted prediction errors (UPEs) are represented by layer 2/3 pyramidal neurons. We further hypothesise that the layer 2/3 circuit calculates the UPE through subtractive and divisive inhibition by different inhibitory cell types, and we ascribe different roles to somatostatin-positive (SST) and parvalbumin-positive (PV) interneurons. By implementing the calculation of UPEs in a microcircuit model, we show that different cell types in cortical circuits can compute means, variances, and UPEs with local, activity-dependent plasticity rules. Finally, we show that the resulting UPEs enable adaptive learning rates.
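
The normative core can be stated compactly: the error is centered by a learned mean (subtractive) and scaled by a learned variance (divisive), so uncertainty sets the effective learning rate. The scalar sketch below illustrates only this logic; it is not the microcircuit model with SST and PV interneurons, and all constants are assumptions.

    import numpy as np

    # Uncertainty-modulated prediction error: the mean estimate is updated by an
    # error divisively scaled by a learned variance, so unreliable (high-variance)
    # stimuli drive smaller updates, giving an adaptive learning rate.
    rng = np.random.default_rng(5)
    mu, var, eta = 0.0, 1.0, 0.05

    for _ in range(2000):
        s = rng.normal(3.0, 2.0)              # stochastic stimulus
        upe = (s - mu) / var                  # subtractive, then divisive modulation
        mu += eta * upe                       # variance sets the effective learning rate
        var += eta * ((s - mu) ** 2 - var)    # local running variance estimate
    print(f"learned mean {mu:.2f}, variance {var:.2f}")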

About the speaker: Katharina Wilmes is an advanced postdoc in the Computational Neuroscience Group at the University of Bern. She studied Cognitive Sciences and then did her PhD in Computational Neuroscience with Susanne Schreiber and Henning Sprekeler at the Institute for Theoretical Biology, Humboldt University of Berlin, and the Bernstein Center for Computational Neuroscience. After a postdoctoral stay at Imperial College London in the group of Claudia Clopath, she moved to Switzerland to work between theory and experiment at the University of Bern.



Wed 05 April 2023 | Neurotheory Seminar
Jean-Pascal Pfister, University of Bern
Efficient Sampling-Based Bayesian Active Learning for synaptic characterization

Abstract: Bayesian Active Learning (BAL) is an efficient framework for learning the parameters of a model, in which input stimuli are selected to maximize the mutual information between the observations and the unknown parameters. However, the applicability of BAL to experiments is limited, as it requires performing high-dimensional integrations and optimizations in real time: current methods are either too time-consuming or only applicable to specific models. Here, we propose an Efficient Sampling-Based Bayesian Active Learning (ESB-BAL) framework, which is efficient enough to be used in real-time biological experiments. We apply our method to the problem of estimating the parameters of a chemical synapse from the postsynaptic responses to evoked presynaptic action potentials. Using synthetic data and synaptic whole-cell patch-clamp recordings, we show that our method can improve the precision of model-based inferences, thereby paving the way towards more systematic and efficient experimental designs in physiology.
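
In sampling-based form, the BAL criterion can be estimated directly from posterior samples: the mutual information between the next observation and the parameters is the entropy of the averaged prediction minus the average entropy of the predictions. The sketch below applies this to a made-up one-parameter Bernoulli response model, not the synapse model from the talk.

    import numpy as np

    # Sampling-based scoring of candidate stimuli by mutual information
    # I(y; theta | x), estimated from posterior samples over theta.
    rng = np.random.default_rng(6)

    def H(p):                                      # binary entropy, safe at 0/1
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -p * np.log(p) - (1 - p) * np.log(1 - p)

    def response_prob(theta, x):                   # toy observation model
        return 1 / (1 + np.exp(-theta * x))

    theta_samples = rng.normal(0.5, 0.3, 1000)     # samples from the current posterior
    candidates = np.linspace(0.1, 3.0, 30)         # candidate stimulus intensities

    scores = []
    for x in candidates:
        p = response_prob(theta_samples, x)
        scores.append(H(p.mean()) - H(p).mean())   # entropy of mean minus mean entropy
    best = candidates[int(np.argmax(scores))]
    print(f"most informative stimulus: {best:.2f}")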

About the speaker: Jean-Pascal Pfister is an associate professor at the Department of Physiology (University of Bern) and at the Institute of Neuroinformatics (University of Zurich / ETH). He heads the theoretical neuroscience group and aims to discover fundamental computational principles that govern brain dynamics. During his PhD at EPFL, he developed several models of spike-timing-dependent plasticity (including the “triplet model”). During his postdoc in Cambridge (UK) and his sabbatical at Harvard, he developed statistical models of synaptic, neuronal, and neural network dynamics.



Mon 30 January 2023 | Neurotheory Seminar
Giulia D'Angelo, IIT, Genova
What attracts your attention?
Note: This is an internal event


Thu 26 January 2023 | Neurotheory Seminar
Tim Vogels, IST Austria
On the purpose and origin of spontaneous neural activity

Abstract: So-called spontaneous activity is a central hallmark of most nervous systems. Such non-causal firing is contrary to the tenet of spikes as a means of communication, and its purpose remains unclear. We propose that non-input-driven firing can serve as a release valve to protect neurons from the toxic conditions that arise in mitochondria when energy consumption falls below baseline. We built a set of models that incorporate homeostatic control of metabolic products (ATP, ADP, and reactive oxygen species, among others) by changes in firing. Our theory accounts for key features of neuronal activity observed in many studies, ranging from ion channel function all the way to resting-state dynamics. We propose an integrated, crucial role for metabolic spiking that links metabolic homeostasis and neuronal function, and makes testable predictions. Finally, we link the hallmark symptom of Parkinson's disease, the absence of dopamine in the striatum, with a failure to initiate metabolic spikes.

About the speaker: Tim Vogels is a Professor of Theoretical Neuroscience and research leader at the Institute of Science and Technology (IST) Austria. He studied Physics at the Technical University of Berlin and received a PhD in neuroscience from Brandeis University in 2007. After postdoctoral stays at Columbia University and the Ecole Polytechnique Federale de Lausanne (EPFL), he arrived in Oxford in 2013, where he led a research group in Theoretical and Computational Neuroscience at the Centre for Neural Circuits and Behaviour, part of the Department of Physiology, Anatomy and Genetics (DPAG), University of Oxford. Together with Rafal Bogacz, Prof. Vogels co-organized the NeuroTheory initiative.





2022

Mon 17 October 2022 | Neurotheory Seminar
Adriana Perez Rotondo, University of Cambridge
How cerebellar architecture facilitates rapid online learning
Note: This is an internal seminar

Abstract: The cerebellum has a distinctive circuit architecture comprising the majority of neurons in the brain. Marr-Albus theory and more recent extensions demonstrate the utility of this architecture for particular types of learning tasks related to the separation of input patterns. However, it is unclear how the circuit architecture facilitates known functional roles of the cerebellum. In particular, the cerebellum is critically involved in refining motor plans even during ongoing execution of the associated movement. Why would a cerebellar-like circuit architecture be effective at this type of ‘online’ learning problem? We build a mathematical theory, reinforced with computer simulations, that captures some of the particular difficulties associated with online learning tasks. For instance, the synaptic plasticity responsible for learning during a movement only has access to a narrow time window of recent movement errors, whereas it ideally depends upon the entire trajectory of errors, from the movement’s start to its finish. The theory then demonstrates how the distinctive input expansion in the cerebellum, where mossy fibre signals are recoded across a much larger number of granule cells, mitigates the impact of such difficulties. As such, the energy cost of this large, seemingly redundantly connected circuit might be an inevitable price of precise, fast motor learning.



Mon 05 September 2022 | Neurotheory Seminar
David Kastner, UCSF
Differences in the evolution of learning due to a high-risk autism gene in rats

Abstract: Tremendous strides have been made in determining genes that substantially increase risk for autism spectrum disorders (ASD). However, we still struggle to understand how specific genes lead to variations in behavior, let alone to the complex changes seen in ASD. A major impasse for our understanding has been inconsistent and subtle behavioral phenotypes in a variety of rodent models of ASD. I will present a different approach for behavioral phenotyping. Using a high-throughput and automated behavioral system, we have developed a novel data-driven method to determine differences in the way groups of animals learn. Instead of applying analyses based on strong assumptions about the causes of behavior, we take a more agnostic approach, allowing the data to inform how groups of animals differ. Applying this method to Scn2a heterozygous rats, we find multiple consistent differences in the way they learn compared to wild-type littermates. Scn2a encodes a sodium channel and is a high-risk ASD gene. The richer behavioral phenotyping provides a far better substrate for determining how a specific gene leads to neuropsychiatric disease.

About the speaker: David Kastner is an Instructor and a Physician Scientist Scholar Program Fellow in the Department of Psychiatry and Behavioral Sciences at the University of California, San Francisco. He earned his MD-PhD from Stanford University, where he studied neural computations performed by the retina. He spent a year in Switzerland at EPFL as a Fulbright Scholar, modeling synaptic-level memory consolidation and reconsolidation. He did psychiatry residency and post-doctoral training at UCSF, studying inter-animal variability in spatial learning. His laboratory studies how animals learn, and how that learning changes in the context of neuropsychiatric disorders, including autism spectrum disorders. The laboratory employs a variety of techniques, including high-throughput automated behavior, computational modeling, large-scale electrophysiology, and brain lesioning, to determine the computational principles that transform neural activity into behavior.



Tue 16 August 2022 | Neurotheory Seminar hosted by Keller Lab
Matthew Cook, INI, Zurich
Barefoot on hierarchies


Thu 11 August 2022 | CNIB Seminar
Everton Joao Agnes, Biozentrum
Linking accessibility, allocation, and inhibitory gating in a model of context-dependent associative memory


Tue 19 July 2022 | CNIB Seminar
Andrew Saxe, Gatsby Computational Neuroscience Unit, UCL
The Neural Race Reduction: Dynamics of nonlinear representation learning


Tue 12 July 2022 | Neurotheory Seminar
Richard Naud, University of Ottawa
Learning cortical representations proceeds in two stages


Tue 07 June 2022 | CNIB Seminar
Mark Goldman, UC Davis
Integrators in short- and long-term memory


Tue 26 April 2022 | CNIB Seminar
Joel Zylberberg, York University
AI for Neuroscience for AI


Tue 08 March 2022 | Neurotheory Seminar
Eleni Vasilaki, University of Sheffield, currently visiting professor at INI Zurich
Signal neutrality, scalar property, and collapsing boundaries as consequences of a learned multi-time scale strategy
This event was canceled due to illness

Abstract: We postulate that three fundamental elements underlie a decision-making process: perception of time passing, information processing at multiple time scales, and reward maximisation. We build a simple reinforcement learning agent upon these principles and train it on a Shadlen-like experimental setup. Our results, similar to the experimental data, demonstrate three emerging signatures. (1) Signal neutrality: insensitivity to the signal coherence in the interval preceding the decision. (2) Scalar property: the mean of the response times varies markedly for different signal coherences, yet the shape of the distributions stays almost unchanged. (3) Collapsing boundaries: the "effective" decision-making boundary changes over time in a manner reminiscent of the theoretical optimal. Removing either the perception of time or the multiple timescales from the model abolishes these distinguishing signatures. Our results suggest an alternative explanation for signal neutrality: rather than being part of motor planning, it is part of the decision-making process and emerges from information processing on multiple time scales.

About the speaker: Professor Eleni Vasilaki is visiting professor at the Institute of Neuroinformatics (UZH and ETHZ). She is Chair of Bioinspired Machine Learning and the head of the Machine Learning Group in the Department of Computer Science at the University of Sheffield, UK. Inspired by biology, Prof. Vasilaki and her team design novel machine learning techniques with a focus on reinforcement learning and reservoir computing. She also works closely with material scientists and engineers to design hardware that computes in a brain-like manner. More at https://www.gleichstellung.uzh.ch/de/projekte/gastprofessur_inge_strauch/eleni_vasilaki.html



Tue 01 March 2022 | CNIB Seminar
Adrienne Fairhall, University of Washington
Rich representations in dopamine


Tue 15 February 2022 | Neurotheory Seminar
Laureline Logiaco, Columbia University
Neural network mechanisms of flexible autonomous motor sequencing

Abstract: One of the fundamental functions of the brain is to flexibly plan and control movement production at different timescales in order to efficiently shape structured behaviors. I will present research elucidating how these complex computations are performed in the mammalian brain, with an emphasis on autonomous motor control. After briefly mentioning research on the mechanisms underlying high-level planning, I will focus on the efficient interface of these high-level control commands with motor cortical dynamics to drive muscles. I will notably take advantage of the fact that the anatomy of the circuits underlying the latter computation is better known. Specifically, I will show how these architectural constraints lead to a principled understanding of how the combination of hardwired circuits and strategically positioned plastic connections located within loops can create a form of efficient modularity. I will show that this modular architecture can balance two different objectives: first, supporting the flexible recombination of an extensible library of re-usable motor primitives; and second, promoting the efficient use of neural resources by taking advantage of shared connections between modules. I will finally show that these insights are relevant for designing artificial neural networks able to flexibly and robustly compose hierarchical continuous behaviors from a library of motor primitives.

About the speaker: Laureline's research uses a multidisciplinary approach to investigate the network mechanisms of neural computations. Her Ph.D. was co-advised by Angelo Arleo at University Pierre and Marie Curie (France) and Wulfram Gerstner at Ecole Polytechnique Federale de Lausanne (Switzerland), focusing on both model-driven data analysis and theory of neural network dynamics. She is now a senior post-doc at the Center for Theoretical Neuroscience at Columbia University, working with Sean Escola and Larry Abbott on principles of computation in neural networks.





2021

Wed 13 October 2021 | CNIB Seminar
Sara Solla, Northwestern University
Stability of neural dynamics underlies stereotyped learned behavior


Tue 25 May 2021 | CNIB Seminar
Misha Tsodyks, Weizmann
Mathematical models of human memory


Thu 11 March 2021 | Basel Neuroscience Seminars hosted by Zenke Lab
Nicole Rust, UPenn
Single-trial image memory

Abstract: Humans have a remarkable ability to remember the images that they have seen, even after seeing thousands, each only once and only for a few seconds. In this talk, I will describe our recent work on the neural mechanisms that support visual familiarity memory. In the first part of the talk, I will describe the correlates of the natural variation by which some images are inherently more memorable than others, both in the brain and in deep neural networks trained to categorize objects. In the second part, I will describe how these results challenge current proposals about how visual familiarity is signaled in the brain, as well as evidence in support of a novel theory about how familiarity is decoded to drive behavior.



Tue 16 February 2021 | CNIB Seminar
Rava da Silveira, ENS Paris and IOB Basel
Efficient Random Codes in a Shallow Neural Network


Tue 19 January 2021 | Neurotheory Seminar
SueYeon Chung, Columbia University, New York
Neural manifolds in deep networks and the brain




2020

Tue 15 December 2020 | CNIB Seminar
Robert Rosenbaum, University of Notre Dame, Indiana
Universal Properties of Neuronal Networks with Excitatory-Inhibitory Balance


Tue 24 November 2020 | CNIB Seminar
Yoram Burak, Hebrew University, Jerusalem
Linking neural representations of space by grid cells and place cells in the hippocampal formation


Thu 20 August 2020 | Basel Neuroscience Seminar
Nicole Rust, UPenn, USA
This event was canceled due to COVID-19.


Thu 16 July 2020 hosted by R.A. da Silveira
Mehrdad Jazayeri, MIT, USA
This event was canceled due to COVID-19.


Mon 06 July 2020 | Neurotheory Seminar
Richard Naud, University of Ottawa, Canada
This event was canceled due to COVID-19.


Mon 22 June 2020 | Neurotheory Seminar
Viola Priesemann, MPI for Dynamics and Self-Organization, Germany
Information flow and spreading dynamics in neural networks and beyond

Abstract: Biological as well as artificial networks show amazing information processing properties. A popular hypothesis is that neural networks profit from operating close to a continuous phase transition, because at a phase transition several computational properties are maximized. We show that maximizing these properties is advantageous for some tasks, but not for others. We then show how homeostatic plasticity enables us to tune networks away from or towards a phase transition, and thereby adapt the network to task requirements. Thereby we shed light on the operation of biological neural networks and inform the design and self-organization of artificial ones. In the second part of the talk, we address the spread of SARS-CoV-2 in Germany. We quantify how governmental policies and the concurrent behavioral changes led to a transition from exponential growth to decline of novel case numbers. We conclude by discussing potential scenarios of SARS-CoV-2 dynamics for the months to come.
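
The first part of the abstract can be illustrated with the standard branching-process picture used in this line of work: each active unit triggers on average m others, m = 1 is the critical point, and a homeostatic rule can move the network toward or away from it. The sketch below is generic; the external drive h, target rate, and adaptation rate are assumptions, not parameters from the talk.

    import numpy as np

    # Branching network with homeostatic control: activity A propagates with
    # branching ratio m plus external drive h, and m slowly adapts toward a
    # target rate, tuning the network relative to the critical point m = 1.
    rng = np.random.default_rng(7)
    m, h, target, eta = 0.8, 2.0, 20.0, 1e-4
    A = 10
    for t in range(20000):
        A = rng.poisson(m * A + h)            # branching dynamics with drive
        m += eta * (target - A) / target      # homeostatic adjustment of m
        m = min(max(m, 0.0), 0.999)           # keep the process subcritical
    print(f"adapted branching ratio: {m:.3f}")   # settles below, but near, m = 1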



Tue 09 June 2020 hosted by Alex Schier
Everton Agnes, University of Oxford, UK
Flexible, robust, and stable learning with interacting synapses


Fri 05 June 2020 hosted by R.A. da Silveira
Larry Abbott, Columbia University, USA
This event was canceled due to COVID-19.


Wed 18 March 2020 | CNIB Seminar
Peter Dayan, MPI for Biological Cybernetics, Germany
Replay and Preplay in Human Planning
This event was canceled due to COVID-19.


Thu 20 February 2020 | FMI students and post-doc seminars
Nao Uchida, Harvard, USA
A normative perspective on the diversity of dopamine signals




2019

Thu 05 December 2019 hosted by R. Friedrich
Elad Schneidman, Weizmann, Israel
Learning the code of large neural populations using random projections


Wed 27 November 2019 | Neurotheory Seminar
Wulfram Gerstner, EPFL, Switzerland
Eligibility traces and three-factor learning rules


Wed 13 November 2019 | Neurotheory Seminar
Emre Neftci, UC Irvine, USA
Data and power efficient intelligence with neuromorphic hardware