Attractor networks underlying working memory
Working memory is characterized by the persistence of information in the dynamics of a neural network even after the evoking stimulus has been removed. To explore this phenomenon we use an idealized neural rate model with simple connectivity. This model can produce a set of coexisting fixed points that represent stable stationary states of the network. The number of fixed points scales combinatorially with the size of the network, so long as a small number of analytically determined conditions on the model parameters are met. The natural dynamics of the network select one of the fixed points based on the initial conditions (typically the closest fixed point), analogous to the way associative memory may work.
This work aims to address several fundamental questions about attractor networks subserving working memory: How many fixed points can a network have? Can they be arranged in any arbitrary pattern? How can they be learned and repositioned?
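As an illustration of how fixed points can scale combinatorially with network size, here is a minimal sketch (a toy construction, not the model developed in this work) of a rate network obeying dr/dt = -r + tanh(Wr + b), in which each of N strongly self-exciting units is bistable, so the network as a whole has 2^N coexisting stable fixed points:

```python
import numpy as np

def relax(W, b, r0, dt=0.1, steps=3000):
    """Integrate dr/dt = -r + tanh(W r + b) until the state is stationary."""
    r = r0.copy()
    for _ in range(steps):
        r += dt * (-r + np.tanh(W @ r + b))
    return r

N = 3
W = 5.0 * np.eye(N)   # strong self-excitation makes each unit bistable
b = np.zeros(N)

# Every sign pattern over the N units is a distinct stable fixed point,
# so the attractor count grows as 2**N.
attractors = set()
for pattern in range(2 ** N):
    signs = np.array([1.0 if pattern >> i & 1 else -1.0 for i in range(N)])
    r = relax(W, b, 0.5 * signs)
    attractors.add(tuple(np.sign(np.round(r, 2))))

print(len(attractors))  # 8 coexisting fixed points for N = 3
```

Starting the dynamics from different initial conditions selects different attractors, mirroring the associative-memory picture described above.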
Structure-dynamics relationships
The dynamics of a network are determined by the interplay of single-neuron dynamics and the connectivity structure of the network. We are interested in how the structure of a network influences its dynamics. Using approaches from statistical mechanics, we explore how properties of network connectivity, such as sparsity and fine-scale motifs, affect network dynamics.
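As a toy illustration of a structure-dynamics relationship (an assumed example, not taken from this work), the spectral radius of a sparse random coupling matrix shows how connection probability and synaptic gain together determine whether linearized rate dynamics dr/dt = -r + Jr decay or lose stability, which happens once the radius exceeds 1:

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_radius(N, p, g):
    """Spectral radius of an N x N coupling matrix with connection
    probability p and overall gain g (variance scaled so the radius
    is roughly g, per the circular law for random matrices)."""
    mask = rng.random((N, N)) < p
    J = rng.normal(0.0, g / np.sqrt(p * N), size=(N, N)) * mask
    return np.abs(np.linalg.eigvals(J)).max()

# Below gain 1 the quiescent state is stable; above it, activity persists.
for g in (0.5, 1.5):
    print(g, round(spectral_radius(500, 0.1, g), 2))
```

Sweeping sparsity p at fixed gain gives one concrete way to probe how a structural property moves a network relative to this stability edge.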
Relating population activity to behavior
In collaboration with the Svoboda lab, we use perturbation experiments to determine which features of neuronal population activity are relevant for neural representation.
Statistical models of neural activity
Understanding the dynamics of neural circuits is a crucial step toward determining what computations they perform. To analyze population recordings, we develop statistical models of neural activity.
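One common family of such models, sketched here only as an assumed illustration rather than the lab's specific method, is the Poisson generalized linear model, in which spike counts are driven by stimuli through a log-linear rate. The example below fits one to synthetic data by plain gradient ascent on the log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: spike counts driven by a 3-dimensional stimulus
# through a log-linear rate lambda_t = exp(x_t . w_true).
T, D = 5000, 3
X = rng.normal(size=(T, D))
w_true = np.array([0.8, -0.5, 0.3])
counts = rng.poisson(np.exp(X @ w_true))

# Maximum-likelihood fit: the Poisson GLM log-likelihood gradient is
# grad log L(w) = X^T (counts - exp(X w)).
w = np.zeros(D)
for _ in range(500):
    w += 1e-4 * X.T @ (counts - np.exp(X @ w))

print(np.round(w, 2))  # recovered filter approximates w_true
```

The fitted filter w then summarizes how each stimulus dimension modulates the neuron's firing, which is the kind of compact description that makes population recordings analyzable.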
Fine-scale connectivity patterns in neural projections
We are interested in how fine-scale connectivity patterns in neural projections relate to single-neuron biophysics and computation. In collaboration with the Magee lab and with Jinny Kim's lab at KIST, we analyze mGRASP connectivity data. For an example of this work, see our 2014 Neuron paper.