Filter
Associated Lab
- Druckmann Lab (3)
- Hermundstad Lab (2)
- Jayaraman Lab (4)
- Lee (Albert) Lab (1)
- Leonardo Lab (1)
- Magee Lab (2)
- Pastalkova Lab (1)
- Reiser Lab (3)
- Romani Lab (39)
- Rubin Lab (1)
- Spruston Lab (1)
- Svoboda Lab (5)
Publication Date
- 2023 (2)
- 2022 (3)
- 2021 (4)
- 2020 (2)
- 2019 (3)
- 2018 (3)
- 2017 (6)
- 2016 (2)
- 2015 (4)
- 2014 (2)
- 2013 (1)
- 2011 (1)
- 2010 (1)
- 2008 (2)
- 2007 (1)
- 2006 (1)
- 2005 (1)
39 Publications
Showing 1-10 of 39 results
Neocortical spiking dynamics control aspects of behavior, yet how these dynamics emerge during motor learning remains elusive. Activity-dependent synaptic plasticity is likely a key mechanism, as it reconfigures network architectures that govern neural dynamics. Here, we examined how the mouse premotor cortex acquires its well-characterized neural dynamics that control movement timing, specifically lick timing. To probe the role of synaptic plasticity, we genetically manipulated proteins essential for major forms of synaptic plasticity, Ca2+/calmodulin-dependent protein kinase II (CaMKII) and Cofilin, in a region- and cell-type-specific manner. Transient inactivation of CaMKII in the premotor cortex blocked learning of new lick timing without affecting the execution of the learned action or ongoing spiking activity. Furthermore, among the major glutamatergic neurons in the premotor cortex, CaMKII and Cofilin activity in pyramidal tract (PT) neurons, but not intratelencephalic (IT) neurons, is necessary for learning. High-density electrophysiology in the premotor cortex uncovered that neural dynamics anticipating licks are progressively shaped during learning, which explains the change in lick timing. Such reconfiguration in behaviorally relevant dynamics is impeded by CaMKII manipulation in PT neurons. Altogether, the activity of plasticity-related proteins in PT neurons plays a central role in sculpting neocortical dynamics to learn new behavior.
Hippocampal area CA3 is thought to play a central role in memory formation and retrieval. Although various network mechanisms have been hypothesized to mediate these computations, direct evidence is lacking. Using intracellular membrane potential recordings from CA3 neurons and optogenetic manipulations in behaving mice, we found that place field activity is produced by a symmetric form of Behavioral Timescale Synaptic Plasticity (BTSP) at recurrent synaptic connections among CA3 principal neurons but not at synapses from the dentate gyrus (DG). Additional manipulations revealed that excitatory input from the entorhinal cortex (EC) but not DG was required to update place cell activity based on the animal’s movement. These data were captured by a computational model that used BTSP and an external updating input to produce attractor dynamics under online learning conditions. Additional theoretical results demonstrate the enhanced memory storage capacity of such networks, particularly in the face of correlated input patterns. The evidence sheds light on the cellular and circuit mechanisms of learning and memory formation in the hippocampus.
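To make the proposed plasticity mechanism concrete, here is a minimal sketch of a symmetric behavioral-timescale kernel acting on presynaptic inputs to a single postsynaptic cell; the track layout, kernel width, and learning rate are hypothetical, and this is not the authors' network model.

```python
import numpy as np

# Symmetric behavioral-timescale plasticity (BTSP) sketch.
# A postsynaptic plateau at time t_plateau potentiates presynaptic
# inputs active within a window of a few seconds on EITHER side
# (symmetric kernel), unlike asymmetric Hebbian rules.
# All parameter values below are illustrative, not from the paper.

dt = 0.01                      # s
t = np.arange(0.0, 20.0, dt)   # 20 s simulated traversal of a track
n_pre = 100                    # presynaptic CA3 cells tiling the track

# Presynaptic rates: Gaussian place fields tiling time (i.e., position).
centers = np.linspace(0.0, 20.0, n_pre)
pre_rates = np.exp(-0.5 * ((t[None, :] - centers[:, None]) / 0.5) ** 2)

t_plateau = 10.0               # plateau potential in the postsynaptic cell
tau = 2.0                      # s, width of the symmetric eligibility kernel
kernel = np.exp(-0.5 * ((t - t_plateau) / tau) ** 2)   # symmetric in time

# Weight change: overlap of each presynaptic rate with the symmetric kernel.
eta = 0.05
dw = eta * pre_rates @ kernel * dt

# Inputs active shortly BEFORE or AFTER the plateau are potentiated equally,
# producing a place field centered on the plateau location.
peak = centers[np.argmax(dw)]
print(f"new field center ~ {peak:.1f} s into the traversal (plateau at {t_plateau} s)")
```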
The brain plans and executes volitional movements. The underlying patterns of neural population activity have been explored in the context of movements of the eyes, limbs, tongue, and head in nonhuman primates and rodents. How do networks of neurons produce the slow neural dynamics that prepare specific movements and the fast dynamics that ultimately initiate these movements? Recent work exploits rapid and calibrated perturbations of neural activity to test specific dynamical systems models that are capable of producing the observed neural activity. These joint experimental and computational studies show that cortical dynamics during motor planning reflect fixed points of neural activity (attractors). Subcortical control signals reshape and move attractors over multiple timescales, causing commitment to specific actions and rapid transitions to movement execution. Experiments in rodents are beginning to reveal how these algorithms are implemented at the level of brain-wide neural circuits.
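As a toy illustration of this attractor picture, the sketch below uses a one-dimensional saddle-node system in which a control input eliminates the "planning" fixed point and triggers a rapid transition to movement; the equation and parameters are hypothetical and are not taken from the work reviewed here.

```python
import numpy as np

# Toy saddle-node picture of motor planning -> execution (illustrative only).
# dx/dt = u(t) + x - x**3: for small u there are two stable fixed points
# (a "planning" state and a "movement" state); once the control input u
# exceeds ~2/(3*sqrt(3)) ~= 0.385, the planning fixed point disappears and
# the state jumps rapidly to the movement state.

dt = 1e-3
t = np.arange(0.0, 3.0, dt)
u = np.where(t < 1.0, 0.0, 0.6)     # "go" signal arrives at t = 1 s

x = np.empty_like(t)
x[0] = -1.0                          # start in the planning attractor
for k in range(1, t.size):
    x[k] = x[k - 1] + dt * (u[k - 1] + x[k - 1] - x[k - 1] ** 3)

print(f"state before go cue: {x[int(0.9 / dt)]:+.2f}")
print(f"state after  go cue: {x[-1]:+.2f}")   # rapid switch to the execution attractor (~ +1.2)
```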
To flexibly navigate, many animals rely on internal spatial representations that persist when the animal is standing still in darkness, and update accurately by integrating the animal's movements in the absence of localizing sensory cues. Theories of mammalian head direction cells have proposed that these dynamics can be realized in a special class of networks that maintain a localized bump of activity via structured recurrent connectivity, and that shift this bump of activity via angular velocity input. Although there are many different variants of these so-called ring attractor networks, they all rely on large numbers of neurons to generate representations that persist in the absence of input and accurately integrate angular velocity input. Surprisingly, in the fly, Drosophila melanogaster, a head direction representation is maintained by a much smaller number of neurons whose dynamics and connectivity resemble those of a ring attractor network. These findings challenge our understanding of ring attractors and their putative implementation in neural circuits. Here, we analyzed failures of angular velocity integration that emerge in small attractor networks with only a few computational units. Motivated by the peak performance of the fly head direction system in darkness, we mathematically derived conditions under which small networks, even with as few as 4 neurons, achieve the performance of much larger networks. The resulting description reveals that by appropriately tuning the network connectivity, the network can maintain persistent representations over the continuum of head directions, and it can accurately integrate angular velocity inputs. We then analytically determined how performance degrades as the connectivity deviates from this optimally tuned setting, and we found a trade-off between network size and the tuning precision needed to achieve persistence and accurate integration. This work shows how even small networks can accurately track an animal's movements to guide navigation, and it informs our understanding of the functional capabilities of discrete systems more broadly.
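A minimal rate-based ring attractor with a small number of units is sketched below. The unit count, weights, and time constants are hand-picked assumptions rather than the optimally tuned connectivity derived in the paper, and with such generic tuning the bump tends to lock to discrete positions, which is one of the failure modes analyzed here.

```python
import numpy as np

# Minimal rate-based ring attractor with a small number of heading units.
# Illustrative, hand-picked parameters (not the analytically derived tuning).

N = 16                                   # number of heading units
theta = 2 * np.pi * np.arange(N) / N     # preferred headings
dtheta = theta[:, None] - theta[None, :]

J0, J1, J_vel = -2.0, 3.0, 0.9           # global inhibition, cosine excitation, rotation gain
W_sym = (J0 + J1 * np.cos(dtheta)) / N   # symmetric part: holds the bump
W_rot = np.sin(dtheta) / N               # antisymmetric part: shifts the bump

tau, dt = 0.02, 0.001                    # s
I_bg = 1.0                               # uniform background drive

def run(r, T, cue=0.0, vel=0.0):
    """Integrate the rate dynamics for T seconds."""
    for _ in range(int(T / dt)):
        inp = I_bg + cue * np.maximum(np.cos(theta), 0.0)
        drive = (W_sym + vel * J_vel * W_rot) @ r + inp
        r = r + dt / tau * (-r + np.maximum(drive, 0.0))
    return r

def bump_angle(r):
    """Population-vector estimate of the represented heading, in degrees."""
    return np.degrees(np.angle(r @ np.exp(1j * theta)))

r = np.zeros(N)
r = run(r, 0.2, cue=2.0)                 # brief heading cue sets the bump
r = run(r, 0.5)                          # cue off: bump persists
print(f"after cue removal : {bump_angle(r):7.1f} deg")
r = run(r, 0.1, vel=1.0)                 # angular-velocity input rotates the bump
print(f"after velocity in : {bump_angle(r):7.1f} deg")
r = run(r, 0.5)                          # velocity off: bump persists at the new heading
print(f"after holding     : {bump_angle(r):7.1f} deg")
```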
Learning requires neural adaptations thought to be mediated by activity-dependent synaptic plasticity. A relatively non-standard form of synaptic plasticity driven by dendritic calcium spikes, or plateau potentials, has been reported to underlie place field formation in rodent hippocampal CA1 neurons. Here we found that this behavioral timescale synaptic plasticity (BTSP) can also reshape existing place fields via bidirectional synaptic weight changes that depend on the temporal proximity of plateau potentials to pre-existing place fields. When evoked near an existing place field, plateau potentials induced less synaptic potentiation and more depression, suggesting BTSP might depend inversely on postsynaptic activation. However, manipulations of place cell membrane potential and computational modeling indicated that this anti-correlation actually results from a dependence on current synaptic weight such that weak inputs potentiate and strong inputs depress. A network model implementing this bidirectional synaptic learning rule suggested that BTSP enables population activity, rather than pairwise neuronal correlations, to drive neural adaptations to experience.
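One simple way to write such a weight-dependent bidirectional rule is sketched below; the functional form and constants are illustrative assumptions, not the model fitted in the study.

```python
import numpy as np

# Schematic weight-dependent bidirectional BTSP rule (illustrative form,
# not the fitted model; constants are hypothetical).
# An eligibility signal e measures temporal proximity of presynaptic activity
# to the plateau potential; the sign and size of the weight change also depend
# on the current weight w, so weak synapses potentiate and strong ones depress.

def btsp_update(w, e, w_max=3.0, k_pot=1.0, k_dep=0.6, eta=0.5):
    """Return the weight change for current weight w and eligibility e in [0, 1]."""
    return eta * e * (k_pot * (w_max - w) - k_dep * w)

w = np.linspace(0.0, 3.0, 7)           # current synaptic weights
print("w      :", np.round(w, 2))
print("dw(e=1):", np.round(btsp_update(w, 1.0), 2))
# Weak weights grow, strong weights shrink; the crossover weight
# (here w_max*k_pot/(k_pot+k_dep) ~= 1.9) is where potentiation and
# depression balance, allowing plateaus to translocate existing fields.
```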
Diverse sensory systems, from audition to thermosensation, feature a separation of inputs into ON (increments) and OFF (decrements) signals. In the Drosophila visual system, separate ON and OFF pathways compute the direction of motion, yet anatomical and functional studies have identified some crosstalk between these channels. We used this well-studied circuit to ask whether the motion computation depends on ON-OFF pathway crosstalk. Using whole-cell electrophysiology, we recorded visual responses of T4 (ON) and T5 (OFF) cells, mapped their composite ON-OFF receptive fields, and found that they share a similar spatiotemporal structure. We fit a biophysical model to these receptive fields that accurately predicts directionally selective T4 and T5 responses to both ON and OFF moving stimuli. This model also provides a detailed mechanistic explanation for the directional preference inversion in response to the prominent reverse-phi illusion. Finally, we used the steering responses of tethered flying flies to validate the model's predicted effects of varying stimulus parameters on the behavioral turning inversion.
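For intuition about the reverse-phi inversion, the sketch below uses the classic Hassenstein-Reichardt correlator rather than the biophysical model fitted to the T4/T5 receptive fields; the stimulus timing and filter constants are arbitrary.

```python
import numpy as np

# Classic Hassenstein-Reichardt correlator (HRC) sketch (not the biophysical
# model fitted in the paper), showing why reverse-phi apparent motion inverts
# the signed direction read out by an opponent motion detector.

dt = 1e-3
t = np.arange(0.0, 1.0, dt)

def lowpass(s, tau=0.05):
    """First-order low-pass filter, used as the HRC delay line."""
    out = np.zeros_like(s)
    for k in range(1, s.size):
        out[k] = out[k - 1] + dt / tau * (s[k - 1] - out[k - 1])
    return out

def hrc(s1, s2):
    """Opponent correlator output: > 0 means motion from input 1 toward input 2."""
    return np.sum(lowpass(s1) * s2 - s1 * lowpass(s2)) * dt

pulse1 = ((t >= 0.10) & (t < 0.15)).astype(float)   # flash at location 1
pulse2 = ((t >= 0.13) & (t < 0.18)).astype(float)   # later flash at location 2

print(f"standard phi : {hrc(pulse1,  pulse2):+.4f}")   # positive: rightward
print(f"reverse phi  : {hrc(pulse1, -pulse2):+.4f}")   # negative: preference inverted
```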
Decisions are held in memory until enacted, which makes them potentially vulnerable to distracting sensory input. Gating of information flow from sensory to motor areas could protect memory from interference during decision-making, but the underlying network mechanisms are not understood. Here, we trained mice to detect optogenetic stimulation of the somatosensory cortex, with a delay separating sensation and action. During the delay, distracting stimuli lost influence on behavior over time, even though distractor-evoked neural activity percolated through the cortex without attenuation. Instead, choice-encoding activity in the motor cortex became progressively less sensitive to the impact of distractors. Reverse engineering of neural networks trained to reproduce motor cortex activity revealed that the reduction in sensitivity to distractors was caused by a growing separation in the neural activity space between attractors that encode alternative decisions. Our results show that communication between brain regions can be gated via attractor dynamics, which control the degree of commitment to an action.
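The gating principle can be illustrated with a one-dimensional toy (not the networks trained in the study): the same fixed-size distractor pulse flips the remembered choice when the two attractors sit close together in activity space, but not when they are far apart.

```python
import numpy as np

# One-dimensional toy of decision maintenance: two choice attractors at
# x = +d and x = -d, plus a transient "distractor" input of fixed size.
# The same distractor flips the decision when the attractors are close
# together but not when they are far apart. Parameters are illustrative.

dt, tau = 1e-3, 0.1

def run_trial(d, distractor=-8.0, t_dist=(0.5, 0.6), T=2.0):
    """Start in the +d attractor and apply a brief distractor pulse."""
    x = d
    for k in range(int(T / dt)):
        t = k * dt
        drive = distractor if t_dist[0] <= t < t_dist[1] else 0.0
        x += dt * (x * (1.0 - (x / d) ** 2) / tau + drive)
    return x

for d in (0.5, 2.0):
    final = run_trial(d)
    print(f"attractor separation d={d}: final state {final:+.2f} "
          f"({'decision flipped' if final < 0 else 'decision maintained'})")
```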
Empirical estimates of the dimensionality of neural population activity are often much lower than the population size. Similar phenomena are also observed in trained and designed neural network models. These experimental and computational results suggest that mapping low-dimensional dynamics to high-dimensional neural space is a common feature of cortical computation. Despite the ubiquity of this observation, the constraints arising from such mapping are poorly understood. Here we consider a specific example of mapping low-dimensional dynamics to high-dimensional neural activity: the neural engineering framework. We analytically solve the framework for the classic ring model, a neural network encoding a static or dynamic angular variable. Our results provide a complete characterization of the success and failure modes for this model. Based on similarities between this and other frameworks, we speculate that these results could apply to more general scenarios.
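A minimal sketch in the spirit of the neural engineering framework is shown below: an angle is embedded in a high-dimensional population through fixed encoders and rectified-linear tuning curves, then read back out with least-squares decoders. The tuning parameters are arbitrary, and this is not the analytical solution derived in the paper.

```python
import numpy as np

# NEF-style encoding/decoding of a static angular variable.
# The angle is represented through its 2-D embedding (cos, sin), encoded by
# a population with fixed random encoders, and decoded linearly.

rng = np.random.default_rng(0)
N = 200                                            # population size

enc = rng.normal(size=(N, 2))
enc /= np.linalg.norm(enc, axis=1, keepdims=True)  # unit encoding vectors
gain = rng.uniform(0.5, 2.0, size=N)
bias = rng.uniform(-1.0, 1.0, size=N)

def rates(theta):
    """Rectified-linear tuning curves over the 2-D embedding of the angle."""
    x = np.stack([np.cos(theta), np.sin(theta)], axis=-1)     # (..., 2)
    return np.maximum(gain * (x @ enc.T) + bias, 0.0)         # (..., N)

# Fit linear decoders mapping population activity back to (cos, sin).
theta_train = np.linspace(-np.pi, np.pi, 400)
A = rates(theta_train)
X = np.stack([np.cos(theta_train), np.sin(theta_train)], axis=-1)
D, *_ = np.linalg.lstsq(A, X, rcond=None)

# Decode held-out angles and report the angular error.
theta_test = rng.uniform(-np.pi, np.pi, 1000)
x_hat = rates(theta_test) @ D
theta_hat = np.arctan2(x_hat[:, 1], x_hat[:, 0])
err = np.angle(np.exp(1j * (theta_hat - theta_test)))
print(f"median absolute decoding error: {np.degrees(np.median(np.abs(err))):.2f} deg")
```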
Hippocampal activity represents many behaviorally important variables, including context, an animal's location within a given environmental context, time, and reward. Using longitudinal calcium imaging in mice, multiple large virtual environments, and differing reward contingencies, we derived a unified probabilistic model of CA1 representations centered on a single feature: the field propensity. Each cell's propensity governs how many place fields it has per unit space, predicts its reward-related activity, and is preserved across distinct environments and over months. Propensity is broadly distributed, with many low- and some very high-propensity cells, and thus strongly shapes hippocampal representations. This results in a range of spatial codes, from sparse to dense. Propensity varied ∼10-fold between adjacent cells in salt-and-pepper fashion, indicating substantial functional differences within a presumed cell type. Intracellular recordings linked propensity to cell excitability. The stability of each cell's propensity across conditions suggests this fundamental property has anatomical, transcriptional, and/or developmental origins.
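As a loose illustration only (the paper's exact probabilistic model is not reproduced here), the sketch below assumes a gamma-distributed per-cell propensity with Poisson field counts; it shows how a broad propensity distribution yields both sparse and dense codes, and why per-cell field counts correlate across environments.

```python
import numpy as np

# Illustrative sketch of a per-cell "field propensity" (assumed here, for
# concreteness, to be gamma-distributed with Poisson field counts; this is a
# stand-in, not the paper's model). Each cell draws one propensity, which sets
# its expected number of place fields per unit track length and is reused
# across environments, giving a range of sparse-to-dense codes.

rng = np.random.default_rng(1)
n_cells = 2000
track_len = 40.0                       # meters of (virtual) track

# Broad, right-skewed propensity distribution: many low, a few very high.
propensity = rng.gamma(shape=0.5, scale=0.2, size=n_cells)   # fields per meter

# The same propensities generate field counts in two different environments.
fields_env1 = rng.poisson(propensity * track_len)
fields_env2 = rng.poisson(propensity * track_len)

print(f"cells with no field in env 1 : {np.mean(fields_env1 == 0):.0%}")
print(f"cells with >= 10 fields      : {np.mean(fields_env1 >= 10):.0%}")
print(f"env1-env2 field-count corr.  : {np.corrcoef(fields_env1, fields_env2)[0, 1]:.2f}")
```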