97 Publications
Most behaviors involve neural dynamics in high-dimensional activity spaces. A common approach is to extract dimensions that capture task-related variability, such as those separating stimuli or choices, yielding low-dimensional, task-aligned neural activity subspaces (“coding dimensions”). However, whether these dimensions actively drive decisions or merely reflect underlying computations remains unclear. Moreover, neural activity outside these coding subspaces (“residual dimensions”) is often ignored, though it could also causally shape neural dynamics driving behavior. We developed a recurrent neural network model that fits population activity and uncovers the dynamic interactions between coding and residual subspaces on single trials. Applied to electrophysiological recordings from the anterior lateral motor cortex (ALM) and motor thalamus in mice performing a delayed response task, our model demonstrates that perturbations of residual dimensions reliably alter behavioral choices, whereas perturbations of the choice dimension, which strongly encodes the animal’s upcoming decision, are largely ineffective. These perturbation effects arise because residual dimensions drive transient amplification across an intermediate number of coding and residual dimensions (~10), before the dynamics collapse into discrete attractor states corresponding to the animal’s choice. By dissecting the low-dimensional variability underlying error trials, we find that it primarily shifts trajectories along residual dimensions, biasing single decisions. Residual activity in thalamus shapes cortical decision dynamics, implicating weakly selective thalamic populations in the emergence of cortical selectivity. Our findings challenge the conventional focus on low-dimensional coding subspaces as a sufficient framework for understanding neural computations, demonstrating that dimensions previously considered task-irrelevant and accounting for little variance can have a critical role in driving behavior.
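
A minimal sketch of the decomposition this abstract describes, assuming a trials × neurons × time activity array and a binary choice label (all shapes, names, and data below are illustrative, not the paper's code): the choice coding dimension is taken as the normalized difference of condition-averaged activity, and "residual" activity is whatever remains after projecting that dimension out.

```python
import numpy as np

# Minimal sketch: define a choice coding dimension and the residual activity
# outside it. Shapes, names, and random data are illustrative.
rng = np.random.default_rng(0)
n_trials, n_neurons, n_time = 40, 50, 100
X = rng.standard_normal((n_trials, n_neurons, n_time))   # single-trial population activity
choice = rng.integers(0, 2, n_trials)                     # 0 = lick left, 1 = lick right

# Choice coding dimension: normalized difference of condition-averaged activity
# at a reference time point (e.g., the end of the delay epoch).
cd = X[choice == 1, :, -1].mean(axis=0) - X[choice == 0, :, -1].mean(axis=0)
cd /= np.linalg.norm(cd)

# Project single-trial activity onto the coding dimension, and keep the rest as
# "residual" activity orthogonal to that dimension.
proj_cd = np.einsum('ijt,j->it', X, cd)                   # (trials, time)
X_residual = X - proj_cd[:, None, :] * cd[None, :, None]  # activity outside the CD
```
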
Movement-related activity has been detected across much of the brain, including sensory and motor regions. However, much remains unknown regarding the distribution of movement-related activity across brain regions, and how this activity relates to neural computation. Here we analyzed movement-related activity in brain-wide recordings of more than 50,000 neurons in mice performing a decision-making task. We used multiple machine learning methods to predict neural activity from videography and found that movement-related signals differed across areas, with stronger movement signals close to the motor periphery and in motor-associated subregions. Delineating activity that predicts or follows movement revealed fine-scale structure of sensory and motor encoding across and within brain areas. Through single-trial video-based predictions of behavior, we identified activity modulation by uninstructed movements and their impact on choice-related activity analysis. Our work provides a map of movement encoding across the brain and approaches for linking neural activity, uninstructed movements and decision-making.
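
A hedged example of the kind of prediction problem described, not necessarily the paper's pipeline: reduce video frames to a few principal components and fit a cross-validated ridge regression to binned neural activity. All array sizes and names below are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Illustrative sketch: predict neural activity from video-derived motion features.
rng = np.random.default_rng(1)
n_frames, n_pixels, n_neurons = 2000, 500, 100
video = rng.standard_normal((n_frames, n_pixels))      # flattened video frames (placeholder)
spikes = rng.standard_normal((n_frames, n_neurons))    # binned neural activity (placeholder)

video_pcs = PCA(n_components=50).fit_transform(video)  # low-dimensional movement features
model = Ridge(alpha=1.0)

# Cross-validated R^2: how much neural variance the movement features explain.
scores = cross_val_score(model, video_pcs, spikes, cv=5, scoring='r2')
print(scores.mean())
```
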
Memories are believed to be stored in synapses and retrieved by reactivating neural ensembles. Learning alters synaptic weights, which can interfere with previously stored memories that share the same synapses, creating a trade-off between plasticity and stability. Interestingly, neural representations change even in stable environments, without apparent learning or forgetting, a phenomenon known as representational drift. Theoretical studies have suggested that multiple neural representations can correspond to a memory, with postlearning exploration of these representation solutions driving drift. However, it remains unclear whether representations explored through drift differ from those learned or offer unique advantages. Here, we show that representational drift uncovers noise-robust representations that are otherwise difficult to learn. We first define the nonlinear solution space manifold of synaptic weights for fixed input-output mappings, which allows us to disentangle drift from learning and forgetting and simulate drift as diffusion within this manifold. Solutions explored by drift have many inactive and saturated neurons, making them robust to weight perturbations due to noise or continual learning. Such solutions are prevalent and entropically favored by drift, but their lack of gradients makes them difficult to learn and nonconducive to future learning. To overcome this, we introduce an allocation procedure that selectively shifts representations for new stimuli into a learning-conducive regime. By combining allocation with drift, we resolve the trade-off between learnability and robustness.
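
A toy linear stand-in for the idea of drift as diffusion within a solution space (the paper's setting is nonlinear; this sketch only illustrates the constraint): random weight updates are projected onto the null space of the stored input patterns, so the fixed input-output mapping is preserved while the weights wander.

```python
import numpy as np

# Toy sketch: drift as a random walk restricted to the null space of the inputs,
# so X @ w stays fixed. Linear stand-in; all sizes are illustrative.
rng = np.random.default_rng(2)
n_inputs, n_weights = 20, 100
X = rng.standard_normal((n_inputs, n_weights))     # stored input patterns
y = rng.standard_normal(n_inputs)                  # required outputs
w = np.linalg.lstsq(X, y, rcond=None)[0]           # one solution on the manifold

# Projector onto the null space of X: steps in this subspace leave X @ w unchanged.
P_null = np.eye(n_weights) - np.linalg.pinv(X) @ X

for _ in range(1000):                              # diffusion within the solution space
    w += 0.01 * P_null @ rng.standard_normal(n_weights)

print(np.allclose(X @ w, y, atol=1e-6))            # mapping preserved despite drift
```
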
Animals generate a range of locomotor patterns that subserve diverse behaviors, and in vertebrates, the required supraspinal commands derive from reticulospinal neurons in the brainstem. Yet how these commands are encoded across the reticulospinal population is unknown, making it unclear whether a universal control logic generates the full locomotor repertoire or if distinct sets of command modules might compose movement in different behavioral contexts. Here, we used calcium imaging, high-resolution behavior tracking, and statistical modeling to comprehensively survey reticulospinal activity and relate single-cell activity to movement kinematics as larval zebrafish generated a broad diversity of swim types. We found that reticulospinal population activity had a low-dimensional organization and identified 8 functional archetypes that provided a succinct and robust encoding of the full range of locomotor actions. Across much of locomotor space, 5 functional archetypes supported multiplexed control of swim speed and independent control of direction, whereas an independent set of 3 functional archetypes controlled the specialized swims that zebrafish use during hunting to orient toward prey. Overall, our study reveals a modular supraspinal control architecture that is partitioned according to behavioral context.
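
As an illustrative stand-in (the paper's statistical modeling may differ), a non-negative matrix factorization with eight components on a hypothetical cells-by-swim-bouts activity matrix shows one way a small set of functional archetypes can be extracted from population activity.

```python
import numpy as np
from sklearn.decomposition import NMF

# Illustrative stand-in, not necessarily the paper's model: factorize a
# non-negative cells x swim-bouts activity matrix into 8 archetypes.
rng = np.random.default_rng(3)
activity = np.abs(rng.standard_normal((300, 2000)))   # cells x swim bouts (placeholder data)

nmf = NMF(n_components=8, init='nndsvda', max_iter=500, random_state=0)
cell_loadings = nmf.fit_transform(activity)           # how strongly each cell expresses each archetype
archetype_activity = nmf.components_                  # archetype activation on each swim bout
print(cell_loadings.shape, archetype_activity.shape)  # (300, 8), (8, 2000)
```
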
Cognitive maps confer animals with flexible intelligence by representing spatial, temporal and abstract relationships that can be used to shape thought, planning and behaviour. Cognitive maps have been observed in the hippocampus [1], but their algorithmic form and learning mechanisms remain obscure. Here we used large-scale, longitudinal two-photon calcium imaging to record activity from thousands of neurons in the CA1 region of the hippocampus while mice learned to efficiently collect rewards from two subtly different linear tracks in virtual reality. Throughout learning, both animal behaviour and hippocampal neural activity progressed through multiple stages, gradually revealing improved task representation that mirrored improved behavioural efficiency. The learning process involved progressive decorrelations in initially similar hippocampal neural activity within and across tracks, ultimately resulting in orthogonalized representations resembling a state machine capturing the inherent structure of the task. This decorrelation process was driven by individual neurons acquiring task-state-specific responses (that is, 'state cells'). Although various standard artificial neural networks did not naturally capture these dynamics, the clone-structured causal graph, a hidden Markov model variant, uniquely reproduced both the final orthogonalized states and the learning trajectory seen in animals. The observed cellular and population dynamics constrain the mechanisms underlying cognitive map formation in the hippocampus, pointing to hidden state inference as a fundamental computational principle, with implications for both biological and artificial intelligence.
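
A small sketch of how the described decorrelation across tracks could be quantified, assuming trial-averaged neurons × positions activity maps for the two tracks (names, shapes, and data are illustrative): cosine similarity of position-matched population vectors, which would fall toward zero as the two representations orthogonalize over learning.

```python
import numpy as np

# Sketch of quantifying cross-track similarity from trial-averaged
# neurons-by-positions activity maps. Names and shapes are illustrative.
def cross_track_similarity(map_a, map_b):
    sims = []
    for pos in range(map_a.shape[1]):
        a, b = map_a[:, pos], map_b[:, pos]
        sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return float(np.mean(sims))

rng = np.random.default_rng(4)
map_a, map_b = rng.random((500, 50)), rng.random((500, 50))  # 500 cells, 50 positions
# Values near 1 indicate similar maps; values near 0 indicate orthogonalized maps.
print(cross_track_similarity(map_a, map_b))
```
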
Synaptic plasticity alters neuronal connections in response to experience, which is thought to underlie learning and memory. However, the loci of learning-related synaptic plasticity, and the degree to which plasticity is localized or distributed, remain largely unknown. Here we describe a new method, DELTA, for mapping brain-wide changes in synaptic protein turnover with single-synapse resolution, based on Janelia Fluor dyes and HaloTag knock-in mice. During associative learning, the turnover of the ionotropic glutamate receptor subunit GluA2, an indicator of synaptic plasticity, was enhanced in several brain regions, most markedly hippocampal area CA1. More broadly distributed increases in the turnover of synaptic proteins were observed in response to environmental enrichment. In CA1, GluA2 stability was regulated in an input-specific manner, with more turnover in layers containing input from CA3 compared to entorhinal cortex. DELTA will facilitate exploration of the molecular and circuit basis of learning and memory and other forms of plasticity at scales ranging from single synapses to the entire brain.
Effective classification of neuronal cell types requires both molecular and morphological descriptors to be collected in situ at single cell resolution. However, current spatial transcriptomics techniques are not compatible with imaging workflows that successfully reconstruct the morphology of complete axonal projections. Here, we introduce a new methodology that combines tissue clearing, submicron whole-brain two-photon imaging, and Expansion-Assisted Iterative Fluorescence In Situ Hybridization (EASI-FISH) to assign molecular identities to fully reconstructed neurons in the mouse brain, which we call morphoFISH. We used morphoFISH to molecularly identify a previously unknown population of cingulate neurons projecting ipsilaterally to the dorsal striatum and contralaterally to higher-order thalamus. By combining whole-brain morphometry, improved techniques for nucleic acid preservation, and spatial gene expression, morphoFISH offers a quantitative solution for discovery of multimodal cell types and complements existing techniques for characterization of increasingly fine-grained cellular heterogeneity in brain circuits.
Our ability to remember the past is essential for guiding our future behavior. Psychological and neurobiological features of declarative memories are known to transform over time in a process known as systems consolidation. While many theories have sought to explain the time-varying role of hippocampal and neocortical brain areas, the computational principles that govern these transformations remain unclear. Here we propose a theory of systems consolidation in which hippocampal-cortical interactions serve to optimize generalizations that guide future adaptive behavior. We use mathematical analysis of neural network models to characterize fundamental performance tradeoffs in systems consolidation, revealing that memory components should be organized according to their predictability. The theory shows that multiple interacting memory systems can outperform just one, normatively unifying diverse experimental observations and making novel experimental predictions. Our results suggest that the psychological taxonomy and neurobiological organization of declarative memories reflect a system optimized for behaving well in an uncertain future.
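
A toy illustration, not the model analyzed in the paper, of why a slowly updating store tends to retain the predictable component of experience better than a fast one: both stores track experiences made of a fixed signal plus per-experience noise, with learning rates chosen purely for illustration.

```python
import numpy as np

# Toy illustration (not the paper's model): a fast "hippocampal" store and a
# slow "cortical" store both track experiences = predictable signal + noise.
rng = np.random.default_rng(5)
predictable = 1.0                     # reliable, generalizable component
fast, slow = 0.0, 0.0
lr_fast, lr_slow = 0.9, 0.05          # hypothetical learning rates

for _ in range(500):
    experience = predictable + rng.normal(scale=0.5)   # signal + unpredictable noise
    fast += lr_fast * (experience - fast)              # tracks recent, noisy experience
    slow += lr_slow * (experience - slow)              # averages across many experiences

# The slow store typically ends up closer to the predictable component,
# i.e., it generalizes better to future experiences.
print(abs(fast - predictable), abs(slow - predictable))
```
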
Animals can learn general task structures and use them to solve new problems with novel sensory specifics. This capacity of ‘learning to learn’, or meta-learning, is difficult to achieve in artificial systems, and the mechanisms by which it is achieved in animals are unknown. As a step toward enabling mechanistic studies, we developed a behavioral paradigm that demonstrates meta-learning in head-fixed mice. We trained mice to perform a two-alternative forced-choice task in virtual reality (VR), and successively changed the visual cues that signaled reward location. Mice showed increased learning speed in both cue generalization and serial reversal tasks. During reversal learning, behavior exhibited sharp transitions, with the transition occurring earlier in each successive reversal. Analysis of motor patterns revealed that animals utilized similar motor programs to execute the same actions in response to different cues but modified the motor programs during reversal learning. Our study demonstrates that mice can perform meta-learning tasks in VR, thus opening up opportunities for future mechanistic studies.
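
An illustrative way to locate the sharp behavioral transitions described, shown on synthetic data: fit a sigmoid to trial-by-trial accuracy within a reversal block and read off the midpoint. All numbers and names below are made up.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative sketch on synthetic data: estimate where the sharp behavioral
# transition occurs within a reversal block by fitting a sigmoid to accuracy.
def sigmoid(trial, midpoint, slope):
    return 1.0 / (1.0 + np.exp(-slope * (trial - midpoint)))

rng = np.random.default_rng(6)
trials = np.arange(200)
accuracy = (rng.random(200) < sigmoid(trials, 80, 0.15)).astype(float)  # synthetic choices

params, _ = curve_fit(sigmoid, trials, accuracy, p0=[100, 0.1],
                      bounds=([0, 0], [200, 1]))
print(f"estimated transition trial: {params[0]:.1f}")
```
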
Cells regulate function by synthesizing and degrading proteins. This turnover ranges from minutes to weeks, as it varies across proteins, cellular compartments, cell types, and tissues. Current methods for tracking protein turnover lack the spatial and temporal resolution needed to investigate these processes, especially in the intact brain, which presents unique challenges. We describe a pulse-chase method (DELTA) for measuring protein turnover with high spatial and temporal resolution throughout the body, including the brain. DELTA relies on rapid covalent capture by HaloTag of fluorophores that were optimized for bioavailability in vivo. The nuclear protein MeCP2 showed brain region- and cell type-specific turnover. The synaptic protein PSD95 was destabilized in specific brain regions by behavioral enrichment. A novel variant of expansion microscopy further facilitated turnover measurements at individual synapses. DELTA enables studies of adaptive and maladaptive plasticity in brain-wide neural circuits.
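
A minimal sketch of the quantification a pulse-chase design implies, with hypothetical per-structure intensity values: the pulse dye marks protein present at labeling, the chase dye marks protein synthesized afterward, and turnover is the "new" fraction.

```python
import numpy as np

# Minimal sketch of pulse-chase turnover quantification (values and names are
# hypothetical): pulse channel = "old" protein, chase channel = "new" protein.
pulse = np.array([120.0, 80.0, 45.0, 200.0])   # per-structure intensity, old protein
chase = np.array([30.0, 60.0, 90.0, 20.0])     # per-structure intensity, new protein

# Fraction of protein synthesized during the chase interval, per structure.
turnover_fraction = chase / (pulse + chase)
print(turnover_fraction)   # higher values = faster turnover at that structure
```
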
