48 Publications
Showing 1-10 of 48 results

High-density silicon probes have transformed neuroscience by enabling large-scale neural recordings at single-cell resolution. However, existing technologies have provided limited functionality in nonhuman primates (NHPs) such as macaques. In the present report, we describe the design, fabrication and performance of Neuropixels 1.0 NHP, a high-channel electrode array designed to enable large-scale acute recording throughout large animal brains. The probe features 4,416 recording sites distributed along a 45-mm shank. Experimenters can programmably select 384 recording channels, enabling simultaneous multi-area recording from thousands of neurons with single or multiple probes. This technology substantially increases scalability and recording access relative to existing technologies and enables new classes of experiments that involve electrophysiological mapping of brain areas at single-neuron and single-spike resolution, measurement of spike-spike correlations between cells and simultaneous brain-wide recordings at scale.
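The "programmably select 384 of 4,416 sites" idea can be sketched as routing a contiguous bank of sites to the fixed set of recording channels. The function and bank layout below are illustrative assumptions, not the actual Neuropixels control API:

```python
# Illustrative sketch only: 4,416 sites along the shank, any contiguous
# "bank" of 384 routed to the recording channels. The bank layout and
# function names here are assumptions, not the real Neuropixels API.

N_SITES = 4416
N_CHANNELS = 384

def select_bank(bank: int) -> list[int]:
    """Return the site indices routed to channels for a given bank."""
    n_banks = N_SITES // N_CHANNELS  # 11 full banks along the 45-mm shank
    if not 0 <= bank < n_banks:
        raise ValueError(f"bank must be in [0, {n_banks - 1}]")
    start = bank * N_CHANNELS
    return list(range(start, start + N_CHANNELS))

# e.g. bank 0 covers the deepest sites, bank 10 the most superficial ones
tip_sites = select_bank(0)
```

In practice, site selection need not be contiguous; this sketch only conveys the many-sites-to-few-channels routing constraint.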
Representation learning in neural networks may be implemented with supervised or unsupervised algorithms, distinguished by the availability of instruction. In the sensory cortex, perceptual learning drives neural plasticity1-13, but it is not known whether this is due to supervised or unsupervised learning. Here we recorded populations of up to 90,000 neurons simultaneously from the primary visual cortex (V1) and higher visual areas (HVAs) while mice learned multiple tasks, as well as during unrewarded exposure to the same stimuli. Similar to previous studies, we found that neural changes in task mice were correlated with their behavioural learning. However, the neural changes were mostly replicated in mice with unrewarded exposure, suggesting that the changes were in fact due to unsupervised learning. The neural plasticity was highest in the medial HVAs and obeyed visual, rather than spatial, learning rules. In task mice only, we found a ramping reward-prediction signal in anterior HVAs, potentially involved in supervised learning. Our neural results predict that unsupervised learning may accelerate subsequent task learning, a prediction that we validated with behavioural experiments. Preprint: https://www.biorxiv.org/content/early/2024/02/27/2024.02.25.581990
Modern algorithms for biological segmentation can match inter-human agreement in annotation quality. This, however, is not a performance bound: a hypothetical human-consensus segmentation could cut error rates in half. To obtain a model that generalizes better, we adapted the pretrained transformer backbone of a foundation model (SAM) to the Cellpose framework. The resulting Cellpose-SAM model substantially outperforms inter-human agreement and approaches the human-consensus bound. We further increase generalization performance by making the model robust to channel shuffling, cell size, shot noise, downsampling, and isotropic and anisotropic blur. The new model can be readily adopted into the Cellpose ecosystem, which includes finetuning, human-in-the-loop training, image restoration and 3D segmentation approaches. These properties establish Cellpose-SAM as a foundation model for biological segmentation.
Motor control in mammals is traditionally viewed as a hierarchy of descending spinal-targeting pathways, with frontal cortex at the top 1–3. Many redundant muscle patterns can solve a given task, and this high dimensionality allows flexibility but poses a problem for efficient learning 4. Although a feasible solution invokes subcortical innate motor patterns, or primitives, to reduce the dimensionality of the control problem, how cortex learns to utilize such primitives remains an open question 5–7. To address this, we studied cortical and subcortical interactions as head-fixed mice learned contextual control of innate hindlimb extension behavior. Naïve mice performed reactive extensions to turn off a cold air stimulus within seconds and, using predictive cues, learned to avoid the stimulus altogether in tens of trials. Optogenetic inhibition of large areas of rostral cortex completely prevented avoidance behavior, but did not impair hindlimb extensions in reaction to the cold air stimulus. Remarkably, mice covertly learned to avoid the cold stimulus even without any prior experience of successful, cortically-mediated avoidance. These findings support a dynamic, heterarchical model in which the dominant locus of control can change, on the order of seconds, between cortical and subcortical brain areas. We propose that cortex can leverage periods when subcortex predominates as demonstrations, to learn parameterized control of innate behavioral primitives.
Generalist methods for cellular segmentation have good out-of-the-box performance on a variety of image types; however, existing methods struggle for images that are degraded by noise, blurring or undersampling, all of which are common in microscopy. We focused the development of Cellpose3 on addressing these cases and here we demonstrate substantial out-of-the-box gains in segmentation and image quality for noisy, blurry and undersampled images. Unlike previous approaches that train models to restore pixel values, we trained Cellpose3 to output images that are well segmented by a generalist segmentation model, while maintaining perceptual similarity to the target images. Furthermore, we trained the restoration models on a large, varied collection of datasets, thus ensuring good generalization to user images. We provide these tools as 'one-click' buttons inside the graphical interface of Cellpose as well as in the Cellpose API.
Artificial neural networks learn faster if they are initialized well. Good initializations can generate high-dimensional macroscopic dynamics with long timescales. It is not known if biological neural networks have similar properties. Here we show that the eigenvalue spectrum and dynamical properties of large-scale neural recordings in mice (two-photon and electrophysiology) are similar to those produced by linear dynamics governed by a random symmetric matrix that is critically normalized. An exception was hippocampal area CA1: population activity in this area resembled an efficient, uncorrelated neural code, which may be optimized for information storage capacity. Global emergent activity modes persisted in simulations with sparse, clustered or spatial connectivity. We hypothesize that the spontaneous neural activity reflects a critical initialization of whole-brain neural circuits that is optimized for learning time-dependent tasks.
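The "critically normalized random symmetric matrix" claim can be illustrated numerically: draw a symmetric Gaussian matrix, rescale it so its spectral radius is about 1, and observe that linear dynamics under it decay slowly (long timescales). This toy sketch uses only the standard library; parameters are arbitrary choices, not those of the paper:

```python
import math
import random

# Toy sketch (assumed parameters, not the paper's analysis): linear
# dynamics x_{t+1} = A x_t under a critically normalized random
# symmetric matrix keep activity alive over many steps.
random.seed(0)
N = 60

# Random symmetric matrix with entries ~ N(0, 1/N) (Wigner scaling,
# so the spectral radius is close to 2).
A = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(i, N):
        g = random.gauss(0.0, 1.0 / math.sqrt(N))
        A[i][j] = A[j][i] = g

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(N)) for i in range(N)]

def norm(v):
    return math.sqrt(sum(x * x for x in v))

# Estimate the spectral radius by power iteration.
v = [random.gauss(0.0, 1.0) for _ in range(N)]
for _ in range(200):
    v = matvec(A, v)
    v = [x / norm(v) for x in v]
lam = norm(matvec(A, v))  # close to 2 for this scaling

# "Critical" normalization: spectral radius ~ 1. The slowest mode then
# barely decays, giving long-timescale dynamics.
Ac = [[a / lam for a in row] for row in A]
x = [random.gauss(0.0, 1.0) for _ in range(N)]
decay = []
for t in range(50):
    x = matvec(Ac, x)
    decay.append(norm(x))
```

After 50 steps the activity norm has shrunk but not collapsed, because the mode near the critical eigenvalue persists; a subcritical matrix (e.g. `A / (2 * lam)`) would instead decay geometrically to zero.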
Simultaneous recordings from hundreds or thousands of neurons are becoming routine because of innovations in instrumentation, molecular tools, and data processing software. Such recordings can be analyzed with data science methods, but it is not immediately clear what methods to use or how to adapt them for neuroscience applications. We review, categorize, and illustrate diverse analysis methods for neural population recordings and describe how these methods have been used to make progress on longstanding questions in neuroscience. We review a variety of approaches, ranging from the mathematically simple to the complex, from exploratory to hypothesis-driven, and from recently developed to more established methods. We also illustrate some of the common statistical pitfalls in analyzing large-scale neural data.
Neurophysiology has long progressed through exploratory experiments and chance discoveries. Anecdotes abound of researchers listening to spikes in real time and noticing patterns of activity related to ongoing stimuli or behaviors. With the advent of large-scale recordings, such close observation of data has become difficult. To find patterns in large-scale neural data, we developed 'Rastermap', a visualization method that displays neurons as a raster plot after sorting them along a one-dimensional axis based on their activity patterns. We benchmarked Rastermap on realistic simulations and then used it to explore recordings of tens of thousands of neurons from mouse cortex during spontaneous, stimulus-evoked and task-evoked epochs. We also applied Rastermap to whole-brain zebrafish recordings; to wide-field imaging data; to electrophysiological recordings in rat hippocampus, monkey frontal cortex and various cortical and subcortical regions in mice; and to artificial neural networks. Finally, we illustrate high-dimensional scenarios where Rastermap and similar algorithms cannot be used effectively.
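The core idea of sorting neurons along one axis by activity similarity can be conveyed with a toy stand-in: a greedy, correlation-based ordering. This is an illustration of the sorting concept only, not the actual Rastermap algorithm:

```python
# Toy stand-in for the idea behind Rastermap: order neurons so that
# neighbours on the 1D axis have similar activity patterns. This greedy
# correlation sort is an illustration, not the real algorithm.

def correlation(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / ((va * vb) ** 0.5)

def greedy_sort(activity):
    """activity: one time series per neuron; returns a 1D ordering."""
    remaining = set(range(len(activity)))
    order = [remaining.pop()]  # start from an arbitrary neuron
    while remaining:
        last = activity[order[-1]]
        nxt = max(remaining, key=lambda i: correlation(last, activity[i]))
        remaining.remove(nxt)
        order.append(nxt)
    return order

# Three toy "neurons": 0 and 2 share a pattern, 1 is anticorrelated.
acts = [[1, 2, 3, 4], [4, 3, 2, 1], [1.1, 2.2, 2.9, 4.1]]
order = greedy_sort(acts)  # neurons 0 and 2 end up adjacent
```

Plotting the rows of the activity matrix in this order is what makes shared patterns visible as contiguous bands in a raster plot.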
Neural circuits connecting the cerebral cortex, the basal ganglia and the thalamus are fundamental networks for sensorimotor processing, and their dysfunction has been consistently implicated in neuropsychiatric disorders1-9. These recursive loop circuits have been investigated in animal models and by clinical neuroimaging; however, direct functional access to developing human neurons forming these networks has been limited. Here, we use human pluripotent stem cells to reconstruct an in vitro cortico-striatal-thalamic-cortical circuit by creating a four-part loop assembloid. More specifically, we generate regionalized neural organoids that resemble the key elements of the cortico-striatal-thalamic-cortical circuit and functionally integrate them into loop assembloids using custom 3D-printed biocompatible wells. Volumetric and mesoscale calcium imaging, as well as extracellular recordings from individual parts of these assembloids, reveal the emergence of synchronized patterns of neuronal activity. In addition, a multi-step rabies retrograde tracing approach demonstrates the formation of neuronal connectivity across the network in loop assembloids. Lastly, we apply this system to study heterozygous loss of the ASH1L gene, associated with autism spectrum disorder and Tourette syndrome, and discover aberrant synchronized activity in disease-model assembloids. Taken together, this human multi-cellular platform will facilitate functional investigations of the cortico-striatal-thalamic-cortical circuit in the context of early human development and in disease conditions.
As we move through the world, we see the same visual scenes from different perspectives. Although we experience perspective deformations, our perception of a scene remains stable. This raises the question of which neuronal representations in visual brain areas are perspective-tuned and which are invariant. Focusing on planar rotations, we introduce a mathematical framework based on the principle of equivariance, which asserts that an image rotation results in a corresponding rotation of neuronal representations, to explain how the same representation can range from being fully tuned to fully invariant. We applied this framework to large-scale simultaneous neuronal recordings from four visual cortical areas in mice, where we found that representations are both tuned and invariant but become more invariant across higher-order areas. While common deep convolutional neural networks show similar trends in orientation-invariance across layers, they are not rotation-equivariant. We propose that equivariance is a prevalent computation of populations of biological neurons to gradually achieve invariance through structured tuning.
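The equivariance principle for planar rotations has a compact numerical illustration: model a neural channel of order k as a complex exponential of the stimulus angle, so that rotating the stimulus multiplies the response by a fixed phase. This is a textbook-level sketch of rotation equivariance, not the paper's full framework:

```python
import cmath
import math

# Sketch of rotation equivariance (illustrative model, not the paper's
# analysis): a channel of order k responds as r_k(theta) = exp(i*k*theta).
# Rotating the stimulus by phi rotates the representation by a fixed
# phase:  r_k(theta + phi) = exp(i*k*phi) * r_k(theta).
# k = 0 is a fully invariant channel; larger |k| is fully tuned.

def channel(k, theta):
    return cmath.exp(1j * k * theta)

theta, phi = 0.7, math.pi / 5

for k in (0, 1, 2):
    response_to_rotated = channel(k, theta + phi)            # rotate image
    rotated_response = cmath.exp(1j * k * phi) * channel(k, theta)
    assert abs(response_to_rotated - rotated_response) < 1e-12

# The k = 0 channel is invariant: rotation leaves its response unchanged.
assert abs(channel(0, theta + phi) - channel(0, theta)) < 1e-12
```

In this picture, a population spanning several orders k can interpolate between fully tuned and fully invariant representations, which is the spectrum the abstract describes across visual areas.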