47 Publications
Many biological applications require the segmentation of cell bodies, membranes and nuclei from microscopy images. Deep learning has enabled great progress on this problem, but current methods are specialized for images that have large training datasets. Here we introduce a generalist, deep learning-based segmentation method called Cellpose, which can precisely segment cells from a wide range of image types and does not require model retraining or parameter adjustments. Cellpose was trained on a new dataset of highly varied images of cells, containing over 70,000 segmented objects. We also demonstrate a three-dimensional (3D) extension of Cellpose that reuses the two-dimensional (2D) model and does not require 3D-labeled data. To support community contributions to the training data, we developed software for manual labeling and for curation of the automated results. Periodically retraining the model on the community-contributed data will ensure that Cellpose improves constantly.
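Cellpose segments instances by predicting a spatial flow field that points toward each cell's center and then grouping pixels by where they converge. A toy 1-D sketch of that flow-following idea (hypothetical simplification, not the library's API; the real model predicts 2-D flows with a neural network):

```python
# Toy 1-D sketch of flow-following segmentation: each pixel stores a
# step toward its cell's center; pixels that converge to the same
# fixed point receive the same instance label.

def segment_by_flows(flow, n_iter=20):
    """flow[i] in {-1, 0, +1}: the step each pixel takes toward a center."""
    pos = list(range(len(flow)))
    for _ in range(n_iter):                   # follow the flow field
        pos = [p + flow[p] for p in pos]
    sinks = sorted(set(pos))                  # distinct convergence points
    return [sinks.index(p) + 1 for p in pos]  # one label per sink

# Two "cells" with centers at indices 2 and 6 (flow 0 marks a center).
flow = [1, 1, 0, -1, 1, 1, 0, -1]
labels = segment_by_flows(flow)   # → [1, 1, 1, 1, 2, 2, 2, 2]
```

Because grouping happens by convergence rather than by pixel intensity, touching cells with distinct centers still receive distinct labels.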
In recent years, two-photon calcium imaging has become a standard tool to probe the function of neural circuits and to study computations in neuronal populations. However, the acquired signal is only an indirect measurement of neural activity due to the comparatively slow dynamics of fluorescent calcium indicators. Different algorithms for estimating spike rates from noisy calcium measurements have been proposed in the past, but it is an open question how far performance can be improved. Here, we report the results of the spikefinder challenge, launched to catalyze the development of new spike rate inference algorithms through crowd-sourcing. We present ten of the submitted algorithms which show improved performance compared to previously evaluated methods. Interestingly, the top-performing algorithms are based on a wide range of principles from deep neural networks to generative models, yet provide highly correlated estimates of the neural activity. The competition shows that benchmark challenges can drive algorithmic developments in neuroscience.
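One common generative-model baseline for this problem (an illustrative sketch, not any particular spikefinder submission) treats the calcium trace as an autoregressive filtering of the spike train, which can then be inverted greedily:

```python
# Minimal sketch of spike-rate inference from a calcium trace,
# assuming an AR(1) indicator model c[t] = gamma * c[t-1] + s[t].

def infer_spikes(calcium, gamma=0.9):
    """Invert the AR(1) model; clip at zero since rates are non-negative."""
    rates = [max(0.0, calcium[0])]
    for t in range(1, len(calcium)):
        rates.append(max(0.0, calcium[t] - gamma * calcium[t - 1]))
    return rates

# Simulate: spikes at t = 2 and t = 5, indicator decaying with gamma = 0.9.
spikes = [0, 0, 1, 0, 0, 1, 0, 0]
trace, c = [], 0.0
for s in spikes:
    c = 0.9 * c + s
    trace.append(c)
est = infer_spikes(trace)   # peaks near 1 at t = 2 and t = 5, ~0 elsewhere
```

Real indicators are noisy and nonlinear, which is why the challenge entries range from generative models with explicit noise terms to deep networks trained end-to-end.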
Electrophysiology has long been the workhorse of neuroscience, allowing scientists to record with millisecond precision the action potentials generated by neurons in vivo. Recently, calcium imaging of fluorescent indicators has emerged as a powerful alternative. This technique has its own strengths and weaknesses and unique data processing problems and interpretation confounds. Here we review the computational methods that convert raw calcium movies to estimates of single neuron spike times with minimal human supervision. By computationally addressing the weaknesses of calcium imaging, these methods hold the promise of significantly improving data quality. We also introduce a new metric to evaluate the output of these processing pipelines, which is based on the cluster isolation distance routinely used in electrophysiology.
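The cluster isolation distance mentioned above asks, roughly: how far out in Mahalanobis distance must one travel from a cluster's center before capturing as many outside spikes as the cluster contains? A simplified sketch of that idea (assuming a diagonal cluster covariance for brevity; the standard definition uses the full covariance matrix):

```python
# Simplified isolation-distance sketch for cluster quality:
# the squared Mahalanobis distance of the n-th closest outside spike,
# where n is the cluster size. Assumes nonzero per-dimension variance.

def isolation_distance(cluster, others):
    n, d = len(cluster), len(cluster[0])
    mean = [sum(p[i] for p in cluster) / n for i in range(d)]
    var = [sum((p[i] - mean[i]) ** 2 for p in cluster) / n for i in range(d)]

    def maha2(p):  # squared Mahalanobis distance, diagonal covariance
        return sum((p[i] - mean[i]) ** 2 / var[i] for i in range(d))

    if len(others) < n:
        return float("inf")   # too few outside spikes to estimate
    return sorted(maha2(p) for p in others)[n - 1]

cluster = [(1, 1), (-1, -1), (1, -1), (-1, 1)]      # 4 in-cluster spikes
others = [(3, 0), (0, 4), (5, 0), (6, 0), (2, 0)]   # spikes outside the cluster
iso = isolation_distance(cluster, others)           # 4th closest: 25.0
```

Larger values mean outside spikes sit far from the cluster, i.e. the unit is well isolated.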
Motor control in mammals is traditionally viewed as a hierarchy of descending spinal-targeting pathways, with frontal cortex at the top [1–3]. Many redundant muscle patterns can solve a given task, and this high dimensionality allows flexibility but poses a problem for efficient learning [4]. Although a feasible solution invokes subcortical innate motor patterns, or primitives, to reduce the dimensionality of the control problem, how cortex learns to utilize such primitives remains an open question [5–7]. To address this, we studied cortical and subcortical interactions as head-fixed mice learned contextual control of innate hindlimb extension behavior. Naïve mice performed reactive extensions to turn off a cold air stimulus within seconds and, using predictive cues, learned to avoid the stimulus altogether in tens of trials. Optogenetic inhibition of large areas of rostral cortex completely prevented avoidance behavior, but did not impair hindlimb extensions in reaction to the cold air stimulus. Remarkably, mice covertly learned to avoid the cold stimulus even without any prior experience of successful, cortically-mediated avoidance. These findings support a dynamic, heterarchical model in which the dominant locus of control can change, on the order of seconds, between cortical and subcortical brain areas. We propose that cortex can leverage periods when subcortex predominates as demonstrations, to learn parameterized control of innate behavioral primitives.
Representation learning in neural networks may be implemented with supervised or unsupervised algorithms, distinguished by the availability of feedback. In sensory cortex, perceptual learning drives neural plasticity, but it is not known if this is due to supervised or unsupervised learning. Here we recorded populations of up to 90,000 neurons simultaneously from the primary visual cortex (V1) and higher visual areas (HVA), while mice learned multiple tasks as well as during unrewarded exposure to the same stimuli. Similar to previous studies, we found that neural changes in task mice were correlated with their behavioral learning. However, the neural changes were mostly replicated in mice with unrewarded exposure, suggesting that the changes were in fact due to unsupervised learning. The neural plasticity was concentrated in the medial HVAs and obeyed visual, rather than spatial, learning rules. In task mice only, we found a ramping reward prediction signal in anterior HVAs, potentially involved in supervised learning. Our neural results predict that unsupervised learning may accelerate subsequent task learning, a prediction which we validated with behavioral experiments.
Survival behaviors are orchestrated by hardwired circuits located in deep subcortical brain regions, most prominently the hypothalamus. Artificial activation of spatially localized, genetically defined hypothalamic cell populations is known to trigger distinct behaviors, suggesting a nucleus-centered organization of behavioral control. However, no study has investigated the hypothalamic representation of innate behaviors using unbiased, large-scale single neuron recordings. Here, using custom silicon probes, we performed recordings across the rostro-caudal extent of the medial hypothalamus in freely moving animals engaged in a diverse array of social and predator defense (“fear”) behaviors. Nucleus-averaged activity revealed spatially distributed generic “ignition signals” that occurred at the onset of each behavior, and did not identify sparse, nucleus-specific behavioral representations. Single-unit analysis revealed that social and fear behavior classes are encoded by activity in distinct sets of spatially distributed neuronal ensembles spanning the entire hypothalamic rostro-caudal axis. Individual ensemble membership, however, was drawn from neurons in 3-4 adjacent nuclei. Mixed selectivity was identified as the most prevalent mode of behavior representation by individual hypothalamic neurons. Encoding models indicated that a significant fraction of the variance in single neuron activity is explained by behavior. This work reveals that innate behaviors are encoded in the hypothalamus by activity in spatially distributed neural ensembles that each span multiple neighboring nuclei, complementing the prevailing view of hypothalamic behavioral control by single nucleus-restricted cell types derived from perturbational studies.
State-of-the-art silicon probes for electrical recording from neurons have thousands of recording sites. However, due to volume limitations there are typically many fewer wires carrying signals off the probe, which restricts the number of channels that can be recorded simultaneously. To overcome this fundamental constraint, we propose a method called electrode pooling that uses a single wire to serve many recording sites through a set of controllable switches. Here we present the framework behind this method and an experimental strategy to support it. We then demonstrate its feasibility by implementing electrode pooling on the Neuropixels 1.0 electrode array and characterizing its effect on signal and noise. Finally we use simulations to explore the conditions under which electrode pooling saves wires without compromising the content of the recordings. We make recommendations on the design of future devices to take advantage of this strategy.
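In this scheme a single wire carries the average of the sites switched onto it, so a spike on one site is attenuated by the pool size while independent site noise partially averages out. A toy sketch (illustration only; real switch matrices and noise sources are more involved):

```python
# Toy sketch of electrode pooling: one wire reads the average of the
# M sites connected to it, so a spike on a single site appears
# attenuated by a factor of M on the shared wire.

def pool(sites):
    """Average the traces of the pooled sites, sample by sample."""
    m = len(sites)
    return [sum(col) / m for col in zip(*sites)]

# Three pooled sites; only site 0 sees a spike of amplitude 60.
site0 = [0, 0, 60, 0, 0]
site1 = [0, 0, 0, 0, 0]
site2 = [0, 0, 0, 0, 0]
wire = pool([site0, site1, site2])   # spike appears at amplitude 20
```

The design question the paper explores is how large M can grow before this attenuation, combined with shared noise, makes spikes unrecoverable.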
Biological tissue is often composed of cells with similar morphologies replicated throughout large volumes, and many biological applications rely on the accurate identification of these cells and their locations from image data. Here we develop a generative model that captures the regularities present in images composed of repeating elements of a few different types. Formally, the model can be described as convolutional sparse block coding. For inference we use a variant of convolutional matching pursuit adapted to block-based representations. We extend the K-SVD learning algorithm to subspaces by retaining several principal vectors from the SVD instead of just one. Good models with little cross-talk between subspaces can be obtained by learning the blocks incrementally. We perform extensive experiments on simulated images, and the inference algorithm consistently recovers a large proportion of the cells with a small number of false positives. We fit the convolutional model to noisy GCaMP6 two-photon images of spiking neurons and to Nissl-stained slices of cortical tissue and show that it recovers cell body locations without supervision. The flexibility of the block-based representation is reflected in the variability of the recovered cell shapes.
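Convolutional matching pursuit greedily explains a signal as a sparse sum of shifted templates: find the template and shift with the largest inner product against the residual, subtract it, and repeat. A 1-D sketch of that greedy loop (ignoring the block/subspace structure the paper adds on top):

```python
# 1-D convolutional matching pursuit sketch: greedily explain a signal
# as a sparse sum of shifted templates, subtracting each match in turn.
# Simplification: only positive correlations are considered.

def matching_pursuit(signal, templates, n_events):
    residual = list(signal)
    events = []
    for _ in range(n_events):
        best = None
        for k, tpl in enumerate(templates):
            for t in range(len(residual) - len(tpl) + 1):
                a = sum(residual[t + i] * tpl[i] for i in range(len(tpl)))
                if best is None or a > best[0]:
                    best = (a, k, t)
        a, k, t = best
        amp = a / sum(v * v for v in templates[k])   # least-squares amplitude
        for i, v in enumerate(templates[k]):
            residual[t + i] -= amp * v               # explain away the match
        events.append((k, t))
    return events

# A signal containing template 0 at shift 1 and template 1 at shift 5.
templates = [[1, 2, 1], [1, -1, 1]]
signal = [0, 1, 2, 1, 0, 2, -2, 2, 0]
events = matching_pursuit(signal, templates, 2)   # recovers both placements
```

The block-coding extension replaces each single template with a small subspace of templates, which is what lets the recovered cell shapes vary.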
Recent studies in mice have shown that orofacial behaviors drive a large fraction of neural activity across the brain. To understand the nature and function of these signals, we need better computational models to characterize the behaviors and relate them to neural activity. Here we developed Facemap, a framework consisting of a keypoint tracking algorithm and a deep neural network encoder for predicting neural activity. We used the Facemap keypoints as input for the deep neural network to predict the activity of ∼50,000 simultaneously-recorded neurons, and in visual cortex we doubled the amount of explained variance compared to previous methods. Our keypoint tracking algorithm was more accurate than existing pose estimation tools, while the inference speed was several times faster, making it a powerful tool for closed-loop behavioral experiments. The Facemap tracker was easy to adapt to data from new labs, requiring as few as 10 annotated frames for near-optimal performance. We used Facemap to find that the neuronal activity clusters that were highly driven by behaviors were more spatially spread out across cortex. We also found that the deep keypoint features inferred by the model had time-asymmetrical state dynamics that were not apparent in the raw keypoint data. In summary, Facemap provides a stepping stone towards understanding the function of the brainwide neural signals and their relation to behavior.
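The encoding-model step regresses neural activity on tracked behavior. Facemap itself uses a deep network over many keypoints; the one-variable ordinary-least-squares fit below is only a hypothetical illustration of the same idea at its simplest:

```python
# Toy encoding model: predict one neuron's activity from one
# behavioral keypoint trace with ordinary least squares.
# (Illustration only; the paper's encoder is a deep network.)

def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

keypoint = [0.0, 1.0, 2.0, 3.0, 4.0]   # e.g. whisker-pad position over time
activity = [1.0, 3.0, 5.0, 7.0, 9.0]   # exactly 2 * keypoint + 1
slope, intercept = fit_line(keypoint, activity)   # → (2.0, 1.0)
```

"Explained variance" then measures how much of the neuron's fluctuation such a behavioral predictor accounts for; the paper's claim is that deep keypoint features roughly double this figure in visual cortex.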
New silicon technology is enabling large-scale electrophysiological recordings in vivo from hundreds to thousands of channels. Interpreting these recordings requires scalable and accurate automated methods for spike sorting, which should minimize the time required for manual curation of the results. Here we introduce KiloSort, a new integrated spike sorting framework that uses template matching both during spike detection and during spike clustering. KiloSort models the electrical voltage as a sum of template waveforms triggered on the spike times, which allows overlapping spikes to be identified and resolved. Unlike previous algorithms that compress the data with PCA, KiloSort operates on the raw data, which allows it to construct a more accurate model of the waveforms. Processing times are faster than in previous algorithms thanks to batch-based optimization on GPUs. We compare KiloSort to an established algorithm and show favorable performance, at much reduced processing times. A novel post-clustering merging step based on the continuity of the templates further substantially reduced the number of manual operations required for neurons with near-zero error rates, paving the way for fully automated spike sorting of multichannel electrode recordings.
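The "sum of templates" model is what lets overlapping spikes be resolved: subtracting a matched template from the voltage uncovers any second spike hiding underneath it. A 1-D sketch of that greedy detect-and-subtract loop (an illustration of the principle, not KiloSort's GPU implementation):

```python
# Greedy template-matching sketch for spike sorting: while any template
# explains enough variance, subtract the best fit from the residual.
# Overlapping spikes are resolved because earlier subtractions uncover
# later matches. Simplification: positive amplitudes only.

def sort_spikes(trace, templates, threshold=4.0):
    residual = list(trace)
    found = []
    while True:
        best = None
        for k, tpl in enumerate(templates):
            norm = sum(v * v for v in tpl)
            for t in range(len(residual) - len(tpl) + 1):
                amp = sum(residual[t + i] * tpl[i]
                          for i in range(len(tpl))) / norm
                score = amp * amp * norm   # variance explained by this fit
                if amp > 0 and score > threshold and \
                        (best is None or score > best[0]):
                    best = (score, k, t, amp)
        if best is None:
            return sorted(found)           # (time, template) pairs
        _, k, t, amp = best
        for i, v in enumerate(templates[k]):
            residual[t + i] -= amp * v
        found.append((t, k))

# Template 0 at t = 1 overlaps template 1 at t = 2 in the same trace.
templates = [[2, 2], [2, -2]]
trace = [0, 2, 4, -2, 0]
events = sort_spikes(trace, templates)   # both events recovered
```

A PCA-compressed pipeline working on the summed waveform alone would struggle here, since the overlap looks like neither template on its own.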