Pachitariu Lab / Publications

15 Publications

Showing 1-10 of 15 results
    11/07/22 | Cellpose 2.0: how to train your own model.
    Pachitariu M, Stringer C
    Nature Methods. 2022 Nov 07;19(12):1634-41. doi: 10.1038/s41592-022-01663-4

    Pretrained neural network models for biological segmentation can provide good out-of-the-box results for many image types. However, such models do not allow users to adapt the segmentation style to their specific needs and can perform suboptimally for test images that are very different from the training images. Here we introduce Cellpose 2.0, a new package that includes an ensemble of diverse pretrained models as well as a human-in-the-loop pipeline for rapid prototyping of new custom models. We show that models pretrained on the Cellpose dataset can be fine-tuned with only 500-1,000 user-annotated regions of interest (ROI) to perform nearly as well as models trained on entire datasets with up to 200,000 ROI. A human-in-the-loop approach further reduced the required user annotation to 100-200 ROI, while maintaining high-quality segmentations. We provide software tools such as an annotation graphical user interface, a model zoo and a human-in-the-loop pipeline to facilitate the adoption of Cellpose 2.0.

    View Publication Page
    02/12/24 | Cellpose3: one-click image restoration for improved cellular segmentation.
    Stringer C, Pachitariu M
    bioRxiv. 2024 Feb 12. doi: 10.1101/2024.02.10.579780

    Generalist methods for cellular segmentation have good out-of-the-box performance on a variety of image types. However, existing methods struggle for images that are degraded by noise, blurred or undersampled, all of which are common in microscopy. We focused the development of Cellpose3 on addressing these cases, and here we demonstrate substantial out-of-the-box gains in segmentation and image quality for noisy, blurry or undersampled images. Unlike previous approaches, which train models to restore pixel values, we trained Cellpose3 to output images that are well-segmented by a generalist segmentation model, while maintaining perceptual similarity to the target images. Furthermore, we trained the restoration models on a large, varied collection of datasets, thus ensuring good generalization to user images. We provide these tools as “one-click” buttons inside the graphical interface of Cellpose as well as in the Cellpose API.

    View Publication Page
    02/03/20 | Cellpose: a generalist algorithm for cellular segmentation
    Stringer C, Michaelos M, Pachitariu M
    bioRxiv. 2020 Feb 03. doi: 10.1101/2020.02.02.931238

    Many biological applications require the segmentation of cell bodies, membranes and nuclei from microscopy images. Deep learning has enabled great progress on this problem, but current methods are specialized for images that have large training datasets. Here we introduce a generalist, deep learning-based segmentation algorithm called Cellpose, which can very precisely segment a wide range of image types out-of-the-box and does not require model retraining or parameter adjustments. We trained Cellpose on a new dataset of highly-varied images of cells, containing over 70,000 segmented objects. To support community contributions to the training data, we developed software for manual labelling and for curation of the automated results, with optional direct upload to our data repository. Periodically retraining the model on the community-contributed data will ensure that Cellpose improves constantly.

    View Publication Page
    01/07/21 | Cellpose: a generalist algorithm for cellular segmentation.
    Stringer C, Wang T, Michaelos M, Pachitariu M
    Nature Methods. 2021 Jan 07;18(1):100-106. doi: 10.1038/s41592-020-01018-x

    Many biological applications require the segmentation of cell bodies, membranes and nuclei from microscopy images. Deep learning has enabled great progress on this problem, but current methods are specialized for images that have large training datasets. Here we introduce a generalist, deep learning-based segmentation method called Cellpose, which can precisely segment cells from a wide range of image types and does not require model retraining or parameter adjustments. Cellpose was trained on a new dataset of highly varied images of cells, containing over 70,000 segmented objects. We also demonstrate a three-dimensional (3D) extension of Cellpose that reuses the two-dimensional (2D) model and does not require 3D-labeled data. To support community contributions to the training data, we developed software for manual labeling and for curation of the automated results. Periodically retraining the model on the community-contributed data will ensure that Cellpose improves constantly.
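Cellpose recovers masks by predicting a spatial flow field that points toward each cell's center and then grouping the pixels that flow to the same fixed point. The toy below sketches that grouping step in one dimension using only the standard library; it is a hypothetical simplification, not the lab's implementation (the real method also predicts a cell probability to exclude background pixels).

```python
def follow_flow(flow, start, n_steps=50):
    """Step along the flow from `start` until reaching a fixed point."""
    pos = start
    for _ in range(n_steps):
        if flow[pos] == 0:        # fixed point: a cell center
            break
        pos += flow[pos]
    return pos

def masks_from_flow(flow):
    """Group pixels by the fixed point they converge to."""
    labels = {}                   # center index -> mask label
    masks = []
    for i in range(len(flow)):
        center = follow_flow(flow, i)
        labels.setdefault(center, len(labels) + 1)
        masks.append(labels[center])
    return masks

# 1D "flow field": +1 points right, -1 points left, 0 marks a cell center.
# Two cells, with centers at indices 2 and 7.
flow = [1, 1, 0, -1, -1, 1, 1, 0, -1, -1]
print(masks_from_flow(flow))   # [1, 1, 1, 1, 1, 2, 2, 2, 2, 2]
```

Because grouping happens by convergence point rather than by a fixed shape template, the same procedure handles cells of very different sizes and shapes, which is what makes the approach a generalist one.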

    View Publication Page
    04/01/19 | Computational processing of neural recordings from calcium imaging data.
    Stringer C, Pachitariu M
    Current Opinion in Neurobiology. 2019 Apr;55:22-31. doi: 10.1016/j.conb.2018.11.005

    Electrophysiology has long been the workhorse of neuroscience, allowing scientists to record with millisecond precision the action potentials generated by neurons in vivo. Recently, calcium imaging of fluorescent indicators has emerged as a powerful alternative. This technique has its own strengths and weaknesses and unique data processing problems and interpretation confounds. Here we review the computational methods that convert raw calcium movies to estimates of single neuron spike times with minimal human supervision. By computationally addressing the weaknesses of calcium imaging, these methods hold the promise of significantly improving data quality. We also introduce a new metric to evaluate the output of these processing pipelines, which is based on the cluster isolation distance routinely used in electrophysiology.
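The isolation-distance idea mentioned above can be sketched in a deliberately simplified form: for a cluster of n points, ask how far from the cluster center you must go before you have collected n points that do not belong to the cluster. This stdlib toy uses plain distances on one feature; real implementations use the squared Mahalanobis distance under the cluster's covariance.

```python
def isolation_distance(cluster, others):
    """Distance from the cluster center to the n-th closest outside point,
    where n is the cluster size. Larger values mean better isolation."""
    n = len(cluster)
    if len(others) < n:
        return float('inf')      # too few outside points to compare against
    center = sum(cluster) / n
    d = sorted(abs(x - center) for x in others)
    return d[n - 1]

cluster = [9, 10, 11]            # a tight cluster around 10
others = [30, 35, 40, 50]        # well-separated outside points
print(isolation_distance(cluster, others))   # 30.0
```

A well-isolated cluster yields a large value; contamination from nearby outside points drives the value down, which is what makes the metric useful for scoring pipeline output.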

    View Publication Page
    02/27/24 | Distinct streams for supervised and unsupervised learning in the visual cortex
    Zhong L, Baptista S, Gattoni R, Arnold J, Flickinger D, Stringer C, Pachitariu M
    bioRxiv. 2024 Feb 27. doi: 10.1101/2024.02.25.581990

    Representation learning in neural networks may be implemented with supervised or unsupervised algorithms, distinguished by the availability of feedback. In sensory cortex, perceptual learning drives neural plasticity, but it is not known if this is due to supervised or unsupervised learning. Here we recorded populations of up to 90,000 neurons simultaneously from the primary visual cortex (V1) and higher visual areas (HVA), while mice learned multiple tasks as well as during unrewarded exposure to the same stimuli. Similar to previous studies, we found that neural changes in task mice were correlated with their behavioral learning. However, the neural changes were mostly replicated in mice with unrewarded exposure, suggesting that the changes were in fact due to unsupervised learning. The neural plasticity was concentrated in the medial HVAs and obeyed visual, rather than spatial, learning rules. In task mice only, we found a ramping reward prediction signal in anterior HVAs, potentially involved in supervised learning. Our neural results predict that unsupervised learning may accelerate subsequent task learning, a prediction which we validated with behavioral experiments.

    View Publication Page
    11/20/23 | Facemap: a framework for modeling neural activity based on orofacial tracking
    Syeda A, Zhong L, Tung R, Long W, Pachitariu M, Stringer C
    Nature Neuroscience. 2023 Nov 20. doi: 10.1038/s41593-023-01490-6

    Recent studies in mice have shown that orofacial behaviors drive a large fraction of neural activity across the brain. To understand the nature and function of these signals, we need better computational models to characterize the behaviors and relate them to neural activity. Here we developed Facemap, a framework consisting of a keypoint tracking algorithm and a deep neural network encoder for predicting neural activity. We used the Facemap keypoints as input for the deep neural network to predict the activity of ∼50,000 simultaneously-recorded neurons and in visual cortex we doubled the amount of explained variance compared to previous methods. Our keypoint tracking algorithm was more accurate than existing pose estimation tools, while the inference speed was several times faster, making it a powerful tool for closed-loop behavioral experiments. The Facemap tracker was easy to adapt to data from new labs, requiring as few as 10 annotated frames for near-optimal performance. We used Facemap to find that the neuronal activity clusters which were highly driven by behaviors were more spatially spread-out across cortex. We also found that the deep keypoint features inferred by the model had time-asymmetrical state dynamics that were not apparent in the raw keypoint data. In summary, Facemap provides a stepping stone towards understanding the function of the brainwide neural signals and their relation to behavior.
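The encoder idea can be shown in miniature with a hypothetical linear stand-in for the paper's deep network: predict one neuron's activity from one tracked keypoint trace by least squares, and report the fraction of variance explained. Everything below is illustrative, stdlib-only code, not the Facemap implementation.

```python
def explained_variance(keypoint, activity):
    """R^2 of a one-feature least-squares prediction of `activity`."""
    n = len(keypoint)
    mx, my = sum(keypoint) / n, sum(activity) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(keypoint, activity))
            / sum((x - mx) ** 2 for x in keypoint))
    pred = [my + beta * (x - mx) for x in keypoint]
    ss_res = sum((y - p) ** 2 for y, p in zip(activity, pred))
    ss_tot = sum((y - my) ** 2 for y in activity)
    return 1 - ss_res / ss_tot

whisker = [0.0, 1.0, 2.0, 3.0, 4.0]    # toy keypoint trace
neuron = [0.1, 2.1, 3.9, 6.1, 7.9]     # roughly 2x the keypoint, plus noise
print(round(explained_variance(whisker, neuron), 3))   # close to 1
```

The deep network in the paper plays the same role as `beta` here, but captures the nonlinear, time-lagged structure that a single linear coefficient cannot.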

    View Publication Page
    06/26/19 | High-dimensional geometry of population responses in visual cortex.
    Stringer C, Pachitariu M, Steinmetz NA, Carandini M, Harris KD
    Nature. 2019 Jun 26;571(7765):361-65. doi: 10.1038/s41586-019-1346-5

    A neuronal population encodes information most efficiently when its activity is uncorrelated and high-dimensional, and most robustly when its activity is correlated and lower-dimensional. Here, we analyzed the correlation structure of natural image coding, in large visual cortical populations recorded from awake mice. Evoked population activity was high dimensional, with correlations obeying an unexpected power-law: the n-th principal component variance scaled as 1/n. This was not inherited from the 1/f spectrum of natural images, because it persisted after stimulus whitening. We proved mathematically that the variance spectrum must decay at least this fast if a population code is smooth, i.e. if small changes in input cannot dominate population activity. The theory also predicts larger power-law exponents for lower-dimensional stimulus ensembles, which we validated experimentally. These results suggest that coding smoothness represents a fundamental constraint governing correlations in neural population codes.
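The 1/n power law has a concrete signature: plotting log(variance) against log(n) gives a line of slope -1. A minimal stdlib check, fitting the exponent by ordinary least squares on an exact 1/n spectrum (with real data, the variances would come from principal component analysis of the evoked activity):

```python
import math

def loglog_slope(variances):
    """Least-squares slope of log(variance) vs log(rank)."""
    xs = [math.log(n + 1) for n in range(len(variances))]
    ys = [math.log(v) for v in variances]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

v = [2.0 / n for n in range(1, 1001)]   # exact 1/n spectrum
print(round(loglog_slope(v), 6))        # -1.0
```

A faster decay (more negative slope) corresponds to a smoother, lower-dimensional code; the paper's result is that evoked activity sits right at this boundary.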

    View Publication Page
    05/13/21 | High-precision coding in visual cortex.
    Stringer C, Michaelos M, Tsyboulski D, Lindo SE, Pachitariu M
    Cell. 2021 May 13;184(10):2767-78. doi: 10.1016/j.cell.2021.03.042

    Individual neurons in visual cortex provide the brain with unreliable estimates of visual features. It is not known whether the single-neuron variability is correlated across large neural populations, thus impairing the global encoding of stimuli. We recorded simultaneously from up to 50,000 neurons in mouse primary visual cortex (V1) and in higher order visual areas and measured stimulus discrimination thresholds of 0.35° and 0.37°, respectively, in an orientation decoding task. These neural thresholds were almost 100 times smaller than the behavioral discrimination thresholds reported in mice. This discrepancy could not be explained by stimulus properties or arousal states. Furthermore, behavioral variability during a sensory discrimination task could not be explained by neural variability in V1. Instead, behavior-related neural activity arose dynamically across a network of non-sensory brain areas. These results imply that perceptual discrimination in mice is limited by downstream decoders, not by neural noise in sensory representations.

    View Publication Page
    07/28/23 | Rastermap: a discovery method for neural population recordings
    Stringer C, Zhong L, Syeda A, Du F, Pachitariu M
    bioRxiv. 2023 Jul 28. doi: 10.1101/2023.07.25.550571

    Neurophysiology has long progressed through exploratory experiments and chance discoveries. Anecdotes abound of researchers setting up experiments while listening to spikes in real time and observing a pattern of consistent firing when certain stimuli or behaviors happened. With the advent of large-scale recordings, such close observation of data has become harder because high-dimensional spaces are impenetrable to our pattern-finding intuitions. To help ourselves find patterns in neural data, our lab has been openly developing a visualization framework known as “Rastermap” over the past five years. Rastermap takes advantage of a new global optimization algorithm for sorting neural responses along a one-dimensional manifold. Displayed as a raster plot, the sorted neurons show a variety of activity patterns, which can be more easily identified and interpreted. We first benchmark Rastermap on realistic simulations with multiplexed cognitive variables. Then we demonstrate it on recordings of tens of thousands of neurons from mouse visual and sensorimotor cortex during spontaneous, stimulus-evoked and task-evoked epochs, as well as on whole-brain zebrafish recordings, widefield calcium imaging data, population recordings from rat hippocampus and artificial neural networks. Finally, we illustrate high-dimensional scenarios where Rastermap and similar algorithms cannot be used effectively.
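The sorting idea can be illustrated with a heavily simplified, hypothetical stand-in: order neurons along one dimension so that neighbors in the ordering have similar activity. Rastermap solves a global optimization; the stdlib toy below merely chains each neuron to its most-correlated unplaced neighbor, which is already enough to bring correlated neurons together in the raster.

```python
def corr(a, b):
    """Pearson correlation of two equal-length activity traces."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def greedy_sort(traces):
    """Greedy 1D ordering: repeatedly append the neuron most correlated
    with the last one placed."""
    order = [0]
    remaining = set(range(1, len(traces)))
    while remaining:
        last = order[-1]
        nxt = max(remaining, key=lambda j: corr(traces[last], traces[j]))
        order.append(nxt)
        remaining.remove(nxt)
    return order

traces = [
    [3, 1, 3, 1, 2, 2],   # neuron 0: fast alternation
    [2, 2, 2, 2, 3, 1],   # neuron 1: late up-down event
    [3, 1, 3, 1, 2, 3],   # neuron 2: like neuron 0
    [2, 2, 2, 2, 3, 0],   # neuron 3: like neuron 1
]
print(greedy_sort(traces))   # [0, 2, 1, 3]: similar neurons end up adjacent
```

Displayed as a raster plot in this order, the two activity patterns form contiguous bands, which is the visual effect the framework exploits at the scale of tens of thousands of neurons.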

    View Publication Page