Search Results

19 Janelia Publications

Showing 1-10 of 19 results
    06/18/25 | Unsupervised pretraining in biological neural networks
    Lin Zhong, Scott Baptista, Rachel Gattoni, Jon Arnold, Daniel Flickinger, Carsen Stringer, Marius Pachitariu
    Nature. 2025 Jun 18. doi: 10.1038/s41586-025-09180-y

    Representation learning in neural networks may be implemented with supervised or unsupervised algorithms, distinguished by the availability of instruction. In the sensory cortex, perceptual learning drives neural plasticity [1-13], but it is not known whether this is due to supervised or unsupervised learning. Here we recorded populations of up to 90,000 neurons simultaneously from the primary visual cortex (V1) and higher visual areas (HVAs) while mice learned multiple tasks, as well as during unrewarded exposure to the same stimuli. Similar to previous studies, we found that neural changes in task mice were correlated with their behavioural learning. However, the neural changes were mostly replicated in mice with unrewarded exposure, suggesting that the changes were in fact due to unsupervised learning. The neural plasticity was highest in the medial HVAs and obeyed visual, rather than spatial, learning rules. In task mice only, we found a ramping reward-prediction signal in anterior HVAs, potentially involved in supervised learning. Our neural results predict that unsupervised learning may accelerate subsequent task learning, a prediction that we validated with behavioural experiments.

    Preprint: https://www.biorxiv.org/content/early/2024/02/27/2024.02.25.581990

    View Publication Page
    05/01/25 | Cellpose-SAM: superhuman generalization for cellular segmentation
    Pachitariu M, Rariden M, Stringer C
    bioRxiv. 2025 May 1. doi: 10.1101/2025.04.28.651001

    Modern algorithms for biological segmentation can match inter-human agreement in annotation quality. This, however, is not a performance bound: a hypothetical human-consensus segmentation could cut error rates in half. To obtain a model that generalizes better, we adapted the pretrained transformer backbone of a foundation model (SAM) to the Cellpose framework. The resulting Cellpose-SAM model substantially outperforms inter-human agreement and approaches the human-consensus bound. We increase generalization performance further by making the model robust to channel shuffling, cell size, shot noise, downsampling, isotropic and anisotropic blur. The new model can be readily adopted into the Cellpose ecosystem, which includes finetuning, human-in-the-loop training, image restoration and 3D segmentation approaches. These properties establish Cellpose-SAM as a foundation model for biological segmentation.

    View Publication Page
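The robustness properties listed above (channel shuffling, shot noise, downsampling) can be illustrated with a small augmentation sketch. This is not the Cellpose-SAM training code; the photon budget and the 2x block-averaging are assumptions made purely for the example.

```python
import numpy as np

def augment(img, rng):
    """Apply robustness augmentations of the kind described in the abstract:
    channel shuffling, Poisson (shot) noise, and 2x downsampling.
    `img` is a float array of shape (channels, H, W) with values in [0, 1]."""
    # Shuffle channel order so a model cannot rely on a fixed channel layout.
    img = img[rng.permutation(img.shape[0])]
    # Shot noise: resample each pixel from a Poisson distribution.
    photons = 50.0  # assumed photon budget; lower values mean noisier images
    img = rng.poisson(img * photons) / photons
    # Downsample by a factor of 2 with simple 2x2 block averaging.
    c, h, w = img.shape
    img = img[:, :h // 2 * 2, :w // 2 * 2]
    img = img.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))
    return img

rng = np.random.default_rng(0)
x = rng.random((2, 64, 64))
y = augment(x, rng)
print(y.shape)  # (2, 32, 32)
```

Training on randomly augmented copies like these is what forces a segmentation model to generalize across channel orders, noise levels and resolutions.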
    02/12/25 | Cellpose3: one-click image restoration for improved cellular segmentation.
    Stringer C, Pachitariu M
    Nat Methods. 2025 Feb 12. doi: 10.1038/s41592-025-02595-5

    Generalist methods for cellular segmentation have good out-of-the-box performance on a variety of image types; however, existing methods struggle for images that are degraded by noise, blurring or undersampling, all of which are common in microscopy. We focused the development of Cellpose3 on addressing these cases and here we demonstrate substantial out-of-the-box gains in segmentation and image quality for noisy, blurry and undersampled images. Unlike previous approaches that train models to restore pixel values, we trained Cellpose3 to output images that are well segmented by a generalist segmentation model, while maintaining perceptual similarity to the target images. Furthermore, we trained the restoration models on a large, varied collection of datasets, thus ensuring good generalization to user images. We provide these tools as 'one-click' buttons inside the graphical interface of Cellpose as well as in the Cellpose API.

    View Publication Page
    01/10/25 | A critical initialization for biological neural networks
    Pachitariu M, Zhong L, Gracias A, Minisi A, Lopez C, Stringer C
    bioRxiv. 2025 Jan 10. doi: 10.1101/2025.01.10.632397

    Artificial neural networks learn faster if they are initialized well. Good initializations can generate high-dimensional macroscopic dynamics with long timescales. It is not known if biological neural networks have similar properties. Here we show that the eigenvalue spectrum and dynamical properties of large-scale neural recordings in mice (two-photon and electrophysiology) are similar to those produced by linear dynamics governed by a random symmetric matrix that is critically normalized. An exception was hippocampal area CA1: population activity in this area resembled an efficient, uncorrelated neural code, which may be optimized for information storage capacity. Global emergent activity modes persisted in simulations with sparse, clustered or spatial connectivity. We hypothesize that the spontaneous neural activity reflects a critical initialization of whole-brain neural circuits that is optimized for learning time-dependent tasks.

    View Publication Page
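The central claim, that a critically normalized random symmetric matrix yields real eigenvalues and long emergent timescales, can be checked in a few lines of numpy. This is an illustration of the concept, not the paper's analysis; the matrix size and the 0.99 spectral radius are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500

# Random symmetric matrix, normalized so the spectral radius sits just
# below 1 ("critical": the slowest modes barely decay).
A = rng.standard_normal((N, N))
J = (A + A.T) / 2
J *= 0.99 / np.abs(np.linalg.eigvalsh(J)).max()
eig = np.linalg.eigvalsh(J)          # symmetric => all eigenvalues are real

# Under linear dynamics x_{t+1} = J x_t, a mode with eigenvalue lam decays
# with timescale tau = -1 / log|lam|, so modes at the spectral edge are far
# slower than typical ones -- the long macroscopic timescales of the abstract.
tau = -1.0 / np.log(np.abs(eig))
print(round(np.abs(eig).max(), 2))      # 0.99: the critical spectral edge
print(tau.max() > 10 * np.median(tau))  # True: edge modes dominate slow dynamics
```

A subcritical normalization (spectral radius well below 1) kills the slow modes, while a supercritical one makes the dynamics unstable; criticality is the regime where long timescales emerge without divergence.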
    11/08/24 | Analysis methods for large-scale neuronal recordings.
    Stringer C, Pachitariu M
    Science. 2024 Nov 08;386(6722):eadp7429. doi: 10.1126/science.adp7429

    Simultaneous recordings from hundreds or thousands of neurons are becoming routine because of innovations in instrumentation, molecular tools, and data processing software. Such recordings can be analyzed with data science methods, but it is not immediately clear what methods to use or how to adapt them for neuroscience applications. We review, categorize, and illustrate diverse analysis methods for neural population recordings and describe how these methods have been used to make progress on longstanding questions in neuroscience. We review a variety of approaches, ranging from the mathematically simple to the complex, from exploratory to hypothesis-driven, and from recently developed to more established methods. We also illustrate some of the common statistical pitfalls in analyzing large-scale neural data.

    View Publication Page
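One classic pitfall of the kind the review warns about: temporal smoothing inflates chance correlations between unrelated signals. The simulation below uses made-up numbers purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_pairs, win = 500, 200, 20

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

def smooth(x, win):
    # Boxcar smoothing, a common preprocessing step for noisy traces.
    return np.convolve(x, np.ones(win) / win, mode="valid")

raw, smoothed = [], []
for _ in range(n_pairs):
    x, y = rng.standard_normal(T), rng.standard_normal(T)  # independent!
    raw.append(corr(x, y))
    smoothed.append(corr(smooth(x, win), smooth(y, win)))

# Smoothing reduces the effective number of independent samples, so chance
# correlations between unrelated signals become much larger in magnitude.
print(np.std(raw) < np.std(smoothed))  # True
```

The practical consequence: significance tests on correlations between smoothed neural and behavioral signals must account for the reduced effective sample size, for example with shuffle or session-permutation controls.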
    10/16/24 | Rastermap: a discovery method for neural population recordings
    Carsen Stringer, Lin Zhong, Atika Syeda, Fengtong Du, Marius Pachitariu
    Nat Neurosci. 2024 Oct 16. doi: 10.1038/s41593-024-01783-4

    Neurophysiology has long progressed through exploratory experiments and chance discoveries. Anecdotes abound of researchers listening to spikes in real time and noticing patterns of activity related to ongoing stimuli or behaviors. With the advent of large-scale recordings, such close observation of data has become difficult. To find patterns in large-scale neural data, we developed 'Rastermap', a visualization method that displays neurons as a raster plot after sorting them along a one-dimensional axis based on their activity patterns. We benchmarked Rastermap on realistic simulations and then used it to explore recordings of tens of thousands of neurons from mouse cortex during spontaneous, stimulus-evoked and task-evoked epochs. We also applied Rastermap to whole-brain zebrafish recordings; to wide-field imaging data; to electrophysiological recordings in rat hippocampus, monkey frontal cortex and various cortical and subcortical regions in mice; and to artificial neural networks. Finally, we illustrate high-dimensional scenarios where Rastermap and similar algorithms cannot be used effectively.

    View Publication Page
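The core idea, sorting neurons along one axis so that structure becomes visible in the raster, can be illustrated with a toy simulation. Note that this sketch sorts by peak time only; the actual Rastermap algorithm sorts by similarity of full activity patterns and handles far more general structure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, T = 100, 400

# Simulate a raster where activity sweeps sequentially across the
# population: neuron i fires around time i * T / n_neurons.
t = np.arange(T)
peaks = np.linspace(0, T, n_neurons, endpoint=False)
raster = np.exp(-0.5 * ((t[None, :] - peaks[:, None]) / 10.0) ** 2)
raster += 0.1 * rng.standard_normal(raster.shape)  # measurement noise

# Shuffle neurons, as in a real recording where order is arbitrary.
perm = rng.permutation(n_neurons)
shuffled = raster[perm]

# Toy 1D embedding: sort neurons by the time of their peak activity.
order = np.argsort(shuffled.argmax(axis=1))
recovered = perm[order]

# The recovered ordering should match the ground-truth sequence (up to
# noise), making the sequential sweep visible again in the sorted raster.
print(np.corrcoef(recovered, np.arange(n_neurons))[0, 1] > 0.98)  # True
```

In the shuffled raster the sweep is invisible; after sorting, plotting `shuffled[order]` shows a clean diagonal band, which is exactly the kind of structure a Rastermap-style visualization is designed to surface.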
    07/02/24 | Towards a simplified model of primary visual cortex
    Du F, Núñez-Ochoa MA, Pachitariu M, Stringer C
    bioRxiv. 2024 Jul 02. doi: 10.1101/2024.06.30.601394

    Artificial neural networks (ANNs) have been shown to predict neural responses in primary visual cortex (V1) better than classical models. However, this performance comes at the expense of simplicity because the ANN models typically have many hidden layers with many feature maps in each layer. Here we show that ANN models of V1 can be substantially simplified while retaining high predictive power. To demonstrate this, we first recorded a new dataset of over 29,000 neurons responding to up to 65,000 natural image presentations in mouse V1. We found that ANN models required only two convolutional layers for good performance, with a relatively small first layer. We further found that we could make the second layer small without loss of performance, by fitting a separate "minimodel" to each neuron. Similar simplifications applied to models of monkey V1 neurons. We show that these relatively simple models can nonetheless be useful for tasks such as object and visual texture recognition, and we use the models to gain insight into how texture invariance arises in biological neurons.

    View Publication Page
    04/08/24 | Spike sorting with Kilosort4
    Pachitariu M, Sridhar S, Pennington J, Stringer C
    Nat Methods. 2024 Apr 08. doi: 10.1038/s41592-024-02232-7

    Spike sorting is the computational process of extracting the firing times of single neurons from recordings of local electrical fields. This is an important but hard problem in neuroscience, made complicated by the nonstationarity of the recordings and the dense overlap in electrical fields between nearby neurons. To address the spike-sorting problem, we have been openly developing the Kilosort framework. Here we describe the various algorithmic steps introduced in different versions of Kilosort. We also report the development of Kilosort4, a version with substantially improved performance due to clustering algorithms inspired by graph-based approaches. To test the performance of Kilosort, we developed a realistic simulation framework that uses densely sampled electrical fields from real experiments to generate nonstationary spike waveforms and realistic noise. We found that nearly all versions of Kilosort outperformed other algorithms on a variety of simulated conditions and that Kilosort4 performed best in all cases, correctly identifying even neurons with low amplitudes and small spatial extents in high drift conditions.

    View Publication Page
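The first step shared by Kilosort and most spike sorters, detecting threshold crossings against a robust noise estimate, can be sketched as follows. The sampling rate, spike amplitude and threshold multiplier are assumed values for the toy example, not Kilosort's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 30_000                       # assumed sampling rate (Hz)
trace = rng.standard_normal(fs)   # one second of baseline noise, std = 1

# Insert negative-going spikes (extracellular spikes are usually negative).
true_times = np.arange(1_000, fs - 1_000, 3_000)
trace[true_times] -= 10.0

# Flag samples crossing a threshold set at a multiple of a robust
# (median-based) noise estimate, which is insensitive to the spikes.
noise = np.median(np.abs(trace)) / 0.6745   # MAD-based noise estimate
thresh = -5.0 * noise
below = np.flatnonzero(trace < thresh)
print(np.all(np.isin(true_times, below)))   # True: every inserted spike is found
```

Everything that makes spike sorting hard, and that Kilosort4's clustering addresses, happens after this step: assigning each detected event to a neuron despite drift and overlapping waveforms.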
    04/07/24 | Transformers do not outperform Cellpose
    Carsen Stringer, Marius Pachitariu
    bioRxiv. 2024 Apr 7. doi: 10.1101/2024.04.06.587952

    In a recent publication, Ma et al. [1] claim that a transformer-based cellular segmentation method called Mediar [2], which won a NeurIPS challenge, outperforms Cellpose [3] (0.897 vs 0.543 median F1 score). Here we show that this result was obtained by artificially impairing Cellpose in multiple ways. When we removed these impairments, Cellpose outperformed Mediar (0.861 vs 0.826 median F1 score on the updated test set). To further investigate the performance of transformers for cellular segmentation, we replaced the Cellpose backbone with a transformer. The transformer-Cellpose model also did not outperform the standard Cellpose (0.848 median F1 test score). Our results suggest that transformers do not advance the state of the art in cellular segmentation.

    View Publication Page
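The median F1 scores quoted above come from instance matching: predicted masks are matched to ground-truth masks at an IoU threshold (commonly 0.5), and F1 = 2TP / (2TP + FP + FN). A minimal sketch with toy label images (greedy matching, for illustration only):

```python
import numpy as np

def masks_to_sets(labels):
    """Convert a labeled image (0 = background) to {id: set of pixels}."""
    return {i: set(zip(*np.where(labels == i))) for i in np.unique(labels) if i != 0}

def f1_score(gt_labels, pred_labels, iou_thresh=0.5):
    gt, pred = masks_to_sets(gt_labels), masks_to_sets(pred_labels)
    tp, used = 0, set()
    for g in gt.values():
        for pid, p in pred.items():
            if pid in used:
                continue
            iou = len(g & p) / len(g | p)   # intersection over union
            if iou >= iou_thresh:
                tp += 1
                used.add(pid)
                break
    fp, fn = len(pred) - tp, len(gt) - tp
    return 2 * tp / (2 * tp + fp + fn)

# Toy example: two ground-truth cells; the prediction finds one of them
# exactly and hallucinates one extra object.
gt = np.zeros((8, 8), dtype=int)
gt[1:4, 1:4] = 1      # cell 1
gt[5:8, 5:8] = 2      # cell 2
pred = np.zeros((8, 8), dtype=int)
pred[1:4, 1:4] = 1    # matches cell 1 (IoU = 1)
pred[0:2, 6:8] = 2    # spurious detection
print(f1_score(gt, pred))  # 2*1 / (2*1 + 1 + 1) = 0.5
```

Benchmark details like mask preprocessing and the matching rule directly move this number, which is why seemingly small evaluation choices can flip a comparison between methods.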
    11/20/23 | Facemap: a framework for modeling neural activity based on orofacial tracking
    Atika Syeda, Lin Zhong, Renee Tung, Will Long, Marius Pachitariu, Carsen Stringer
    Nature Neuroscience. 2023 Nov 20. doi: 10.1038/s41593-023-01490-6

    Recent studies in mice have shown that orofacial behaviors drive a large fraction of neural activity across the brain. To understand the nature and function of these signals, we need better computational models to characterize the behaviors and relate them to neural activity. Here we developed Facemap, a framework consisting of a keypoint tracking algorithm and a deep neural network encoder for predicting neural activity. We used the Facemap keypoints as input for the deep neural network to predict the activity of ∼50,000 simultaneously recorded neurons, and in visual cortex we doubled the amount of explained variance compared to previous methods. Our keypoint tracking algorithm was more accurate than existing pose estimation tools, while the inference speed was several times faster, making it a powerful tool for closed-loop behavioral experiments. The Facemap tracker was easy to adapt to data from new labs, requiring as few as 10 annotated frames for near-optimal performance. We used Facemap to find that the neuronal activity clusters that were highly driven by behaviors were more spatially spread out across cortex. We also found that the deep keypoint features inferred by the model had time-asymmetrical state dynamics that were not apparent in the raw keypoint data. In summary, Facemap provides a stepping stone towards understanding the function of brainwide neural signals and their relation to behavior.

    View Publication Page
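The "explained variance" being doubled refers to the standard fraction-of-variance-explained metric on held-out data. A toy sketch with simulated keypoints and a linear encoder (all numbers are made up, and Facemap itself uses a deep network encoder rather than this linear fit):

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_neurons, n_keypoints = 1000, 50, 10

# Toy data: neural activity is partly a linear readout of behavior
# ("keypoints") plus private noise, mimicking behavior-driven signals.
keypoints = rng.standard_normal((T, n_keypoints))
readout = rng.standard_normal((n_keypoints, n_neurons))
neural = keypoints @ readout + 2.0 * rng.standard_normal((T, n_neurons))

# Fit a linear encoder on a train split, evaluate on held-out data.
train, test = slice(0, 800), slice(800, 1000)
W, *_ = np.linalg.lstsq(keypoints[train], neural[train], rcond=None)
pred = keypoints[test] @ W

# Fraction of variance explained on held-out data (1 = perfect prediction).
resid = np.var(neural[test] - pred)
fve = 1.0 - resid / np.var(neural[test])
print(0.5 < fve < 0.9)  # True for this toy setup
```

Evaluating on a held-out split is essential here: on training data the same metric is biased upward, especially with flexible encoders.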