Pachitariu Lab / Publications
33 Publications (showing 1-10)

01/07/23 | Solving the spike sorting problem with Kilosort
Marius Pachitariu, Shashwat Sridhar, Carsen Stringer
bioRxiv. 2023 Jan 07. doi: 10.1101/2023.01.07.523036

Spike sorting is the computational process of extracting the firing times of single neurons from recordings of local electrical fields. This is an important but hard problem in neuroscience, complicated by the non-stationarity of the recordings and the dense overlap in electrical fields between nearby neurons. To solve the spike sorting problem, we have continuously developed over the past eight years a framework known as Kilosort. This paper describes the various algorithmic steps introduced in different versions of Kilosort. We also report the development of Kilosort4, a new version with substantially improved performance due to new clustering algorithms inspired by graph-based approaches. To test the performance of Kilosort, we developed a realistic simulation framework which uses densely sampled electrical fields from real experiments to generate non-stationary spike waveforms and realistic noise. We find that nearly all versions of Kilosort outperform other algorithms on a variety of simulated conditions, and Kilosort4 performs best in all cases, correctly identifying even neurons with low amplitudes and small spatial extents in high drift conditions.
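
A minimal sketch of sorting a recording with Kilosort4 through its Python entry point is shown below. Parameter names follow the Kilosort4 documentation at the time of writing and may differ between versions; the binary file path, channel count, and probe map are placeholders, not values from the paper.

```python
# Hedged sketch: spike sorting a Neuropixels binary file with Kilosort4.
# Parameter names may differ across Kilosort versions; paths are placeholders.
from kilosort import run_kilosort

settings = {
    'n_chan_bin': 385,   # number of channels saved in the binary file
    'fs': 30000,         # sampling rate in Hz
}

# Runs the full pipeline (drift correction, template matching, clustering)
# and writes Phy-compatible results next to the data by default.
results = run_kilosort(
    settings=settings,
    probe_name='neuropixPhase3B1_kilosortChanMap.mat',  # assumed built-in probe map
    filename='/path/to/recording.bin',                  # placeholder path
)
```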

11/07/22 | Cellpose 2.0: how to train your own model.
Pachitariu M, Stringer C
Nature Methods. 2022 Nov 07;19(12):1634-41. doi: 10.1038/s41592-022-01663-4

Pretrained neural network models for biological segmentation can provide good out-of-the-box results for many image types. However, such models do not allow users to adapt the segmentation style to their specific needs and can perform suboptimally for test images that are very different from the training images. Here we introduce Cellpose 2.0, a new package that includes an ensemble of diverse pretrained models as well as a human-in-the-loop pipeline for rapid prototyping of new custom models. We show that models pretrained on the Cellpose dataset can be fine-tuned with only 500-1,000 user-annotated regions of interest (ROI) to perform nearly as well as models trained on entire datasets with up to 200,000 ROI. A human-in-the-loop approach further reduced the required user annotation to 100-200 ROI, while maintaining high-quality segmentations. We provide software tools such as an annotation graphical user interface, a model zoo and a human-in-the-loop pipeline to facilitate the adoption of Cellpose 2.0.
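
As a rough illustration of the workflow described above, the sketch below uses the Cellpose 2.x Python API to segment an image with a model-zoo network and then fine-tune it on a few user-annotated masks. Argument names follow the 2.x documentation (training moved to a separate module in later releases), and all file names are placeholders.

```python
# Hedged sketch of the Cellpose 2.x API; file names are placeholders and
# argument names may differ in later Cellpose releases.
from cellpose import models, io

model = models.CellposeModel(model_type='cyto2')        # pretrained model-zoo network
img = io.imread('example_image.tif')
masks, flows, styles = model.eval(img, diameter=30, channels=[0, 0])

# Fine-tuning on user-corrected masks (the human-in-the-loop step in the GUI).
train_imgs = [io.imread(f) for f in ['img0.tif', 'img1.tif']]
train_masks = [io.imread(f) for f in ['img0_masks.tif', 'img1_masks.tif']]
model.train(train_imgs, train_masks, channels=[0, 0],
            n_epochs=100, save_path='.', model_name='my_custom_model')
```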

11/04/22 | Facemap: a framework for modeling neural activity based on orofacial tracking
Atika Syeda, Lin Zhong, Renee Tung, Will Long, Marius Pachitariu, Carsen Stringer
bioRxiv. 2022 Nov 04. doi: 10.1101/2022.11.03.515121

Recent studies in mice have shown that orofacial behaviors drive a large fraction of neural activity across the brain. To understand the nature and function of these signals, we need better computational models to characterize the behaviors and relate them to neural activity. Here we developed Facemap, a framework consisting of a keypoint tracking algorithm and a deep neural network encoder for predicting neural activity. We used the Facemap keypoints as input for the deep neural network to predict the activity of ∼50,000 simultaneously-recorded neurons and in visual cortex we doubled the amount of explained variance compared to previous methods. Our keypoint tracking algorithm was more accurate than existing pose estimation tools, while the inference speed was several times faster, making it a powerful tool for closed-loop behavioral experiments. The Facemap tracker was easy to adapt to data from new labs, requiring as few as 10 annotated frames for near-optimal performance. We used Facemap to find that the neuronal activity clusters which were highly driven by behaviors were more spatially spread-out across cortex. We also found that the deep keypoint features inferred by the model had time-asymmetrical state dynamics that were not apparent in the raw keypoint data. In summary, Facemap provides a stepping stone towards understanding the function of the brainwide neural signals and their relation to behavior.
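
The sketch below is not the Facemap encoder itself, only a minimal illustration of the prediction problem it solves: regressing neural activity onto time-lagged keypoint traces with a linear model. Array shapes, the lag set, and the ridge penalty are arbitrary placeholders.

```python
# Illustrative sketch (not the actual Facemap network): predict neural activity
# from orofacial keypoint traces with ridge regression on time-lagged features.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_t, n_kp, n_neurons = 5000, 30, 200            # timepoints, keypoint coords, neurons
keypoints = rng.standard_normal((n_t, n_kp))    # stand-in for tracked x/y traces
neural = rng.standard_normal((n_t, n_neurons))  # stand-in for recorded activity

# Add a few time lags so the model can use brief behavioral history.
lags = [0, 1, 2, 4, 8]
X = np.concatenate([np.roll(keypoints, lag, axis=0) for lag in lags], axis=1)

train, test = slice(0, 4000), slice(4000, None)
model = Ridge(alpha=1.0).fit(X[train], neural[train])
pred = model.predict(X[test])

# Fraction of variance explained per neuron, averaged over the population.
resid = ((neural[test] - pred) ** 2).sum(0)
total = ((neural[test] - neural[test].mean(0)) ** 2).sum(0)
print('mean variance explained:', float((1 - resid / total).mean()))
```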

10/10/22 | Structured random receptive fields enable informative sensory encodings.
Pandey B, Pachitariu M, Brunton BW, Harris KD
PLoS Computational Biology. 2022 Oct 10;18(10):e1010484. doi: 10.1371/journal.pcbi.1010484

Brains must represent the outside world so that animals survive and thrive. In early sensory systems, neural populations have diverse receptive fields structured to detect important features in inputs, yet significant variability has been ignored in classical models of sensory neurons. We model neuronal receptive fields as random, variable samples from parameterized distributions and demonstrate this model in two sensory modalities using data from insect mechanosensors and mammalian primary visual cortex. Our approach leads to a significant theoretical connection between the foundational concepts of receptive fields and random features, a leading theory for understanding artificial neural networks. The modeled neurons perform a randomized wavelet transform on inputs, which removes high frequency noise and boosts the signal. Further, these random feature neurons enable learning from fewer training samples and with smaller networks in artificial tasks. This structured random model of receptive fields provides a unifying, mathematically tractable framework to understand sensory encodings across both spatial and temporal domains.
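
To make the random-feature idea concrete, the sketch below projects small images through fixed, randomly parameterized Gabor-like receptive fields and trains only a linear readout. The filter parameterization and dataset are simple stand-ins, not the distributions fitted in the paper.

```python
# Illustrative sketch: structured random receptive fields as fixed features,
# with only the linear readout trained. Parameter ranges are placeholders.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)             # 8x8 digit images, flattened
side, n_features = 8, 300

def random_gabor(rng, side):
    """Gabor-like filter with random orientation, frequency, width, and phase."""
    yy, xx = np.mgrid[0:side, 0:side] - (side - 1) / 2
    theta = rng.uniform(0, np.pi)
    freq = rng.uniform(0.2, 0.8)
    sigma = rng.uniform(1.0, 3.0)
    phase = rng.uniform(0, 2 * np.pi)
    u = xx * np.cos(theta) + yy * np.sin(theta)
    return np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * u + phase)

W = np.stack([random_gabor(rng, side).ravel() for _ in range(n_features)])
H = np.maximum(X @ W.T, 0)                      # fixed random features + ReLU

Xtr, Xte, ytr, yte = train_test_split(H, y, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(Xtr, ytr)
print('readout accuracy with fixed random receptive fields:', clf.score(Xte, yte))
```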

02/13/22 | Structured random receptive fields enable informative sensory encodings
Biraj Pandey, Marius Pachitariu, Bingni W. Brunton, Kameron Decker Harris
bioRxiv. 2022 Feb 13. doi: 10.1101/2021.09.09.459651

Brains must represent the outside world so that animals survive and thrive. In early sensory systems, neural populations have diverse receptive fields structured to detect important features in inputs, yet significant variability has been ignored in classical models of sensory neurons. We model neuronal receptive fields as random, variable samples from parametrized distributions in two sensory modalities, using data from insect mechanosensors and neurons of mammalian primary visual cortex. We show that these random feature neurons perform a randomized wavelet transform on inputs which removes high frequency noise and boosts the signal. Our result makes a significant theoretical connection between the foundational concepts of receptive fields in neuroscience and random features in artificial neural networks. Further, these random feature neurons enable learning from fewer training samples and with smaller networks in artificial tasks. This structured random model of receptive fields provides a unifying, mathematically tractable framework to understand sensory encodings across both spatial and temporal domains.

01/12/22 | Toroidal topology of population activity in grid cells.
Gardner RJ, Hermansen E, Pachitariu M, Burak Y, Baas NA, Dunn BA, Moser MB, Moser EI
Nature. 2022 Jan 12;602(7895):123-128. doi: 10.1038/s41586-021-04268-7

The medial entorhinal cortex is part of a neural system for mapping the position of an individual within a physical environment. Grid cells, a key component of this system, fire in a characteristic hexagonal pattern of locations, and are organized in modules that collectively form a population code for the animal's allocentric position. The invariance of the correlation structure of this population code across environments and behavioural states, independent of specific sensory inputs, has pointed to intrinsic, recurrently connected continuous attractor networks (CANs) as a possible substrate of the grid pattern. However, whether grid cell networks show continuous attractor dynamics, and how they interface with inputs from the environment, has remained unclear owing to the small samples of cells obtained so far. Here, using simultaneous recordings from many hundreds of grid cells and subsequent topological data analysis, we show that the joint activity of grid cells from an individual module resides on a toroidal manifold, as expected in a two-dimensional CAN. Positions on the torus correspond to positions of the moving animal in the environment. Individual cells are preferentially active at singular positions on the torus. Their positions are maintained between environments and from wakefulness to sleep, as predicted by CAN models for grid cells but not by alternative feedforward models. This demonstration of network dynamics on a toroidal manifold provides a population-level visualization of CAN dynamics in grid cells.
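
A minimal sketch of the topological test used to detect a torus is shown below, assuming the `ripser` package for persistent homology; the paper's actual pipeline (spike binning, dimensionality reduction, subsampling, and shuffle controls) is considerably more involved, so here the point cloud is simply a synthetic torus.

```python
# Hedged sketch: persistent homology of a synthetic torus with ripser.
# A 2-torus should show Betti numbers (1, 2, 1): one connected component,
# two long-lived 1D loops, and one long-lived 2D cavity.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)
theta, phi = rng.uniform(0, 2 * np.pi, (2, 300))
R, r = 2.0, 0.7
points = np.column_stack([
    (R + r * np.cos(phi)) * np.cos(theta),
    (R + r * np.cos(phi)) * np.sin(theta),
    r * np.sin(phi),
])

dgms = ripser(points, maxdim=2)['dgms']
for dim, dgm in enumerate(dgms):
    finite = dgm[np.isfinite(dgm[:, 1])]
    lifetimes = np.sort(finite[:, 1] - finite[:, 0])[::-1]
    print(f'H{dim} longest finite lifetimes:', np.round(lifetimes[:3], 2))
```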

09/02/21 | Electrode pooling can boost the yield of extracellular recordings with switchable silicon probes.
Lee KH, Ni Y, Colonell J, Karsh B, Putzeys J, Pachitariu M, Harris TD, Meister M
Nature Communications. 2021 Sep 02;12(1):5245. doi: 10.1038/s41467-021-25443-4

State-of-the-art silicon probes for electrical recording from neurons have thousands of recording sites. However, due to volume limitations there are typically many fewer wires carrying signals off the probe, which restricts the number of channels that can be recorded simultaneously. To overcome this fundamental constraint, we propose a method called electrode pooling that uses a single wire to serve many recording sites through a set of controllable switches. Here we present the framework behind this method and an experimental strategy to support it. We then demonstrate its feasibility by implementing electrode pooling on the Neuropixels 1.0 electrode array and characterizing its effect on signal and noise. Finally we use simulations to explore the conditions under which electrode pooling saves wires without compromising the content of the recordings. We make recommendations on the design of future devices to take advantage of this strategy.
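
The toy simulation below illustrates the basic trade-off behind electrode pooling rather than the paper's analysis: when M sites share one wire, a spike recorded on a single site is attenuated by roughly 1/M on the pooled trace, while per-site noise averages down and wire noise is added only once. All amplitudes and noise levels are placeholder numbers.

```python
# Illustrative sketch of the electrode-pooling trade-off; numbers are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_samples, M = 30000, 4                  # one second at 30 kHz, 4 pooled sites
site_noise = rng.standard_normal((M, n_samples)) * 10.0   # per-site noise (uV)
spikes = np.zeros((M, n_samples))
spikes[0, ::3000] = -150.0               # a 150 uV unit visible on site 0 only

pooled = (site_noise + spikes).mean(axis=0)     # sites averaged on the shared wire
pooled += rng.standard_normal(n_samples) * 5.0  # wire + ADC noise, added once

print('spike amplitude on the pooled wire:', spikes[0].min() / M, 'uV (about 1/M of original)')
print('pooled noise s.d.:', round(float(pooled.std()), 1), 'uV')
```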

Pachitariu Lab / Sternson Lab
07/01/21 | Hunger or thirst state uncertainty is resolved by outcome evaluation in medial prefrontal cortex to guide decision-making.
Eiselt A, Chen S, Chen J, Arnold J, Kim T, Pachitariu M, Sternson SM
Nature Neuroscience. 2021 Jul 01;24(7):907-912. doi: 10.1038/s41593-021-00850-4

Physiological need states direct decision-making toward re-establishing homeostasis. Using a two-alternative forced choice task for mice that models elements of human decisions, we found that varying hunger and thirst states caused need-inappropriate choices, such as food seeking when thirsty. These results show limits on interoceptive knowledge of hunger and thirst states to guide decision-making. Instead, need states were identified after food and water consumption by outcome evaluation, which depended on the medial prefrontal cortex.

05/13/21 | High-precision coding in visual cortex.
Stringer C, Michaelos M, Tsyboulski D, Lindo SE, Pachitariu M
Cell. 2021 May 13;184(10):2767-78. doi: 10.1016/j.cell.2021.03.042

Individual neurons in visual cortex provide the brain with unreliable estimates of visual features. It is not known whether the single-neuron variability is correlated across large neural populations, thus impairing the global encoding of stimuli. We recorded simultaneously from up to 50,000 neurons in mouse primary visual cortex (V1) and in higher order visual areas and measured stimulus discrimination thresholds of 0.35° and 0.37°, respectively, in an orientation decoding task. These neural thresholds were almost 100 times smaller than the behavioral discrimination thresholds reported in mice. This discrepancy could not be explained by stimulus properties or arousal states. Furthermore, behavioral variability during a sensory discrimination task could not be explained by neural variability in V1. Instead, behavior-related neural activity arose dynamically across a network of non-sensory brain areas. These results imply that perceptual discrimination in mice is limited by downstream decoders, not by neural noise in sensory representations.
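
The sketch below is only a schematic of how a neural discrimination threshold can be estimated from population activity, not the decoder used in the paper: it simulates orientation-tuned neurons, linearly decodes the stimulus angle, and converts the decoding error into an approximate 75%-correct threshold under a Gaussian-error assumption. Tuning curves, noise model, and population size are placeholders.

```python
# Illustrative sketch: orientation decoding and a rough discrimination threshold.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_trials, n_neurons = 2000, 1000
theta = rng.uniform(0, np.pi, n_trials)                  # stimulus orientation (rad)
pref = rng.uniform(0, np.pi, n_neurons)                  # preferred orientations
tuning = np.exp(2.0 * np.cos(2 * (theta[:, None] - pref[None, :])))
responses = rng.poisson(tuning)                          # trial-to-trial variability

# Decode sin/cos of the doubled angle, then recover orientation.
target = np.column_stack([np.sin(2 * theta), np.cos(2 * theta)])
train, test = slice(0, 1500), slice(1500, None)
dec = Ridge(alpha=1.0).fit(responses[train], target[train])
s, c = dec.predict(responses[test]).T
theta_hat = np.mod(np.arctan2(s, c) / 2, np.pi)

err = np.angle(np.exp(2j * (theta_hat - theta[test]))) / 2   # wrapped error (rad)
sigma_deg = np.degrees(err.std())
# With Gaussian decoding errors, ~75% correct discrimination of two stimuli
# requires a separation of roughly 0.95 * sigma.
print(f'decoding error s.d.: {sigma_deg:.2f} deg; ~75% threshold: {0.95 * sigma_deg:.2f} deg')
```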

04/16/21 | Neuropixels 2.0: A miniaturized high-density probe for stable, long-term brain recordings.
Steinmetz NA, Aydın Ç, Lebedeva A, Okun M, Pachitariu M, Bauza M, Beau M, Bhagat J, Böhm C, Broux M, Chen S, Colonell J, Gardner RJ, Karsh B, Kloosterman F, Kostadinov D, Mora-Lopez C, O'Callaghan J, Park J, Putzeys J, Sauerbrei B, van Daal RJ, Vollan AZ, Wang S, Welkenhuysen M, Ye Z, Dudman JT, Dutta B, Hantman AW, Harris KD, Lee AK, Moser EI, O'Keefe J, Renart A, Svoboda K, Häusser M, Haesler S, Carandini M, Harris TD
Science. 2021 Apr 16;372(6539). doi: 10.1126/science.abf4588

Measuring the dynamics of neural processing across time scales requires following the spiking of thousands of individual neurons over milliseconds and months. To address this need, we introduce the Neuropixels 2.0 probe together with newly designed analysis algorithms. The probe has more than 5000 sites and is miniaturized to facilitate chronic implants in small mammals and recording during unrestrained behavior. High-quality recordings over long time scales were reliably obtained in mice and rats in six laboratories. Improved site density and arrangement combined with newly created data processing methods enable automatic post hoc correction for brain movements, allowing recording from the same neurons for more than 2 months. These probes and algorithms enable stable recordings from thousands of sites during free behavior, even in small animals such as mice.
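
As a schematic of the drift-correction idea (not the algorithm shipped with the probe's processing pipeline), the sketch below builds a spike-count histogram over depth for each block of time and estimates vertical probe motion by cross-correlating each histogram against the first one. Bin sizes, drift magnitude, and unit layout are invented for illustration.

```python
# Illustrative sketch of drift estimation from spike-depth histograms.
import numpy as np

rng = np.random.default_rng(0)
n_spikes, probe_depth_um = 200_000, 3840
true_drift = 20.0                        # um of slow drift over the recording
t = rng.uniform(0, 3600, n_spikes)       # spike times over one hour
base_depth = rng.choice(np.linspace(200, 3600, 40), n_spikes)  # 40 active "units"
depth = base_depth + true_drift * (t / 3600) + rng.standard_normal(n_spikes) * 15

edges = np.arange(0, probe_depth_um + 20, 20)      # 20 um depth bins
t_bins = np.arange(0, 3601, 300)                   # 5 minute time blocks
ref, _ = np.histogram(depth[t < 300], bins=edges)  # reference histogram

for i in range(len(t_bins) - 1):
    sel = (t >= t_bins[i]) & (t < t_bins[i + 1])
    h, _ = np.histogram(depth[sel], bins=edges)
    shifts = np.arange(-5, 6)                      # test shifts of +/- 100 um
    scores = [np.dot(np.roll(h, -s), ref) for s in shifts]
    best_um = int(shifts[int(np.argmax(scores))]) * 20
    print(f't = {t_bins[i]:4d}-{t_bins[i + 1]:4d} s: estimated drift {best_um:+d} um')
```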
