Lee Tzumin Lab / Publications

27 Publications

Showing 11-20 of 27 results
    06/26/19 | High-dimensional geometry of population responses in visual cortex.
    Stringer C, Pachitariu M, Steinmetz NA, Carandini M, Harris KD
    Nature. 2019 Jun 26;571(7765):361-65. doi: 10.1038/s41586-019-1346-5

    A neuronal population encodes information most efficiently when its activity is uncorrelated and high-dimensional, and most robustly when its activity is correlated and lower-dimensional. Here, we analyzed the correlation structure of natural image coding in large visual cortical populations recorded from awake mice. Evoked population activity was high-dimensional, with correlations obeying an unexpected power law: the n-th principal component variance scaled as 1/n. This was not inherited from the 1/f spectrum of natural images, because it persisted after stimulus whitening. We proved mathematically that the variance spectrum must decay at least this fast if a population code is smooth, i.e., if small changes in input cannot dominate population activity. The theory also predicts larger power-law exponents for lower-dimensional stimulus ensembles, which we validated experimentally. These results suggest that coding smoothness represents a fundamental constraint governing correlations in neural population codes.
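
    To make the analysis above concrete, here is a minimal sketch (not the authors' code) of estimating a power-law exponent from a population variance spectrum: PCA via SVD of a response matrix, followed by a log-log line fit over an intermediate range of components. The synthetic data, variable names, and fit range are assumptions for illustration only.

        # Illustrative sketch (not the paper's code): estimate the power-law exponent
        # of a population variance spectrum from a stimuli-by-neurons response matrix.
        import numpy as np

        rng = np.random.default_rng(0)
        n_stimuli, n_neurons = 2000, 500

        # Latent signals whose variances decay as 1/n, rotated by a random
        # orthogonal matrix so the covariance eigenvalues keep that spectrum.
        true_spectrum = 1.0 / np.arange(1, n_neurons + 1)
        latents = rng.standard_normal((n_stimuli, n_neurons)) * np.sqrt(true_spectrum)
        q, _ = np.linalg.qr(rng.standard_normal((n_neurons, n_neurons)))
        responses = latents @ q.T

        # PCA via SVD of the mean-centered response matrix.
        centered = responses - responses.mean(axis=0)
        singular_values = np.linalg.svd(centered, compute_uv=False)
        variances = singular_values**2 / n_stimuli

        # Fit variance_n ~ n^(-alpha) over an intermediate range of components.
        n = np.arange(1, variances.size + 1)
        keep = (n >= 10) & (n <= 300)
        slope, _ = np.polyfit(np.log(n[keep]), np.log(variances[keep]), 1)
        print(f"estimated power-law exponent alpha: {-slope:.2f}")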

    05/13/21 | High-precision coding in visual cortex.
    Stringer C, Michaelos M, Tsyboulski D, Lindo SE, Pachitariu M
    Cell. 2021 May 13;184(10):2767-78. doi: 10.1016/j.cell.2021.03.042

    Individual neurons in visual cortex provide the brain with unreliable estimates of visual features. It is not known whether the single-neuron variability is correlated across large neural populations, thus impairing the global encoding of stimuli. We recorded simultaneously from up to 50,000 neurons in mouse primary visual cortex (V1) and in higher order visual areas and measured stimulus discrimination thresholds of 0.35° and 0.37°, respectively, in an orientation decoding task. These neural thresholds were almost 100 times smaller than the behavioral discrimination thresholds reported in mice. This discrepancy could not be explained by stimulus properties or arousal states. Furthermore, behavioral variability during a sensory discrimination task could not be explained by neural variability in V1. Instead, behavior-related neural activity arose dynamically across a network of non-sensory brain areas. These results imply that perceptual discrimination in mice is limited by downstream decoders, not by neural noise in sensory representations.
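
    As a simplified stand-in for the decoding analysis summarized above (not the paper's pipeline), the sketch below trains a cross-validated linear decoder of stimulus orientation on simulated tuned responses and reports the spread of the decoding error as a rough proxy for a neural discrimination threshold. The tuning model, noise level, and all parameter values are assumptions.

        # Simplified illustration (not the paper's analysis): decode stimulus
        # orientation from simulated population responses and summarize the
        # decoding error as a proxy for a neural discrimination threshold.
        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(1)
        n_trials, n_neurons = 4000, 1000

        # Orientations in degrees and smooth orientation tuning plus noise.
        theta = rng.uniform(0.0, 180.0, n_trials)
        preferred = rng.uniform(0.0, 180.0, n_neurons)
        tuning = np.exp(2.0 * np.cos(np.deg2rad(2 * (theta[:, None] - preferred[None, :]))))
        responses = tuning + 0.5 * rng.standard_normal((n_trials, n_neurons))

        # Decode sin/cos of the doubled angle to respect 180-degree periodicity.
        targets = np.column_stack([np.sin(np.deg2rad(2 * theta)), np.cos(np.deg2rad(2 * theta))])
        pred = cross_val_predict(Ridge(alpha=1.0), responses, targets, cv=5)
        theta_hat = np.rad2deg(np.arctan2(pred[:, 0], pred[:, 1])) / 2.0 % 180.0

        # Circular decoding error (degrees); its spread is a crude threshold proxy.
        err = (theta_hat - theta + 90.0) % 180.0 - 90.0
        print(f"decoding error s.d.: {err.std():.2f} degrees")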

    12/02/16 | Inhibitory control of correlated intrinsic variability in cortical networks.
    Stringer C, Pachitariu M, Steinmetz NA, Okun M, Bartho P, Harris KD, Sahani M, Lesica NA
    eLife. 2016 Dec 02;5:e19695. doi: 10.7554/eLife.19695

    Cortical networks exhibit intrinsic dynamics that drive coordinated, large-scale fluctuations across neuronal populations and create noise correlations that impact sensory coding. To investigate the network-level mechanisms that underlie these dynamics, we developed novel computational techniques to fit a deterministic spiking network model directly to multi-neuron recordings from different rodent species, sensory modalities, and behavioral states. The model generated correlated variability without external noise and accurately reproduced the diverse activity patterns in our recordings. Analysis of the model parameters suggested that differences in noise correlations across recordings were due primarily to differences in the strength of feedback inhibition. Further analysis of our recordings confirmed that putative inhibitory neurons were indeed more active during desynchronized cortical states with weak noise correlations. Our results demonstrate that network models with intrinsically-generated variability can accurately reproduce the activity patterns observed in multi-neuron recordings and suggest that inhibition modulates the interactions between intrinsic dynamics and sensory inputs to control the strength of noise correlations.

    08/07/23 | Learning produces a hippocampal cognitive map in the form of an orthogonalized state machine.
    Weinan Sun, Johan Winnubst, Maanasa Natrajan, Chongxi Lai, Koichiro Kajikawa, Michalis Michaelos, Rachel Gattoni, Carsen Stringer, Daniel Flickinger, James E. Fitzgerald, Nelson Spruston
    bioRxiv. 2023 Aug 07. doi: 10.1101/2023.08.03.551900

    Cognitive maps confer animals with flexible intelligence by representing spatial, temporal, and abstract relationships that can be used to shape thought, planning, and behavior. Cognitive maps have been observed in the hippocampus, but their algorithmic form and the processes by which they are learned remain obscure. Here, we employed large-scale, longitudinal two-photon calcium imaging to record activity from thousands of neurons in the CA1 region of the hippocampus while mice learned to efficiently collect rewards from two subtly different versions of linear tracks in virtual reality. The results provide a detailed view of the formation of a cognitive map in the hippocampus. Throughout learning, both the animal behavior and hippocampal neural activity progressed through multiple intermediate stages, gradually revealing improved task understanding and behavioral efficiency. The learning process led to progressive decorrelations in initially similar hippocampal neural activity within and across tracks, ultimately resulting in orthogonalized representations resembling a state machine capturing the inherent structure of the task. We show that a Hidden Markov Model (HMM) and a biologically plausible recurrent neural network trained using Hebbian learning can both capture core aspects of the learning dynamics and the orthogonalized representational structure in neural activity. In contrast, we show that gradient-based learning of sequence models such as Long Short-Term Memory networks (LSTMs) and Transformers does not naturally produce such representations. We further demonstrate that mice exhibited adaptive behavior in novel task settings, with neural activity reflecting flexible deployment of the state machine. These findings shed light on the mathematical form of cognitive maps, the learning rules that sculpt them, and the algorithms that promote adaptive behavior in animals. The work thus charts a course toward a deeper understanding of biological intelligence and offers insights toward developing more robust learning algorithms in artificial intelligence.
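
    Because the abstract above mentions fitting a Hidden Markov Model to neural activity, here is a generic, heavily simplified sketch of what fitting an HMM to a population-activity matrix looks like with the hmmlearn package. The synthetic data, the Gaussian observation model, and the parameter values are illustrative assumptions and do not reproduce the paper's analysis.

        # Generic illustration (not the paper's model): fit a Gaussian hidden Markov
        # model to a population-activity matrix and read out the inferred states.
        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        rng = np.random.default_rng(3)
        n_timepoints, n_neurons, n_states = 1200, 30, 4

        # Synthetic activity generated from a few latent states with distinct means
        # (the state sequence here is i.i.d., which is enough for a demo fit).
        true_states = rng.integers(0, n_states, size=n_timepoints)
        state_means = rng.standard_normal((n_states, n_neurons))
        activity = state_means[true_states] + 0.5 * rng.standard_normal((n_timepoints, n_neurons))

        hmm = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=100)
        hmm.fit(activity)                  # expects a (timepoints x features) array
        decoded = hmm.predict(activity)    # most likely state at each timepoint
        print("inferred state occupancy:", np.bincount(decoded, minlength=n_states))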

    10/05/22 | Not so spontaneous: Multi-dimensional representations of behaviors and context in sensory areas.
    Avitan L, Stringer C
    Neuron. 2022 Oct 05;110(19):3064. doi: 10.1016/j.neuron.2022.06.019

    Sensory areas are spontaneously active in the absence of sensory stimuli. This spontaneous activity has long been studied; however, its functional role remains largely unknown. Recent advances in technology, allowing large-scale neural recordings in the awake and behaving animal, have transformed our understanding of spontaneous activity. Studies using these recordings have discovered high-dimensional spontaneous activity patterns, correlation between spontaneous activity and behavior, and dissimilarity between spontaneous and sensory-driven activity patterns. These findings are supported by evidence from developing animals, where a transition toward these characteristics is observed as the circuit matures, as well as by evidence from mature animals across species. These newly revealed characteristics call for the formulation of a new role for spontaneous activity in neural sensory computation.

    07/27/22 | Omnipose: a high-precision morphology-independent solution for bacterial cell segmentation.
    Kevin J. Cutler, Carsen Stringer, Paul A. Wiggins, Joseph D. Mougous
    bioRxiv. 2022 Jul 27. doi: 10.1101/2021.11.03.467199

    Advances in microscopy hold great promise for allowing quantitative and precise readouts of morphological and molecular phenomena at the single cell level in bacteria. However, the potential of this approach is ultimately limited by the availability of methods to perform unbiased cell segmentation, defined as the ability to faithfully identify cells independent of their morphology or optical characteristics. In this study, we present a new algorithm, Omnipose, which accurately segments samples that present significant challenges to current algorithms, including mixed bacterial cultures, antibiotic-treated cells, and cells of extended or branched morphology. We show that Omnipose achieves generality and performance beyond leading algorithms and its predecessor, Cellpose, by virtue of unique neural network outputs such as the gradient of the distance field. Finally, we demonstrate the utility of Omnipose in the characterization of extreme morphological phenotypes that arise during interbacterial antagonism and on the segmentation of non-bacterial objects. Our results distinguish Omnipose as a uniquely powerful tool for answering diverse questions in bacterial cell biology.

    10/17/22 | Omnipose: a high-precision morphology-independent solution for bacterial cell segmentation.
    Cutler KJ, Stringer C, Lo TW, Rappez L, Stroustrup N, Brook Peterson S, Wiggins PA, Mougous JD
    Nature Methods. 2022 Oct 17. doi: 10.1038/s41592-022-01639-4

    Advances in microscopy hold great promise for allowing quantitative and precise measurement of morphological and molecular phenomena at the single-cell level in bacteria; however, the potential of this approach is ultimately limited by the availability of methods to faithfully segment cells independent of their morphological or optical characteristics. Here, we present Omnipose, a deep neural network image-segmentation algorithm. Unique network outputs such as the gradient of the distance field allow Omnipose to accurately segment cells on which current algorithms, including its predecessor, Cellpose, produce errors. We show that Omnipose achieves unprecedented segmentation performance on mixed bacterial cultures, antibiotic-treated cells and cells of elongated or branched morphology. Furthermore, the benefits of Omnipose extend to non-bacterial subjects, varied imaging modalities and three-dimensional objects. Finally, we demonstrate the utility of Omnipose in the characterization of extreme morphological phenotypes that arise during interbacterial antagonism. Our results distinguish Omnipose as a powerful tool for characterizing diverse and arbitrarily shaped cell types from imaging data.
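
    For orientation, here is a minimal sketch of running segmentation from Python through the Cellpose 2.x-style API (Cellpose is the predecessor named above). Omnipose is installed separately and exposes a similar model/eval interface, but the exact Omnipose import path and pretrained model names are assumptions to verify against its documentation; the placeholder image here is random noise rather than real microscopy data.

        # Minimal sketch: segmenting an image with the Cellpose 2.x-style Python API.
        # Omnipose (the method above) offers a similar interface with bacteria-specific
        # pretrained models; check its documentation for exact module and model names.
        import numpy as np
        from cellpose import models

        # Placeholder image; in practice load a phase-contrast or fluorescence image.
        img = np.random.rand(256, 256)

        model = models.Cellpose(gpu=False, model_type="cyto")
        masks, flows, styles, diams = model.eval(
            [img],               # list of 2D images
            diameter=None,       # let the model estimate object diameter
            channels=[0, 0],     # grayscale: segment channel 0, no nuclear channel
        )
        print("labeled objects in first image:", int(masks[0].max()))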

    07/28/23 | Rastermap: a discovery method for neural population recordings.
    Carsen Stringer, Lin Zhong, Atika Syeda, Fengtong Du, Marius Pachitariu
    bioRxiv. 2023 Jul 28. doi: 10.1101/2023.07.25.550571

    Neurophysiology has long progressed through exploratory experiments and chance discoveries. Anecdotes abound of researchers setting up experiments while listening to spikes in real time and observing a pattern of consistent firing when certain stimuli or behaviors happened. With the advent of large-scale recordings, such close observation of data has become harder because high-dimensional spaces are impenetrable to our pattern-finding intuitions. To help ourselves find patterns in neural data, our lab has been openly developing a visualization framework known as “Rastermap” over the past five years. Rastermap takes advantage of a new global optimization algorithm for sorting neural responses along a one-dimensional manifold. Displayed as a raster plot, the sorted neurons show a variety of activity patterns, which can be more easily identified and interpreted. We first benchmark Rastermap on realistic simulations with multiplexed cognitive variables. Then we demonstrate it on recordings of tens of thousands of neurons from mouse visual and sensorimotor cortex during spontaneous, stimulus-evoked and task-evoked epochs, as well as on whole-brain zebrafish recordings, widefield calcium imaging data, population recordings from rat hippocampus and artificial neural networks. Finally, we illustrate high-dimensional scenarios where Rastermap and similar algorithms cannot be used effectively.
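
    As a pointer to typical usage of the tool described above, the sketch below fits the rastermap Python package to a neurons-by-timepoints activity matrix and uses the returned ordering to build a sorted raster. The synthetic data and the specific parameter values are illustrative assumptions; consult the package documentation for the current interface.

        # Minimal sketch of sorting a population recording with the rastermap package.
        # The data are synthetic placeholders; parameter values are illustrative only.
        import numpy as np
        from rastermap import Rastermap

        # Activity matrix: neurons x timepoints (e.g., deconvolved calcium traces).
        spks = np.random.rand(500, 3000).astype("float32")

        model = Rastermap(n_PCs=64, n_clusters=50, locality=0.75, time_lag_window=5)
        model.fit(spks)

        isort = model.isort            # 1D ordering of neurons along the embedding
        sorted_raster = spks[isort]    # rows rearranged for plotting as a raster
        print("sorted raster shape:", sorted_raster.shape)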

    08/06/18 | Robustness of spike deconvolution for neuronal calcium imaging.
    Pachitariu M, Stringer C, Harris KD
    The Journal of Neuroscience. 2018 Aug 06;38(37):7976-85. doi: 10.1523/JNEUROSCI.3339-17.2018

    Calcium imaging is a powerful method to record the activity of neural populations in many species, but inferring spike times from calcium signals is a challenging problem. We compared multiple approaches using multiple datasets with ground truth electrophysiology, and found that simple non-negative deconvolution (NND) outperformed all other algorithms on out-of-sample test data. We introduce a novel benchmark applicable to recordings without electrophysiological ground truth, based on the correlation of responses to two stimulus repeats, and used this to show that unconstrained NND also outperformed the other algorithms when run on "zoomed out" datasets of ∼10,000 cell recordings from the visual cortex of mice of either sex. Finally, we show that NND-based methods match the performance of a supervised method based on convolutional neural networks, while avoiding some of the biases of such methods, and at much faster running times. We therefore recommend that spikes be inferred from calcium traces using simple NND, due to its simplicity, efficiency and accuracy.

    The experimental method that currently allows for recordings of the largest numbers of cells simultaneously is two-photon calcium imaging. However, use of this powerful method requires that neuronal firing times be inferred correctly from the large resulting datasets. Previous studies have claimed that complex supervised learning algorithms outperform simple deconvolution methods at this task. Unfortunately, these studies suffered from several problems and biases. When we repeated the analysis, using the same data and correcting these problems, we found that simpler spike inference methods perform better. Even more importantly, we found that supervised learning methods can introduce artifactual structure into spike trains that can in turn lead to erroneous scientific conclusions. Of the algorithms we evaluated, we found that an extremely simple method performed best in all circumstances tested, was much faster to run, and was insensitive to parameter choices, making incorrect scientific conclusions much less likely.
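
    To make the idea of non-negative deconvolution concrete, the sketch below (an independent illustration, not the authors' implementation) models a calcium trace as a spike train convolved with an exponentially decaying kernel and recovers non-negative spike amplitudes with a standard non-negative least-squares solver. The kernel time constant, noise level, and array sizes are assumptions.

        # Illustrative non-negative deconvolution (not the paper's implementation):
        # recover non-negative spike amplitudes from a calcium-like trace by solving
        # a non-negative least-squares problem with an exponential kernel.
        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(2)
        n_timepoints, tau, dt = 400, 1.0, 0.033   # frames, decay constant (s), frame interval (s)

        # Ground-truth sparse spikes and the exponential calcium kernel.
        spikes = (rng.random(n_timepoints) < 0.02).astype(float)
        kernel = np.exp(-np.arange(n_timepoints) * dt / tau)

        # Convolution matrix: column j holds the kernel shifted to start at frame j.
        K = np.zeros((n_timepoints, n_timepoints))
        for j in range(n_timepoints):
            K[j:, j] = kernel[: n_timepoints - j]

        trace = K @ spikes + 0.1 * rng.standard_normal(n_timepoints)

        # Non-negative least squares: minimize ||K s - trace|| subject to s >= 0.
        s_hat, _ = nnls(K, trace)
        print("true spikes:", int(spikes.sum()),
              "| estimated nonzero amplitudes:", int((s_hat > 0.2).sum()))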

    10/06/20 | Simultaneous computation of dynamical and equilibrium information using a weighted ensemble of trajectories.
    Suarez E, Lettieri S, Stringer CA, Zwier MC, Subramanian SR, Chong LT, Zuckerman DM
    Journal of Chemical Theory and Computation. 2014;10:2658-67.