47 Publications
Sensory, motor and cognitive operations involve the coordinated action of large neuronal populations across multiple brain regions in both superficial and deep structures. Existing extracellular probes record neural activity with excellent spatial and temporal (sub-millisecond) resolution, but from only a few dozen neurons per shank. Optical Ca²⁺ imaging offers more coverage but lacks the temporal resolution needed to distinguish individual spikes reliably and does not measure local field potentials. Until now, no technology compatible with use in unrestrained animals has combined high spatiotemporal resolution with large volume coverage. Here we design, fabricate and test a new silicon probe known as Neuropixels to meet this need. Each probe has 384 recording channels that can programmably address 960 complementary metal-oxide-semiconductor (CMOS) processing-compatible low-impedance TiN sites that tile a single 10-mm long, 70 × 20-μm cross-section shank. The 6 × 9-mm probe base is fabricated with the shank on a single chip. Voltage signals are filtered, amplified, multiplexed and digitized on the base, allowing the direct transmission of noise-free digital data from the probe. The combination of dense recording sites and high channel count yielded well-isolated spiking activity from hundreds of neurons per probe implanted in mice and rats. Using two probes, more than 700 well-isolated single neurons were recorded simultaneously from five brain structures in an awake mouse. The fully integrated functionality and small size of Neuropixels probes allowed large populations of neurons from several brain structures to be recorded in freely moving animals. This combination of high-performance electrode technology and scalable chip fabrication methods opens a path towards recording of brain-wide neural activity during behaviour.
A neuronal population encodes information most efficiently when its activity is uncorrelated and high-dimensional, and most robustly when its activity is correlated and lower-dimensional. Here, we analyzed the correlation structure of natural image coding in large visual cortical populations recorded from awake mice. Evoked population activity was high-dimensional, with correlations obeying an unexpected power law: the n-th principal component variance scaled as 1/n. This was not inherited from the 1/f spectrum of natural images, because it persisted after stimulus whitening. We proved mathematically that the variance spectrum must decay at least this fast if a population code is smooth, i.e., if small changes in input cannot dominate population activity. The theory also predicts larger power-law exponents for lower-dimensional stimulus ensembles, which we validated experimentally. These results suggest that coding smoothness represents a fundamental constraint governing correlations in neural population codes.
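To make the analysis concrete, below is a minimal sketch of how a power-law exponent could be estimated from a neurons-by-stimuli response matrix: compute principal-component variances and fit their decay in log-log space. The synthetic data, the plain (non-cross-validated) PCA, and the chosen fitting range are illustrative assumptions, not the cross-validated estimator used in the study.

```python
import numpy as np

# Minimal sketch: estimate the power-law exponent of the principal-component
# variance spectrum of a population response matrix (neurons x stimuli).
# The synthetic data and the plain (non-cross-validated) PCA are
# illustrative assumptions, not the paper's cross-validated estimator.

rng = np.random.default_rng(0)
n_neurons, n_stimuli = 1000, 2000
responses = rng.standard_normal((n_neurons, n_stimuli))  # placeholder data

# Center each neuron and compute PC variances via SVD.
centered = responses - responses.mean(axis=1, keepdims=True)
singular_values = np.linalg.svd(centered, compute_uv=False)
pc_variances = singular_values**2 / n_stimuli

# Fit variance_n ~ n^(-alpha) by linear regression in log-log space,
# using an intermediate range of components to avoid edge effects.
ranks = np.arange(1, len(pc_variances) + 1)
keep = (ranks >= 10) & (ranks <= 500)
slope, intercept = np.polyfit(np.log(ranks[keep]), np.log(pc_variances[keep]), 1)
alpha = -slope
print(f"estimated power-law exponent: {alpha:.2f}")  # ~1 corresponds to 1/n decay
```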
Individual neurons in visual cortex provide the brain with unreliable estimates of visual features. It is not known whether the single-neuron variability is correlated across large neural populations, thus impairing the global encoding of stimuli. We recorded simultaneously from up to 50,000 neurons in mouse primary visual cortex (V1) and in higher order visual areas and measured stimulus discrimination thresholds of 0.35° and 0.37°, respectively, in an orientation decoding task. These neural thresholds were almost 100 times smaller than the behavioral discrimination thresholds reported in mice. This discrepancy could not be explained by stimulus properties or arousal states. Furthermore, behavioral variability during a sensory discrimination task could not be explained by neural variability in V1. Instead, behavior-related neural activity arose dynamically across a network of non-sensory brain areas. These results imply that perceptual discrimination in mice is limited by downstream decoders, not by neural noise in sensory representations.
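As an illustration of the kind of population decoding this describes, here is a hedged sketch of estimating two-orientation discrimination accuracy with a cross-validated linear decoder. The synthetic tuning curves, noise level, and logistic-regression decoder are assumptions for the example only, not the decoder used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Minimal sketch: cross-validated discrimination of two nearby orientations
# from population responses. The synthetic tuning curves, noise model, and
# logistic decoder are illustrative assumptions, not the paper's analysis.

rng = np.random.default_rng(1)
n_neurons, n_trials = 500, 200
preferred = rng.uniform(0, 180, n_neurons)  # preferred orientations (deg)

def population_response(theta_deg):
    """Noisy von Mises-like tuning around each neuron's preferred orientation."""
    tuning = np.exp(2.0 * np.cos(np.deg2rad(2 * (theta_deg - preferred))))
    return tuning + rng.standard_normal((n_trials, n_neurons)) * 2.0

delta = 1.0  # orientation difference (deg)
X = np.vstack([population_response(45.0), population_response(45.0 + delta)])
y = np.repeat([0, 1], n_trials)

accuracy = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"decoding accuracy at {delta} deg separation: {accuracy:.2f}")
# Sweeping delta and finding where accuracy crosses a criterion (e.g. 75%)
# yields a discrimination threshold analogous to the values reported above.
```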
Physiological need states direct decision-making toward re-establishing homeostasis. Using a two-alternative forced choice task for mice that models elements of human decisions, we found that varying hunger and thirst states caused need-inappropriate choices, such as food seeking when thirsty. These results show limits on interoceptive knowledge of hunger and thirst states to guide decision-making. Instead, need states were identified after food and water consumption by outcome evaluation, which depended on the medial prefrontal cortex.
Cortical networks exhibit intrinsic dynamics that drive coordinated, large-scale fluctuations across neuronal populations and create noise correlations that impact sensory coding. To investigate the network-level mechanisms that underlie these dynamics, we developed novel computational techniques to fit a deterministic spiking network model directly to multi-neuron recordings from different rodent species, sensory modalities, and behavioral states. The model generated correlated variability without external noise and accurately reproduced the diverse activity patterns in our recordings. Analysis of the model parameters suggested that differences in noise correlations across recordings were due primarily to differences in the strength of feedback inhibition. Further analysis of our recordings confirmed that putative inhibitory neurons were indeed more active during desynchronized cortical states with weak noise correlations. Our results demonstrate that network models with intrinsically generated variability can accurately reproduce the activity patterns observed in multi-neuron recordings and suggest that inhibition modulates the interactions between intrinsic dynamics and sensory inputs to control the strength of noise correlations.
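For reference, the quantity whose state dependence is being explained here is the pairwise noise correlation of trial-to-trial spike counts. The sketch below computes it from a synthetic trial-by-neuron count matrix; the data and the simple averaging are illustrative assumptions rather than the paper's analysis pipeline.

```python
import numpy as np

# Minimal sketch: average pairwise noise correlation from a trial x neuron
# matrix of spike counts recorded under repeated presentations of the same
# stimulus. The synthetic counts below are placeholders for real recordings.

rng = np.random.default_rng(2)
n_trials, n_neurons = 400, 100
shared = rng.standard_normal((n_trials, 1))                # shared fluctuation
counts = rng.poisson(5.0, (n_trials, n_neurons)) + 2.0 * shared

def mean_noise_correlation(counts):
    """Mean of off-diagonal pairwise correlations of trial-to-trial counts."""
    corr = np.corrcoef(counts, rowvar=False)
    off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
    return off_diag.mean()

print(f"mean noise correlation: {mean_noise_correlation(counts):.3f}")
# In the paper's framework, stronger feedback inhibition in the fitted network
# corresponds to weaker values of this quantity (desynchronized states).
```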
We determined how learning modifies neural representations in primary visual cortex (V1) during acquisition of a visually guided behavioral task. We imaged the activity of the same layer 2/3 neuronal populations as mice learned to discriminate two visual patterns while running through a virtual corridor, where one pattern was rewarded. Improvements in behavioral performance were closely associated with increasingly distinguishable population-level representations of task-relevant stimuli, as a result of stabilization of existing and recruitment of new neurons selective for these stimuli. These effects correlated with the appearance of multiple task-dependent signals during learning: those that increased neuronal selectivity across the population when expert animals engaged in the task, and those reflecting anticipation or behavioral choices specifically in neuronal subsets preferring the rewarded stimulus. Therefore, learning engages diverse mechanisms that modify sensory and non-sensory representations in V1 to adjust its processing to task requirements and the behavioral relevance of visual stimuli.
We present a dynamic nonlinear generative model for visual motion based on a latent representation of binary-gated Gaussian variables. Trained on sequences of images, the model learns to represent different movement directions in different variables. We use an online approximate-inference scheme that can be mapped to the dynamics of networks of neurons. Probed with drifting grating stimuli and moving bars of light, neurons in the model show patterns of responses analogous to those of direction-selective simple cells in primary visual cortex. Most model neurons also show speed tuning and respond equally well to a range of motion directions and speeds aligned to the constraint line of their respective preferred speed. We show how these computations are enabled by a specific pattern of recurrent connections learned by the model.
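A minimal generative-only sketch of the binary-gated Gaussian idea follows: each latent is a Bernoulli gate multiplied by a Gaussian amplitude, and a frame is a linear combination of basis functions plus noise. The random dictionary and the omission of temporal dynamics and of the online inference scheme are simplifying assumptions for illustration only.

```python
import numpy as np

# Minimal sketch of "binary-gated Gaussian" latent variables: each latent is
# the product of a Bernoulli gate and a Gaussian amplitude, and a frame is
# generated as a linear combination of basis functions plus noise. The random
# dictionary and the absence of temporal dynamics and inference are
# simplifying assumptions.

rng = np.random.default_rng(3)
n_latents, n_pixels = 64, 256

W = rng.standard_normal((n_pixels, n_latents)) / np.sqrt(n_latents)  # basis

gates = rng.binomial(1, 0.1, n_latents)          # sparse binary gates s
amplitudes = rng.standard_normal(n_latents)      # Gaussian amplitudes u
latents = gates * amplitudes                     # gated code z = s * u

frame = W @ latents + 0.05 * rng.standard_normal(n_pixels)  # observed image
print(f"active latents: {gates.sum()}, frame shape: {frame.shape}")
```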
Measuring the dynamics of neural processing across time scales requires following the spiking of thousands of individual neurons over milliseconds and months. To address this need, we introduce the Neuropixels 2.0 probe together with newly designed analysis algorithms. The probe has more than 5000 sites and is miniaturized to facilitate chronic implants in small mammals and recording during unrestrained behavior. High-quality recordings over long time scales were reliably obtained in mice and rats in six laboratories. Improved site density and arrangement combined with newly created data processing methods enable automatic post hoc correction for brain movements, allowing recording from the same neurons for more than 2 months. These probes and algorithms enable stable recordings from thousands of sites during free behavior, even in small animals such as mice.
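To illustrate the flavor of post hoc motion correction, the sketch below estimates a vertical drift between two time bins by cross-correlating depth histograms of detected spikes. This is a simplified stand-in on synthetic data, not the registration algorithm introduced with the probe.

```python
import numpy as np

# Minimal sketch of post hoc drift estimation: build depth histograms of
# detected spikes in two time bins and find the vertical shift that best
# aligns them. This illustrates the idea only, not the authors' published
# registration algorithm; the spike depths here are synthetic.

rng = np.random.default_rng(4)
depths_bin1 = rng.uniform(0, 3840, 20000)       # spike depths (um), bin 1
true_drift = 15.0                               # um of probe movement
depths_bin2 = depths_bin1 + true_drift          # same spikes, shifted

edges = np.arange(0, 3840 + 5, 5)               # 5-um depth bins
h1, _ = np.histogram(depths_bin1, bins=edges)
h2, _ = np.histogram(depths_bin2, bins=edges)

shifts = np.arange(-20, 21)                     # candidate shifts (in bins)
scores = [np.dot(h1, np.roll(h2, s)) for s in shifts]
best = shifts[int(np.argmax(scores))]           # shift that re-aligns bin 2
print(f"estimated drift: {-best * 5} um (true: {true_drift} um)")
```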
Neurophysiology has long progressed through exploratory experiments and chance discoveries. Anecdotes abound of researchers listening to spikes in real time and noticing patterns of activity related to ongoing stimuli or behaviors. With the advent of large-scale recordings, such close observation of data has become difficult. To find patterns in large-scale neural data, we developed 'Rastermap', a visualization method that displays neurons as a raster plot after sorting them along a one-dimensional axis based on their activity patterns. We benchmarked Rastermap on realistic simulations and then used it to explore recordings of tens of thousands of neurons from mouse cortex during spontaneous, stimulus-evoked and task-evoked epochs. We also applied Rastermap to whole-brain zebrafish recordings; to wide-field imaging data; to electrophysiological recordings in rat hippocampus, monkey frontal cortex and various cortical and subcortical regions in mice; and to artificial neural networks. Finally, we illustrate high-dimensional scenarios where Rastermap and similar algorithms cannot be used effectively.
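Because the full algorithm is involved, here is a deliberately simplified stand-in for the core idea: order neurons along one dimension so that neurons with similar activity end up adjacent, then display the reordered activity as a raster. Sorting by the angle of the first two principal-component loadings is an illustrative shortcut, not the actual Rastermap procedure.

```python
import numpy as np

# Much-simplified stand-in for the idea behind Rastermap: place neurons along
# a one-dimensional axis so that neurons with similar activity end up nearby,
# then view the sorted activity as a raster. Sorting by the angle of each
# neuron's first two principal-component loadings is an illustrative shortcut,
# not the actual Rastermap algorithm.

rng = np.random.default_rng(5)
n_neurons, n_timepoints = 300, 2000
activity = rng.standard_normal((n_neurons, n_timepoints))   # placeholder data

centered = activity - activity.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
loadings = U[:, :2] * S[:2]                                  # neuron PC loadings

order = np.argsort(np.arctan2(loadings[:, 1], loadings[:, 0]))
sorted_activity = activity[order]                            # rows ready to plot
print(f"sorted raster shape: {sorted_activity.shape}")
```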
Population neural recordings with long-range temporal structure are often best understood in terms of a shared underlying low-dimensional dynamical process. Advances in recording technology provide access to an ever larger fraction of the population, but the standard computational approaches available to identify the collective dynamics scale poorly with the size of the dataset. Here we describe a new, scalable approach to discovering the low-dimensional dynamics that underlie simultaneously recorded spike trains from a neural population. Our method is based on recurrent linear models (RLMs), and relates closely to time-series models based on recurrent neural networks. We formulate RLMs for neural data by generalising the Kalman-filter-based likelihood calculation for latent linear dynamical system (LDS) models to incorporate a generalised-linear observation process. We show that RLMs describe motor-cortical population data better than either directly coupled generalised-linear models or latent linear dynamical system models with generalised-linear observations. We also introduce the cascaded linear model (CLM) to capture low-dimensional instantaneous correlations in neural populations. The CLM describes the cortical recordings better than either Ising or Gaussian models and, like the RLM, can be fit exactly and quickly. The CLM can also be seen as a generalisation of a low-rank Gaussian model, in this case factor analysis. The computational tractability of the RLM and CLM allows both to scale to very high-dimensional neural data.
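To ground the model class, below is a minimal sketch of a recurrent linear model with a generalised-linear (Poisson) observation process: the latent state is updated linearly from the previous state and the previously observed spike counts, so the log-likelihood is evaluated exactly in a single forward pass. The dimensions, random parameters, and exponential link are illustrative assumptions, not fitted values from the paper.

```python
import numpy as np
from scipy.special import gammaln

# Minimal sketch of a recurrent linear model (RLM) with a generalised-linear
# (Poisson) observation process: a low-dimensional state is updated linearly
# from the previous state and the previously observed spike counts, so the
# log-likelihood is computed exactly in a single forward pass. Dimensions,
# random parameters, and the exponential link are illustrative assumptions.

rng = np.random.default_rng(6)
n_neurons, n_latents, T = 50, 5, 500

A = 0.9 * np.eye(n_latents)                              # latent dynamics
B = 0.05 * rng.standard_normal((n_latents, n_neurons))   # input from spikes
C = 0.3 * rng.standard_normal((n_neurons, n_latents))    # readout
d = np.log(0.1) * np.ones(n_neurons)                     # baseline log-rate

spikes = rng.poisson(0.1, (T, n_neurons))                # placeholder counts

def rlm_log_likelihood(spikes, A, B, C, d):
    """Exact Poisson log-likelihood of counts under the recurrent linear model."""
    x = np.zeros(A.shape[0])
    total = 0.0
    for t in range(len(spikes)):
        log_rate = C @ x + d
        rate = np.exp(log_rate)
        y = spikes[t]
        total += np.sum(y * log_rate - rate - gammaln(y + 1))
        x = A @ x + B @ y                                # state driven by observed spikes
    return total

print(f"log-likelihood: {rlm_log_likelihood(spikes, A, B, C, d):.1f}")
```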