41 Publications
The brain’s capabilities rely on both the molecular properties of individual cells and their interactions across brain-wide networks. However, relating gene expression to activity in individual neurons across the entire brain has remained out of reach. Here we developed an experimental-computational platform, WARP, for whole-brain imaging of neuronal activity during behavior, expansion-assisted spatial transcriptomics, and cellular-level registration of these two modalities. Through joint analysis of whole-brain neuronal activity during multiple behaviors, cellular gene expression, and anatomy, we identified functions of molecularly defined populations, including luminance coding in a cckb-pou4f2 midbrain population and task-structured activity in pvalb7-eomesa hippocampal-like neurons, and defined over 2,000 other function-gene-anatomy subpopulations. Analysis of this unprecedented multimodal dataset also revealed that most gene-matched neurons showed stronger activity correlations, highlighting a brain-wide role for gene expression in functional organization. WARP establishes a foundational platform and open-access dataset for cross-experiment discovery, high-throughput function-to-gene mapping, unification of cell biology and systems neuroscience, and scalable circuit modeling at the whole-brain scale.
Neural recordings using optical methods have improved dramatically. For example, we demonstrate here recordings of over 100,000 neurons from the mouse cortex obtained with a standard commercial microscope. To process such large datasets, we developed Suite2p, a collection of efficient algorithms for motion correction, cell detection, activity extraction and quality control. We also developed new approaches to benchmark performance on these tasks. Our GPU-accelerated non-rigid motion correction substantially outperforms alternative methods, while running over five times faster. For cell detection, Suite2p outperforms the CNMF algorithm in Caiman and Fiola, finding more cells and producing fewer false positives, while running in a fraction of the time. We also introduce quality control steps for users to evaluate performance on their own data, while offering alternative algorithms for specialized types of recordings such as those from one-photon and voltage imaging.
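A minimal sketch of running this pipeline through the Suite2p Python API, assuming a folder of raw TIFF stacks; the data path and sensor timescale are placeholders to adjust for a given dataset:

```python
# Minimal Suite2p pipeline sketch (paths and parameter values are placeholders).
import suite2p

ops = suite2p.default_ops()              # default settings for the full pipeline
ops['nonrigid'] = True                   # enable GPU-accelerated non-rigid motion correction
ops['tau'] = 1.0                         # sensor decay timescale in seconds (e.g. GCaMP6m)

db = {'data_path': ['/path/to/tiffs']}   # folder(s) containing the raw TIFF stacks
output_ops = suite2p.run_s2p(ops=ops, db=db)  # motion correction, cell detection,
                                              # activity extraction, and QC outputs
```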
fMRI signals were traditionally seen as slow and sampled on the order of seconds, but recent technological advances have enabled much faster sampling rates. We hypothesized that high-frequency fMRI signals can capture spontaneous neural activity that indexes brain states. Using fast fMRI (TR = 378 ms) and simultaneous EEG in 27 humans drifting between sleep and wakefulness, we found that fMRI spectral power increased during NREM sleep (compared to wakefulness) across several frequency ranges as fast as 1 Hz. This fast fMRI power was correlated with canonical arousal-linked EEG rhythms (alpha and delta), with spatiotemporal correlation patterns for each rhythm reflecting a combination of shared arousal dynamics and rhythm-specific neural signatures. Using machine learning, we found that alpha and delta EEG rhythms can be decoded from fast fMRI signals in subjects held out from the training set, showing that fMRI as fast as 0.9 Hz (alpha) and 0.7 Hz (delta) contains reliable neurally coupled information that generalizes across individuals. Finally, we demonstrate that this fast fMRI acquisition allows EEG rhythms to be decoded from 3.8-s windows of fMRI data. These results reveal that high-frequency fMRI signals are coupled to dynamically varying brain states, and that fast fMRI sampling allows for more temporally precise quantification of spontaneous neural activity than previously thought possible.
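To make the sampling arithmetic concrete: a TR of 378 ms corresponds to a sampling rate of roughly 2.65 Hz and a Nyquist limit of about 1.3 Hz, which is what makes 1 Hz fMRI fluctuations measurable at all. A minimal sketch of estimating spectral power in that fast band with SciPy, using a synthetic time series in place of real BOLD data:

```python
# Sketch: spectral power of a fast-fMRI time series (synthetic placeholder data).
import numpy as np
from scipy.signal import welch

TR = 0.378                    # repetition time in seconds
fs = 1.0 / TR                 # sampling rate ~2.65 Hz, Nyquist ~1.32 Hz
ts = np.random.randn(2000)    # stand-in for one voxel/ROI BOLD time series

freqs, psd = welch(ts, fs=fs, nperseg=256)       # Welch power spectral density
fast = (freqs >= 0.5) & (freqs <= 1.0)           # "fast" range discussed above
print(f"mean power in 0.5-1.0 Hz band: {psd[fast].mean():.4f}")
```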
The brain exhibits rich oscillatory dynamics that play critical roles in vigilance and cognition, such as the neural rhythms that define sleep. These rhythms continuously fluctuate, signaling major changes in vigilance, but the widespread brain dynamics underlying these oscillations are difficult to investigate. Using simultaneous EEG and fast fMRI in humans who fell asleep inside the scanner, we developed a machine learning approach to investigate which fMRI regions and networks predict fluctuations in neural rhythms. We demonstrated that the rise and fall of alpha (8-12 Hz) and delta (1-4 Hz) power, two canonical EEG bands critically involved in cognition and vigilance, can be predicted from fMRI data in subjects that were not present in the training set. This approach also identified predictive information in individual brain regions across the cortex and subcortex. Finally, we developed an approach to identify shared and unique predictive information, and found that information about alpha rhythms was highly separable across two networks linked to arousal and visual systems. Conversely, delta rhythms were diffusely represented on a large spatial scale, primarily across the cortex. These results demonstrate that EEG rhythms can be predicted from fMRI data, identify large-scale network patterns that underlie alpha and delta rhythms, and establish a novel framework for investigating multimodal brain dynamics.
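The key methodological point is that prediction is evaluated on subjects absent from the training set. A hedged sketch of that cross-validation structure with scikit-learn; the features, target, regressor, and array shapes are illustrative placeholders, not the study's actual pipeline:

```python
# Sketch: predicting EEG band power from fMRI features with held-out subjects.
# All names and shapes are illustrative, not the paper's code.
import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_samples, n_regions, n_subjects = 3000, 100, 15
X = rng.standard_normal((n_samples, n_regions))    # fMRI region features per time point
y = rng.standard_normal(n_samples)                 # EEG alpha power per time point
subjects = rng.integers(0, n_subjects, n_samples)  # subject ID for each sample

# GroupKFold keeps every subject entirely in train OR test, so the score
# measures generalization to unseen individuals, as in the study design.
cv = GroupKFold(n_splits=5)
scores = cross_val_score(Ridge(alpha=1.0), X, y, groups=subjects, cv=cv, scoring='r2')
print("held-out-subject R^2:", scores.mean())
```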
Predictive coding is a theoretical framework that can explain how animals build internal models of their sensory environments by predicting sensory inputs. Predictive coding may capture either spatial or temporal relationships between sensory objects. While the original theory by Rao and Ballard, 1999 described spatial predictive coding, much of the recent experimental data has been interpreted as evidence for temporal predictive coding. Here we directly tested whether the “mismatch” neural responses in sensory cortex are due to a spatial or a temporal internal model. We adopted two common paradigms to study predictive coding: one based on virtual reality and one based on static images. After training mice with repeated visual stimulation for several days, we performed multiple manipulations: (1) introducing a novel stimulus, (2) replacing a stimulus with a novel gray wall, (3) duplicating a trained stimulus, or (4) altering the order of the stimuli. The first two manipulations induced a substantial mismatch response in neural populations of up to 20,000 neurons recorded across primary and higher-order visual cortex, while the third and fourth did not. Thus, a mismatch response only occurred if a new spatial, not temporal, pattern was introduced.
Spatial multiomic profiling is transforming our understanding of local tumor ecosystems. Yet spatial analyses of tumor-immune interactions at systemic levels, such as in liquid biopsies, remain challenging. Over the last 10 years, we have longitudinally collected nearly 3,000 patient blood samples for multiplexed imaging of circulating tumor cells (CTCs) and their interactions with white blood cells (WBCs). Multicellular CTC clusters exhibit enhanced metastatic potential. The detection of CTCs and the characterization of tumor immune ecosystems are constrained by (1) the low frequency of CTCs in blood samples; (2) the limited channel count of current imaging methods, which cannot resolve specific immune-cell lineages; and (3) reliance on labor-intensive manual analysis, which slows the discovery of biomarkers for predicting therapy response and survival in cancer patients. We hypothesize that an AI-powered platform will accelerate the lineage and spatial characterization of tumor immune ecosystems for prognostic evaluations.
Artificial neural networks (ANNs) have been shown to predict neural responses in primary visual cortex (V1) better than classical models. However, this performance often comes at the expense of simplicity and interpretability. Here we introduce a new class of simplified ANN models that can predict over 70% of the response variance of V1 neurons. To achieve this high performance, we first recorded a new dataset of over 29,000 neurons in mouse V1 responding to up to 65,000 natural image presentations. We found that ANN models required only two convolutional layers for good performance, with a relatively small first layer. We further found that we could make the second layer small without loss of performance by fitting individual "minimodels" to each neuron. Similar simplifications applied to models of monkey V1 neurons. We show that the minimodels can be used to gain insight into how stimulus invariance arises in biological neurons. Preprint: https://www.biorxiv.org/content/early/2024/07/02/2024.06.30.601394
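A hedged sketch of what such a two-convolutional-layer "minimodel" could look like in PyTorch. The layer widths, kernel sizes, and pooled linear readout are illustrative guesses, not the published architecture (see the preprint for the actual model):

```python
# Sketch of a two-conv-layer "minimodel" predicting one neuron's response.
# Layer widths, kernel sizes, and the readout are illustrative guesses.
import torch
import torch.nn as nn

class MiniModel(nn.Module):
    def __init__(self, n_conv1=16, n_conv2=32):
        super().__init__()
        self.conv1 = nn.Conv2d(1, n_conv1, kernel_size=11, stride=2)  # small shared first layer
        self.conv2 = nn.Conv2d(n_conv1, n_conv2, kernel_size=5)       # small per-neuron second layer
        self.relu = nn.ReLU()
        self.readout = nn.Linear(n_conv2, 1)   # predicted response of a single neuron

    def forward(self, img):                    # img: (batch, 1, H, W) grayscale images
        x = self.relu(self.conv1(img))
        x = self.relu(self.conv2(x))
        x = x.mean(dim=(2, 3))                 # global average pooling over space
        return self.readout(x)

pred = MiniModel()(torch.randn(8, 1, 66, 130))  # batch of 8 natural images
print(pred.shape)                                # torch.Size([8, 1])
```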
Representation learning in neural networks may be implemented with supervised or unsupervised algorithms, distinguished by the availability of instruction. In the sensory cortex, perceptual learning drives neural plasticity [1-13], but it is not known whether this is due to supervised or unsupervised learning. Here we recorded populations of up to 90,000 neurons simultaneously from the primary visual cortex (V1) and higher visual areas (HVAs) while mice learned multiple tasks, as well as during unrewarded exposure to the same stimuli. Similar to previous studies, we found that neural changes in task mice were correlated with their behavioural learning. However, the neural changes were mostly replicated in mice with unrewarded exposure, suggesting that the changes were in fact due to unsupervised learning. The neural plasticity was highest in the medial HVAs and obeyed visual, rather than spatial, learning rules. In task mice only, we found a ramping reward-prediction signal in anterior HVAs, potentially involved in supervised learning. Our neural results predict that unsupervised learning may accelerate subsequent task learning, a prediction that we validated with behavioural experiments. Preprint: https://www.biorxiv.org/content/early/2024/02/27/2024.02.25.581990
All brain functions in animals rely upon neuronal connectivity that is established during early development. Although activity-dependent mechanisms are deemed important for brain development and adult synaptic plasticity, the precise cellular and molecular mechanisms remain largely unknown. This gap in our knowledge of developmental neuronal assembly stems from the complexity of the mammalian brain, in which cell-cell interactions between individual neurons cannot be investigated directly. Here, we used individually identified synaptic partners from Lymnaea stagnalis to interrogate the role of neuronal activity patterns over an extended time period spanning multiple growth stages and synaptogenesis. Using intracellular recordings, microelectrode arrays, and time-lapse imaging, we identified unique patterns of activity throughout neurite outgrowth and synapse formation. Perturbation of voltage-gated Ca2+ channels compromised neuronal growth patterns and implicated a protein kinase A-mediated pathway. Our findings underscore the importance of unique activity patterns in regulating neuronal growth, neurite branching, and synapse formation, and identify the underlying cellular and molecular mechanisms.
Modern algorithms for biological segmentation can match inter-human agreement in annotation quality. This, however, is not a performance bound: a hypothetical human-consensus segmentation could cut error rates in half. To obtain a model that generalizes better, we adapted the pretrained transformer backbone of a foundation model (SAM) to the Cellpose framework. The resulting Cellpose-SAM model substantially outperforms inter-human agreement and approaches the human-consensus bound. We increase generalization performance further by making the model robust to channel shuffling, cell size, shot noise, downsampling, and isotropic and anisotropic blur. The new model can be readily adopted into the Cellpose ecosystem, which includes fine-tuning, human-in-the-loop training, image restoration, and 3D segmentation approaches. These properties establish Cellpose-SAM as a foundation model for biological segmentation.
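A minimal usage sketch, assuming Cellpose version 4 (the Cellpose-SAM release), in which `models.CellposeModel` loads the Cellpose-SAM weights by default; the image path is a placeholder:

```python
# Sketch: segmenting an image with Cellpose-SAM through the Cellpose API
# (assumes Cellpose >= 4; the file path is a placeholder).
from cellpose import models, io

img = io.imread('/path/to/image.tif')       # 2D image, any channel ordering
model = models.CellposeModel(gpu=True)      # loads Cellpose-SAM weights by default in v4
masks, flows, styles = model.eval(img)      # masks: integer label image, one ID per cell
io.save_masks(img, masks, flows, '/path/to/image.tif')  # write masks next to the image
```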
