40 Results (showing 1-10)

Artificial neural networks learn faster if they are initialized well. Good initializations can generate high-dimensional macroscopic dynamics with long timescales. It is not known if biological neural networks have similar properties. Here we show that the eigenvalue spectrum and dynamical properties of large-scale neural recordings in mice (two-photon and electrophysiology) are similar to those produced by linear dynamics governed by a random symmetric matrix that is critically normalized. An exception was hippocampal area CA1: population activity in this area resembled an efficient, uncorrelated neural code, which may be optimized for information storage capacity. Global emergent activity modes persisted in simulations with sparse, clustered or spatial connectivity. We hypothesize that the spontaneous neural activity reflects a critical initialization of whole-brain neural circuits that is optimized for learning time-dependent tasks.
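The model class described above can be sketched numerically. The snippet below (an illustrative sketch, not the paper's code; matrix size and seed are arbitrary) builds a random symmetric matrix, normalizes it so its largest eigenvalue is 1 (the "critical" normalization), and shows how the linear dynamics dx/dt = -x + Jx then acquire arbitrarily long timescales 1/(1 - λ) for eigenvalues near 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Random symmetric (GOE-like) connectivity matrix.
A = rng.standard_normal((n, n))
J = (A + A.T) / np.sqrt(2)
# Critical normalization: rescale so the largest eigenvalue is 1.
J /= np.linalg.eigvalsh(J).max()
eigs = np.linalg.eigvalsh(J)
# Linear dynamics dx/dt = -x + J x relax along eigenmode k with
# time constant 1 / (1 - lambda_k); modes with lambda near 1 are
# arbitrarily slow, producing long emergent timescales.
timescales = 1.0 / (1.0 - eigs[eigs < 1.0])
print(eigs.max())  # ~1.0 by construction
print(timescales.max())
```

The eigenvalue density follows the Wigner semicircle on roughly [-1, 1]; only the upper edge contributes the slow, high-variance modes.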
Genetically encoded fluorescent calcium indicators allow cellular-resolution recording of physiology. However, bright, genetically targetable indicators that can be multiplexed with existing tools in vivo are needed for simultaneous imaging of multiple signals. Here we describe WHaloCaMP, a modular chemigenetic calcium indicator built from bright dye-ligands and protein sensor domains. Fluorescence change in WHaloCaMP results from reversible quenching of the bound dye via a strategically placed tryptophan. WHaloCaMP is compatible with rhodamine dye-ligands that fluoresce from green to near-infrared, including several that efficiently label the brain in animals. When bound to a near-infrared dye-ligand, WHaloCaMP shows a 7× increase in fluorescence intensity and a 2.1-ns increase in fluorescence lifetime upon calcium binding. We use WHaloCaMP1a to image Ca2+ responses in vivo in flies and mice, to perform three-color multiplexed functional imaging of hundreds of neurons and astrocytes in zebrafish larvae and to quantify Ca2+ concentration using fluorescence lifetime imaging microscopy (FLIM).
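The FLIM-based quantification mentioned above rests on a standard two-state analysis: with distinct lifetimes for the calcium-free and calcium-bound sensor, the measured mean lifetime gives the fraction bound, and a binding isotherm converts that to concentration. The sketch below is generic FLIM arithmetic with hypothetical numbers (only the 2.1-ns lifetime increase comes from the abstract; `tau_free` and `kd` are illustrative, not WHaloCaMP's published calibration).

```python
# Two-state lifetime unmixing (generic FLIM analysis; tau_free and
# kd are hypothetical, not WHaloCaMP's calibrated values).
tau_free = 0.5    # ns, assumed calcium-free lifetime
tau_bound = 2.6   # ns, calcium-bound lifetime (+2.1 ns, per abstract)

def fraction_bound(tau_obs):
    """Fraction of sensor in the calcium-bound state from the mean
    lifetime, assuming linear mixing of the two states."""
    return (tau_obs - tau_free) / (tau_bound - tau_free)

def calcium_concentration(tau_obs, kd=200.0):
    """Convert fraction bound to [Ca2+] (nM) via a one-site binding
    isotherm with a hypothetical Kd."""
    f = fraction_bound(tau_obs)
    return kd * f / (1.0 - f)

print(fraction_bound(1.55))          # midpoint lifetime -> ~0.5
print(calcium_concentration(1.55))   # -> ~Kd, i.e. ~200 nM
```

Because lifetime, unlike intensity, is independent of indicator concentration and excitation power, this readout is absolute rather than relative.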
Artificial neural networks (ANNs) have been shown to predict neural responses in primary visual cortex (V1) better than classical models. However, this performance often comes at the expense of simplicity and interpretability. Here we introduce a new class of simplified ANN models that can predict over 70% of the response variance of V1 neurons. To achieve this high performance, we first recorded a new dataset of over 29,000 neurons responding to up to 65,000 natural image presentations in mouse V1. We found that ANN models required only two convolutional layers for good performance, with a relatively small first layer. We further found that we could make the second layer small without loss of performance, by fitting individual "minimodels" to each neuron. Similar simplifications applied to models of monkey V1 neurons. We show that the minimodels can be used to gain insight into how stimulus invariance arises in biological neurons. Preprint: https://www.biorxiv.org/content/early/2024/07/02/2024.06.30.601394
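The architecture described above (two convolutional layers, the second one small, followed by a per-neuron readout) can be sketched in a few lines. The code below is a toy forward pass with random weights to show the shape of such a model; layer sizes are illustrative and the weights are not fit to data as in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d(x, w):
    """Valid-mode 2D convolution: x is (C, H, W), w is (K, C, kh, kw)."""
    K, C, kh, kw = w.shape
    H, W = x.shape[1] - kh + 1, x.shape[2] - kw + 1
    out = np.zeros((K, H, W))
    for i in range(H):
        for j in range(W):
            # Contract channel and kernel dims of w against the patch.
            out[:, i, j] = np.tensordot(w, x[:, i:i+kh, j:j+kw], axes=3)
    return out

def relu(x):
    return np.maximum(x, 0.0)

# Toy "minimodel": small first conv layer, tiny second conv layer,
# and a linear-nonlinear readout predicting one neuron's response.
# Weights are random here; the paper fits them to recorded V1 activity.
img = rng.standard_normal((1, 24, 24))        # grayscale stimulus
w1 = rng.standard_normal((8, 1, 5, 5)) * 0.1  # layer 1: 8 filters
w2 = rng.standard_normal((4, 8, 3, 3)) * 0.1  # layer 2: 4 filters
h1 = relu(conv2d(img, w1))
h2 = relu(conv2d(h1, w2))
readout = rng.standard_normal(h2.size) * 0.01
response = relu(readout @ h2.ravel())          # predicted firing rate
print(h2.shape, float(response))
```

Keeping the second layer tiny is what makes each minimodel inspectable: invariances of the modeled neuron must be built from a handful of identifiable intermediate features.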
Circulating tumor cells (CTCs) are critical biomarkers for predicting therapy response and survival in breast cancer patients. Multicellular CTC clusters exhibit enhanced metastatic potential, yet their detection and characterization are constrained by low frequency in blood samples and reliance on labor-intensive manual analysis. Advancing these methods could significantly improve prognostic evaluation and therapeutic strategies.

Leveraging FDA-approved CellSearch technology and single-cell sequencing, we analyzed 2,853 blood specimens, longitudinally collected from 1,358 patients with advanced cancer (breast, prostate, etc.) and other diseases. Integrating machine learning and deep learning tools, we developed a novel CTCpose platform to automate detection and analysis of CTCs, immune cells, and their interactions. Using artificial intelligence (AI)-driven image analysis, we extracted over 270 cellular and nuclear features, including intensity, morphometry, Fourier shape, gradient/edge, and Haralick texture features of cytokeratin, CD45, and DAPI expression patterns, enabling precise characterization of CTCs, white blood cells (WBCs), CTC clusters, and their interactions with immune cells.

The CTCpose platform enabled automated identification of CTCs, WBCs, homotypic CTC clusters, heterogeneous CTC-WBC clusters, and immune cell clusters, providing comprehensive insights into cell morphology, biomarker expression, and spatial organization. These features correlated with patient survival, disease progression, and treatment response. Our findings highlight the clinical significance of CTC-immune cell interactions and dynamic alterations of CTCs (singles and clusters) and underscore their potential in stratifying patients into distinct risk categories.

This study demonstrates the transformative potential of deep learning in overcoming the limitations of traditional CTC detection methods and integrating imaging data with large cohorts of patient data.
By automating and enhancing the analysis of CTC-immune cell interactions, we present a robust framework for developing predictive models with direct clinical relevance. This work opens avenues for personalized treatment strategies, underscoring the impact of AI in advancing precision oncology.

Yuanfei Sun, Joshua R. Squires, Andrew Hoffmann, Youbin Zhang, Allegra Minor, Anmol Singh, David Scholten, Chengsheng Mao, Yuan Luo, Deyu Fang, William J. Gradishar, Massimo Cristofanilli, Carsen Stringer, Huiping Liu. Deep learning enables automated detection of circulating tumor cell-immune cell interactions with prognostic insights in cancer [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2025; Part 1 (Regular Abstracts); 2025 Apr 25-30; Chicago, IL. Philadelphia (PA): AACR; Cancer Res 2025;85(8_Suppl_1):Abstract nr 2420.
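Per-cell feature extraction of the kind described (intensity and morphometry features per segmented cell) follows a common pattern: mask a cell, then reduce its pixels to summary statistics. The sketch below is generic and illustrative, not the CTCpose API; the feature names and the toy rectangular "cell" are assumptions.

```python
import numpy as np

def cell_features(image, mask):
    """Toy intensity/morphometry features for one cell, given an
    intensity image and a boolean mask (generic, not CTCpose)."""
    pix = image[mask]
    ys, xs = np.nonzero(mask)
    area = int(mask.sum())
    # Extent: fraction of the bounding box the cell fills,
    # a crude shape descriptor.
    extent = area / ((np.ptp(ys) + 1) * (np.ptp(xs) + 1))
    return {
        "mean_intensity": float(pix.mean()),
        "max_intensity": float(pix.max()),
        "area": area,
        "extent": float(extent),
    }

rng = np.random.default_rng(2)
image = rng.random((32, 32))
mask = np.zeros((32, 32), dtype=bool)
mask[10:20, 12:18] = True            # hypothetical 10x6-pixel cell
feats = cell_features(image, mask)
print(feats["area"])   # 60
```

Real pipelines add many more descriptors per channel (Fourier shape coefficients, edge gradients, Haralick textures), but each reduces to the same mask-then-summarize pattern.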
Spatial multiomic profiling has been transforming the understanding of local tumor ecosystems. Yet spatial analyses of tumor-immune interactions at systemic levels, such as in liquid biopsies, remain challenging. Over the last 10 years, we have longitudinally collected nearly 3,000 patient blood samples for multiplexed imaging of circulating tumor cells (CTCs) and their interactions with white blood cells (WBCs). Multicellular CTC clusters exhibit enhanced metastatic potential. The detection of CTCs and characterization of tumor immune ecosystems are constrained by (1) the low frequency of CTCs in blood samples; (2) the limited channels of current imaging methods, which cannot resolve specific immune cell lineages; and (3) reliance on labor-intensive manual analysis, which slows the discovery of biomarkers for predicting therapy response and survival in cancer patients. We hypothesize that an AI-powered platform will accelerate the lineage and spatial characterization of tumor immune ecosystems for prognostic evaluations.
Simultaneous recordings from hundreds or thousands of neurons are becoming routine because of innovations in instrumentation, molecular tools, and data processing software. Such recordings can be analyzed with data science methods, but it is not immediately clear what methods to use or how to adapt them for neuroscience applications. We review, categorize, and illustrate diverse analysis methods for neural population recordings and describe how these methods have been used to make progress on longstanding questions in neuroscience. We review a variety of approaches, ranging from the mathematically simple to the complex, from exploratory to hypothesis-driven, and from recently developed to more established methods. We also illustrate some of the common statistical pitfalls in analyzing large-scale neural data.
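One of the simplest methods in the range the review covers is principal component analysis of a neurons-by-timepoints activity matrix, typically the first exploratory step on a large-scale recording. The sketch below simulates a population driven by low-dimensional latent dynamics plus private noise and recovers the latent dimensionality via SVD; all sizes and noise levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_time = 200, 1000
# Simulated recording: 3 shared latent signals plus private noise.
latents = rng.standard_normal((3, n_time))
loading = rng.standard_normal((n_neurons, 3))
X = loading @ latents + 0.5 * rng.standard_normal((n_neurons, n_time))
# Center each neuron, then PCA via SVD.
Xc = X - X.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
var_explained = S**2 / np.sum(S**2)
print(var_explained[:5])  # sharp drop after the 3rd component
```

The elbow in the variance-explained curve after the third component reflects the simulated ground truth; on real recordings, deciding where (or whether) such an elbow exists is exactly the kind of statistical pitfall the review discusses.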
Pretrained neural network models for biological segmentation can provide good out-of-the-box results for many image types. However, such models do not allow users to adapt the segmentation style to their specific needs and can perform suboptimally for test images that are very different from the training images. Here we introduce Cellpose 2.0, a new package that includes an ensemble of diverse pretrained models as well as a human-in-the-loop pipeline for rapid prototyping of new custom models. We show that models pretrained on the Cellpose dataset can be fine-tuned with only 500-1,000 user-annotated regions of interest (ROIs) to perform nearly as well as models trained on entire datasets with up to 200,000 ROIs. A human-in-the-loop approach further reduced the required user annotation to 100-200 ROIs, while maintaining high-quality segmentations. We provide software tools, such as an annotation graphical user interface, a model zoo, and a human-in-the-loop pipeline, to facilitate the adoption of Cellpose 2.0.
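The human-in-the-loop pipeline is a loop in the literal sense: the current model predicts masks, the user corrects them, and the corrected masks become training data for the next round. The sketch below shows only that control flow with placeholder functions; `segment`, `correct`, and `finetune` are stand-ins, not the Cellpose API, and the integer "quality" is a toy stand-in for model accuracy.

```python
# Schematic human-in-the-loop control flow (placeholders only,
# not the Cellpose API).

def segment(model, image):
    # Placeholder: the current model predicts masks for an image.
    return {"image": image, "masks": model["quality"]}

def correct(prediction):
    # Placeholder for the user fixing predicted ROIs in a GUI.
    return prediction["masks"] + 1

def finetune(model, labels):
    # Placeholder: each round of corrected labels improves the model.
    return {"quality": model["quality"] + sum(labels)}

model = {"quality": 0}
annotations = []
for image in range(3):                  # three annotation rounds
    pred = segment(model, image)
    annotations.append(correct(pred))   # human corrects predictions
    model = finetune(model, annotations)

print(model["quality"], annotations)
```

The key property the loop captures is that annotation effort shrinks each round: as the model improves, the user corrects predictions rather than labeling from scratch, which is how the required annotation drops to 100-200 ROIs.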
Modern algorithms for biological segmentation can match inter-human agreement in annotation quality. This, however, is not a performance bound: a hypothetical human-consensus segmentation could cut error rates in half. To obtain a model that generalizes better, we adapted the pretrained transformer backbone of a foundation model (SAM) to the Cellpose framework. The resulting Cellpose-SAM model substantially outperforms inter-human agreement and approaches the human-consensus bound. We further increase generalization performance by making the model robust to channel shuffling, cell size, shot noise, downsampling, and isotropic and anisotropic blur. The new model can be readily adopted into the Cellpose ecosystem, which includes fine-tuning, human-in-the-loop training, image restoration, and 3D segmentation approaches. These properties establish Cellpose-SAM as a foundation model for biological segmentation.
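Robustness to the listed perturbations is typically obtained by applying them as training-time augmentations. The snippet below gives generic implementations of three of them (channel shuffling, shot noise, downsampling) on a toy image; these are illustrative, not Cellpose-SAM's augmentation code.

```python
import numpy as np

rng = np.random.default_rng(4)

def shuffle_channels(img):
    """Randomly permute channels so the model cannot rely on a
    fixed channel order (e.g. which stain is in which channel)."""
    return img[rng.permutation(img.shape[0])]

def shot_noise(img, scale=50.0):
    """Poisson (shot) noise at a given photon-count scale."""
    return rng.poisson(img * scale) / scale

def downsample(img, factor=2):
    """Naive strided downsampling to simulate lower resolution."""
    return img[:, ::factor, ::factor]

img = rng.random((3, 64, 64))   # channels x height x width
aug = downsample(shot_noise(shuffle_channels(img)))
print(aug.shape)                # (3, 32, 32)
```

Training on such perturbed copies forces the model to extract features that survive acquisition differences, which is what "generalization" means across microscopes, magnifications, and staining conventions.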