45 Publications
We address the problem of inferring the number of independently blinking fluorescent light emitters when only their combined intensity contributions can be observed at each timepoint. This problem occurs regularly in light microscopy of objects smaller than the diffraction limit, where one wishes to count the number of fluorescently labelled subunits. Our proposed solution directly models the photophysics of the system, as well as the blinking kinetics of the fluorescent emitters, as a fully differentiable hidden Markov model. Given a trace of intensity over time, our model jointly estimates the parameters of the per-emitter intensity distribution, the blinking rates, and a posterior distribution over the total number of fluorescent emitters. We show that our model is consistently more accurate and increases the range of countable subunits by a factor of two compared to current state-of-the-art methods, which count based on autocorrelation and blinking frequency. Furthermore, we demonstrate that our model can be used to investigate the effect of blinking kinetics on counting ability, and can therefore inform experimental conditions that maximize counting accuracy.
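The counting idea above can be illustrated with a simplified, non-differentiable sketch: treat the number of simultaneously active emitters as a hidden Markov chain over k = 0..n, score the intensity trace with the forward algorithm for each candidate n, and normalize the resulting marginal likelihoods into a posterior over n. The Gaussian intensity model, the binomial blinking transitions, and all parameter values here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from scipy.stats import binom, norm

def transition_matrix(n, p_on, p_off):
    # T[j, k] = P(k emitters active at t+1 | j active at t):
    # each active emitter stays on w.p. 1 - p_off, each dark one turns on w.p. p_on
    T = np.zeros((n + 1, n + 1))
    for j in range(n + 1):
        for k in range(n + 1):
            for a in range(max(0, k - (n - j)), min(j, k) + 1):
                # a of the j active emitters stay on; k - a of the n - j dark ones light up
                T[j, k] += binom.pmf(a, j, 1 - p_off) * binom.pmf(k - a, n - j, p_on)
    return T

def log_likelihood(trace, n, p_on, p_off, mu, sigma):
    # forward algorithm with per-step rescaling; intensity ~ Normal(k * mu, sigma)
    T = transition_matrix(n, p_on, p_off)
    pi = binom.pmf(np.arange(n + 1), n, p_on / (p_on + p_off))  # stationary on-probability
    emit = norm.pdf(trace[:, None], np.arange(n + 1) * mu, sigma)
    alpha, ll = pi * emit[0], 0.0
    for t in range(len(trace)):
        if t > 0:
            alpha = (alpha @ T) * emit[t]
        c = alpha.sum()
        alpha, ll = alpha / c, ll + np.log(c)
    return ll

def posterior_over_n(trace, n_max, **params):
    # uniform prior over n = 1..n_max; softmax of the marginal log-likelihoods
    lls = np.array([log_likelihood(trace, n, **params) for n in range(1, n_max + 1)])
    w = np.exp(lls - lls.max())
    return w / w.sum()
```

For a trace that repeatedly climbs to three unit-intensity steps, candidate counts n = 1 and n = 2 cannot explain the highest observed level, so their posterior mass collapses toward zero.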
Animal behavior is principally expressed through neural control of muscles. Therefore, understanding how the brain controls behavior requires mapping neuronal circuits all the way to motor neurons. We have previously established technology to collect large-volume electron microscopy datasets of neural tissue and fully reconstruct the morphology of the neurons and their chemical synaptic connections throughout the volume. Using these tools, we generated a dense wiring diagram, or connectome, for a large portion of the Drosophila central brain. However, in most animals, including the fly, the majority of motor neurons are located outside the brain in a neural center closer to the body, i.e., the mammalian spinal cord or the insect ventral nerve cord (VNC). In this paper, we extend our effort to map full neural circuits for behavior by generating a connectome of the VNC of a male fly.
The forthcoming assembly of the adult Drosophila melanogaster central brain connectome, containing over 125,000 neurons and 50 million synaptic connections, provides a template for examining sensory processing throughout the brain. Here, we create a leaky integrate-and-fire computational model of the entire Drosophila brain, based on neural connectivity and neurotransmitter identity, to study circuit properties of feeding and grooming behaviors. We show that activation of sugar-sensing or water-sensing gustatory neurons in the computational model accurately predicts neurons that respond to tastes and are required for feeding initiation. Computational activation of neurons in the feeding region of the Drosophila brain predicts those that elicit motor neuron firing, a testable hypothesis that we validate by optogenetic activation and behavioral studies. Moreover, computational activation of different classes of gustatory neurons makes accurate predictions of how multiple taste modalities interact, providing circuit-level insight into aversive and appetitive taste processing. Our computational model predicts that the sugar and water pathways form a partially shared appetitive feeding initiation pathway, which our calcium imaging and behavioral experiments confirm. Additionally, we applied this model to mechanosensory circuits and found that computational activation of mechanosensory neurons predicts activation of a small set of neurons comprising the antennal grooming circuit that do not overlap with gustatory circuits, and accurately describes the circuit response upon activation of different mechanosensory subtypes. Our results demonstrate that modeling brain circuits purely from connectivity and predicted neurotransmitter identity generates experimentally testable hypotheses and can accurately describe complete sensorimotor transformations.
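A minimal version of the leaky integrate-and-fire dynamics used in such whole-brain simulations can be sketched as follows. The weight matrix, input currents, thresholds, and time constants are toy assumptions for a three-neuron chain, not the published model's parameters.

```python
import numpy as np

def simulate_lif(W, input_current, steps=200, dt=1.0, tau=10.0,
                 v_thresh=1.0, v_reset=0.0):
    # W[i, j] is the synaptic weight from neuron j onto neuron i
    # (negative entries would model inhibitory connections)
    n = W.shape[0]
    v = np.zeros(n)
    spikes = np.zeros(n, dtype=int)
    for _ in range(steps):
        fired = v >= v_thresh            # threshold crossing
        spikes += fired
        v[fired] = v_reset               # reset neurons that spiked
        # leak toward rest plus synaptic drive from neurons that just spiked
        v += dt / tau * (-v + W @ fired + input_current)
    return spikes

# toy feed-forward chain: external drive to neuron 0, which excites 1, which excites 2
W = np.zeros((3, 3))
W[1, 0] = W[2, 1] = 12.0
spikes = simulate_lif(W, np.array([2.0, 0.0, 0.0]), steps=100)
```

Activating only the "sensory" neuron at the head of the chain is enough to make all three neurons fire, mirroring how computational activation of gustatory neurons predicts downstream motor neuron firing.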
Connectomics has become essential for the study of brain function, yet for most research groups it remains prohibitively costly in imaging time, data storage, and analysis. Here, we present an imaging, processing, and analysis pipeline for multi-resolution image acquisition and circuit reconstruction. Applied to the central complex of six insect species, we obtained global projectomes at cellular resolution (40-50 nm) with embedded local connectomes describing key computational compartments at synaptic resolution (8-12 nm). We provide standardized protocols for volume EM sample preparation, image acquisition, and image alignment, combined with existing methods for µCT block trimming, automatic segmentation, synapse detection, collaborative skeleton tracing with CATMAID, and segmentation proofreading via CAVE. We validated our workflow by reconstructing head direction cells across all six insect species, which revealed deep conservation at the level of cell types, cell numbers, and projection patterns, as well as circuit-level specializations. Overall, our pipeline democratizes comparative connectomics by making this method accessible to small research groups with modest resources.
Most existing deep learning-based cell tracking methods rely on supervised learning, requiring large-scale annotated datasets that are often unavailable in real-world scenarios. Moreover, many approaches lack tools and methods for correcting mispredicted links or incorporating corrections through fine-tuning. These limitations contribute to the limited adoption of deep learning-based tracking methods in the life sciences, where manual tracking remains the predominant approach. To reduce the annotation burden and enable model training without extensive labeled data, we introduce a loss function for unsupervised training. Our method leverages the predictable dynamics inherent in many biological processes, providing an initialization that does not require an annotated dataset. We further investigate how minimal user-provided annotations can refine tracking accuracy. To this end, we propose an active learning framework that selectively identifies uncertain decisions within the tracking graph, allowing for efficient annotation of the most informative data points. We evaluate our approach on two microscopy datasets, demonstrating the effectiveness of both our unsupervised training strategy and active learning scheme in improving tracking performance. Our implementation and reproducible experiments are available at github.com/funkelab/attrackt and github.com/funkelab/attrackt_experiments, respectively.
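One common acquisition function for the kind of active learning described above is margin sampling: query the linking decisions whose predicted probability is closest to 0.5, where an annotation is most informative. The sketch below is a generic illustration of that criterion, not the attrackt implementation.

```python
import heapq

def most_uncertain_edges(edge_probs, k):
    # edge_probs: {(node_t, node_t_plus_1): predicted link probability}
    # return the k edges whose probability is closest to 0.5, i.e. the
    # decisions in the tracking graph where a human annotation helps most
    return heapq.nsmallest(k, edge_probs, key=lambda e: abs(edge_probs[e] - 0.5))

edge_probs = {("a", "b"): 0.9, ("a", "c"): 0.52, ("b", "c"): 0.1, ("b", "d"): 0.45}
queries = most_uncertain_edges(edge_probs, 2)  # -> [("a", "c"), ("b", "d")]
```

The confident links (0.9 and 0.1) are left alone; annotation effort goes to the two ambiguous edges, which can then be fed back as sparse supervision for fine-tuning.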
Automatic image segmentation is critical to scale up electron microscope (EM) connectome reconstruction. To this end, segmentation competitions, such as CREMI and SNEMI, exist to help researchers evaluate segmentation algorithms with the goal of improving them. Because generating ground truth is time-consuming, these competitions often fail to capture the challenges of segmenting the larger datasets required in connectomics. More generally, the common metrics for EM image segmentation do not emphasize impact on downstream analysis and are often not very useful for isolating problem areas in the segmentation. For example, they do not capture connectivity information and often over-rate the quality of a segmentation, as we demonstrate later. To address these issues, we introduce a novel strategy to enable evaluation of segmentation at large scales, both in a supervised setting, where ground truth is available, and in an unsupervised setting. To achieve this, we first introduce new metrics more closely aligned with the use of segmentation in downstream analysis and reconstruction. In particular, these include synapse connectivity and completeness metrics that provide both meaningful and intuitive interpretations of segmentation quality as it relates to the preservation of neuron connectivity. We also propose measures of segmentation correctness and completeness with respect to the percentage of "orphan" fragments and the concentration of self-loops formed by segmentation failures, which are helpful in analysis and can be computed without ground truth. The introduction of new metrics intended for practical applications involving large datasets necessitates a scalable software ecosystem, which is a critical contribution of this paper. To this end, we introduce a scalable, flexible software framework that enables integration of several different metrics and provides mechanisms to evaluate and debug differences between segmentations. We also introduce visualization software to help users consume the various metrics collected. We evaluate our framework on two relatively large public ground-truth datasets, providing novel insights on example segmentations.
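As a rough illustration of an "orphan fragment" style metric that needs no ground truth, one can measure the fraction of synapses assigned to fragments too small to plausibly be complete neurons. The size-threshold criterion below is an assumption for illustration, not necessarily the paper's exact definition.

```python
def orphan_synapse_fraction(segment_sizes, synapse_segments, size_threshold):
    # segment_sizes: {segment_id: voxel count}
    # synapse_segments: segment id each synapse site falls on
    # a fragment below size_threshold is treated as an "orphan" here,
    # i.e. too small to be a complete neuron (illustrative criterion)
    orphans = {s for s, size in segment_sizes.items() if size < size_threshold}
    n_orphan = sum(1 for s in synapse_segments if s in orphans)
    return n_orphan / len(synapse_segments)

fraction = orphan_synapse_fraction(
    segment_sizes={1: 100, 2: 5, 3: 50},
    synapse_segments=[1, 1, 2, 3, 2],
    size_threshold=10,
)  # -> 0.4
```

Because the statistic needs only the segmentation itself and detected synapse locations, it can flag problem regions in arbitrarily large volumes where no ground truth exists.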
We present a method to automatically identify and track nuclei in time-lapse microscopy recordings of entire developing embryos. The method combines deep learning and global optimization. On a mouse dataset, it reconstructs 75.8% of cell lineages spanning 1 h, as compared to 31.8% for the competing method. Our approach improves understanding of where and when cell fate decisions are made in developing embryos, tissues, and organs.
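The global-optimization half of such a tracking method can be caricatured as a min-cost assignment between detections in consecutive frames; real pipelines solve a richer optimization over a whole candidate graph that also handles cell divisions, appearances, and disappearances. Using plain Euclidean distance as the only linking cost is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(detections_t, detections_t1, max_dist=20.0):
    # pairwise Euclidean distances between nucleus centers in two frames
    D = np.linalg.norm(detections_t[:, None, :] - detections_t1[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(D)  # min-cost bipartite matching
    # discard matches that are implausibly far apart
    return [(int(r), int(c)) for r, c in zip(rows, cols) if D[r, c] <= max_dist]

links = link_frames(np.array([[0.0, 0.0], [10.0, 10.0]]),
                    np.array([[11.0, 9.0], [1.0, 1.0]]))  # -> [(0, 1), (1, 0)]
```

The global matching correctly swaps the two nuclei even though each detection's nearest index differs between frames, which a greedy per-detection nearest-neighbor rule can get wrong.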
