41 Janelia Publications
Showing 21-30 of 41 results
We present a method combining affinity prediction with region agglomeration, which improves significantly upon the state of the art of neuron segmentation from electron microscopy (EM) in accuracy and scalability. Our method consists of a 3D U-net, trained to predict affinities between voxels, followed by iterative region agglomeration. We train using a structured loss based on MALIS, encouraging topologically correct segmentations obtained from affinity thresholding. Our extension consists of two parts: First, we present a quasi-linear method to compute the loss gradient, improving over the original quadratic algorithm. Second, we compute the gradient in two separate passes to avoid spurious gradient contributions in early training stages. Our predictions are accurate enough that simple learning-free percentile-based agglomeration outperforms more involved methods used earlier on inferior predictions. We present results on three diverse EM datasets, achieving relative improvements over previous results of 27%, 15%, and 250%. Our findings suggest that a single method can be applied to both nearly isotropic block-face EM data and anisotropic serial sectioned EM data. The runtime of our method scales linearly with the size of the volume and achieves a throughput of ~2.6 seconds per megavoxel, qualifying our method for the processing of very large datasets.
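The learning-free percentile-based agglomeration described above can be sketched as a greedy merge over a fragment adjacency graph. The function below is a minimal toy, not the authors' released code: names, the percentile `q`, and the stopping threshold are illustrative assumptions.

```python
# Toy sketch of percentile-based agglomeration: each pair of touching
# fragments carries the predicted voxel affinities on its shared boundary;
# pairs are merged greedily in order of their q-th percentile affinity
# until the score drops below a threshold. Illustrative only.
import numpy as np

def find(parent, x):
    # Union-find with path halving.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def percentile_agglomerate(n_fragments, boundary_affinities,
                           q=75, threshold=0.5):
    """boundary_affinities: {(u, v): [affinity, ...]} voxel affinities
    on the shared boundary of fragments u and v (values in [0, 1])."""
    parent = list(range(n_fragments))
    # Score each boundary by the q-th percentile of its affinities.
    edges = sorted(((np.percentile(affs, q), uv)
                    for uv, affs in boundary_affinities.items()),
                   reverse=True)
    for score, (u, v) in edges:
        if score < threshold:
            break  # all remaining boundaries score even lower
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:
            parent[rv] = ru
    return [find(parent, i) for i in range(n_fragments)]
```

A high percentile makes a merge depend on the strongest part of a boundary, which is robust to a few low-affinity voxels along an otherwise confident interface.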
Imaging large samples at the resolution offered by electron microscopy is typically achieved by sequentially recording overlapping tiles that are later combined to seamless mosaics. Mosaics of serial sections are aligned to reconstruct three-dimensional volumes. To achieve this, image distortions and artifacts as introduced during sample preparation or imaging need to be removed. In this chapter, we will discuss typical sources of artifacts and distortion, and we will learn how to use the open source software TrakEM2 to correct them.
Optical and electron microscopy have made tremendous inroads toward understanding the complexity of the brain. However, optical microscopy offers insufficient resolution to reveal subcellular details, and electron microscopy lacks the throughput and molecular contrast to visualize specific molecular constituents over millimeter-scale or larger dimensions. We combined expansion microscopy and lattice light-sheet microscopy to image the nanoscale spatial relationships between proteins across the thickness of the mouse cortex or the entire Drosophila brain. These included synaptic proteins at dendritic spines, myelination along axons, and presynaptic densities at dopaminergic neurons in every fly brain region. The technology should enable statistically rich, large-scale studies of neural development, sexual dimorphism, degree of stereotypy, and structural correlations to behavior or neural activity, all with molecular contrast.
Neural circuit reconstruction at single synapse resolution is increasingly recognized as crucially important to decipher the function of biological nervous systems. Volume electron microscopy in serial transmission or scanning mode has been demonstrated to provide the necessary resolution to segment or trace all neurites and to annotate all synaptic connections.
Automatic annotation of synaptic connections has been done successfully in near isotropic electron microscopy of vertebrate model organisms. Results on non-isotropic data in insect models, however, are not yet on par with human annotation.
We designed a new 3D-U-Net architecture to optimally represent isotropic fields of view in non-isotropic data. We used regression on a signed distance transform of manually annotated synaptic clefts of the CREMI challenge dataset to train this model and observed significant improvement over the state of the art.
We developed open source software for optimized parallel prediction on very large volumetric datasets and applied our model to predict synaptic clefts in a 50 tera-voxels dataset of the complete Drosophila brain. Our model generalizes well to areas far away from where training data was available.
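The regression target mentioned above, a signed distance transform of binary cleft annotations, can be constructed as follows. This is a minimal sketch assuming a simple truncation scheme; the truncation value and function name are illustrative, not taken from the released software.

```python
# Sketch: turn binary synaptic-cleft labels into a signed-distance
# regression target (positive inside clefts, negative outside),
# truncated so the network only has to predict values near boundaries.
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_target(cleft_mask, truncation=10.0):
    """cleft_mask: boolean volume of annotated clefts."""
    inside = distance_transform_edt(cleft_mask)    # distance to background
    outside = distance_transform_edt(~cleft_mask)  # distance to clefts
    sdt = inside - outside
    return np.clip(sdt, -truncation, truncation)
```

At prediction time, thresholding the regressed distances at zero recovers a binary cleft map.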
The fruit fly Drosophila melanogaster is an important model organism for neuroscience with a wide array of genetic tools that enable the mapping of individual neurons and neural subtypes. Brain templates are essential for comparative biological studies because they enable analyzing many individuals in a common reference space. Several central brain templates exist for Drosophila, but every one is either biased, uses sub-optimal tissue preparation, is imaged at low resolution, or does not account for artifacts. No publicly available Drosophila ventral nerve cord template currently exists. In this work, we created high-resolution templates of the Drosophila brain and ventral nerve cord using the best-available technologies for imaging, artifact correction, stitching, and template construction using groupwise registration. We evaluated our central brain template against the four most competitive, publicly available brain templates and demonstrate that ours enables more accurate registration with fewer local deformations in less time.
Drosophila melanogaster has a rich repertoire of innate and learned behaviors. Its 100,000-neuron brain is a large but tractable target for comprehensive neural circuit mapping. Only electron microscopy (EM) enables complete, unbiased mapping of synaptic connectivity; however, the fly brain is too large for conventional EM. We developed a custom high-throughput EM platform and imaged the entire brain of an adult female fly at synaptic resolution. To validate the dataset, we traced brain-spanning circuitry involving the mushroom body (MB), which has been extensively studied for its role in learning. All inputs to Kenyon cells (KCs), the intrinsic neurons of the MB, were mapped, revealing a previously unknown cell type, postsynaptic partners of KC dendrites, and unexpected clustering of olfactory projection neurons. These reconstructions show that this freely available EM volume supports mapping of brain-spanning circuits, which will significantly accelerate Drosophila neuroscience.
Two successful approaches for the segmentation of biomedical images are (1) the selection of segment candidates from a merge-tree, and (2) the clustering of small superpixels by solving a Multi-Cut problem. In this paper, we introduce a model that unifies both approaches. Our model, the Candidate Multi-Cut (CMC), allows joint selection and clustering of segment candidates from a merge-tree. This way, we overcome the respective limitations of the individual methods: (1) the space of possible segmentations is not constrained to candidates of a merge-tree, and (2) the decision for clustering can be made on candidates larger than superpixels, using features over larger contexts. We solve the optimization problem of selecting and clustering of candidates using an integer linear program. On datasets of 2D light microscopy of cell populations and 3D electron microscopy of neurons, we show that our method generalizes well and generates more accurate segmentations than merge-tree or Multi-Cut methods alone.
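The selection half of the candidate model described above, choosing non-overlapping candidates from a merge-tree, can be illustrated with a small brute-force search (the paper solves this jointly with clustering as an integer linear program; the brute force, function names, and scoring here are illustrative simplifications).

```python
# Brute-force toy of candidate selection from a merge-tree: pick a set of
# candidates with disjoint leaf sets (i.e. at most one per root-to-leaf
# path) that maximizes the summed candidate scores. The clustering /
# Multi-Cut half of the joint model is omitted for brevity.
from itertools import combinations

def best_selection(candidates, scores, leaves_of):
    """candidates: list of candidate ids; leaves_of[c]: set of leaf
    superpixels covered by candidate c. Candidates conflict iff their
    leaf sets overlap (they lie on a common merge-tree path)."""
    best, best_score = frozenset(), float("-inf")
    for r in range(len(candidates) + 1):
        for subset in combinations(candidates, r):
            covered, ok = set(), True
            for c in subset:
                if covered & leaves_of[c]:
                    ok = False  # two candidates overlap
                    break
                covered |= leaves_of[c]
            if ok:
                s = sum(scores[c] for c in subset)
                if s > best_score:
                    best, best_score = frozenset(subset), s
    return best, best_score
```

An ILP formulation replaces this exponential search with path constraints ("sum of selected candidates on each root-to-leaf path ≤ 1") that a solver handles at scale.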
Large electron microscopy image datasets for connectomics are typically composed of thousands to millions of partially overlapping two-dimensional images (tiles), which must be registered into a coherent volume prior to further analysis. A common registration strategy is to find matching features between neighboring and overlapping image pairs, followed by a numerical estimation of optimal image deformation using a so-called solver program.
Existing solvers are inadequate for large data volumes, and inefficient for small-scale image registration.
In this work, an efficient and accurate matrix-based solver method is presented. A linear system is constructed that combines minimization of feature-pair square distances with explicit constraints in a regularization term. In the absence of reliable priors for regularization, we show how to construct a rigid-model approximation to use as prior. The linear system is solved using available computer programs, whose performance on typical registration tasks we briefly compare, and to which future scale-up is delegated. Our method is applied to the joint alignment of 2.67 million images, with more than 200 million point-pairs and has been used for successfully aligning the first full adult fruit fly brain.
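The structure of such a solver, feature-pair square distances plus a regularization term pulling toward a prior, can be seen in a 1D, translation-only toy. The real system handles millions of tiles and richer transformation models; the function and parameter names here are illustrative assumptions.

```python
# Toy matrix-based solver: estimate one offset per tile so that matched
# feature pairs line up, regularized toward a prior (e.g. a rigid-model
# approximation). Solved as one linear least-squares system.
import numpy as np

def solve_offsets(n_tiles, matches, prior, lam=0.01):
    """matches: list of (i, j, d) meaning a feature in tile i matches one
    in tile j with measured displacement d, i.e. we want x[j] - x[i] = d."""
    rows, rhs = [], []
    for i, j, d in matches:
        r = np.zeros(n_tiles)
        r[i], r[j] = -1.0, 1.0   # one least-squares row per point pair
        rows.append(r)
        rhs.append(d)
    # Regularization rows: sqrt(lam) * (x - prior) appended to the system.
    A = np.vstack(rows + [np.sqrt(lam) * np.eye(n_tiles)])
    b = np.array(rhs + list(np.sqrt(lam) * np.asarray(prior, dtype=float)))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

The regularization both encodes the prior and makes the system full rank, so a single translation gauge freedom does not leave the solution underdetermined.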
The most sophisticated existing methods to generate 3D isotropic super-resolution (SR) from non-isotropic electron microscopy (EM) are based on learned dictionaries. Unfortunately, none of the existing methods generate practically satisfying results. For 2D natural images, recently developed super-resolution methods that use deep learning have been shown to significantly outperform the previous state of the art. We have adapted one of the most successful architectures (FSRCNN) for 3D super-resolution, and compared its performance to a 3D U-Net architecture that has not been used previously to generate super-resolution. We trained both architectures on artificially downscaled isotropic ground truth from focused ion beam milling scanning EM (FIB-SEM) and tested the performance for various hyperparameter settings. Our results indicate that both architectures can successfully generate 3D isotropic super-resolution from non-isotropic EM, with the U-Net performing consistently better. We propose several promising directions for practical application.
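The training setup described above, artificially downscaled isotropic ground truth, amounts to degrading the z axis of an isotropic FIB-SEM volume while keeping the original as the target. A minimal sketch (the pooling scheme and downscaling factor are assumptions for illustration):

```python
# Sketch: build a (low-res, target) training pair by average-pooling an
# isotropic volume along z to mimic anisotropic serial-section EM.
import numpy as np

def make_training_pair(isotropic_volume, factor=4):
    """isotropic_volume: (z, y, x) array; returns (low_res, target)."""
    d, h, w = isotropic_volume.shape
    d_crop = (d // factor) * factor          # drop trailing slices
    vol = isotropic_volume[:d_crop]
    low_res = vol.reshape(d_crop // factor, factor, h, w).mean(axis=1)
    return low_res, vol
```

A network trained on such pairs learns to invert the simulated anisotropy, which is then applied to real non-isotropic EM.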
The integration of cellular and molecular structural data is key to understanding the function of macromolecular assemblies and complexes in their in vivo context. Here we report on the outcomes of a workshop that discussed how to integrate structural data from a range of public archives. The workshop identified two main priorities: the development of tools and file formats to support segmentation (that is, the decomposition of a three-dimensional volume into regions that can be associated with defined objects), and the development of tools to support the annotation of biological structures.