15 Janelia Publications

Showing 1-10 of 15 results
    09/17/20 | Microtubule Tracking in Electron Microscopy Volumes
    Nils Eckstein, Julia Buhmann, Matthew Cook, Jan Funke
    International Conference on Medical Image Computing and Computer-Assisted Intervention. 2020 Sep 17.

    We present a method for microtubule tracking in electron microscopy volumes. Our method first identifies a sparse set of voxels that likely belong to microtubules. Similar to prior work, we then enumerate potential edges between these voxels, which we represent in a candidate graph. Tracks of microtubules are found by selecting nodes and edges in the candidate graph by solving a constrained optimization problem incorporating biological priors on microtubule structure. For this, we present a novel integer linear programming formulation, which results in speed-ups of three orders of magnitude and an increase of 53% in accuracy compared to prior art (evaluated on three 1.2 × 4 × 4 µm volumes of Drosophila neural tissue). We also propose a scheme to solve the optimization problem in a block-wise fashion, which allows distributed tracking and is necessary to process very large electron microscopy volumes. Finally, we release a benchmark dataset for microtubule tracking, here used for training, testing and validation, consisting of eight 30 × 1000 × 1000 voxel blocks (1.2 × 4 × 4 µm) of densely annotated microtubules in the CREMI data set.
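
    To illustrate the candidate-graph selection step the abstract describes, here is a toy sketch: nodes are candidate microtubule voxels, edges carry a score, and we pick the highest-scoring edge subset such that every node has degree at most 2 (so selected edges form simple tracks). The paper solves this with an integer linear program; this brute-force enumeration of a made-up four-node instance is for illustration only, and all node names and scores are invented.

```python
from itertools import combinations

# Toy candidate graph: candidate voxels (nodes) and scored potential links.
nodes = ["a", "b", "c", "d"]
edges = {("a", "b"): 2.0, ("b", "c"): 1.5, ("a", "c"): -1.0, ("c", "d"): 1.0}

def degree_ok(selected):
    # Tracks are simple paths, so no node may have more than two incident edges.
    deg = {n: 0 for n in nodes}
    for u, v in selected:
        deg[u] += 1
        deg[v] += 1
    return all(d <= 2 for d in deg.values())

# Brute-force the tiny instance (the paper uses ILP for this at scale).
best, best_score = (), float("-inf")
for k in range(len(edges) + 1):
    for subset in combinations(edges, k):
        if degree_ok(subset):
            score = sum(edges[e] for e in subset)
            if score > best_score:
                best, best_score = subset, score

print(sorted(best))   # edges forming the highest-scoring tracks
print(best_score)
```

The negative-score edge is rejected, and the remaining edges chain into a single track a-b-c-d.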

    09/14/20 | Dense neuronal reconstruction through X-ray holographic nano-tomography.
    Kuan AT, Phelps JS, Thomas LA, Nguyen TM, Han J, Chen C, Azevedo AW, Tuthill JC, Funke J, Cloetens P, Pacureanu A, Lee WA
    Nature Neuroscience. 2020 Sep 14. doi: 10.1038/s41593-020-0704-9

    Imaging neuronal networks provides a foundation for understanding the nervous system, but resolving dense nanometer-scale structures over large volumes remains challenging for light microscopy (LM) and electron microscopy (EM). Here we show that X-ray holographic nano-tomography (XNH) can image millimeter-scale volumes with sub-100-nm resolution, enabling reconstruction of dense wiring in Drosophila melanogaster and mouse nervous tissue. We performed correlative XNH and EM to reconstruct hundreds of cortical pyramidal cells and show that more superficial cells receive stronger synaptic inhibition on their apical dendrites. By combining multiple XNH scans, we imaged an adult Drosophila leg with sufficient resolution to comprehensively catalog mechanosensory neurons and trace individual motor axons from muscles to the central nervous system. To accelerate neuronal reconstructions, we trained a convolutional neural network to automatically segment neurons from XNH volumes. Thus, XNH bridges a key gap between LM and EM, providing a new avenue for neural circuit discovery.

    09/10/20 | Inpainting Networks Learn to Separate Cells in Microscopy Images
    Wolf S, Hamprecht FA, Funke J
    British Machine Vision Conference. 2020 Sep.

    Deep neural networks trained to inpaint partially occluded images show a deep understanding of image composition and have even been shown to remove objects from images convincingly. In this work, we investigate how this implicit knowledge of image composition can be used to separate cells in densely populated microscopy images. We propose a measure for the independence of two image regions given a fully self-supervised inpainting network and separate objects by maximizing this independence. We evaluate our method on two cell segmentation datasets and show that cells can be separated in a completely unsupervised way. Furthermore, combined with simple foreground detection, our method yields instance segmentation of similar quality to fully supervised methods.
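
    The independence idea can be sketched in miniature: two regions are considered independent if inpainting one of them gives the same result whether or not the other region is visible. In this toy version a trivial "inpainter" fills a masked pixel with the mean of the visible pixels, standing in for the self-supervised inpainting network the paper actually uses; the image values and regions are made up.

```python
# One-dimensional toy "image": pixels 0-2 belong to one cell, pixel 3 to another.
image = [1.0, 1.0, 1.0, 9.0]
region_a, region_b = [0], [3]   # we will inpaint region_a
rest = [1, 2]

def fill_value(image, visible):
    # Stand-in inpainter: predict a masked pixel as the mean of visible pixels.
    return sum(image[i] for i in visible) / len(visible)

# Inpaint region_a once with region_b visible, once with it hidden.
with_b = fill_value(image, rest + region_b)
without_b = fill_value(image, rest)

# A large gap means region_b influences the inpainting of region_a,
# i.e. the two regions are not independent.
dependence = abs(with_b - without_b)
print(dependence)
```

Maximizing independence over candidate partitions then corresponds to cutting the image where this dependence score is smallest.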

    09/02/20 | Neurotransmitter Classification from Electron Microscopy Images at Synaptic Sites in Drosophila
    Eckstein N, Bates AS, Du M, Hartenstein V, Jefferis GS, Funke J
    bioRxiv. 2020 Sep 2. doi: 10.1101/2020.06.12.148775

    High-resolution electron microscopy (EM) of nervous systems enables the reconstruction of neural circuits at the level of individual synaptic connections. However, for invertebrates, such as Drosophila melanogaster, it has so far been unclear whether the phenotype of neurons or synapses alone is sufficient to predict specific functional properties such as neurotransmitter identity. Here, we show that in Drosophila melanogaster artificial convolutional neural networks can confidently predict the type of neurotransmitter released at a synaptic site from EM images alone. The network successfully discriminates between six types of neurotransmitters (GABA, glutamate, acetylcholine, serotonin, dopamine, and octopamine) with an average accuracy of 87% for individual synapses and 94% for entire neurons, assuming each neuron expresses only one neurotransmitter. This result is surprising as there are often no obvious cues in the EM images that human observers can use to predict neurotransmitter identity. We apply the proposed method to quantify whether, similar to the ventral nervous system (VNS), all hemilineages in the Drosophila melanogaster brain express only one fast-acting transmitter within their neurons. To test this principle, we predict the neurotransmitter identity of all identified synapses in 89 hemilineages in the Drosophila melanogaster adult brain. While the majority of our predictions show homogeneity of fast-acting neurotransmitter identity within a single hemilineage, we identify a set of hemilineages that express two fast-acting neurotransmitters with high statistical significance. As a result, our predictions are inconsistent with the hypothesis that all neurons within a hemilineage express the same fast-acting neurotransmitter in the brain of Drosophila melanogaster.
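
    The jump from 87% per-synapse to 94% per-neuron accuracy relies on the one-transmitter-per-neuron assumption: many noisy per-synapse predictions can be pooled into a single per-neuron call. A minimal sketch of that aggregation step, here as a simple majority vote (the paper's exact pooling may differ), with invented neuron names and predictions:

```python
from collections import Counter

# Hypothetical per-synapse classifier outputs, grouped by neuron.
synapse_predictions = {
    "neuron_1": ["GABA", "GABA", "glutamate", "GABA"],
    "neuron_2": ["acetylcholine", "acetylcholine"],
}

def neuron_call(preds):
    # Majority vote across a neuron's synapses: under the
    # one-transmitter-per-neuron assumption, occasional per-synapse
    # misclassifications are voted out.
    return Counter(preds).most_common(1)[0][0]

calls = {n: neuron_call(p) for n, p in synapse_predictions.items()}
print(calls)  # {'neuron_1': 'GABA', 'neuron_2': 'acetylcholine'}
```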

    01/11/20 | Reconstruction of motor control circuits in adult Drosophila using automated transmission electron microscopy
    Maniates-Selvin JT, Hildebrand DG, Graham BJ, Kuan AT, Thomas LA, Nguyen T, Buhmann J, Azevedo AW, Shanny BL, Funke J, Tuthill JC, Lee WA
    bioRxiv. 2020 Jan 11. doi: 10.1101/2020.01.10.902478

    Many animals use coordinated limb movements to interact with and navigate through the environment. To investigate circuit mechanisms underlying locomotor behavior, we used serial-section electron microscopy (EM) to map synaptic connectivity within a neuronal network that controls limb movements. We present a synapse-resolution EM dataset containing the ventral nerve cord (VNC) of an adult female Drosophila melanogaster. To generate this dataset, we developed GridTape, a technology that combines automated serial-section collection with automated high-throughput transmission EM. Using this dataset, we reconstructed 507 motor neurons, including all those that control the legs and wings. We show that a specific class of leg sensory neurons directly synapse onto the largest-caliber motor neuron axons on both sides of the body, representing a unique feedback pathway for fast limb control. We provide open access to the dataset and reconstructions registered to a standard atlas to permit matching of cells between EM and light microscopy data. We also provide GridTape instrumentation designs and software to make large-scale EM data acquisition more accessible and affordable to the scientific community.

    07/01/19 | Large scale image segmentation with structured loss based deep learning for connectome reconstruction.
    Funke J, Tschopp FD, Grisaitis W, Sheridan A, Singh C, Saalfeld S, Turaga SC
    IEEE Transactions on Pattern Analysis and Machine Intelligence. 2019 Jul 1;41(7):1669-80. doi: 10.1109/TPAMI.2018.2835450

    We present a method combining affinity prediction with region agglomeration, which improves significantly upon the state of the art of neuron segmentation from electron microscopy (EM) in accuracy and scalability. Our method consists of a 3D U-net, trained to predict affinities between voxels, followed by iterative region agglomeration. We train using a structured loss based on MALIS, encouraging topologically correct segmentations obtained from affinity thresholding. Our extension consists of two parts: First, we present a quasi-linear method to compute the loss gradient, improving over the original quadratic algorithm. Second, we compute the gradient in two separate passes to avoid spurious gradient contributions in early training stages. Our predictions are accurate enough that simple learning-free percentile-based agglomeration outperforms more involved methods used earlier on inferior predictions. We present results on three diverse EM datasets, achieving relative improvements over previous results of 27%, 15%, and 250%. Our findings suggest that a single method can be applied to both nearly isotropic block-face EM data and anisotropic serial sectioned EM data. The runtime of our method scales linearly with the size of the volume and achieves a throughput of ~2.6 seconds per megavoxel, qualifying our method for the processing of very large datasets.
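
    The agglomeration stage described above can be sketched with a union-find structure: voxels are merged along edges whose predicted affinity is high enough. The paper scores boundaries with percentiles of affinities before merging; this simplified version merges on a single fixed threshold, and all affinities and the threshold value are made up.

```python
# Union-find with path halving over a tiny 1-D chain of five voxels.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving keeps trees shallow
        x = parent[x]
    return x

def union(x, y):
    parent[find(x)] = find(y)

# Predicted affinities between neighboring voxels (invented values).
affinities = {(0, 1): 0.9, (1, 2): 0.8, (2, 3): 0.2, (3, 4): 0.95}
threshold = 0.5

# Merge voxels across boundaries with high predicted affinity.
for (u, v), aff in affinities.items():
    if aff > threshold:
        union(u, v)

# Collect the resulting segments.
segments = {}
for voxel in range(5):
    segments.setdefault(find(voxel), []).append(voxel)

print(sorted(sorted(s) for s in segments.values()))  # [[0, 1, 2], [3, 4]]
```

The low-affinity boundary between voxels 2 and 3 survives as a segment border, splitting the chain into two segments.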

    03/19/20 | Automatic Detection of Synaptic Partners in a Whole-Brain Drosophila EM Dataset
    Buhmann J, Sheridan A, Gerhard S, Krause R, Nguyen T, Heinrich L, Schlegel P, Lee WA, Wilson R, Saalfeld S, Jefferis G, Bock D, Turaga S, Cook M, Funke J
    bioRxiv. 2020 Mar 19. doi: 10.1101/2019.12.12.874172

    The study of neural circuits requires the reconstruction of neurons and the identification of synaptic connections between them. To scale the reconstruction to the size of whole-brain datasets, semi-automatic methods are needed to solve these tasks. Here, we present an automatic method for synaptic partner identification in insect brains, which uses convolutional neural networks to identify post-synaptic sites and their pre-synaptic partners. The networks can be trained from human-generated point annotations alone and require only simple post-processing to obtain final predictions. We used our method to extract 244 million putative synaptic partners in the fifty-teravoxel full adult fly brain (FAFB) electron microscopy (EM) dataset and evaluated its accuracy on 146,643 synapses from 702 neurons with a total cable length of 312 mm in four different brain regions. The predicted synaptic connections can be used together with a neuron segmentation to infer a connectivity graph with high accuracy: 96% of edges between connected neurons are correctly classified as weakly connected (less than five synapses) and strongly connected (at least five synapses). Our synaptic partner predictions for the FAFB dataset are publicly available, together with a query library allowing automatic retrieval of up- and downstream neurons.
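
    Turning per-synapse partner predictions into the connectivity graph evaluated above amounts to counting predicted synapses per ordered neuron pair and labeling each edge weak (fewer than five synapses) or strong (five or more), the thresholds quoted in the abstract. A minimal sketch with an invented synapse list:

```python
# Hypothetical predicted synaptic partners as (presynaptic, postsynaptic) pairs.
synapses = [("A", "B")] * 7 + [("A", "C")] * 2 + [("B", "C")] * 5

# Count synapses per ordered neuron pair.
counts = {}
for pre, post in synapses:
    counts[(pre, post)] = counts.get((pre, post), 0) + 1

# Label each connectivity-graph edge by the paper's 5-synapse threshold.
edges = {pair: ("strong" if n >= 5 else "weak") for pair, n in counts.items()}
print(edges)  # {('A', 'B'): 'strong', ('A', 'C'): 'weak', ('B', 'C'): 'strong'}
```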

    11/13/18 | Analyzing image segmentation for connectomics.
    Plaza SM, Funke J
    Frontiers in Neural Circuits. 2018;12:102. doi: 10.3389/fncir.2018.00102

    Automatic image segmentation is critical to scale up electron microscope (EM) connectome reconstruction. To this end, segmentation competitions, such as CREMI and SNEMI, exist to help researchers evaluate segmentation algorithms with the goal of improving them. Because generating ground truth is time-consuming, these competitions often fail to capture the challenges in segmenting larger datasets required in connectomics. More generally, the common metrics for EM image segmentation do not emphasize impact on downstream analysis and are often not very useful for isolating problem areas in the segmentation. For example, they do not capture connectivity information and often over-rate the quality of a segmentation as we demonstrate later. To address these issues, we introduce a novel strategy to enable evaluation of segmentation at large scales in both a supervised setting, where ground truth is available, and an unsupervised setting. To achieve this, we first introduce new metrics more closely aligned with the use of segmentation in downstream analysis and reconstruction. In particular, these include synapse connectivity and completeness metrics that provide both meaningful and intuitive interpretations of segmentation quality as it relates to the preservation of neuron connectivity. Also, we propose measures of segmentation correctness and completeness with respect to the percentage of "orphan" fragments and the concentrations of self-loops formed by segmentation failures, which are helpful in analysis and can be computed without ground truth. The introduction of new metrics intended to be used for practical applications involving large datasets necessitates a scalable software ecosystem, which is a critical contribution of this paper. To this end, we introduce a scalable, flexible software framework that enables integration of several different metrics and provides mechanisms to evaluate and debug differences between segmentations. We also introduce visualization software to help users explore the various metrics collected. We evaluate our framework on two relatively large public ground-truth datasets providing novel insights on example segmentations.
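
    One of the ground-truth-free metrics mentioned above, the percentage of "orphan" fragments, can be sketched directly: an orphan is a segment touching no synapse, and a high orphan rate flags an over-fragmented reconstruction. Segment and synapse data here are invented; the paper's framework computes this at scale.

```python
# Hypothetical segmentation output and synapse assignments.
segments = ["s1", "s2", "s3", "s4"]
synapse_segments = {"s1", "s3"}   # segments touching at least one synapse

# Orphans: segments with no synaptic connection at all.
orphans = [s for s in segments if s not in synapse_segments]
orphan_pct = 100.0 * len(orphans) / len(segments)
print(orphan_pct)  # 50.0
```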

    09/26/18 | Synaptic cleft segmentation in non-isotropic volume electron microscopy of the complete Drosophila brain.
    Heinrich L, Funke J, Pape C, Nunez-Iglesias J, Saalfeld S
    Medical Image Computing and Computer Assisted Intervention – MICCAI 2018. 2018 Sep 26:317-25. doi: 10.1007/978-3-030-00934-2_36

    Neural circuit reconstruction at single synapse resolution is increasingly recognized as crucially important to decipher the function of biological nervous systems. Volume electron microscopy in serial transmission or scanning mode has been demonstrated to provide the necessary resolution to segment or trace all neurites and to annotate all synaptic connections. 
    Automatic annotation of synaptic connections has been done successfully in near isotropic electron microscopy of vertebrate model organisms. Results on non-isotropic data in insect models, however, are not yet on par with human annotation. 
    We designed a new 3D-U-Net architecture to optimally represent isotropic fields of view in non-isotropic data. We used regression on a signed distance transform of manually annotated synaptic clefts of the CREMI challenge dataset to train this model and observed significant improvement over the state of the art. 
    We developed open source software for optimized parallel prediction on very large volumetric datasets and applied our model to predict synaptic clefts in a 50-teravoxel dataset of the complete Drosophila brain. Our model generalizes well to areas far away from where training data was available.
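
    The regression target described above, a signed distance transform of the binary cleft mask, can be sketched in one dimension: distances are negative inside the mask and positive outside (the sign convention here is arbitrary). This brute-force version is illustrative only; real pipelines use optimized 3D distance transforms.

```python
def signed_distance(mask):
    # Brute-force 1-D signed distance transform of a binary mask:
    # negative distance to the nearest background pixel inside the mask,
    # positive distance to the nearest foreground pixel outside it.
    inside = [i for i, v in enumerate(mask) if v]
    outside = [i for i, v in enumerate(mask) if not v]
    out = []
    for i, v in enumerate(mask):
        if v:
            out.append(-min(abs(i - j) for j in outside))
        else:
            out.append(min(abs(i - j) for j in inside))
    return out

print(signed_distance([0, 0, 1, 1, 1, 0]))  # [2, 1, -1, -2, -1, 1]
```

Regressing this smooth target instead of raw binary labels gives the network a gradient signal even near the cleft boundary.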

    09/26/18 | Synaptic partner prediction from point annotations in insect brains.
    Buhmann J, Krause R, Lentini RC, Eckstein N, Cook M, Turaga SC, Funke J
    MICCAI 2018: Medical Image Computing and Computer Assisted Intervention. 2018 Sep 26. doi: 10.1007/978-3-030-00934-2_35

    High-throughput electron microscopy allows recording of large stacks of neural tissue with sufficient resolution to extract the wiring diagram of the underlying neural network. Current efforts to automate this process focus mainly on the segmentation of neurons. However, in order to recover a wiring diagram, synaptic partners need to be identified as well. This is especially challenging in insect brains like Drosophila melanogaster, where one presynaptic site is associated with multiple postsynaptic elements. Here we propose a 3D U-Net architecture to directly identify pairs of voxels that are pre- and postsynaptic to each other. To that end, we formulate the problem of synaptic partner identification as a classification problem on long-range edges between voxels to encode both the presence of a synaptic pair and its direction. This formulation allows us to directly learn from synaptic point annotations instead of more expensive voxel-based synaptic cleft or vesicle annotations. We evaluate our method on the MICCAI 2016 CREMI challenge and improve over the current state of the art, producing 3% fewer errors than the next best method.
