
33 Janelia Publications

Showing 21-30 of 33 results
    12/01/20 | Dense neuronal reconstruction through X-ray holographic nano-tomography.
    Kuan AT, Phelps JS, Thomas LA, Nguyen TM, Han J, Chen C, Azevedo AW, Tuthill JC, Funke J, Cloetens P, Pacureanu A, Lee WA
    Nature Neuroscience. 2020 Dec;23(12):1637-43. doi: 10.1038/s41593-020-0704-9

    Imaging neuronal networks provides a foundation for understanding the nervous system, but resolving dense nanometer-scale structures over large volumes remains challenging for light microscopy (LM) and electron microscopy (EM). Here we show that X-ray holographic nano-tomography (XNH) can image millimeter-scale volumes with sub-100-nm resolution, enabling reconstruction of dense wiring in Drosophila melanogaster and mouse nervous tissue. We performed correlative XNH and EM to reconstruct hundreds of cortical pyramidal cells and show that more superficial cells receive stronger synaptic inhibition on their apical dendrites. By combining multiple XNH scans, we imaged an adult Drosophila leg with sufficient resolution to comprehensively catalog mechanosensory neurons and trace individual motor axons from muscles to the central nervous system. To accelerate neuronal reconstructions, we trained a convolutional neural network to automatically segment neurons from XNH volumes. Thus, XNH bridges a key gap between LM and EM, providing a new avenue for neural circuit discovery.
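
    To make the last step concrete, the sketch below shows what per-voxel neuron segmentation with a small 3D convolutional network might look like in PyTorch. The architecture, tensor sizes, and random data are illustrative placeholders, not the network used in the paper.

```python
# A minimal sketch (not the authors' architecture) of per-voxel neuron
# segmentation on a 3D volume with PyTorch. All sizes are placeholders.
import torch
import torch.nn as nn

class TinyVoxelNet(nn.Module):
    """Three 3D conv layers mapping a raw volume to foreground logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=1),  # one logit per voxel
        )

    def forward(self, x):
        return self.net(x)

model = TinyVoxelNet()
volume = torch.randn(1, 1, 32, 64, 64)            # (batch, channel, z, y, x)
labels = (torch.rand(1, 1, 32, 64, 64) > 0.5).float()
loss = nn.BCEWithLogitsLoss()(model(volume), labels)
loss.backward()                                   # one illustrative training step
```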

    09/17/20 | Microtubule Tracking in Electron Microscopy Volumes
    Eckstein N, Buhmann J, Cook M, Funke J
    International Conference on Medical Image Computing and Computer-Assisted Intervention. 2020 Sep 17.

    We present a method for microtubule tracking in electron microscopy volumes. Our method first identifies a sparse set of voxels that likely belong to microtubules. Similar to prior work, we then enumerate potential edges between these voxels, which we represent in a candidate graph. Tracks of microtubules are found by selecting nodes and edges in the candidate graph by solving a constrained optimization problem incorporating biological priors on microtubule structure. For this, we present a novel integer linear programming formulation, which results in speed-ups of three orders of magnitude and an increase of 53% in accuracy compared to prior art (evaluated on three 1.2 × 4 × 4 µm volumes of Drosophila neural tissue). We also propose a scheme to solve the optimization problem in a block-wise fashion, which allows distributed tracking and is necessary to process very large electron microscopy volumes. Finally, we release a benchmark dataset for microtubule tracking, here used for training, testing and validation, consisting of eight 30 × 1000 × 1000 voxel blocks (1.2 × 4 × 4 µm) of densely annotated microtubules in the CREMI dataset (https://github.com/nilsec/micron).
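
    The selection step can be pictured as a small integer linear program over a candidate graph. The sketch below, written with PuLP, uses made-up nodes, edge costs, and a single consistency constraint (edge selection implies endpoint selection) purely for illustration; the paper's actual formulation and biological priors are richer.

```python
# A toy candidate-graph selection ILP with PuLP. Costs are invented;
# negative cost means a favourable microtubule link.
import pulp

nodes = [0, 1, 2, 3]
edges = {(0, 1): -2.0, (1, 2): -1.5, (2, 3): 0.5}

prob = pulp.LpProblem("microtubule_tracking", pulp.LpMinimize)
x = {n: pulp.LpVariable(f"node_{n}", cat="Binary") for n in nodes}
y = {e: pulp.LpVariable(f"edge_{e[0]}_{e[1]}", cat="Binary") for e in edges}

# Objective: total cost of selected edges (node priors could be added).
prob += pulp.lpSum(cost * y[e] for e, cost in edges.items())

# Consistency: a selected edge implies both of its endpoint nodes.
for (u, v), var in y.items():
    prob += var <= x[u]
    prob += var <= x[v]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
selected = [e for e, var in y.items() if var.value() == 1]
print("selected edges:", selected)
```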

    09/10/20 | Inpainting Networks Learn to Separate Cells in Microscopy Images
    Wolf S, Hamprecht FA, Funke J
    British Machine Vision Conference. 2020 Sep.

    Deep neural networks trained to inpaint partially occluded images show a deep understanding of image composition and have even been shown to remove objects from images convincingly. In this work, we investigate how this implicit knowledge of image composition can be used to separate cells in densely populated microscopy images. We propose a measure for the independence of two image regions given a fully self-supervised inpainting network and separate objects by maximizing this independence. We evaluate our method on two cell segmentation datasets and show that cells can be separated in a completely unsupervised manner. Furthermore, combined with simple foreground detection, our method yields instance segmentation of similar quality to fully supervised methods.
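
    The independence measure can be illustrated schematically: inpaint region A once with region B visible and once with B hidden, and score how much the reconstruction of A changes. In the sketch below the trained inpainting network is replaced by a trivial local-mean filler, so everything apart from the overall recipe is an assumption.

```python
# Schematic independence score between two image regions. `inpaint` stands
# in for a trained self-supervised inpainting network.
import numpy as np
from scipy.ndimage import uniform_filter

def inpaint(image, mask):
    """Placeholder network: fill masked pixels with a crude local mean."""
    filled = uniform_filter(np.where(mask, 0.0, image), size=7)
    return np.where(mask, filled, image)

def independence(image, region_a, region_b):
    """How little the reconstruction of A changes when B is also hidden."""
    with_b = inpaint(image, region_a)                  # B visible
    without_b = inpaint(image, region_a | region_b)    # B hidden too
    return -np.abs(with_b[region_a] - without_b[region_a]).mean()

image = np.random.rand(64, 64)
a = np.zeros_like(image, bool); a[10:20, 10:20] = True
b = np.zeros_like(image, bool); b[40:50, 40:50] = True
print("independence score:", independence(image, a, b))
```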

    09/02/20 | Neurotransmitter Classification from Electron Microscopy Images at Synaptic Sites in Drosophila
    Eckstein N, Bates AS, Du M, Hartenstein V, Jefferis GS, Funke J
    bioRxiv. 2020 Sep 2. doi: 10.1101/2020.06.12.148775

    High-resolution electron microscopy (EM) of nervous systems enables the reconstruction of neural circuits at the level of individual synaptic connections. However, for invertebrates, such as Drosophila melanogaster, it has so far been unclear whether the phenotype of neurons or synapses alone is sufficient to predict specific functional properties such as neurotransmitter identity. Here, we show that in Drosophila melanogaster artificial convolutional neural networks can confidently predict the type of neurotransmitter released at a synaptic site from EM images alone. The network successfully discriminates between six types of neurotransmitters (GABA, glutamate, acetylcholine, serotonin, dopamine, and octopamine) with an average accuracy of 87% for individual synapses and 94% for entire neurons, assuming each neuron expresses only one neurotransmitter. This result is surprising as there are often no obvious cues in the EM images that human observers can use to predict neurotransmitter identity. We apply the proposed method to quantify whether, similar to the ventral nervous system (VNS), all hemilineages in the Drosophila melanogaster brain express only one fast-acting transmitter within their neurons. To test this principle, we predict the neurotransmitter identity of all identified synapses in 89 hemilineages in the Drosophila melanogaster adult brain. While the majority of our predictions show homogeneity of fast-acting neurotransmitter identity within a single hemilineage, we identify a set of hemilineages that express two fast-acting neurotransmitters with high statistical significance. As a result, our predictions are inconsistent with the hypothesis that all neurons within a hemilineage express the same fast-acting neurotransmitter in the brain of Drosophila melanogaster.
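
    A minimal sketch of the two prediction levels described above, with PyTorch: a small 3D CNN scores a synapse-centered EM crop over the six transmitter classes, and a neuron-level call averages the per-synapse scores. The architecture, crop size, and random inputs are stand-ins, not the paper's.

```python
# Per-synapse classification plus neuron-level aggregation, schematically.
import torch
import torch.nn as nn

CLASSES = ["GABA", "glutamate", "acetylcholine",
           "serotonin", "dopamine", "octopamine"]

classifier = nn.Sequential(
    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
    nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
    nn.Flatten(), nn.Linear(16, len(CLASSES)),
)

crops = torch.randn(40, 1, 16, 64, 64)        # 40 synapse crops of one neuron
with torch.no_grad():
    probs = classifier(crops).softmax(dim=1)  # per-synapse class scores

# Neuron-level call: average the synapse scores, take the best class.
neuron_call = CLASSES[probs.mean(dim=0).argmax().item()]
print("neuron-level prediction:", neuron_call)
```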

    07/01/19 | Large scale image segmentation with structured loss based deep learning for connectome reconstruction.
    Funke J, Tschopp FD, Grisaitis W, Sheridan A, Singh C, Saalfeld S, Turaga SC
    IEEE Transactions on Pattern Analysis and Machine Intelligence. 2019 Jul 1;41(7):1669-80. doi: 10.1109/TPAMI.2018.2835450

    We present a method combining affinity prediction with region agglomeration, which improves significantly upon the state of the art of neuron segmentation from electron microscopy (EM) in accuracy and scalability. Our method consists of a 3D U-net, trained to predict affinities between voxels, followed by iterative region agglomeration. We train using a structured loss based on MALIS, encouraging topologically correct segmentations obtained from affinity thresholding. Our extension consists of two parts: First, we present a quasi-linear method to compute the loss gradient, improving over the original quadratic algorithm. Second, we compute the gradient in two separate passes to avoid spurious gradient contributions in early training stages. Our predictions are accurate enough that simple learning-free percentile-based agglomeration outperforms more involved methods used earlier on inferior predictions. We present results on three diverse EM datasets, achieving relative improvements over previous results of 27%, 15%, and 250%. Our findings suggest that a single method can be applied to both nearly isotropic block-face EM data and anisotropic serial sectioned EM data. The runtime of our method scales linearly with the size of the volume and achieves a throughput of ~2.6 seconds per megavoxel, qualifying our method for the processing of very large datasets.
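
    A heavily condensed sketch of the post-processing pipeline on synthetic affinities: threshold the affinities to obtain fragments, then score touching fragment pairs by a percentile of the affinities across their shared boundary. The affinity convention, the percentile, and the single-merge step below are illustrative assumptions, not the paper's exact procedure.

```python
# Fragments from thresholded affinities, then percentile-based merge scores.
import numpy as np
from scipy.ndimage import label

affs = np.random.rand(3, 32, 32, 32)        # stand-in (z, y, x) affinities
foreground = affs.min(axis=0) > 0.5         # simple affinity threshold
fragments, n = label(foreground)            # connected components = fragments

def boundary_scores(fragments, affs, q=75):
    """q-th percentile of x-affinities between each touching fragment pair."""
    a, b = fragments[..., :-1], fragments[..., 1:]   # x-neighbouring voxels
    touch = (a != b) & (a > 0) & (b > 0)
    collected = {}
    for pair, v in zip(zip(a[touch], b[touch]), affs[2, ..., 1:][touch]):
        collected.setdefault(tuple(sorted(pair)), []).append(v)
    return {p: np.percentile(v, q) for p, v in collected.items()}

scores = boundary_scores(fragments, affs)
if scores:                                  # merge the best-scoring pair first
    best = max(scores, key=scores.get)
    print(f"{n} fragments; best merge candidate {best} at {scores[best]:.2f}")
```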

    11/13/18 | Analyzing image segmentation for connectomics.
    Plaza SM, Funke J
    Frontiers in Neural Circuits. 2018;12:102. doi: 10.3389/fncir.2018.00102

    Automatic image segmentation is critical to scale up electron microscope (EM) connectome reconstruction. To this end, segmentation competitions, such as CREMI and SNEMI, exist to help researchers evaluate segmentation algorithms with the goal of improving them. Because generating ground truth is time-consuming, these competitions often fail to capture the challenges in segmenting larger datasets required in connectomics. More generally, the common metrics for EM image segmentation do not emphasize impact on downstream analysis and are often not very useful for isolating problem areas in the segmentation. For example, they do not capture connectivity information and often over-rate the quality of a segmentation as we demonstrate later. To address these issues, we introduce a novel strategy to enable evaluation of segmentation at large scales, both in a supervised setting, where ground truth is available, and in an unsupervised setting. To achieve this, we first introduce new metrics more closely aligned with the use of segmentation in downstream analysis and reconstruction. In particular, these include synapse connectivity and completeness metrics that provide both meaningful and intuitive interpretations of segmentation quality as it relates to the preservation of neuron connectivity. Also, we propose measures of segmentation correctness and completeness with respect to the percentage of "orphan" fragments and the concentrations of self-loops formed by segmentation failures, which are helpful in analysis and can be computed without ground truth. The introduction of new metrics intended to be used for practical applications involving large datasets necessitates a scalable software ecosystem, which is a critical contribution of this paper. To this end, we introduce a scalable, flexible software framework that enables integration of several different metrics and provides mechanisms to evaluate and debug differences between segmentations. We also introduce visualization software to help users consume the various metrics collected. We evaluate our framework on two relatively large public ground-truth datasets, providing novel insights on example segmentations.
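
    One of the ground-truth-free measures can be illustrated in a few lines: below, an "orphan" is taken (as a simplifying assumption) to be a segment that contains no synapse annotation at all, and both the segmentation and the synapse coordinates are synthetic.

```python
# Fraction of segments touching no synapse annotation, on synthetic inputs.
import numpy as np

segmentation = np.random.randint(1, 50, size=(64, 64, 64))   # segment labels
synapses = np.random.randint(0, 64, size=(200, 3))           # (z, y, x) points

labels_with_synapse = {segmentation[tuple(p)] for p in synapses}
all_labels = set(np.unique(segmentation)) - {0}
orphans = all_labels - labels_with_synapse
print(f"orphan fraction: {len(orphans) / len(all_labels):.2%}")
```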

    09/26/18 | Synaptic cleft segmentation in non-isotropic volume electron microscopy of the complete Drosophila brain.
    Heinrich L, Funke J, Pape C, Nunez-Iglesias J, Saalfeld S
    Medical Image Computing and Computer Assisted Intervention – MICCAI 2018. 2018 Sep 26:317-25. doi: 10.1007/978-3-030-00934-2_36

    Neural circuit reconstruction at single synapse resolution is increasingly recognized as crucially important to decipher the function of biological nervous systems. Volume electron microscopy in serial transmission or scanning mode has been demonstrated to provide the necessary resolution to segment or trace all neurites and to annotate all synaptic connections. 
    Automatic annotation of synaptic connections has been done successfully in near isotropic electron microscopy of vertebrate model organisms. Results on non-isotropic data in insect models, however, are not yet on par with human annotation. 
    We designed a new 3D-U-Net architecture to optimally represent isotropic fields of view in non-isotropic data. We used regression on a signed distance transform of manually annotated synaptic clefts of the CREMI challenge dataset to train this model and observed significant improvement over the state of the art. 
    We developed open source software for optimized parallel prediction on very large volumetric datasets and applied our model to predict synaptic clefts in a 50-teravoxel dataset of the complete Drosophila brain. Our model generalizes well to areas far from where training data was available.
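
    The regression target can be sketched with SciPy's Euclidean distance transform: positive inside the cleft, negative outside, squashed so the network regresses a bounded value. The tanh scaling and the scale constant below are assumptions, not necessarily the paper's exact parametrization.

```python
# Building a signed-distance regression target from a binary cleft mask.
import numpy as np
from scipy.ndimage import distance_transform_edt

cleft = np.zeros((32, 64, 64), bool)
cleft[14:18, 20:44, 20:44] = True                 # toy cleft annotation

inside = distance_transform_edt(cleft)            # distance within the cleft
outside = distance_transform_edt(~cleft)          # distance outside of it
target = np.tanh((inside - outside) / 8.0)        # bounded signed distance
print(target.min(), target.max())                 # roughly -1 .. +1
```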

    09/26/18 | Synaptic partner prediction from point annotations in insect brains.
    Buhmann J, Krause R, Lentini RC, Eckstein N, Cook M, Turaga SC, Funke J
    Medical Image Computing and Computer Assisted Intervention – MICCAI 2018. 2018 Sep 26. doi: 10.1007/978-3-030-00934-2_35

    High-throughput electron microscopy allows recording of large stacks of neural tissue with sufficient resolution to extract the wiring diagram of the underlying neural network. Current efforts to automate this process focus mainly on the segmentation of neurons. However, in order to recover a wiring diagram, synaptic partners need to be identified as well. This is especially challenging in insect brains like Drosophila melanogaster, where one presynaptic site is associated with multiple postsynaptic elements. Here we propose a 3D U-Net architecture to directly identify pairs of voxels that are pre- and postsynaptic to each other. To that end, we formulate the problem of synaptic partner identification as a classification problem on long-range edges between voxels to encode both the presence of a synaptic pair and its direction. This formulation allows us to directly learn from synaptic point annotations instead of more expensive voxel-based synaptic cleft or vesicle annotations. We evaluate our method on the MICCAI 2016 CREMI challenge and improve over the current state of the art, producing 3% fewer errors than the next best method.
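
    A toy rendering of the long-range-edge formulation: candidate edges run from an annotated presynaptic point to every voxel within some radius, and an edge is labelled positive when it hits the matching postsynaptic annotation. The radius, volume shape, and annotations below are invented for illustration.

```python
# Enumerating and labelling long-range candidate edges from point annotations.
import numpy as np

annotations = [((10, 10, 10), (10, 14, 12))]      # (pre, post) point pairs
radius = 6

def candidate_edges(pre, shape=(32, 32, 32)):
    """Long-range edges from `pre` to every voxel within `radius`."""
    z, y, x = np.indices(shape)
    d = np.sqrt((z - pre[0])**2 + (y - pre[1])**2 + (x - pre[2])**2)
    return [tuple(p) for p in np.argwhere((d > 0) & (d <= radius))]

for pre, post in annotations:
    edges = candidate_edges(pre)
    labels = [1 if e == post else 0 for e in edges]   # classification targets
    print(f"{len(edges)} candidate edges, {sum(labels)} positive")
```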

    05/24/18 | The candidate multi-cut for cell segmentation.
    Funke J, Zhang C, Pietzsch T, Gonzalez Ballester MA, Saalfeld S
    2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). 2018 May 24. doi: 10.1109/ISBI.2018.8363658

    Two successful approaches for the segmentation of biomedical images are (1) the selection of segment candidates from a merge-tree, and (2) the clustering of small superpixels by solving a Multi-Cut problem. In this paper, we introduce a model that unifies both approaches. Our model, the Candidate Multi-Cut (CMC), allows joint selection and clustering of segment candidates from a merge-tree. This way, we overcome the respective limitations of the individual methods: (1) the space of possible segmentations is not constrained to candidates of a merge-tree, and (2) the decision for clustering can be made on candidates larger than superpixels, using features over larger contexts. We solve the optimization problem of selecting and clustering of candidates using an integer linear program. On datasets of 2D light microscopy of cell populations and 3D electron microscopy of neurons, we show that our method generalizes well and generates more accurate segmentations than merge-tree or Multi-Cut methods alone.
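
    The candidate-selection half of the model can be sketched as a small ILP with PuLP: one binary variable per merge-tree candidate and one constraint per root-to-leaf path so that overlapping candidates are never both selected. The clustering (Multi-Cut) constraints are omitted here, and all candidates and scores are made up.

```python
# Candidate selection from a toy merge tree as an ILP.
import pulp

# Leaves a, b merge into ab; ab and leaf c merge into abc.
candidates = {"a": 0.2, "b": 0.1, "ab": 0.6, "c": 0.4, "abc": 0.3}
paths = [["a", "ab", "abc"], ["b", "ab", "abc"], ["c", "abc"]]

prob = pulp.LpProblem("candidate_selection", pulp.LpMaximize)
x = {c: pulp.LpVariable(c, cat="Binary") for c in candidates}
prob += pulp.lpSum(score * x[c] for c, score in candidates.items())
for path in paths:                        # at most one candidate per path
    prob += pulp.lpSum(x[c] for c in path) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("selected:", [c for c in candidates if x[c].value() == 1])
```

    On this toy tree the solver picks "ab" and "c" (total score 1.0) over the single root "abc" (0.3), which is exactly the kind of decision a merge-tree method alone cannot revisit.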

    Cardona Lab | Funke Lab
    01/17/17 | TED: A Tolerant Edit Distance for segmentation evaluation.
    Funke J, Klein J, Moreno-Noguer F, Cardona A, Cook M
    Methods. 2017 Jan 17;115:119-27. doi: 10.1016/j.ymeth.2016.12.013

    In this paper, we present a novel error measure to compare a computer-generated segmentation of images or volumes against ground truth. This measure, which we call Tolerant Edit Distance (TED), is motivated by two observations that we usually encounter in biomedical image processing: (1) Some errors, like small boundary shifts, are tolerable in practice. Which errors are tolerable is application dependent and should be explicitly expressible in the measure. (2) Non-tolerable errors have to be corrected manually. The effort needed to do so should be reflected by the error measure. Our measure is the minimal weighted sum of split and merge operations to apply to one segmentation such that it resembles another segmentation within specified tolerance bounds. This is in contrast to other commonly used measures like Rand index or variation of information, which integrate small, but tolerable, differences. Additionally, the TED provides intuitive numbers and allows the localization and classification of errors in images or volumes. We demonstrate the applicability of the TED on 3D segmentations of neurons in electron microscopy images where topological correctness is arguably more important than exact boundary locations. Furthermore, we show that the TED is not just limited to evaluation tasks. We use it as the loss function in a max-margin learning framework to find parameters of an automatic neuron segmentation algorithm. We show that training to minimize the TED, i.e., to minimize crucial errors, leads to higher segmentation accuracy compared to other learning methods.
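
    A didactic 1D approximation of the TED's two ingredients: boundary shifts up to a tolerance are forgiven by masking a band around ground-truth boundaries, and the remaining label confusions are counted as split and merge operations. This is an illustration only, not the paper's optimization over tolerance bounds.

```python
# Toy tolerant error count on 1D label arrays.
import numpy as np

def toy_ted(gt, seg, tolerance=1):
    boundary = np.zeros(gt.shape, bool)
    boundary[:-1] = gt[:-1] != gt[1:]                # ground-truth boundaries
    forgiven = np.zeros(gt.shape, bool)
    for shift in range(-tolerance, tolerance + 1):   # tolerance band
        forgiven |= np.roll(boundary, shift)
    pairs = set(zip(gt[~forgiven], seg[~forgiven]))  # surviving label pairs
    splits = sum(len({s for g2, s in pairs if g2 == g}) - 1
                 for g in {g for g, _ in pairs})
    merges = sum(len({g for g, s2 in pairs if s2 == s}) - 1
                 for s in {s for _, s in pairs})
    return splits + merges

gt  = np.array([1, 1, 1, 1, 2, 2, 2, 2])
seg = np.array([5, 5, 5, 5, 5, 6, 6, 6])             # boundary shifted by one
print("toy TED:", toy_ted(gt, seg))                  # 0: shift within tolerance
```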
