Funke Lab / Publications

9 Publications

11/13/18 | Analyzing image segmentation for connectomics.
Plaza SM, Funke J
Frontiers in Neural Circuits. 2018;12:102. doi: 10.3389/fncir.2018.00102

Automatic image segmentation is critical to scale up electron microscope (EM) connectome reconstruction. To this end, segmentation competitions, such as CREMI and SNEMI, exist to help researchers evaluate segmentation algorithms with the goal of improving them. Because generating ground truth is time-consuming, these competitions often fail to capture the challenges in segmenting larger datasets required in connectomics. More generally, the common metrics for EM image segmentation do not emphasize impact on downstream analysis and are often not very useful for isolating problem areas in the segmentation. For example, they do not capture connectivity information and often over-rate the quality of a segmentation as we demonstrate later. To address these issues, we introduce a novel strategy to enable evaluation of segmentation at large scales, in both a supervised setting, where ground truth is available, and an unsupervised setting. To achieve this, we first introduce new metrics more closely aligned with the use of segmentation in downstream analysis and reconstruction. In particular, these include synapse connectivity and completeness metrics that provide both meaningful and intuitive interpretations of segmentation quality as it relates to the preservation of neuron connectivity. Also, we propose measures of segmentation correctness and completeness with respect to the percentage of "orphan" fragments and the concentrations of self-loops formed by segmentation failures, which are helpful in analysis and can be computed without ground truth. The introduction of new metrics intended to be used for practical applications involving large datasets necessitates a scalable software ecosystem, which is a critical contribution of this paper. To this end, we introduce a scalable, flexible software framework that enables integration of several different metrics and provides mechanisms to evaluate and debug differences between segmentations.
We also introduce visualization software to help users interpret the various metrics collected. We evaluate our framework on two relatively large public ground-truth datasets, providing novel insights on example segmentations.
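The orphan-based completeness measure described above can be illustrated in a few lines. This is a toy sketch, not the paper's framework: `orphan_fraction` and its input format are hypothetical, and the real metrics operate on full segmentation volumes.

```python
def orphan_fraction(segment_synapse_counts, min_synapses=1):
    """Fraction of segments touching no synapse ("orphans").

    Computable without ground truth: a segment that carries no synaptic
    site contributes nothing to the recovered wiring diagram.
    `segment_synapse_counts` maps segment id -> number of synapses.
    """
    if not segment_synapse_counts:
        return 0.0
    orphans = sum(1 for n in segment_synapse_counts.values()
                  if n < min_synapses)
    return orphans / len(segment_synapse_counts)
```

A high orphan fraction flags over-segmentation that matters for connectivity, even when voxel-overlap scores look good.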

06/21/18 | Synaptic partner prediction from point annotations in insect brains.
Buhmann J, Krause R, Lentini RC, Eckstein N, Cook M, Turaga SC, Funke J
arXiv. 2018 Jun 21:1806.08205

High-throughput electron microscopy allows recording of large stacks of neural tissue with sufficient resolution to extract the wiring diagram of the underlying neural network. Current efforts to automate this process focus mainly on the segmentation of neurons. However, in order to recover a wiring diagram, synaptic partners need to be identified as well. This is especially challenging in insect brains like Drosophila melanogaster, where one presynaptic site is associated with multiple postsynaptic elements. Here we propose a 3D U-Net architecture to directly identify pairs of voxels that are pre- and postsynaptic to each other. To that end, we formulate the problem of synaptic partner identification as a classification problem on long-range edges between voxels to encode both the presence of a synaptic pair and its direction. This formulation allows us to directly learn from synaptic point annotations instead of more expensive voxel-based synaptic cleft or vesicle annotations. We evaluate our method on the MICCAI 2016 CREMI challenge and improve over the current state of the art, producing 3% fewer errors than the next best method.
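The long-range edge encoding can be sketched as follows. The function name, the offset list, and the label format are hypothetical; the actual method predicts such edges densely with a 3D U-Net.

```python
def edges_from_annotations(pairs, offsets):
    """Turn (pre, post) point annotations into positive long-range edge
    labels: for each presynaptic voxel, mark the offset that points at
    its postsynaptic partner, encoding both presence and direction.

    `pairs` is a list of ((z, y, x), (z, y, x)) voxel pairs;
    `offsets` is the fixed list of long-range offsets the network scores.
    """
    offset_index = {o: i for i, o in enumerate(offsets)}
    labels = {}  # (pre_voxel, offset_index) -> 1
    for pre, post in pairs:
        off = tuple(b - a for a, b in zip(pre, post))
        if off in offset_index:  # partner within the offset neighborhood
            labels[(pre, offset_index[off])] = 1
    return labels
```

Because only point pairs are needed, training labels come cheaply compared with voxel-wise cleft or vesicle masks.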

05/24/18 | Large scale image segmentation with structured loss based deep learning for connectome reconstruction.
Funke J, Tschopp FD, Grisaitis W, Sheridan A, Singh C, Saalfeld S, Turaga SC
IEEE Transactions on Pattern Analysis and Machine Intelligence. 2018 May 24:. doi: 10.1109/TPAMI.2018.2835450

We present a method combining affinity prediction with region agglomeration, which improves significantly upon the state of the art of neuron segmentation from electron microscopy (EM) in accuracy and scalability. Our method consists of a 3D U-net, trained to predict affinities between voxels, followed by iterative region agglomeration. We train using a structured loss based on MALIS, encouraging topologically correct segmentations obtained from affinity thresholding. Our extension consists of two parts: First, we present a quasi-linear method to compute the loss gradient, improving over the original quadratic algorithm. Second, we compute the gradient in two separate passes to avoid spurious gradient contributions in early training stages. Our predictions are accurate enough that simple learning-free percentile-based agglomeration outperforms more involved methods used earlier on inferior predictions. We present results on three diverse EM datasets, achieving relative improvements over previous results of 27%, 15%, and 250%. Our findings suggest that a single method can be applied to both nearly isotropic block-face EM data and anisotropic serial sectioned EM data. The runtime of our method scales linearly with the size of the volume and achieves a throughput of ~2.6 seconds per megavoxel, qualifying our method for the processing of very large datasets.
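The learning-free agglomeration step amounts to merging fragments across high-affinity edges, which can be sketched with a union-find structure. This is a simplified stand-in (a fixed threshold in decreasing-affinity order, rather than the paper's percentile-based merge function), and all names are illustrative.

```python
class UnionFind:
    """Minimal disjoint-set structure with path halving."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def agglomerate(n_fragments, edges, threshold=0.5):
    """Merge fragments connected by edges with affinity above `threshold`.

    `edges` is a list of (affinity, fragment_a, fragment_b) tuples;
    returns a representative label per fragment.
    """
    uf = UnionFind(n_fragments)
    for affinity, a, b in sorted(edges, reverse=True):
        if affinity < threshold:
            break  # remaining edges are weaker; stop merging
        uf.union(a, b)
    return [uf.find(i) for i in range(n_fragments)]
```

The point of the paper is that with sufficiently accurate affinities, even this kind of simple greedy merging is competitive.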

05/24/18 | The candidate multi-cut for cell segmentation.
Funke J, Zhang C, Pietzsch T, Gonzalez Ballester MA, Saalfeld S
2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). 2017 Jul 04:. doi: 10.1109/ISBI.2018.8363658

Two successful approaches for the segmentation of biomedical images are (1) the selection of segment candidates from a merge-tree, and (2) the clustering of small superpixels by solving a Multi-Cut problem. In this paper, we introduce a model that unifies both approaches. Our model, the Candidate Multi-Cut (CMC), allows joint selection and clustering of segment candidates from a merge-tree. This way, we overcome the respective limitations of the individual methods: (1) the space of possible segmentations is not constrained to candidates of a merge-tree, and (2) the decision for clustering can be made on candidates larger than superpixels, using features over larger contexts. We solve the optimization problem of selecting and clustering of candidates using an integer linear program. On datasets of 2D light microscopy of cell populations and 3D electron microscopy of neurons, we show that our method generalizes well and generates more accurate segmentations than merge-tree or Multi-Cut methods alone.
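The merge-tree side of the CMC model imposes the usual candidate-selection constraint: no two selected candidates may lie on the same root-to-leaf path (a candidate and its ancestor cannot both be picked). A minimal checker for that constraint, assuming a simple parent map rather than the paper's ILP formulation, might look like:

```python
def path_consistent(selected, parent):
    """Check the merge-tree constraint: no selected candidate is an
    ancestor of another selected candidate.

    `selected` is a set of node ids; `parent` maps node -> parent node
    (the root has no entry).
    """
    for node in selected:
        p = parent.get(node)
        while p is not None:
            if p in selected:
                return False  # ancestor and descendant both selected
            p = parent.get(p)
    return True
```

In the full model, this constraint is expressed as linear inequalities and solved jointly with the Multi-Cut clustering objective.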

05/07/18 | Synaptic cleft segmentation in non-isotropic volume electron microscopy of the complete Drosophila brain.
Heinrich L, Funke J, Pape C, Nunez-Iglesias J, Saalfeld S
arXiv. 2018 May 07:1805.02718

Neural circuit reconstruction at single synapse resolution is increasingly recognized as crucially important to decipher the function of biological nervous systems. Volume electron microscopy in serial transmission or scanning mode has been demonstrated to provide the necessary resolution to segment or trace all neurites and to annotate all synaptic connections. 
Automatic annotation of synaptic connections has been done successfully in near isotropic electron microscopy of vertebrate model organisms. Results on non-isotropic data in insect models, however, are not yet on par with human annotation. 
We designed a new 3D-U-Net architecture to optimally represent isotropic fields of view in non-isotropic data. We used regression on a signed distance transform of manually annotated synaptic clefts of the CREMI challenge dataset to train this model and observed significant improvement over the state of the art. 
We developed open source software for optimized parallel prediction on very large volumetric datasets and applied our model to predict synaptic clefts in a 50 tera-voxels dataset of the complete Drosophila brain. Our model generalizes well to areas far away from where training data was available.
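The regression target is a signed distance transform of the cleft annotations. A toy 1D version (pure Python, positive inside the mask and negative outside; the sign convention and the 3D computation used in the paper may differ) illustrates the idea:

```python
def signed_distance_1d(mask):
    """Signed distance along one axis: +distance to background inside
    the mask, -distance to the mask outside it (toy sign convention).

    Uses the classic two-pass sweep for 1D distance transforms.
    """
    n = len(mask)
    inf = float("inf")
    # unsigned distance to the nearest mask voxel
    dist = [0 if m else inf for m in mask]
    for i in range(1, n):
        dist[i] = min(dist[i], dist[i - 1] + 1)
    for i in range(n - 2, -1, -1):
        dist[i] = min(dist[i], dist[i + 1] + 1)
    # distance to the nearest background voxel, for the inside part
    dist_bg = [inf if m else 0 for m in mask]
    for i in range(1, n):
        dist_bg[i] = min(dist_bg[i], dist_bg[i - 1] + 1)
    for i in range(n - 2, -1, -1):
        dist_bg[i] = min(dist_bg[i], dist_bg[i + 1] + 1)
    return [dist_bg[i] if mask[i] else -dist[i] for i in range(n)]
```

Regressing a smooth distance field, rather than classifying sparse cleft voxels directly, gives the network a dense training signal around every annotation.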

01/17/17 | TED: A Tolerant Edit Distance for segmentation evaluation.
Funke J, Klein J, Moreno-Noguer F, Cardona A, Cook M
Methods. 2017 Jan 17;115:119-27. doi: 10.1016/j.ymeth.2016.12.013

In this paper, we present a novel error measure to compare a computer-generated segmentation of images or volumes against ground truth. This measure, which we call Tolerant Edit Distance (TED), is motivated by two observations that we usually encounter in biomedical image processing: (1) Some errors, like small boundary shifts, are tolerable in practice. Which errors are tolerable is application dependent and should be explicitly expressible in the measure. (2) Non-tolerable errors have to be corrected manually. The effort needed to do so should be reflected by the error measure. Our measure is the minimal weighted sum of split and merge operations to apply to one segmentation such that it resembles another segmentation within specified tolerance bounds. This is in contrast to other commonly used measures like Rand index or variation of information, which integrate small, but tolerable, differences. Additionally, the TED provides intuitive numbers and allows the localization and classification of errors in images or volumes. We demonstrate the applicability of the TED on 3D segmentations of neurons in electron microscopy images where topological correctness is arguably more important than exact boundary locations. Furthermore, we show that the TED is not just limited to evaluation tasks. We use it as the loss function in a max-margin learning framework to find parameters of an automatic neuron segmentation algorithm. We show that training to minimize the TED, i.e., to minimize crucial errors, leads to higher segmentation accuracy compared to other learning methods.
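A zero-tolerance version of the split/merge counting behind the TED can be sketched from the label-overlap matrix of two segmentations. The real measure additionally permits tolerable deviations (e.g. small boundary shifts) before counting operations; this toy version ignores that, and all names are illustrative.

```python
from collections import Counter

def split_merge_counts(seg_a, seg_b):
    """Count split and merge operations needed to turn seg_a into seg_b,
    based only on which label pairs co-occur (zero tolerance).

    `seg_a` and `seg_b` are flat, equal-length label sequences.
    """
    overlap = Counter(zip(seg_a, seg_b))
    a_labels = {a for a, _ in overlap}
    b_labels = {b for _, b in overlap}
    # a label of seg_a covering k labels of seg_b needs k-1 splits
    splits = sum(len({b for a2, b in overlap if a2 == a}) - 1
                 for a in a_labels)
    # a label of seg_b covered by k labels of seg_a needs k-1 merges
    merges = sum(len({a for a, b2 in overlap if b2 == b}) - 1
                 for b in b_labels)
    return splits, merges
```

Unlike Rand index or variation of information, such counts map directly onto proofreading effort, which is the motivation stated above.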

06/15/16 | Efficient convolutional neural networks for pixelwise classification on heterogeneous hardware systems.
Tschopp F, Martel JN, Turaga SC, Cook M, Funke J
IEEE 13th International Symposium on Biomedical Imaging: From Nano to Macro. 2016 Jun 15:. doi: 10.1109/ISBI.2016.7493487

With recent advances in high-throughput Electron Microscopy (EM) imaging it is now possible to image an entire nervous system of organisms like Drosophila melanogaster. One of the bottlenecks to reconstruct a connectome from these large volumes (≈ 100 TiB) is the pixel-wise prediction of membranes. The time it would typically take to process such a volume using a convolutional neural network (CNN) with a sliding window approach is on the order of years on a current GPU. With sliding windows, however, a lot of redundant computations are carried out. In this paper, we present an extension to the Caffe library to increase throughput by predicting many pixels at once. On a sliding window network successfully used for membrane classification, we show that our method achieves a speedup of up to 57×, maintaining identical prediction results.
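The redundancy removed by dense prediction can be made concrete with a back-of-the-envelope multiply-accumulate (MAC) count for a stack of k×k convolutions. This is a single-channel toy model, not the paper's networks or its measured speedup:

```python
def patch_macs(k, layers):
    """MACs to evaluate ONE output pixel with a patch-based (sliding
    window) run of `layers` stacked k x k convolutions: each layer must
    recompute the whole block of lower-level features its window needs.
    """
    macs, block = 0, 1  # block: spatial extent needed at the current layer
    for _ in range(layers):
        macs += block * block * k * k  # compute block x block outputs
        block += k - 1                 # inputs needed one layer down
    return macs

def dense_macs(k, layers):
    """MACs per output pixel when the same stack runs densely once over
    the image: each layer costs k*k per pixel, computed exactly once."""
    return layers * k * k
```

For k = 5 and three layers, the patch-based count is 2675 MACs per pixel against 75 for the dense pass, a ~36× redundancy factor; deeper networks and pooling push this further, which is where speedups like the 57× above come from.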

04/13/16 | Structured learning of assignment models for neuron reconstruction to minimize topological errors.
Funke J, Klein J, Moreno-Noguer F, Cardona A, Cook M
IEEE 13th International Symposium on Biomedical Imaging (ISBI). 2016 Apr 13:607-11. doi: 10.1109/ISBI.2016.7493341

Structured learning provides a powerful framework for empirical risk minimization on the predictions of structured models. It allows end-to-end learning of model parameters to minimize an application specific loss function. This framework is particularly well suited for discrete optimization models that are used for neuron reconstruction from anisotropic electron microscopy (EM) volumes. However, current methods are still learning unary potentials by training a classifier that is agnostic about the model it is used in. We believe the reason for that lies in the difficulties of (1) finding a representative training sample, and (2) designing an application specific loss function that captures the quality of a proposed solution. In this paper, we show how to find a representative training sample from human generated ground truth, and propose a loss function that is suitable to minimize topological errors in the reconstruction. We compare different training methods on two challenging EM-datasets. Our structured learning approach shows consistently higher reconstruction accuracy than other current learning methods.
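Max-margin structured learning of the potentials can be caricatured by a structured-perceptron step: pull the weights toward the features of the ground-truth assignment and away from the current prediction. The names and the linear feature model here are illustrative only, not the paper's formulation:

```python
def structured_perceptron_step(w, features, y_true, y_pred, lr=1.0):
    """One structured-perceptron update.

    `features(y)` returns a feature vector for a structured output `y`;
    the update moves `w` toward the ground truth and away from the
    (loss-augmented) prediction.
    """
    f_true = features(y_true)
    f_pred = features(y_pred)
    return [wi + lr * (ft - fp)
            for wi, ft, fp in zip(w, f_true, f_pred)]
```

The key idea above is that `y_pred` is produced by the same discrete optimization model used at test time, so the unary potentials are no longer learned by a model-agnostic classifier.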

11/18/15 | Who is talking to whom: Synaptic partner detection in anisotropic volumes of insect brain.
Kreshuk A, Funke J, Cardona A, Hamprecht FA
Medical Image Computing and Computer-Assisted Intervention -- MICCAI 2015:661-8. doi: 10.1007/978-3-319-24553-9_81

Automated reconstruction of neural connectivity graphs from electron microscopy image stacks is an essential step towards large-scale neural circuit mapping. While significant progress has recently been made in automated segmentation of neurons and detection of synapses, the problem of synaptic partner assignment for polyadic (one-to-many) synapses, prevalent in the Drosophila brain, remains unsolved. In this contribution, we propose a method which automatically assigns pre- and postsynaptic roles to neurites adjacent to a synaptic site. The method constructs a probabilistic graphical model over potential synaptic partner pairs which includes factors to account for a high rate of one-to-many connections, as well as the possibility of the same neuron to be pre-synaptic in one synapse and post-synaptic in another. The algorithm has been validated on a publicly available stack of ssTEM images of Drosophila neural tissue and has been shown to reconstruct most of the synaptic relations correctly.
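Inference in the paper runs over a probabilistic graphical model with factors for polyadic connections and role consistency; as a much cruder illustration, one could score (pre, post) candidate pairs at a synaptic site and keep the one-to-many assignment of the best-scoring presynaptic neurite. This heuristic is not the paper's method, and all names are hypothetical:

```python
def assign_partners(pair_scores, min_score=0.0):
    """Pick the presynaptic neurite whose candidate pairs score highest
    in total, then keep all of its postsynaptic partners above
    `min_score` (one pre, many post - the polyadic case).

    `pair_scores` maps (pre_id, post_id) -> score at one synaptic site.
    """
    by_pre = {}
    for (pre, post), s in pair_scores.items():
        by_pre.setdefault(pre, []).append((post, s))
    best_pre = max(by_pre, key=lambda p: sum(s for _, s in by_pre[p]))
    partners = [post for post, s in by_pre[best_pre] if s >= min_score]
    return best_pre, sorted(partners)
```

The graphical model improves on such a greedy choice by reasoning jointly across sites, e.g. allowing the same neurite to be presynaptic at one synapse and postsynaptic at another.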
