36 Janelia Publications

Showing 21-30 of 36 results
    06/06/14 | Small sample learning of superpixel classifiers for EM segmentation - extended version.
    Parag T, Plaza SM, Scheffer LK
    arXiv. 2014 Jun 6:arXiv:1406.1774 [cs.CV]

    Pixel and superpixel classifiers have become essential tools for EM segmentation algorithms. Training these classifiers remains a major bottleneck, primarily due to the requirement of completely annotating the dataset, which is tedious, error-prone and costly. In this paper, we propose an interactive learning scheme for the superpixel classifier for EM segmentation. Our algorithm is "active semi-supervised" because it requests the labels of a small number of examples from the user and applies a label propagation technique to generate these queries. Using only a small set (<20%) of all data points, the proposed algorithm consistently generates a classifier almost as accurate as one estimated from the complete ground truth. We provide segmentation results on multiple datasets to show the strength of these classifiers.
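    For intuition, the sketch below shows one way such a query loop could look: propagate a handful of labels, ask the user for the labels the propagation is least confident about, then fit the superpixel classifier. It is a conceptual stand-in (not the authors' implementation), using scikit-learn's LabelSpreading and synthetic features in place of real superpixel data.

    ```python
    # Conceptual sketch, not the authors' code: an "active semi-supervised" loop that
    # propagates a few superpixel labels, queries the most ambiguous superpixels, and
    # finally trains the classifier. X and y_true are synthetic stand-ins.
    import numpy as np
    from sklearn.semi_supervised import LabelSpreading
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 16))                      # stand-in superpixel features
    y_true = (X[:, 0] > 0).astype(int)                  # stand-in ground-truth labels
    y = np.full(500, -1)                                # -1 marks unlabeled superpixels
    seed = np.concatenate([np.flatnonzero(y_true == c)[:5] for c in (0, 1)])
    y[seed] = y_true[seed]                              # tiny initial labeled set

    for _ in range(5):                                  # a few query rounds
        prop = LabelSpreading(kernel="knn", n_neighbors=10).fit(X, y)
        dist = prop.label_distributions_
        margin = np.abs(dist[:, 0] - dist[:, 1])        # small margin = ambiguous
        unlabeled = np.flatnonzero(y == -1)
        queries = unlabeled[np.argsort(margin[unlabeled])[:5]]
        y[queries] = y_true[queries]                    # stands in for asking the user

    labeled = y != -1                                   # still only a small fraction of X
    clf = RandomForestClassifier(random_state=0).fit(X[labeled], y[labeled])
    ```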

    06/05/14 | A context-aware delayed agglomeration framework for EM segmentation.
    Parag T, Chakraborty A, Plaza SM
    arXiv. 2014 Jun 5:arXiv:1406.1476 [cs.CV]

    This paper proposes a novel agglomerative framework for Electron Microscopy (EM) image (or volume) segmentation. For the overall segmentation methodology, we propose a context-aware algorithm that clusters the over-segmented regions of different sub-classes (representing different biological entities) in different stages. Furthermore, a delayed scheme for agglomerative clustering, which postpones the merge of newly formed bodies, is also proposed to generate a more confident boundary prediction. We report significant improvements in both segmentation accuracy and speed attained by the proposed approaches over existing standard methods on both 2D and 3D datasets.
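    The toy sketch below illustrates the two ideas informally: merges are staged by sub-class pairing (context-aware), and edges touching a body grown in the current pass are postponed to the next pass (delayed agglomeration). It is not the paper's implementation; the edges, classes, probabilities and stages are all hypothetical inputs.

    ```python
    # Toy sketch, not the paper's code: staged, delayed agglomerative merging.
    import heapq

    def staged_agglomeration(edges, region_class, stages, threshold=0.5):
        """edges: {(u, v): merge probability}; region_class: {region: sub-class};
        stages: per stage, the set of frozenset({class_a, class_b}) pairs allowed to merge."""
        parent = {r: r for r in region_class}
        def find(r):                                    # union-find representative
            while parent[r] != r:
                parent[r] = parent[parent[r]]
                r = parent[r]
            return r
        for allowed in stages:                          # context-aware: one pairing set per stage
            pending = [(-p, u, v) for (u, v), p in edges.items()]
            heapq.heapify(pending)
            while pending:
                frozen, deferred = set(), []            # bodies newly formed in this pass
                while pending:
                    negp, u, v = heapq.heappop(pending)
                    ru, rv = find(u), find(v)
                    if ru == rv or -negp < threshold:
                        continue                        # already merged, or too unlikely
                    if frozenset((region_class[u], region_class[v])) not in allowed:
                        continue                        # wrong sub-class pairing for this stage
                    if ru in frozen or rv in frozen:
                        deferred.append((negp, u, v))   # delay: body just changed, re-queue later
                        continue
                    parent[rv] = ru                     # merge the two bodies
                    frozen.add(ru)
                pending = deferred
                heapq.heapify(pending)
        return {r: find(r) for r in region_class}

    # e.g. merge within cytoplasm and within mitochondria first, across the two classes later:
    # stages = [{frozenset({"cyto"}), frozenset({"mito"})}, {frozenset({"cyto", "mito"})}]
    ```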

    04/04/14 | Graph-based active learning of agglomeration (GALA): a Python library to segment 2D and 3D neuroimages.
    Nunez-Iglesias J, Kennedy R, Plaza SM, Chakraborty A, Katz WT
    Frontiers in Neuroinformatics. 2014 Apr 4;8:34. doi: 10.3389/fninf.2014.00034

    The aim in high-resolution connectomics is to reconstruct complete neuronal connectivity in a tissue. Currently, the only technology capable of resolving the smallest neuronal processes is electron microscopy (EM). Thus, a common approach to network reconstruction is to perform (error-prone) automatic segmentation of EM images, followed by manual proofreading by experts to fix errors. We have developed an algorithm and software library to not only improve the accuracy of the initial automatic segmentation, but also point out the image coordinates where it is likely to have made errors. Our software, called gala (graph-based active learning of agglomeration), improves the state of the art in agglomerative image segmentation. It is implemented in Python and makes extensive use of the scientific Python stack (numpy, scipy, networkx, scikit-learn, scikit-image, and others). We present here the software architecture of the gala library, and discuss several designs that we consider would be generally useful for other segmentation packages. We also discuss the current limitations of the gala library and how we intend to address them.
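    For orientation, the abridged snippet below follows the training-and-segmentation example published with the gala documentation; the module and function names are reproduced from that example and may differ in later releases, and the HDF5 file names are placeholders.

    ```python
    # Abridged from the gala documentation's example workflow; file names are placeholders.
    from gala import imio, features, agglo, classify

    # training volumes: ground truth, boundary probability map, and watershed oversegmentation
    gt_train, pr_train, ws_train = map(imio.read_h5_stack,
                                       ['train-gt.h5', 'train-prob.h5', 'train-ws.h5'])

    fm = features.moments.Manager()                       # per-edge feature managers
    fh = features.histogram.Manager()
    fc = features.base.Composite(children=[fm, fh])

    g_train = agglo.Rag(ws_train, pr_train, feature_manager=fc)   # region adjacency graph
    X, y, w, merges = g_train.learn_agglomerate(gt_train, fc)[0]  # agglomerative training set
    rf = classify.DefaultRandomForest().fit(X, y[:, 0])           # merge/don't-merge classifier
    policy = agglo.classifier_probability(fc, rf)

    pr_test, ws_test = map(imio.read_h5_stack, ['test-prob.h5', 'test-ws.h5'])
    g_test = agglo.Rag(ws_test, pr_test, policy, feature_manager=fc)
    g_test.agglomerate(0.5)                               # merge until the policy says stop
    segmentation = g_test.get_segmentation()
    ```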

    03/02/14 | Toward large-scale connectome reconstructions.
    Plaza SM, Scheffer LK, Chklovskii DB
    Current Opinion in Neurobiology. 2014 Mar 2;25C:201-10. doi: 10.1016/j.conb.2014.01.019

    Recent results have shown the possibility of both reconstructing connectomes of small but biologically interesting circuits and extracting from these connectomes insights into their function. However, these reconstructions were heroic proof-of-concept experiments, requiring person-months of effort per neuron reconstructed, and will not scale to larger circuits, much less the brains of entire animals. In this paper we examine what will be required to generate and use substantially larger connectomes, finding five areas that need increased attention: firstly, imaging better suited to automatic reconstruction, with excellent z-resolution; secondly, automatic detection, validation, and measurement of synapses; thirdly, reconstruction methods that keep and use uncertainty metrics for every object, from initial images, through segmentation, reconstruction, and connectome queries; fourthly, processes that are fully incremental, so that the connectome may be used before it is fully complete; and finally, better tools for analysis of connectomes, once they are obtained.

    01/20/14 | Lessons from the neurons themselves.
    Scheffer L
    19th Asia and South Pacific Design Automation Conference (ASP-DAC). 2014 Jan 20-23:197-200. doi: 10.1109/ASPDAC.2014.6742889

    Natural neural circuits, optimized by millions of years of evolution, are fast, low power, robust, and adapt in response to experience, all characteristics we would love to have in systems we ourselves design. Recently there have been enormous advances in understanding how neurons implement computations within the brain of living creatures. Can we use this new-found knowledge to create better artificial systems? What lessons can we learn from the neurons themselves that can help us create better neuromorphic circuits?

    08/07/13 | A visual motion detection circuit suggested by Drosophila connectomics.
    Takemura S, Bharioke A, Lu Z, Nern A, Vitaladevuni S, Rivlin PK, Katz WT, Olbris DJ, Plaza SM, Winston P, Zhao T, Horne JA, Fetter RD, Takemura S, Blazek K, Chang L, Ogundeyi O, Saunders MA, Shapiro V, Sigmund C, Rubin GM, Scheffer LK, Meinertzhagen IA, Chklovskii DB
    Nature. 2013 Aug 7;500(7461):175-81. doi: 10.1038/nature12450

    Animal behaviour arises from computations in neuronal circuits, but our understanding of these computations has been frustrated by the lack of detailed synaptic connection maps, or connectomes. For example, despite intensive investigations over half a century, the neuronal implementation of local motion detection in the insect visual system remains elusive. Here we develop a semi-automated pipeline using electron microscopy to reconstruct a connectome, containing 379 neurons and 8,637 chemical synaptic contacts, within the Drosophila optic medulla. By matching reconstructed neurons to examples from light microscopy, we assigned neurons to cell types and assembled a connectome of the repeating module of the medulla. Within this module, we identified cell types constituting a motion detection circuit, and showed that the connections onto individual motion-sensitive neurons in this circuit were consistent with their direction selectivity. Our results identify cellular targets for future functional investigations, and demonstrate that connectomes can provide key insights into neuronal computations.

    08/02/13 | Electron microscopy reconstruction of brain structure using sparse representations over learned dictionaries.
    Hu T, Nunez-Iglesias J, Vitaladevuni S, Scheffer L, Xu S, Bolorizadeh M, Hess H, Fetter R, Chklovskii D
    IEEE Transactions on Medical Imaging. 2013 Aug 2;32(12):2179-88. doi: 10.1109/TMI.2013.2276018

    A central problem in neuroscience is reconstructing neuronal circuits on the synapse level. Due to a wide range of scales in brain architecture such reconstruction requires imaging that is both high-resolution and high-throughput. Existing electron microscopy (EM) techniques possess required resolution in the lateral plane and either high-throughput or high depth resolution but not both. Here, we exploit recent advances in unsupervised learning and signal processing to obtain high depth-resolution EM images computationally without sacrificing throughput. First, we show that the brain tissue can be represented as a sparse linear combination of localized basis functions that are learned using high-resolution datasets. We then develop compressive sensing-inspired techniques that can reconstruct the brain tissue from very few (typically 5) tomographic views of each section. This enables tracing of neuronal processes and, hence, high throughput reconstruction of neural circuits on the level of individual synapses.
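    A minimal numerical sketch of the underlying idea, assuming synthetic 1-D patches rather than EM sections (not the paper's pipeline): learn a dictionary over which patches are sparse, then recover an unseen patch from a few random linear measurements by L1-regularised fitting, in the compressive-sensing spirit described above.

    ```python
    # Toy sketch with made-up data: dictionary learning + L1 recovery from few measurements.
    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n_patches, patch_len, n_atoms = 2000, 64, 128
    atoms_true = rng.normal(size=(n_atoms, patch_len))
    codes = rng.normal(size=(n_patches, n_atoms)) * (rng.random((n_patches, n_atoms)) < 0.05)
    patches = codes @ atoms_true                          # patches sparse over a hidden basis

    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0, random_state=0)
    D = dico.fit(patches).components_                     # learned dictionary, shape (atoms, pixels)

    x = (rng.normal(size=n_atoms) * (rng.random(n_atoms) < 0.05)) @ atoms_true   # unseen patch
    n_meas = 16                                           # few measurements (cf. ~5 views/section)
    A = rng.normal(size=(n_meas, patch_len))              # random measurement operator
    m = A @ x                                             # what the "microscope" records

    lasso = Lasso(alpha=0.01, max_iter=10000).fit(A @ D.T, m)   # sparse code s with A D^T s ~ m
    x_hat = D.T @ lasso.coef_                             # patch reconstructed from few measurements
    print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
    ```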

    04/22/13 | Automated alignment of imperfect EM images for neural reconstruction.
    Scheffer LK, Karsh B, Vitaladevuni S
    arXiv. 2013 Apr 22:arXiv:1304.6034 [q-bio.QM]

    The most established method of reconstructing neural circuits from animals involves slicing tissue very thin, then taking mosaics of electron microscope (EM) images. To trace neurons across different images and through different sections, these images must be accurately aligned, both with the others in the same section and with the sections above and below. Unfortunately, sectioning and imaging are not ideal processes - some of the problems that make alignment difficult include lens distortion, tissue shrinkage during imaging, tears and folds in the sectioned tissue, and dust and other artifacts. In addition, the data sets are large (hundreds of thousands of images) and each image must be aligned with many neighbors, so the process must be automated and reliable. This paper discusses methods of dealing with these problems, with numeric results describing the accuracy of the resulting alignments.
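    As a much-reduced illustration of a single alignment step (translation only, none of the distortion, tear, or fold handling the abstract mentions), the snippet below registers two simulated overlapping tiles with scikit-image's phase cross-correlation.

    ```python
    # Greatly simplified stand-in for one mosaic-alignment step, using scikit-image/scipy.
    import numpy as np
    from skimage import data
    from skimage.registration import phase_cross_correlation
    from scipy.ndimage import shift as nd_shift

    tile_a = data.camera().astype(float)                  # stand-in reference tile
    tile_b = nd_shift(tile_a, (11.0, -7.0))               # simulate a misregistered neighbour

    # shift vector that registers tile_b to tile_a (subpixel, via 10x upsampled correlation)
    est_shift, error, _ = phase_cross_correlation(tile_a, tile_b, upsample_factor=10)
    tile_b_aligned = nd_shift(tile_b, est_shift)          # bring the neighbour into register
    print("estimated (row, col) shift:", est_shift)
    ```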

    01/01/12 | Design tools for artificial nervous systems.
    Scheffer L
    49th ACM/EDAC/IEEE Design Automation Conference (DAC). 2012.

    Electronic and biological systems both perform complex information processing, but they use very different techniques. Though electronics has the advantage in raw speed, biological systems have the edge in many other areas. They can be produced, and indeed self-reproduce, without expensive and finicky factories. They are tolerant of manufacturing defects, and learn and adapt for better performance. In many cases they can self-repair damage. These advantages suggest that biological systems might be useful in a wide variety of tasks involving information processing. So far, all attempts to use the nervous system of a living organism for information processing have involved selective breeding of existing organisms. This approach, largely independent of the details of internal operation, is used since we do not yet understand how neural systems work, nor exactly how they are constructed. However, as our knowledge increases, the day will come when we can envision useful nervous systems and design them based upon what we want them to do, as opposed to variations on what has been already built. We will then need tools, corresponding to our Electronic Design Automation tools, to help with the design. This paper is concerned with what such tools might look like.

    01/01/12 | Minimizing manual image segmentation turn-around time for neuronal reconstruction by embracing uncertainty.
    Plaza SM, Scheffer LK, Saunders M
    PLoS One. 2012;7:e44448. doi: 10.1371/journal.pone.0044448

    The ability to automatically segment an image into distinct regions is a critical aspect in many visual processing applications. Because inaccuracies often exist in automatic segmentation, manual segmentation is necessary in some application domains to correct mistakes, such as is required in the reconstruction of neuronal processes from microscopic images. The goal of the automated segmentation tool is traditionally to produce the highest-quality segmentation, where quality is measured by the similarity to actual ground truth, so as to minimize the volume of manual correction necessary. Manual correction is generally orders of magnitude more time consuming than automated segmentation, often making the handling of large images intractable. Therefore, we propose a more relevant goal: minimizing the turn-around time of automated/manual segmentation while attaining a given level of similarity with ground truth. It is not always necessary to inspect every aspect of an image to generate a useful segmentation. As such, we propose a strategy to guide manual segmentation to the most uncertain parts of the segmentation. Our contributions include 1) a probabilistic measure that evaluates segmentation without ground truth and 2) a methodology that leverages these probabilistic measures to significantly reduce manual correction while maintaining segmentation quality.
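    A conceptual sketch of uncertainty-guided proofreading, assuming a generic boundary classifier and synthetic features (this is not the probabilistic measure proposed in the paper): rank the automatic decisions by predictive entropy and send only the top of the list to a human, accepting the rest.

    ```python
    # Conceptual sketch: spend the manual-review budget on the least certain decisions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(1000, 8))                  # stand-in boundary features
    y_train = (X_train[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)
    X_new = rng.normal(size=(200, 8))                     # boundaries in a freshly segmented volume

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    p = clf.predict_proba(X_new)                          # per-boundary merge/split probabilities
    entropy = -np.sum(p * np.log(p + 1e-12), axis=1)      # uncertainty of each automatic decision

    budget = 20                                           # decisions a person has time to inspect
    review = np.argsort(entropy)[::-1][:budget]           # most uncertain decisions go to a human
    accepted = np.setdiff1d(np.arange(len(X_new)), review)
    print(f"{len(review)} of {len(X_new)} boundaries flagged for manual review")
    ```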
