Koyama Lab / Publications
56 Publications

Showing 51-56 of 56 results
    01/01/12 | Minimizing manual image segmentation turn-around time for neuronal reconstruction by embracing uncertainty.
    Plaza SM, Scheffer LK, Saunders M
    PLoS One. 2012;7:e44448. doi: 10.1371/journal.pone.0044448

    The ability to automatically segment an image into distinct regions is critical in many visual processing applications. Because automatic segmentation is often inaccurate, manual segmentation is necessary in some application domains to correct mistakes, as is required in the reconstruction of neuronal processes from microscopic images. The traditional goal of an automated segmentation tool is to produce the highest-quality segmentation, where quality is measured by similarity to ground truth, so as to minimize the volume of manual correction needed. Manual correction is generally orders of magnitude more time-consuming than automated segmentation, often making large images intractable to handle. We therefore propose a more relevant goal: minimizing the turn-around time of combined automated/manual segmentation while attaining a target level of similarity with ground truth. It is not always necessary to inspect every part of an image to generate a useful segmentation, so we propose a strategy that guides manual effort to the most uncertain parts of the segmentation. Our contributions include 1) a probabilistic measure that evaluates segmentation without ground truth and 2) a methodology that leverages these probabilistic measures to significantly reduce manual correction while maintaining segmentation quality.
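
The uncertainty-guided idea can be illustrated with a minimal sketch (function names and the entropy score are illustrative assumptions, not the paper's actual probabilistic measure): rank candidate region boundaries by how uncertain the automated classifier is, and send only the most uncertain ones to a human proofreader.

```python
import heapq
import math

def boundary_entropy(p):
    """Binary entropy of a predicted merge probability p; highest at
    p = 0.5, where the classifier is least certain."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def prioritize_boundaries(merge_probs, budget):
    """Rank region boundaries by uncertainty and return the `budget`
    most uncertain ones, so manual correction targets the decisions
    most likely to be wrong instead of the whole image.

    merge_probs: dict mapping a boundary id to the classifier's
    predicted probability that the two regions should merge."""
    heap = [(-boundary_entropy(p), b) for b, p in merge_probs.items()]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(min(budget, len(heap)))]
```

Under this toy model, a boundary scored at 0.5 is reviewed before one scored at 0.99, which matches the abstract's goal of spending manual time only where the automated result is uncertain.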

    10/01/12 | Super-resolution using sparse representations over learned dictionaries: reconstruction of brain structure using electron microscopy.
    Hu T, Nunez-Iglesias J, Vitaladevuni S, Scheffer L, Xu S, Bolorizadeh M, Hess H, Fetter R, Chklovskii D
    arXiv.org. 2012 Oct

    A central problem in neuroscience is reconstructing neuronal circuits at the synapse level. Due to the wide range of scales in brain architecture, such reconstruction requires imaging that is both high-resolution and high-throughput. Existing electron microscopy (EM) techniques possess the required resolution in the lateral plane and offer either high throughput or high depth resolution, but not both. Here, we exploit recent advances in unsupervised learning and signal processing to obtain high-depth-resolution EM images computationally without sacrificing throughput. First, we show that brain tissue can be represented as a sparse linear combination of localized basis functions learned from high-resolution datasets. We then develop compressive-sensing-inspired techniques that can reconstruct the tissue from very few (typically five) tomographic views of each section. This enables tracing of neuronal processes and, hence, high-throughput reconstruction of neural circuits at the level of individual synapses.
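
The sparse-representation step can be sketched with orthogonal matching pursuit, a standard greedy solver for such problems (the function name and identity dictionary below are illustrative assumptions; the paper learns its dictionary from high-resolution EM data and measures tomographic projections rather than direct samples):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: recover a k-sparse coefficient
    vector x with y ~ D @ x, given an (over)complete dictionary D.
    This mirrors the abstract's premise that a tissue patch is a
    sparse linear combination of learned basis functions (columns of D)."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit coefficients on the chosen support by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x = np.zeros(D.shape[1])
        x[support] = coef
        residual = y - D @ x
    return x
```

With only k nonzero coefficients allowed, the signal is pinned down by far fewer measurements than its dimension, which is what lets a few tomographic views per section suffice.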

    Truman Lab / Rubin Lab / FlyEM
    10/01/10 | Refinement of tools for targeted gene expression in Drosophila.
    Pfeiffer BD, Ngo TB, Hibbard KL, Murphy C, Jenett A, Truman JW, Rubin GM
    Genetics. 2010 Oct;186(2):735-55. doi: 10.1534/genetics.110.119917

    A wide variety of biological experiments rely on the ability to express an exogenous gene in a transgenic animal at a defined level and in a spatially and temporally controlled pattern. We describe major improvements of the methods available for achieving this objective in Drosophila melanogaster. We have systematically varied core promoters, UTRs, operator sequences, and transcriptional activating domains used to direct gene expression with the GAL4, LexA, and Split GAL4 transcription factors and the GAL80 transcriptional repressor. The use of site-specific integration allowed us to make quantitative comparisons between different constructs inserted at the same genomic location. We also characterized a set of PhiC31 integration sites for their ability to support transgene expression of both drivers and responders in the nervous system. The increased strength and reliability of these optimized reagents overcome many of the previous limitations of these methods and will facilitate genetic manipulations of greater complexity and sophistication.

    10/01/10 | Semi-automated reconstruction of neural circuits using electron microscopy.
    Chklovskii DB, Vitaladevuni S, Scheffer LK
    Current Opinion in Neurobiology. 2010 Oct;20:667-75

    Reconstructing neuronal circuits at the level of synapses is a central problem in neuroscience, and the focus of the nascent field of connectomics. Previously used to reconstruct the C. elegans wiring diagram, serial-section transmission electron microscopy (ssTEM) is a proven technique for the task. However, to reconstruct more complex circuits, ssTEM will require the automation of image processing. We review progress in the processing of electron microscopy images and, in particular, a semi-automated reconstruction pipeline deployed at Janelia. Drosophila circuits underlying identified behaviors are being reconstructed in the pipeline with the goal of generating a complete Drosophila connectome.

    01/01/10 | Anatomic analysis of Gal4 expression patterns of the Rubin line collection: the central complex.
    Jenett A, Wolff T, Nern A, Pfeiffer BD, Ngo T, Murphy C, Long F, Peng H, Rubin GM
    Journal of Neurogenetics. 2010;24:71-2

    01/01/10 | Increasing depth resolution of electron microscopy of neural circuits using sparse tomographic reconstruction.
    Veeraraghavan A, Genkin AV, Vitaladevuni S, Scheffer L, Xu C, Hess H, Fetter R, Cantoni M, Knott G, Chklovskii DB
    Computer Vision and Pattern Recognition (CVPR). 2010:1767-74. doi: 10.1109/CVPR.2010.5539846