Select Publications

03/02/14 | Toward large-scale connectome reconstructions.
Plaza SM, Scheffer LK, Chklovskii DB
Current Opinion in Neurobiology. 2014 Mar 2;25C:201-10. doi: 10.1016/j.conb.2014.01.019

Recent results have shown the possibility of both reconstructing connectomes of small but biologically interesting circuits and extracting from these connectomes insights into their function. However, these reconstructions were heroic proof-of-concept experiments, requiring person-months of effort per neuron reconstructed, and will not scale to larger circuits, much less the brains of entire animals. In this paper we examine what will be required to generate and use substantially larger connectomes, finding five areas that need increased attention: firstly, imaging better suited to automatic reconstruction, with excellent z-resolution; secondly, automatic detection, validation, and measurement of synapses; thirdly, reconstruction methods that keep and use uncertainty metrics for every object, from initial images, through segmentation, reconstruction, and connectome queries; fourthly, processes that are fully incremental, so that the connectome may be used before it is fully complete; and finally, better tools for analysis of connectomes, once they are obtained.

04/04/14 | Graph-based active learning of agglomeration (GALA): a Python library to segment 2D and 3D neuroimages.
Nunez-Iglesias J, Kennedy R, Plaza SM, Chakraborty A, Katz WT
Frontiers in Neuroinformatics. 2014 Apr 4;8:34. doi: 10.3389/fninf.2014.00034

The aim in high-resolution connectomics is to reconstruct complete neuronal connectivity in a tissue. Currently, the only technology capable of resolving the smallest neuronal processes is electron microscopy (EM). Thus, a common approach to network reconstruction is to perform (error-prone) automatic segmentation of EM images, followed by manual proofreading by experts to fix errors. We have developed an algorithm and software library to not only improve the accuracy of the initial automatic segmentation, but also point out the image coordinates where it is likely to have made errors. Our software, called gala (graph-based active learning of agglomeration), improves the state of the art in agglomerative image segmentation. It is implemented in Python and makes extensive use of the scientific Python stack (numpy, scipy, networkx, scikit-learn, scikit-image, and others). We present here the software architecture of the gala library, and discuss several designs that we consider would be generally useful for other segmentation packages. We also discuss the current limitations of the gala library and how we intend to address them.
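The agglomeration loop the abstract describes can be illustrated in plain Python: superpixel boundaries are scored by a classifier and merged cheapest-first until the remaining boundaries look real. This is a hedged sketch of the general technique only; the names (`agglomerate`, `boundary_score`) are ours and do not reflect the actual gala API, and real gala re-extracts features after every merge rather than scoring each edge once.

```python
import heapq


def agglomerate(rag_edges, boundary_score, threshold=0.5):
    """Greedy agglomeration sketch over a region adjacency graph.

    `rag_edges` maps (u, v) superpixel pairs to a feature vector;
    `boundary_score` is any callable returning the probability that the
    boundary between two regions is true (i.e. should NOT be merged).
    Edges are merged in order of increasing score until the best
    remaining score exceeds `threshold`.
    """
    parent = {}

    def find(x):  # union-find with path halving
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    heap = [(boundary_score(f), u, v) for (u, v), f in rag_edges.items()]
    heapq.heapify(heap)
    while heap:
        score, u, v = heapq.heappop(heap)
        if score > threshold:
            break  # remaining boundaries are confidently real
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv  # merge the two regions
    return {node: find(node) for node in parent}


# Boundaries scored below the threshold (1-2 and 3-4) are merged;
# the confident 2-3 boundary survives, leaving two regions.
labels = agglomerate({(1, 2): 0.1, (2, 3): 0.9, (3, 4): 0.2}, lambda f: f)
```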

08/20/13 | Machine learning of hierarchical clustering to segment 2D and 3D images.
Nunez-Iglesias J, Kennedy R, Parag T, Shi J, Chklovskii DB
PLoS One. 2013;8:e71715. doi: 10.1371/journal.pone.0071715

We aim to improve segmentation through the use of machine learning tools during region agglomeration. We propose an active learning approach for performing hierarchical agglomerative segmentation from superpixels. Our method combines multiple features at all scales of the agglomerative process, works for data with an arbitrary number of dimensions, and scales to very large datasets. We advocate the use of variation of information to measure segmentation accuracy, particularly in 3D electron microscopy (EM) images of neural tissue, and using this metric demonstrate an improvement over competing algorithms in EM and natural images.
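The accuracy metric advocated above, variation of information, is straightforward to compute from the joint label histogram of two segmentations: VI(S1, S2) = H(S1|S2) + H(S2|S1) = 2·H(S1, S2) − H(S1) − H(S2). A minimal NumPy sketch (the function name is ours, not taken from the paper's code):

```python
import numpy as np


def variation_of_information(seg1, seg2):
    """Variation of information between two label images (lower is better).

    Both inputs are integer label arrays of the same shape. Entropies
    are computed in bits from the joint label-contingency table.
    """
    s1, s2 = np.ravel(seg1), np.ravel(seg2)
    n = s1.size
    # Joint distribution over (label1, label2) pairs.
    _, counts = np.unique(np.stack([s1, s2]), axis=1, return_counts=True)
    p_xy = counts / n
    # Marginal distributions.
    p_x = np.unique(s1, return_counts=True)[1] / n
    p_y = np.unique(s2, return_counts=True)[1] / n
    h_x = -np.sum(p_x * np.log2(p_x))
    h_y = -np.sum(p_y * np.log2(p_y))
    h_xy = -np.sum(p_xy * np.log2(p_xy))
    # H(X|Y) + H(Y|X) = 2*H(X,Y) - H(X) - H(Y)
    return 2 * h_xy - h_x - h_y


# Identical segmentations have VI = 0; splitting a region adds entropy.
a = np.array([[0, 0, 1, 1], [0, 0, 1, 1]])
b = np.array([[0, 0, 1, 2], [0, 0, 1, 2]])
vi_same = variation_of_information(a, a)  # 0.0
vi_split = variation_of_information(a, b)  # 0.5 bits
```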

01/01/12 | Minimizing manual image segmentation turn-around time for neuronal reconstruction by embracing uncertainty.
Plaza SM, Scheffer LK, Saunders M
PLoS One. 2012;7:e44448. doi: 10.1371/journal.pone.0044448

The ability to automatically segment an image into distinct regions is a critical aspect of many visual processing applications. Because inaccuracies often exist in automatic segmentation, manual segmentation is necessary in some application domains to correct mistakes, such as is required in the reconstruction of neuronal processes from microscopic images. The goal of an automated segmentation tool is traditionally to produce the highest-quality segmentation, where quality is measured by similarity to actual ground truth, so as to minimize the volume of manual correction necessary. Manual correction is generally orders of magnitude more time-consuming than automated segmentation, often making the handling of large images intractable. Therefore, we propose a more relevant goal: minimizing the turn-around time of automated/manual segmentation while attaining a desired level of similarity with ground truth. It is not always necessary to inspect every aspect of an image to generate a useful segmentation. As such, we propose a strategy that guides manual segmentation to the most uncertain parts of the segmentation. Our contributions include (1) a probabilistic measure that evaluates segmentation without ground truth and (2) a methodology that leverages these probabilistic measures to significantly reduce manual correction while maintaining segmentation quality.
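The strategy of steering proofreaders toward uncertain regions can be illustrated with a toy ranking: boundaries whose predicted probability of being real is nearest 0.5 carry the most binary entropy and are reviewed first, while confident predictions (near 0 or 1) are deferred. This is a hypothetical sketch of the general idea, not the paper's actual probabilistic measure:

```python
import math


def proofread_order(boundary_probs):
    """Rank boundary IDs for manual review, most uncertain first.

    `boundary_probs` maps a boundary ID to the classifier's probability
    that the boundary is real. Binary entropy peaks at p = 0.5, so the
    most ambiguous boundaries sort to the front of the queue.
    """
    def entropy(p):
        if p in (0.0, 1.0):
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    return sorted(boundary_probs,
                  key=lambda b: entropy(boundary_probs[b]),
                  reverse=True)


# The near-coin-flip boundary 'b' is queued first; the confident
# predictions 'a' (0.98) and 'c' (0.10) can wait.
order = proofread_order({'a': 0.98, 'b': 0.52, 'c': 0.10})
print(order)  # ['b', 'c', 'a']
```

In a real workflow the review loop would stop once the remaining entropy budget implies the segmentation is within the desired distance of ground truth, which is the turn-around-time trade-off the abstract describes.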
