Koyama Lab / Publications

    01/13/15 | Mapping social behavior-induced brain activation at cellular resolution in the mouse.
    Kim Y, Venkataraju KU, Pradhan K, Mende C, Taranda J, Turaga SC, Arganda-Carreras I, Ng L, Hawrylycz MJ, Rockland KS, Seung HS, Osten P
    Cell Reports. 2015 Jan 13;10(2):292-305. doi: 10.1016/j.celrep.2014.12.014

    Understanding how brain activation mediates behaviors is a central goal of systems neuroscience. Here, we apply an automated method for mapping brain activation in the mouse in order to probe how sex-specific social behaviors are represented in the male brain. Our method uses the immediate-early gene c-fos, a marker of neuronal activation, visualized by serial two-photon tomography: the c-fos-GFP+ neurons are computationally detected, their distribution is registered to a reference brain and a brain atlas, and their numbers are analyzed by statistical tests. Our results reveal distinct and shared female and male interaction-evoked patterns of male brain activation representing sex discrimination and social recognition. We also identify brain regions whose degree of activity correlates with specific features of social behaviors and estimate the total numbers and the densities of activated neurons per brain area. Our study opens the door to automated screening of behavior-evoked brain activation in the mouse.
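
    The automated pipeline summarized above (detect labeled neurons, register them to an atlas, compare counts statistically) can be sketched in a few lines. The code below is not the authors' implementation: it assumes a GFP volume already registered to an atlas label volume and a hand-picked intensity threshold (all hypothetical inputs), and uses SciPy blob labeling plus a per-region rank-sum test purely for illustration.

```python
# Minimal sketch (not the authors' pipeline): count detected c-fos-GFP+ cells
# per atlas region and compare two behavioral groups region by region.
# `gfp_volume`, `atlas_labels`, and `threshold` are hypothetical inputs.
from scipy import ndimage, stats

def count_cells_per_region(gfp_volume, atlas_labels, threshold):
    """Detect bright blobs and tally them per atlas region.

    gfp_volume   : 3-D image already registered to the reference atlas
    atlas_labels : integer volume of the same shape, one ID per brain region
    threshold    : assumed intensity cutoff for calling a voxel labeled
    """
    blobs, n_blobs = ndimage.label(gfp_volume > threshold)
    centroids = ndimage.center_of_mass(gfp_volume, blobs, range(1, n_blobs + 1))
    counts = {}
    for z, y, x in centroids:
        region = int(atlas_labels[int(z), int(y), int(x)])
        counts[region] = counts.get(region, 0) + 1
    return counts

def compare_groups(counts_group_a, counts_group_b, region_ids):
    """Per-region rank-sum test between two groups of animals (one dict each)."""
    return {r: stats.mannwhitneyu([c.get(r, 0) for c in counts_group_a],
                                  [c.get(r, 0) for c in counts_group_b],
                                  alternative="two-sided").pvalue
            for r in region_ids}
```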

    05/15/14 | Space-time wiring specificity supports direction selectivity in the retina.
    Kim JS, Greene MJ, Zlateski A, Lee K, Richardson M, Turaga SC, Purcaro M, Balkam M, Robinson A, Behabadi BF, Campos M, Denk W, Seung HS, EyeWirers
    Nature. 2014 May 15;509(7500):331-6. doi: 10.1038/nature13240

    How does the mammalian retina detect motion? This classic problem in visual neuroscience has remained unsolved for 50 years. In search of clues, here we reconstruct Off-type starburst amacrine cells (SACs) and bipolar cells (BCs) in serial electron microscopic images with help from EyeWire, an online community of 'citizen neuroscientists'. On the basis of quantitative analyses of contact area and branch depth in the retina, we find evidence that one BC type prefers to wire with a SAC dendrite near the SAC soma, whereas another BC type prefers to wire far from the soma. The near type is known to lag the far type in time of visual response. A mathematical model shows how such 'space-time wiring specificity' could endow SAC dendrites with receptive fields that are oriented in space-time and therefore respond selectively to stimuli that move in the outward direction from the soma.
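
    A toy simulation can make the space-time wiring argument concrete. The sketch below is not the paper's model; the input positions, lags, sweep speed, and pulse width are invented numbers chosen only to show how position-dependent delays favor outward motion.

```python
# Toy simulation (not the paper's model) of the space-time wiring idea:
# near-soma inputs lag far inputs, so a stimulus sweeping outward along the
# dendrite arrives in register with the lags and sums more strongly.
# All numbers (positions, lags, speed, pulse width) are invented.
import numpy as np

t = np.arange(0.0, 0.5, 1e-3)               # time axis, seconds
positions = np.linspace(0.0, 1.0, 20)       # input distance from soma (normalized)
lags = 0.08 * (1.0 - positions)             # assumed: near inputs lag the most

def dendrite_response(speed):
    """Summed input drive for a bar sweeping the dendrite at `speed`
    (positive = outward from the soma); each input contributes a brief
    Gaussian pulse delayed by its position-dependent lag."""
    start = 0.0 if speed > 0 else 1.0
    response = np.zeros_like(t)
    for pos, lag in zip(positions, lags):
        crossing = (pos - start) / speed     # time the bar reaches this input
        response += np.exp(-((t - crossing - lag) ** 2) / (2 * 0.005 ** 2))
    return response

# Speed chosen so the outward sweep roughly compensates the assumed lags.
outward, inward = dendrite_response(+12.5), dendrite_response(-12.5)
print(f"peak outward: {outward.max():.1f}   peak inward: {inward.max():.1f}")
# The outward sweep synchronizes the delayed inputs and yields a much larger
# peak: a space-time oriented receptive field preferring outward motion.
```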

    08/08/13 | Connectomic reconstruction of the inner plexiform layer in the mouse retina.
    Helmstaedter M, Briggman KL, Turaga SC, Jain V, Seung HS, Denk W
    Nature. 2013 Aug 8;500(7461):168-74. doi: 10.1038/nature12346

    Comprehensive high-resolution structural maps are central to functional exploration and understanding in biology. For the nervous system, in which high resolution and large spatial extent are both needed, such maps are scarce as they challenge data acquisition and analysis capabilities. Here we present for the mouse inner plexiform layer–the main computational neuropil region in the mammalian retina–the dense reconstruction of 950 neurons and their mutual contacts. This was achieved by applying a combination of crowd-sourced manual annotation and machine-learning-based volume segmentation to serial block-face electron microscopy data. We characterize a new type of retinal bipolar interneuron and show that we can subdivide a known type based on connectivity. Circuit motifs that emerge from our data indicate a functional mechanism for a known cellular response in a ganglion cell that detects localized motion, and predict that another ganglion cell is motion sensitive.

    01/01/13 | Inferring neural population dynamics from multiple partial recordings of the same neural circuit.
    Turaga SC, Buesing L, Packer AM, Dalgleish H, Pettit N, Häusser M, Macke JH
    Advances in Neural Information Processing Systems (NIPS). 2013;26.

    Simultaneous recordings of the activity of large neural populations are extremely valuable as they can be used to infer the dynamics and interactions of neurons in a local circuit, shedding light on the computations performed. It is now possible to measure the activity of hundreds of neurons using 2-photon calcium imaging. However, many computations are thought to involve circuits consisting of thousands of neurons, such as cortical barrels in rodent somatosensory cortex. Here we contribute a statistical method for "stitching" together sequentially imaged sets of neurons into one model by phrasing the problem as fitting a latent dynamical system with missing observations. This method allows us to substantially expand the population sizes for which population dynamics can be characterized, beyond the number of simultaneously imaged neurons. In particular, we demonstrate using recordings in mouse somatosensory cortex that this method makes it possible to predict noise correlations between non-simultaneously recorded neuron pairs.
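
    The following sketch illustrates why such a stitched latent model is useful, not how it is fitted (the paper fits a latent dynamical system by treating unimaged neurons as missing observations). All parameters below are made up; the point is only that a shared latent linear dynamical system implies covariances, and hence noise correlations, for neuron pairs that were never imaged together.

```python
# Sketch of the key idea, not the authors' fitting code: once a shared latent
# linear dynamical system has been estimated (in the paper, via fitting with
# missing observations), it predicts covariances even for neuron pairs never
# imaged simultaneously.  All parameters here are made up.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(0)
n_latent, n_neurons = 3, 10

A = 0.9 * np.eye(n_latent) + 0.05 * rng.standard_normal((n_latent, n_latent))
Q = 0.1 * np.eye(n_latent)                         # latent noise covariance
C = rng.standard_normal((n_neurons, n_latent))     # loadings shared across sessions
R = np.diag(0.2 + 0.1 * rng.random(n_neurons))     # private observation noise

# Stationary latent covariance P solves P = A P A^T + Q.
P = solve_discrete_lyapunov(A, Q)

# Model-implied covariance over *all* neurons, including pairs recorded in
# different imaging sessions (say neurons 0-4 in one, 5-9 in another).
Sigma = C @ P @ C.T + R
corr = Sigma / np.sqrt(np.outer(np.diag(Sigma), np.diag(Sigma)))
print("predicted correlation, neuron 2 (session 1) vs neuron 7 (session 2):",
      round(float(corr[2, 7]), 3))
```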

    12/17/11 | Learning to Agglomerate Superpixel Hierarchies
    Viren Jain, Srinivas C. Turaga, K. Briggman, Moritz N. Helmstaedter, Winfried Denk, H. S. Seung
    Advances in Neural Information Processing Systems 24 (NIPS 2011). 2011 Dec;24.

    An agglomerative clustering algorithm merges the most similar pair of clusters at every iteration. The function that evaluates similarity is traditionally hand-designed, but there has been recent interest in supervised or semi-supervised settings in which ground-truth clustered data is available for training. Here we show how to train a similarity function by regarding it as the action-value function of a reinforcement learning problem. We apply this general method to segment images by clustering superpixels, an application that we call Learning to Agglomerate Superpixel Hierarchies (LASH). When applied to a challenging dataset of brain images from serial electron microscopy, LASH dramatically improved segmentation accuracy when clustering supervoxels generated by state-of-the-art boundary detection algorithms. The naive strategy of directly training only supervoxel similarities and applying single-linkage clustering produced less improvement.
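
    A minimal sketch of the greedy agglomeration loop is given below. It is not the LASH training procedure; the learned action-value function is stood in for by an arbitrary score_fn, and the toy similarity at the end is invented for illustration.

```python
# Minimal sketch of greedy agglomeration, not the LASH training procedure:
# repeatedly merge the highest-scoring adjacent pair until no score exceeds a
# threshold.  `score_fn` stands in for the learned action-value function.
def agglomerate(edges, score_fn, threshold=0.5):
    """edges    : list of (segment_a, segment_b) adjacencies between supervoxels
    score_fn : callable(cluster_a, cluster_b) -> merge score
    Returns a mapping from each segment to its final cluster (a frozenset)."""
    clusters = {s: frozenset([s]) for e in edges for s in e}
    while True:
        # Score every pair of clusters that is still connected by an edge.
        candidates = {}
        for a, b in edges:
            ca, cb = clusters[a], clusters[b]
            if ca is not cb:
                candidates[(ca, cb)] = score_fn(ca, cb)
        if not candidates:
            break
        (ca, cb), best = max(candidates.items(), key=lambda kv: kv[1])
        if best < threshold:
            break
        merged = ca | cb                     # merge the best-scoring pair
        for s in merged:
            clusters[s] = merged
    return clusters

# Toy usage: favour merging segments whose (hypothetical) mean intensities match.
means = {1: 0.20, 2: 0.25, 3: 0.90}
similarity = lambda ca, cb: 1.0 - abs(min(means[s] for s in ca)
                                      - min(means[s] for s in cb))
print(agglomerate([(1, 2), (2, 3)], similarity))   # 1 and 2 merge; 3 stays alone
```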

    10/01/10 | Machines that learn to segment images: a crucial technology for connectomics.
    Jain V, Seung HS, Turaga SC
    Current Opinion in Neurobiology. 2010 Oct;20(5):653-66. doi: 10.1016/j.conb.2010.07.004

    Connections between neurons can be found by checking whether synapses exist at points of contact, which in turn are determined by neural shapes. Finding these shapes is a special case of image segmentation, which is laborious for humans and would ideally be performed by computers. New metrics properly quantify the performance of a computer algorithm using its disagreement with 'true' segmentations of example images. New machine learning methods search for segmentation algorithms that minimize such metrics. These advances have reduced computer errors dramatically. It should now be faster for a human to correct the remaining errors than to segment an image manually. Further reductions in human effort are expected, and crucial for finding connectomes more complex than that of Caenorhabditis elegans.

    02/01/10 | Convolutional networks can learn to generate affinity graphs for image segmentation.
    Turaga SC, Murray JF, Jain V, Roth F, Helmstaedter M, Briggman K, Denk W, Seung HS
    Neural Computation. 2010 Feb;22(2):511-38. doi: 10.1162/neco.2009.10-08-881

    Many image segmentation algorithms first generate an affinity graph and then partition it. We present a machine learning approach to computing an affinity graph using a convolutional network (CN) trained using ground truth provided by human experts. The CN affinity graph can be paired with any standard partitioning algorithm and improves segmentation accuracy significantly compared to standard hand-designed affinity functions. We apply our algorithm to the challenging 3D segmentation problem of reconstructing neuronal processes from volumetric electron microscopy (EM) and show that we are able to learn a good affinity graph directly from the raw EM images. Further, we show that our affinity graph improves the segmentation accuracy of both simple and sophisticated graph partitioning algorithms. In contrast to previous work, we do not rely on prior knowledge in the form of hand-designed image features or image preprocessing. Thus, we expect our algorithm to generalize effectively to arbitrary image types.
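
    A hedged sketch of the general idea follows; it is not the paper's architecture or training setup. It shows a small 3D convolutional network (PyTorch, with arbitrary layer sizes) that outputs one affinity channel per nearest-neighbour edge direction, trained against affinities derived from a ground-truth label volume.

```python
# Hedged sketch, not the paper's architecture: a tiny 3-D convolutional
# network mapping an EM volume to three affinity channels (one per +z/+y/+x
# nearest-neighbour edge), trained against affinities derived from a
# ground-truth label volume.  Layer sizes and the loss are illustrative.
import torch
import torch.nn as nn

class AffinityCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 3, kernel_size=1), nn.Sigmoid(),   # 3 affinity maps
        )

    def forward(self, x):
        return self.net(x)

def labels_to_affinities(labels):
    """Ground-truth affinity = 1 where neighbouring voxels share a label."""
    aff = torch.zeros((3,) + labels.shape, dtype=torch.float32)
    aff[0, :-1] = (labels[1:] == labels[:-1]).float()                    # +z
    aff[1, :, :-1] = (labels[:, 1:] == labels[:, :-1]).float()           # +y
    aff[2, :, :, :-1] = (labels[:, :, 1:] == labels[:, :, :-1]).float()  # +x
    return aff

# One illustrative training step on random stand-in data.
model, loss_fn = AffinityCNN(), nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
em = torch.rand(1, 1, 16, 64, 64)                 # batch, channel, z, y, x
labels = torch.randint(0, 5, (16, 64, 64))
target = labels_to_affinities(labels).unsqueeze(0)
optimizer.zero_grad()
loss = loss_fn(model(em), target)
loss.backward()
optimizer.step()
```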

    12/07/09 | Maximin affinity learning of image segmentation
    Srinivas C. Turaga, Kevin Briggman, Moritz N. Helmstaedter, Winfried Denk, Sebastian Seung
    Advances in Neural Information Processing Systems 22 (NIPS 2009). 2009;22.

    Images can be segmented by first using a classifier to predict an affinity graph that reflects the degree to which image pixels must be grouped together and then partitioning the graph to yield a segmentation. Machine learning has been applied to the affinity classifier to produce affinity graphs that are good in the sense of minimizing edge misclassification rates. However, this error measure is only indirectly related to the quality of segmentations produced by ultimately partitioning the affinity graph. We present the first machine learning algorithm for training a classifier to produce affinity graphs that are good in the sense of producing segmentations that directly minimize the Rand index, a well known segmentation performance measure. The Rand index measures segmentation performance by quantifying the classification of the connectivity of image pixel pairs after segmentation. By using the simple graph partitioning algorithm of finding the connected components of the thresholded affinity graph, we are able to train an affinity classifier to directly minimize the Rand index of segmentations resulting from the graph partitioning. Our learning algorithm corresponds to the learning of maximin affinities between image pixel pairs, which are predictive of the pixel-pair connectivity.
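
    The segmentation and scoring steps named above (threshold the affinity graph, take connected components, score with the Rand index) can be sketched directly; the learning of maximin affinities itself is not reproduced here. The toy graph and affinity values below are invented.

```python
# Sketch of the segmentation and scoring steps, not the learning algorithm:
# threshold the affinity graph, take connected components, and score the
# result with the Rand index.  The toy graph below is invented.
import numpy as np
from itertools import combinations
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def segment(n_pixels, edges, affinities, threshold=0.5):
    """Keep an edge if its affinity exceeds the threshold, then label pixels
    by connected component of the resulting graph."""
    keep = [(i, j) for (i, j), a in zip(edges, affinities) if a > threshold]
    rows = [i for i, j in keep] + [j for i, j in keep]
    cols = [j for i, j in keep] + [i for i, j in keep]
    graph = coo_matrix((np.ones(len(rows)), (rows, cols)),
                       shape=(n_pixels, n_pixels))
    return connected_components(graph, directed=False)[1]

def rand_index(seg_a, seg_b):
    """Fraction of pixel pairs whose connectivity the two segmentations agree on."""
    pairs = list(combinations(range(len(seg_a)), 2))
    agree = sum((seg_a[i] == seg_a[j]) == (seg_b[i] == seg_b[j]) for i, j in pairs)
    return agree / len(pairs)

# Toy example: 4 pixels in a line; the middle affinity falls below threshold.
true_seg = np.array([0, 0, 1, 1])
pred_seg = segment(4, [(0, 1), (1, 2), (2, 3)], affinities=[0.9, 0.2, 0.8])
print(pred_seg, rand_index(pred_seg, true_seg))   # -> [0 0 1 1] 1.0
```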

    10/14/07 | Supervised Learning of Image Restoration with Convolutional Networks
    Jain V, Murray J, Roth F, Turaga S, Zhigulin V, Briggman K, Helmstaedter M, Denk W, Seung H
    IEEE 11th International Conference on Computer Vision (ICCV 2007). 2007 Oct. doi: 10.1109/ICCV.2007.4408909

    Convolutional networks have achieved a great deal of success in high-level vision problems such as object recognition. Here we show that they can also be used as a general method for low-level image processing. As an example of our approach, convolutional networks are trained using gradient learning to solve the problem of restoring noisy or degraded images. For our training data, we have used electron microscopic images of neural circuitry with ground truth restorations provided by human experts. On this dataset, Markov random field (MRF), conditional random field (CRF), and anisotropic diffusion algorithms perform about the same as simple thresholding, but superior performance is obtained with a convolutional network containing over 34,000 adjustable parameters. When restored by this convolutional network, the images are clean enough to be used for segmentation, whereas the other approaches fail in this respect. We do not believe that convolutional networks are fundamentally superior to MRFs as a representation for image processing algorithms. On the contrary, the two approaches are closely related. But in practice, it is possible to train complex convolutional networks, while even simple MRF models are hindered by problems with Bayesian learning and inference procedures. Our results suggest that high model complexity is the single most important factor for good performance, and this is possible with convolutional networks.

    11/01/06 | Cluster analysis and robust use of full-field models for sonar beamforming
    Brian Tracey, Nigel Lee, Srinivas Turaga
    Journal of the Acoustical Society of America. 2006 Nov;120(5):2635–2647. doi: 10.1121/1.2346128

    Multipath propagation in shallow water can lead to mismatch losses when single-path replicas are used for horizontal array beamforming. Matched field processing (MFP) seeks to remedy this by using full-field acoustic propagation models to predict the multipath arrival structure. Ideally MFP can give source localization in range and depth as well as detection gains, but robustly estimating range and depth is difficult in practice. The approach described here seeks to collapse full-field replica outputs to bearing, which is robustly estimated, while retaining any signal gains provided by the full-field model. Cluster analysis is used to group together full-field replicas with similar responses. This yields a less redundant “sampled field” describing a set of representative multipath structures for each bearing. A detection algorithm is introduced that uses clustering to collapse beamformer outputs to bearing such that signal gains are retained while increases in the noise floor are minimized. Horizontal array data from SWELLEX-96 are used to demonstrate the detection benefits of sampled-field as compared to single-path beamforming.
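
    A rough numerical sketch of the "sampled field" idea follows. It is not the paper's algorithm: the clustering features, the choice of k-means, and the use of a cluster mean as a representative replica are all stand-in assumptions.

```python
# Rough numerical sketch of the "sampled field" idea, not the paper's
# algorithm: cluster the model-generated full-field replicas for a bearing,
# keep one representative per cluster, and report the best match over the
# representatives so the output collapses to bearing.
import numpy as np
from scipy.cluster.vq import kmeans2

def sampled_field(replicas, n_clusters=3):
    """replicas : complex array (n_replicas, n_sensors) of full-field steering
    vectors for one bearing (e.g., many candidate ranges and depths).
    Clusters on stacked real/imag parts; the cluster mean is a crude
    stand-in for a representative replica."""
    feats = np.hstack([replicas.real, replicas.imag])
    _, labels = kmeans2(feats, n_clusters, minit="++")
    reps = [replicas[labels == k].mean(axis=0)
            for k in range(n_clusters) if np.any(labels == k)]
    return np.array(reps)

def bearing_response(snapshot, replicas_by_bearing):
    """Collapse to bearing: max conventional beamformer power over each
    bearing's representative replicas for one array snapshot."""
    powers = []
    for reps in replicas_by_bearing:
        w = reps / np.linalg.norm(reps, axis=1, keepdims=True)
        powers.append(np.max(np.abs(w.conj() @ snapshot) ** 2))
    return np.array(powers)
```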
