Turaga Lab / Publications

51 Publications

Showing 31-40 of 51 results
10/04/20 | Learning Guided Electron Microscopy with Active Acquisition
Mi L, Wang H, Meirovitch Y, Schalek R, Turaga SC, Lichtman JW, Samuel AD, Shavit N
Medical Image Computing and Computer Assisted Intervention – MICCAI 2020 (Martel AL, Abolmaesumi P, Stoyanov D, Mateus D, Zuluaga MA, Zhou SK, Racoceanu D, Joskowicz L, eds). 10/2020

Single-beam scanning electron microscopes (SEM) are widely used to acquire massive datasets for biomedical study, material analysis, and fabrication inspection. Datasets are typically acquired with uniform acquisition: applying the electron beam with the same power and duration to all image pixels, even if there is great variety in the pixels' importance for eventual use. Many SEMs are now able to move the beam to any pixel in the field of view without delay, enabling them, in principle, to invest their time budget more effectively with non-uniform imaging.
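
To make the non-uniform imaging idea concrete, here is a small illustrative sketch (not the authors' pipeline): given a hypothetical per-pixel importance map, the remaining dwell-time budget is spent only on the pixels predicted to matter most. The function name, array sizes, and the 20% budget are assumptions for illustration.

```python
import numpy as np

def allocate_rescans(importance, budget_fraction=0.2):
    """Select the top `budget_fraction` of pixels, ranked by a predicted
    importance score, for slow high-dose re-imaging; everything else keeps
    the cheap low-dose scan. `importance` stands in for whatever an
    acquisition-guiding network would predict."""
    flat = importance.ravel()
    k = max(1, int(budget_fraction * flat.size))
    top = np.argpartition(flat, -k)[-k:]          # indices of the k highest scores
    mask = np.zeros(flat.size, dtype=bool)
    mask[top] = True
    return mask.reshape(importance.shape)

# toy example: random importance map over a 64x64 field of view
rng = np.random.default_rng(0)
importance = rng.random((64, 64))
rescan_mask = allocate_rescans(importance)
print(rescan_mask.mean())                          # ~0.2 of pixels get the high-SNR pass
```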

12/17/11 | Learning to Agglomerate Superpixel Hierarchies
Jain V, Turaga SC, Briggman K, Helmstaedter MN, Denk W, Seung HS
Advances in Neural Information Processing Systems 24 (NIPS 2011). 12/2011

An agglomerative clustering algorithm merges the most similar pair of clusters at every iteration. The function that evaluates similarity is traditionally hand-designed, but there has been recent interest in supervised or semi-supervised settings in which ground-truth clustered data is available for training. Here we show how to train a similarity function by regarding it as the action-value function of a reinforcement learning problem. We apply this general method to segment images by clustering superpixels, an application that we call Learning to Agglomerate Superpixel Hierarchies (LASH). When applied to a challenging dataset of brain images from serial electron microscopy, LASH dramatically improved segmentation accuracy when clustering supervoxels generated by state-of-the-art boundary detection algorithms. The naive strategy of directly training only supervoxel similarities and applying single linkage clustering produced less improvement.
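
A rough sketch of the agglomeration loop described above, with a hand-rolled similarity standing in for the learned action-value function; the function names, scalar features, and threshold are illustrative assumptions, not the LASH implementation.

```python
def agglomerate(features, edges, similarity, threshold=0.5):
    """Greedy agglomerative clustering over a region adjacency graph.
    `features` maps cluster id -> scalar summary, `edges` is a set of
    frozensets of adjacent cluster ids, and `similarity` is any scoring
    function; in LASH it would be the learned action-value function."""
    features = dict(features)
    parent = {c: c for c in features}

    def find(c):
        while parent[c] != c:
            c = parent[c]
        return c

    live = {e for e in edges if len(e) == 2}
    while live:
        best = max(live, key=lambda e: similarity(*(features[find(c)] for c in e)))
        a, b = (find(c) for c in best)
        if a == b or similarity(features[a], features[b]) < threshold:
            live.discard(best)
            continue
        features[a] = 0.5 * (features[a] + features[b])   # pool the merged clusters
        parent[b] = a
        live = {frozenset({find(x) for x in e}) for e in live}
        live = {e for e in live if len(e) == 2}
    return {c: find(c) for c in parent}

# toy example: three "supervoxels" on a chain; 0 and 1 are similar, 2 is not
feats = {0: 0.10, 1: 0.15, 2: 0.90}
sim = lambda x, y: 1.0 - abs(x - y)                       # higher = more similar
print(agglomerate(feats, {frozenset({0, 1}), frozenset({1, 2})}, sim))
# maps 0 and 1 into one cluster, leaves 2 on its own
```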

01/01/21 | Local Shape Descriptors for Neuron Segmentation
Sheridan A, Nguyen T, Deb D, Lee WA, Saalfeld S, Turaga S, Manor U, Funke J
bioRxiv. 2021 Jan. doi: 10.1101/2021.01.18.427039

We present a simple, yet effective, auxiliary learning task for the problem of neuron segmentation in electron microscopy volumes. The auxiliary task consists of the prediction of Local Shape Descriptors (LSDs), which we combine with conventional voxel-wise direct neighbor affinities for neuron boundary detection. The shape descriptors are designed to capture local statistics about the neuron to be segmented, such as diameter, elongation, and direction. In a large study comparing several existing methods across various specimens, imaging techniques, and resolutions, we find that auxiliary learning of LSDs consistently increases segmentation accuracy of affinity-based methods over a range of metrics. Furthermore, the addition of LSDs promotes affinity-based segmentation methods to be on par with the current state of the art for neuron segmentation (Flood-Filling Networks, FFN), while being two orders of magnitude more efficient, a critical requirement for the processing of future petabyte-sized datasets. Implementations of the new auxiliary learning task, network architectures, training, prediction, and evaluation code, as well as the datasets used in this study, are publicly available as a benchmark for future method contributions.

02/01/23 | Local shape descriptors for neuron segmentation.
Sheridan A, Nguyen TM, Deb D, Lee WA, Saalfeld S, Turaga SC, Manor U, Funke J
Nature Methods. 2023 Feb 01;20(2):295-303. doi: 10.1038/s41592-022-01711-z

We present an auxiliary learning task for the problem of neuron segmentation in electron microscopy volumes. The auxiliary task consists of the prediction of local shape descriptors (LSDs), which we combine with conventional voxel-wise direct neighbor affinities for neuron boundary detection. The shape descriptors capture local statistics about the neuron to be segmented, such as diameter, elongation, and direction. In a study comparing several existing methods across various specimens, imaging techniques, and resolutions, auxiliary learning of LSDs consistently increases segmentation accuracy of affinity-based methods over a range of metrics. Furthermore, the addition of LSDs promotes affinity-based segmentation methods to be on par with the current state of the art for neuron segmentation (flood-filling networks), while being two orders of magnitude more efficient, a critical requirement for the processing of future petabyte-sized datasets.
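
A minimal sketch of the auxiliary-learning setup in PyTorch, with made-up layer sizes and loss weighting rather than the published network: one trunk feeds two heads, one predicting nearest-neighbor affinities and one predicting LSD channels, and the two losses are simply summed during training.

```python
import torch
import torch.nn as nn

class AffinityLSDNet(nn.Module):
    """Toy stand-in for a 3D segmentation network with two output heads:
    3 nearest-neighbor affinity channels and 10 LSD channels."""
    def __init__(self, in_ch=1, feat=16, aff_ch=3, lsd_ch=10):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv3d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv3d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        self.aff_head = nn.Conv3d(feat, aff_ch, 1)
        self.lsd_head = nn.Conv3d(feat, lsd_ch, 1)

    def forward(self, x):
        h = self.trunk(x)
        return torch.sigmoid(self.aff_head(h)), torch.sigmoid(self.lsd_head(h))

def combined_loss(pred_aff, pred_lsd, gt_aff, gt_lsd, lsd_weight=1.0):
    # the auxiliary LSD loss is added to the usual affinity loss
    mse = nn.functional.mse_loss
    return mse(pred_aff, gt_aff) + lsd_weight * mse(pred_lsd, gt_lsd)

# toy forward/backward pass on a small random volume
net = AffinityLSDNet()
raw = torch.randn(1, 1, 16, 16, 16)
gt_aff, gt_lsd = torch.rand(1, 3, 16, 16, 16), torch.rand(1, 10, 16, 16, 16)
loss = combined_loss(*net(raw), gt_aff, gt_lsd)
loss.backward()
print(float(loss))
```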

10/01/10 | Machines that learn to segment images: a crucial technology for connectomics.
Jain V, Seung HS, Turaga SC
Current Opinion in Neurobiology. 2010 Oct;20(5):653-66. doi: 10.1016/j.conb.2010.07.004

Connections between neurons can be found by checking whether synapses exist at points of contact, which in turn are determined by neural shapes. Finding these shapes is a special case of image segmentation, which is laborious for humans and would ideally be performed by computers. New metrics properly quantify the performance of a computer algorithm using its disagreement with 'true' segmentations of example images. New machine learning methods search for segmentation algorithms that minimize such metrics. These advances have reduced computer errors dramatically. It should now be faster for a human to correct the remaining errors than to segment an image manually. Further reductions in human effort are expected, and crucial for finding connectomes more complex than that of Caenorhabditis elegans.
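
As an example of a disagreement metric of this kind, the Rand error counts, over all pixel pairs, how often a proposed segmentation and the 'true' one disagree about whether two pixels belong to the same segment. The brute-force sketch below is illustrative only and is not code from the paper.

```python
import numpy as np
from itertools import combinations

def rand_error(seg, gt):
    """Fraction of pixel pairs on which `seg` and `gt` disagree about
    same-segment membership (0 = perfect agreement). Brute force, so
    only sensible for small label images."""
    seg, gt = np.ravel(seg), np.ravel(gt)
    pairs = list(combinations(range(seg.size), 2))
    disagree = sum((seg[i] == seg[j]) != (gt[i] == gt[j]) for i, j in pairs)
    return disagree / len(pairs)

gt  = np.array([[1, 1, 2], [1, 2, 2]])
seg = np.array([[1, 1, 1], [1, 2, 2]])   # one pixel merged into the wrong segment
print(rand_error(seg, gt))
```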

01/13/15 | Mapping social behavior-induced brain activation at cellular resolution in the mouse.
Kim Y, Venkataraju KU, Pradhan K, Mende C, Taranda J, Turaga SC, Arganda-Carreras I, Ng L, Hawrylycz MJ, Rockland KS, Seung HS, Osten P
Cell Reports. 2015 Jan 13;10(2):292-305. doi: 10.1016/j.celrep.2014.12.014

Understanding how brain activation mediates behaviors is a central goal of systems neuroscience. Here, we apply an automated method for mapping brain activation in the mouse in order to probe how sex-specific social behaviors are represented in the male brain. Our method uses the immediate-early gene c-fos, a marker of neuronal activation, visualized by serial two-photon tomography: the c-fos-GFP+ neurons are computationally detected, their distribution is registered to a reference brain and a brain atlas, and their numbers are analyzed by statistical tests. Our results reveal distinct and shared female- and male-interaction-evoked patterns of male brain activation representing sex discrimination and social recognition. We also identify brain regions whose degree of activity correlates with specific features of social behaviors and estimate the total numbers and the densities of activated neurons per brain area. Our study opens the door to automated screening of behavior-evoked brain activation in the mouse.
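
The counting step of such a pipeline can be sketched in a few lines (toy arrays and made-up region labels, not the study's detection or registration code): once c-fos-GFP+ detections are registered into atlas space, per-region activation is a labelled count, optionally normalized by region volume.

```python
import numpy as np

# toy atlas: a 3D volume of integer region labels (0 = background)
atlas = np.zeros((20, 20, 20), dtype=int)
atlas[2:10, 2:18, 2:18] = 1          # hypothetical region 1
atlas[12:18, 2:18, 2:18] = 2         # hypothetical region 2

# toy detections: voxel coordinates of registered c-fos-GFP+ cells
cells = np.array([[5, 5, 5], [6, 6, 6], [15, 5, 5]])

regions = atlas[tuple(cells.T)]                    # region label under each detected cell
counts = np.bincount(regions, minlength=atlas.max() + 1)
voxels_per_region = np.bincount(atlas.ravel(), minlength=atlas.max() + 1)
density = counts[1:] / voxels_per_region[1:]       # cells per voxel, background excluded
print(dict(enumerate(counts)), density)
```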

12/07/09 | Maximin affinity learning of image segmentation
Turaga SC, Briggman K, Helmstaedter MN, Denk W, Seung HS
Advances in Neural Information Processing Systems 22 (NIPS 2009). 12/2009

Images can be segmented by first using a classifier to predict an affinity graph that reflects the degree to which image pixels must be grouped together and then partitioning the graph to yield a segmentation. Machine learning has been applied to the affinity classifier to produce affinity graphs that are good in the sense of minimizing edge misclassification rates. However, this error measure is only indirectly related to the quality of segmentations produced by ultimately partitioning the affinity graph. We present the first machine learning algorithm for training a classifier to produce affinity graphs that are good in the sense of producing segmentations that directly minimize the Rand index, a well-known segmentation performance measure. The Rand index measures segmentation performance by quantifying the classification of the connectivity of image pixel pairs after segmentation. By using the simple graph partitioning algorithm of finding the connected components of the thresholded affinity graph, we are able to train an affinity classifier to directly minimize the Rand index of segmentations resulting from the graph partitioning. Our learning algorithm corresponds to the learning of maximin affinities between image pixel pairs, which are predictive of the pixel-pair connectivity.
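
The partitioning step described above, connected components of the thresholded affinity graph, is simple enough to show directly. The sketch below uses scipy on a toy 2D affinity map; the shapes and threshold are assumptions for illustration, not the authors' code.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def segment_from_affinities(aff_y, aff_x, threshold=0.5):
    """Toy 2D version: `aff_y[i, j]` is the affinity between pixels (i, j)
    and (i+1, j); `aff_x[i, j]` between (i, j) and (i, j+1). Edges above
    `threshold` are kept, then connected components of the remaining pixel
    graph are the segments."""
    h, w = aff_x.shape[0], aff_y.shape[1]
    idx = np.arange(h * w).reshape(h, w)
    rows, cols = [], []
    keep_y = aff_y[: h - 1, :] > threshold
    rows.append(idx[:-1, :][keep_y]); cols.append(idx[1:, :][keep_y])
    keep_x = aff_x[:, : w - 1] > threshold
    rows.append(idx[:, :-1][keep_x]); cols.append(idx[:, 1:][keep_x])
    rows, cols = np.concatenate(rows), np.concatenate(cols)
    graph = coo_matrix((np.ones(rows.size), (rows, cols)), shape=(h * w, h * w))
    _, labels = connected_components(graph, directed=False)
    return labels.reshape(h, w)

# toy affinities splitting a 4x4 image into left and right halves
aff_y = np.ones((4, 4)); aff_x = np.ones((4, 4)); aff_x[:, 1] = 0.0
print(segment_from_affinities(aff_y, aff_x))
```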

04/21/21 | Model-based Bayesian inference of neural activity and connectivity from all-optical interrogation of a neural circuit
Aitchison L, Russell L, Packer AM, Yan J, Castonguay P, Häusser M, Turaga SC
Advances in Neural Information Processing Systems 30 (NIPS 2017) (Guyon I, Luxburg UV, Bengio S, Wallach H, Fergus R, Vishwanathan S, Garnett R, eds)

Population activity measurement by calcium imaging can be combined with cellular resolution optogenetic activity perturbations to enable the mapping of neural connectivity in vivo. This requires accurate inference of perturbed and unperturbed neural activity from calcium imaging measurements, which are noisy and indirect, and can also be contaminated by photostimulation artifacts. We have developed a new fully Bayesian approach to jointly inferring spiking activity and neural connectivity from in vivo all-optical perturbation experiments. In contrast to standard approaches that perform spike inference and analysis in two separate maximum-likelihood phases, our joint model is able to propagate uncertainty in spike inference to the inference of connectivity and vice versa. We use the framework of variational autoencoders to model spiking activity using discrete latent variables, low-dimensional latent common input, and sparse spike-and-slab generalized linear coupling between neurons. Additionally, we model two properties of the optogenetic perturbation: off-target photostimulation and photostimulation transients. Using this model, we were able to fit models on 30 minutes of data in just 10 minutes. We performed an all-optical circuit mapping experiment in primary visual cortex of the awake mouse, and use our approach to predict neural connectivity between excitatory neurons in layer 2/3. Predicted connectivity is sparse and consistent with known correlations with stimulus tuning, spontaneous correlation and distance.

12/04/17 | Model-based Bayesian inference of neural activity and connectivity from all-optical interrogation of a neural circuit.
Aitchison L, Russell L, Packer AM, Yan J, Castonguay P, Häusser M, Turaga SC
31st Conference on Neural Information Processing Systems (NIPS 2017). 2017 Dec 04

Population activity measurement by calcium imaging can be combined with cellular resolution optogenetic activity perturbations to enable the mapping of neural connectivity in vivo. This requires accurate inference of perturbed and unperturbed neural activity from calcium imaging measurements, which are noisy and indirect, and can also be contaminated by photostimulation artifacts. We have developed a new fully Bayesian approach to jointly inferring spiking activity and neural connectivity from in vivo all-optical perturbation experiments. In contrast to standard approaches that perform spike inference and analysis in two separate maximum-likelihood phases, our joint model is able to propagate uncertainty in spike inference to the inference of connectivity and vice versa. We use the framework of variational autoencoders to model spiking activity using discrete latent variables, low-dimensional latent common input, and sparse spike-and-slab generalized linear coupling between neurons. Additionally, we model two properties of the optogenetic perturbation: off-target photostimulation and photostimulation transients. Our joint model includes at least two sets of discrete random variables; to avoid the dramatic slowdown typically caused by being unable to differentiate such variables, we introduce two strategies that have not, to our knowledge, been used with variational autoencoders. Using this model, we were able to fit models on 30 minutes of data in just 10 minutes. We performed an all-optical circuit mapping experiment in primary visual cortex of the awake mouse, and use our approach to predict neural connectivity between excitatory neurons in layer 2/3. Predicted connectivity is sparse and consistent with known correlations with stimulus tuning, spontaneous correlation and distance.
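
For intuition about the generative side of such a model, the sketch below samples a sparse spike-and-slab coupling matrix and simulates Bernoulli spiking driven by that coupling plus a low-dimensional common input. It is a toy forward model only; the paper's joint Bayesian inference over calcium and photostimulation data is not reproduced, and all parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_spike_and_slab(n, p_connect=0.1, scale=0.5):
    """Sparse coupling matrix: each weight is zero with probability
    1 - p_connect (the 'spike') and Gaussian otherwise (the 'slab')."""
    mask = rng.random((n, n)) < p_connect
    np.fill_diagonal(mask, False)
    return mask * rng.normal(0.0, scale, size=(n, n))

def simulate(W, T=200, latent_dim=2):
    """Bernoulli spiking driven by coupling from the previous time step
    plus a low-dimensional common input (toy generative model only)."""
    n = W.shape[0]
    loadings = rng.normal(0.0, 0.3, size=(n, latent_dim))
    spikes = np.zeros((T, n))
    for t in range(1, T):
        common = loadings @ rng.normal(size=latent_dim)   # shared latent drive
        drive = W @ spikes[t - 1] + common - 2.0          # baseline keeps rates low
        spikes[t] = rng.random(n) < 1.0 / (1.0 + np.exp(-drive))
    return spikes

W = sample_spike_and_slab(n=50)
spikes = simulate(W)
print(spikes.mean())   # average firing probability per time bin
```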

04/04/18 | Opportunities and obstacles for deep learning in biology and medicine.
Ching T, Himmelstein DS, Beaulieu-Jones BK, Kalinin AA, Do BT, Way GP, Ferrero E, Agapow P, Zietz M, Hoffman MM, Xie W, Rosen GL, Lengerich BJ, Israeli J, Lanchantin J, Woloszynek S, Carpenter AE, Shrikumar A, Xu J, Cofer EM, et al.
Journal of The Royal Society Interface. 2018 Apr 4. doi: 10.1098/rsif.2017.0387

Deep learning describes a class of machine learning algorithms that are capable of combining raw inputs into layers of intermediate features. These algorithms have recently shown impressive results across a variety of domains. Biology and medicine are data-rich disciplines, but the data are complex and often ill-understood. Hence, deep learning techniques may be particularly well suited to solve problems of these fields. We examine applications of deep learning to a variety of biomedical problems—patient classification, fundamental biological processes and treatment of patients—and discuss whether deep learning will be able to transform these tasks or if the biomedical sphere poses unique challenges. Following from an extensive literature review, we find that deep learning has yet to revolutionize biomedicine or definitively resolve any of the most pressing challenges in the field, but promising advances have been made on the prior state of the art. Even though improvements over previous baselines have been modest in general, the recent progress indicates that deep learning methods will provide valuable means for speeding up or aiding human investigation. Though progress has been made linking a specific neural network's prediction to input features, understanding how users should interpret these models to make testable hypotheses about the system under study remains an open challenge. Furthermore, the limited amount of labelled data for training presents problems in some domains, as do legal and privacy constraints on work with sensitive health records. Nonetheless, we foresee deep learning enabling changes at both bench and bedside with the potential to transform several areas of biology and medicine.
