To stimulate progress in automating the reconstruction of neural circuits, we organized the first international challenge on 2D segmentation of electron microscopic (EM) images of the brain. Participants submitted boundary maps predicted for a test set of images, and were scored based on their agreement with a consensus of human expert annotations. The winning team had no prior experience with EM images, and employed a convolutional network. This “deep learning” approach has since become accepted as a standard for segmentation of EM images. The challenge has continued to accept submissions, and the best so far has resulted from cooperation between two teams. The challenge has probably saturated, as algorithms cannot progress beyond limits set by ambiguities inherent in 2D scoring and the size of the test dataset. Retrospective evaluation of the challenge scoring system reveals that it was not sufficiently robust to variations in the widths of neurite borders. We propose a solution to this problem, which should be useful for a future 3D segmentation challenge.
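As context for how such challenges score submissions, pairwise-pixel agreement can be computed from a contingency table of candidate and reference labels. The sketch below is an illustrative Rand-error implementation, not the challenge's actual scoring code, which also had to cope with the boundary-width ambiguities discussed above.

```python
import numpy as np

def rand_error(seg, gt):
    """Pairwise-pixel Rand error between a candidate segmentation and a
    reference, both given as integer label images of the same shape.
    Returns a value in [0, 1]; 0 means perfect pairwise agreement."""
    seg, gt = seg.ravel(), gt.ravel()
    n = seg.size
    # Contingency table as a flat histogram over (gt, seg) label pairs.
    joint = np.bincount(gt.astype(np.int64) * (seg.max() + 1) + seg)
    joint = joint[joint > 0].astype(np.float64)
    gt_counts = np.bincount(gt).astype(np.float64)
    seg_counts = np.bincount(seg).astype(np.float64)
    # Pixel pairs co-clustered in both, in the reference, and in the candidate.
    together_both = (joint * (joint - 1) / 2).sum()
    together_gt = (gt_counts * (gt_counts - 1) / 2).sum()
    together_seg = (seg_counts * (seg_counts - 1) / 2).sum()
    total_pairs = n * (n - 1) / 2
    # Agreements: pairs together in both plus pairs separate in both.
    agreements = total_pairs + 2 * together_both - together_gt - together_seg
    return 1.0 - agreements / total_pairs
```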
Understanding how brain activation mediates behaviors is a central goal of systems neuroscience. Here, we apply an automated method for mapping brain activation in the mouse to probe how sex-specific social behaviors are represented in the male brain. Our method uses the immediate-early gene c-fos, a marker of neuronal activation, visualized by serial two-photon tomography: the c-fos-GFP+ neurons are computationally detected, their distribution is registered to a reference brain and a brain atlas, and their numbers are analyzed by statistical tests. Our results reveal distinct and shared female- and male-interaction-evoked patterns of male brain activation representing sex discrimination and social recognition. We also identify brain regions whose degree of activity correlates with specific features of social behaviors, and estimate the total numbers and densities of activated neurons per brain area. Our study opens the door to automated screening of behavior-evoked brain activation in the mouse.
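The region-wise counting step of such a pipeline can be sketched as follows. The global intensity threshold and raw array inputs are simplifications (the actual method uses a trained detector and registration to a reference brain), so treat this as illustrative only.

```python
import numpy as np
from scipy import ndimage

def count_cells_per_region(gfp_image, atlas_labels, threshold):
    """Detect c-fos-GFP+ somata by thresholding and connected components,
    then tally detections per registered atlas region.
    gfp_image: 3D float array, already registered to the atlas.
    atlas_labels: 3D int array of region ids, same shape.
    threshold: illustrative intensity cutoff standing in for a detector."""
    mask = gfp_image > threshold
    labeled, n_cells = ndimage.label(mask)
    # One representative voxel (the centroid) per detected cell.
    centroids = ndimage.center_of_mass(mask, labeled, range(1, n_cells + 1))
    counts = {}
    for z, y, x in centroids:
        region = atlas_labels[int(round(z)), int(round(y)), int(round(x))]
        counts[region] = counts.get(region, 0) + 1
    return counts
```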
How does the mammalian retina detect motion? This classic problem in visual neuroscience has remained unsolved for 50 years. In search of clues, here we reconstruct Off-type starburst amacrine cells (SACs) and bipolar cells (BCs) in serial electron microscopic images with help from EyeWire, an online community of 'citizen neuroscientists'. On the basis of quantitative analyses of contact area and branch depth in the retina, we find evidence that one BC type prefers to wire with a SAC dendrite near the SAC soma, whereas another BC type prefers to wire far from the soma. The near type is known to lag the far type in time of visual response. A mathematical model shows how such 'space-time wiring specificity' could endow SAC dendrites with receptive fields that are oriented in space-time and therefore respond selectively to stimuli that move in the outward direction from the soma.
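To make the space-time wiring idea concrete, here is a toy numerical model, with made-up positions, lags, and kinetics, showing how a lagged input near the soma plus a fast input far from it yields larger summed responses for outward motion. This is a cartoon of the paper's model, not its actual implementation.

```python
import numpy as np

# Toy space-time wiring model: a SAC dendrite samples a slow (lagged)
# bipolar type near the soma and a fast type far from it. An outward-
# moving stimulus reaches the slow input first, so the two responses
# peak together and sum; inward motion staggers them. All numbers are
# illustrative.
t = np.arange(0.0, 400.0, 1.0)          # time axis in ms
near_pos, far_pos = 20.0, 80.0          # microns from the soma
near_lag, far_lag = 60.0, 10.0          # response lags in ms
dendrite_len = 100.0                    # microns

def response(arrival, lag, tau=30.0):
    """Alpha-function response starting `lag` ms after stimulus arrival."""
    s = t - arrival - lag
    return np.where(s > 0, (s / tau) * np.exp(1 - s / tau), 0.0)

def peak(speed, outward):
    """Peak summed dendritic signal for a bar sweeping at `speed` um/ms."""
    arrival = lambda x: x / speed if outward else (dendrite_len - x) / speed
    summed = (response(arrival(near_pos), near_lag)
              + response(arrival(far_pos), far_lag))
    return summed.max()

for v in (0.5, 1.0, 2.0):
    print(f"speed {v:.1f} um/ms: outward {peak(v, True):.2f}, "
          f"inward {peak(v, False):.2f}")
```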
Comprehensive high-resolution structural maps are central to functional exploration and understanding in biology. For the nervous system, in which high resolution and large spatial extent are both needed, such maps are scarce, as they challenge data acquisition and analysis capabilities. Here we present, for the mouse inner plexiform layer (the main computational neuropil region in the mammalian retina), the dense reconstruction of 950 neurons and their mutual contacts. This was achieved by applying a combination of crowd-sourced manual annotation and machine-learning-based volume segmentation to serial block-face electron microscopy data. We characterize a new type of retinal bipolar interneuron and show that we can subdivide a known type based on connectivity. Circuit motifs that emerge from our data indicate a functional mechanism for a known cellular response in a ganglion cell that detects localized motion, and predict that another ganglion cell is motion sensitive.
Simultaneous recordings of the activity of large neural populations are extremely valuable as they can be used to infer the dynamics and interactions of neurons in a local circuit, shedding light on the computations performed. It is now possible to measure the activity of hundreds of neurons using 2-photon calcium imaging. However, many computations are thought to involve circuits consisting of thousands of neurons, such as cortical barrels in rodent somatosensory cortex. Here we contribute a statistical method for "stitching" together sequentially imaged sets of neurons into one model by phrasing the problem as fitting a latent dynamical system with missing observations. This method allows us to substantially expand the population sizes for which population dynamics can be characterized, well beyond the number of simultaneously imaged neurons. In particular, we demonstrate using recordings in mouse somatosensory cortex that this method makes it possible to predict noise correlations between non-simultaneously recorded neuron pairs.
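The heart of the approach is that a single latent linear dynamical system can be filtered against whichever subset of neurons was imaged in a given session, with unobserved entries simply dropped from each update. Below is a generic Kalman-filter sketch with missing observations, using placeholder parameters rather than the paper's actual fitting code; a full fit would alternate this E-step with closed-form M-step updates for A, C, Q, and R.

```python
import numpy as np

def kalman_filter_missing(y, mask, A, C, Q, R, mu0, V0):
    """Kalman filter for x_t = A x_{t-1} + w,  y_t = C x_t + v, where
    mask[t] marks which neurons were actually imaged at time t.
    Dropping the unobserved rows of C at each step is what lets
    sequentially imaged subpopulations share one latent model.
    y: (T, N) array (values at unobserved entries are ignored);
    mask: (T, N) boolean array."""
    T, _ = y.shape
    d = A.shape[0]
    mu, V = mu0.copy(), V0.copy()
    means = np.zeros((T, d))
    for step in range(T):
        if step > 0:                        # predict
            mu = A @ mu
            V = A @ V @ A.T + Q
        obs = mask[step]
        if obs.any():                       # update on observed rows only
            Co = C[obs]
            S = Co @ V @ Co.T + R[np.ix_(obs, obs)]
            K = V @ Co.T @ np.linalg.inv(S)
            mu = mu + K @ (y[step, obs] - Co @ mu)
            V = V - K @ Co @ V
        means[step] = mu
    return means
```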
An agglomerative clustering algorithm merges the most similar pair of clusters at every iteration. The function that evaluates similarity is traditionally hand-designed, but there has been recent interest in supervised or semi-supervised settings in which ground-truth clustered data is available for training. Here we show how to train a similarity function by regarding it as the action-value function of a reinforcement learning problem. We apply this general method to segment images by clustering superpixels, an application that we call Learning to Agglomerate Superpixel Hierarchies (LASH). When applied to a challenging dataset of brain images from serial electron microscopy, LASH dramatically improved segmentation accuracy when clustering supervoxels generated by state-of-the-art boundary detection algorithms. The naive strategy of directly training only supervoxel similarities and applying single linkage clustering produced less improvement.
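The agglomeration loop that a learned similarity plugs into looks roughly like the sketch below; `similarity` is a stub standing in for the learned action-value function, and a complete implementation would re-score merge candidates as clusters grow.

```python
import heapq

def agglomerate(edges, similarity, stop_below=0.5):
    """Greedy agglomerative clustering over a superpixel adjacency graph:
    repeatedly merge the most similar adjacent pair. In LASH the
    `similarity(u, v)` function is the learned action-value; here it is
    any callable. edges: iterable of (u, v) superpixel-id pairs."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    heap = [(-similarity(u, v), u, v) for u, v in edges]
    heapq.heapify(heap)
    while heap:
        neg_sim, u, v = heapq.heappop(heap)
        if -neg_sim < stop_below:           # no merge is promising enough
            break
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[rv] = ru                 # merge the two clusters
            # A full implementation would now re-score edges incident to
            # the merged cluster and push the updated candidates.
    return {x: find(x) for x in parent}
```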
Connections between neurons can be found by checking whether synapses exist at points of contact, which in turn are determined by neural shapes. Finding these shapes is a special case of image segmentation, which is laborious for humans and would ideally be performed by computers. New metrics properly quantify the performance of a computer algorithm using its disagreement with 'true' segmentations of example images. New machine learning methods search for segmentation algorithms that minimize such metrics. These advances have reduced computer errors dramatically. It should now be faster for a human to correct the remaining errors than to segment an image manually. Further reductions in human effort are expected, and crucial for finding connectomes more complex than that of Caenorhabditis elegans.
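One such disagreement metric is the variation of information, which decomposes error into split and merge terms. A minimal implementation over two label images might look like this (illustrative, not tied to any particular paper's code):

```python
import numpy as np

def variation_of_information(seg, gt):
    """Variation of information between two label images, in nats:
    H(seg | gt) + H(gt | seg). Lower is better; the first term measures
    split errors and the second merge errors."""
    seg, gt = seg.ravel(), gt.ravel()
    n = seg.size
    # Joint distribution over (gt, seg) label pairs via a flat histogram.
    p_joint = np.bincount(gt.astype(np.int64) * (seg.max() + 1) + seg) / n
    p_joint = p_joint[p_joint > 0]
    p_gt = np.bincount(gt) / n
    p_seg = np.bincount(seg) / n
    h_joint = -(p_joint * np.log(p_joint)).sum()
    h_gt = -(p_gt[p_gt > 0] * np.log(p_gt[p_gt > 0])).sum()
    h_seg = -(p_seg[p_seg > 0] * np.log(p_seg[p_seg > 0])).sum()
    return (h_joint - h_gt) + (h_joint - h_seg)
```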
Many image segmentation algorithms first generate an affinity graph and then partition it. We present a machine learning approach to computing an affinity graph using a convolutional network (CN) trained using ground truth provided by human experts. The CN affinity graph can be paired with any standard partitioning algorithm and improves segmentation accuracy significantly compared to standard hand-designed affinity functions. We apply our algorithm to the challenging 3D segmentation problem of reconstructing neuronal processes from volumetric electron microscopy (EM) and show that we are able to learn a good affinity graph directly from the raw EM images. Further, we show that our affinity graph improves the segmentation accuracy of both simple and sophisticated graph partitioning algorithms. In contrast to previous work, we do not rely on prior knowledge in the form of hand-designed image features or image preprocessing. Thus, we expect our algorithm to generalize effectively to arbitrary image types.
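The partitioning half of this pipeline is simple once the affinity maps exist. The sketch below thresholds nearest-neighbor affinities and labels connected components with a union-find; the affinity maps are left as inputs, since in the paper they come from the trained convolutional network.

```python
import numpy as np

def segment_from_affinities(aff_x, aff_y, threshold):
    """Threshold a nearest-neighbor affinity graph and label connected
    components. aff_x[i, j] links pixel (i, j) to (i, j+1) and has shape
    (H, W-1); aff_y[i, j] links (i, j) to (i+1, j) and has shape (H-1, W)."""
    H, W = aff_y.shape[0] + 1, aff_x.shape[1] + 1
    parent = np.arange(H * W)

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path compression
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    for i in range(H):
        for j in range(W - 1):
            if aff_x[i, j] > threshold:     # keep horizontal edge
                union(i * W + j, i * W + j + 1)
    for i in range(H - 1):
        for j in range(W):
            if aff_y[i, j] > threshold:     # keep vertical edge
                union(i * W + j, (i + 1) * W + j)
    labels = np.array([find(p) for p in range(H * W)])
    return labels.reshape(H, W)
```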
Images can be segmented by first using a classifier to predict an affinity graph that reflects the degree to which image pixels must be grouped together and then partitioning the graph to yield a segmentation. Machine learning has been applied to the affinity classifier to produce affinity graphs that are good in the sense of minimizing edge misclassification rates. However, this error measure is only indirectly related to the quality of segmentations produced by ultimately partitioning the affinity graph. We present the first machine learning algorithm for training a classifier to produce affinity graphs that are good in the sense of producing segmentations that directly minimize the Rand index, a well-known segmentation performance measure. The Rand index measures segmentation performance by quantifying the classification of the connectivity of image pixel pairs after segmentation. By using the simple graph partitioning algorithm of finding the connected components of the thresholded affinity graph, we are able to train an affinity classifier to directly minimize the Rand index of segmentations resulting from the graph partitioning. Our learning algorithm corresponds to the learning of maximin affinities between image pixel pairs, which are predictive of the pixel-pair connectivity.
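The maximin affinity of a pixel pair (the weakest edge on the strongest path between them) can be found with a Kruskal-style pass: add edges in decreasing affinity order and record the affinity at which each pair first becomes connected. The sketch below illustrates the computation only; the learning step that propagates a Rand-index loss back through these maximin edges is not shown.

```python
def maximin_affinities(n_nodes, edges, queries):
    """Maximin affinity for each queried node pair: the largest threshold
    at which the pair remains connected, i.e. the minimal affinity on the
    best path. edges: list of (affinity, u, v); queries: list of (u, v)."""
    parent = list(range(n_nodes))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path compression
            a = parent[a]
        return a

    result = {q: 0.0 for q in queries}      # 0.0 if a pair never connects
    pending = set(queries)
    for aff, u, v in sorted(edges, reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[rv] = ru
            # Any pending pair connected by this merge gets this edge's
            # affinity as its maximin affinity.
            for q in [q for q in pending if find(q[0]) == find(q[1])]:
                result[q] = aff
                pending.remove(q)
        if not pending:
            break
    return result
```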
Convolutional networks have achieved a great deal of success in high-level vision problems such as object recognition. Here we show that they can also be used as a general method for low-level image processing. As an example of our approach, convolutional networks are trained using gradient learning to solve the problem of restoring noisy or degraded images. For our training data, we have used electron microscopic images of neural circuitry with ground truth restorations provided by human experts. On this dataset, Markov random field (MRF), conditional random field (CRF), and anisotropic diffusion algorithms perform about the same as simple thresholding, but superior performance is obtained with a convolutional network containing over 34,000 adjustable parameters. When restored by this convolutional network, the images are clean enough to be used for segmentation, whereas the other approaches fail in this respect. We do not believe that convolutional networks are fundamentally superior to MRFs as a representation for image processing algorithms. On the contrary, the two approaches are closely related. But in practice, it is possible to train complex convolutional networks, while even simple MRF models are hindered by problems with Bayesian learning and inference procedures. Our results suggest that high model complexity is the single most important factor for good performance, and this is possible with convolutional networks.
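In modern terms the recipe is a plain convolutional regression trained by gradient descent. The PyTorch sketch below shows the shape of the approach; PyTorch postdates the paper, and the architecture, synthetic data, and parameter count here are placeholders rather than the original network.

```python
import torch
import torch.nn as nn

# Minimal sketch of gradient-trained convolutional image restoration.
# The original network was trained on EM images with expert-provided
# restorations; here we use toy binary images with additive noise.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=5, padding=2), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):
    clean = (torch.rand(8, 1, 64, 64) > 0.5).float()   # toy binary images
    noisy = clean + 0.3 * torch.randn_like(clean)      # degraded inputs
    restored = model(noisy)
    loss = loss_fn(restored, clean)                    # pixel-wise restoration loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```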