17 Publications
We demonstrate a meaningful prospective power analysis for an (admittedly idealized) illustrative connectome inference task. Modeling neurons as vertices and synapses as edges in a simple random graph model, we optimize the trade-off between the number of (putative) edges identified and the accuracy of the edge identification procedure. We conclude that explicit analysis of the quantity/quality trade-off is imperative for optimal neuroscientific experimental design. In particular, identifying edges faster/more cheaply, but with more error, can yield superior inferential performance.
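The quantity/quality trade-off described above can be made concrete with a toy Monte Carlo. The accuracy curve, the density-estimation task, and all constants below are illustrative stand-ins, not the paper's actual model: a fixed tracing budget is split into more, noisier edge checks or fewer, cleaner ones, and we measure the resulting error of a simple edge-density estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def edge_accuracy(t):
    """Hypothetical accuracy curve: more proofreading time per putative
    edge means fewer classification errors (0.5 = chance at t = 0)."""
    return 1.0 - 0.5 * np.exp(-t)

def density_mse(budget, t_per_edge, p=0.3, trials=2000):
    """Monte Carlo MSE of an edge-density estimate when a fixed tracing
    budget is split into budget/t_per_edge edge checks, each correct
    with probability edge_accuracy(t_per_edge)."""
    n = max(1, int(budget / t_per_edge))
    flip = 1.0 - edge_accuracy(t_per_edge)       # per-edge error rate
    true_edges = rng.random((trials, n)) < p
    observed = true_edges ^ (rng.random((trials, n)) < flip)
    # Bias-correct for the known flip rate: E[obs] = p(1-f) + (1-p)f.
    est = (observed.mean(axis=1) - flip) / (1.0 - 2.0 * flip)
    return float(np.mean((est - p) ** 2))

budget = 100.0
for t in (0.25, 0.5, 1.0, 2.0, 4.0):
    print(f"time/edge={t:4.2f}  edges checked={int(budget / t):4d}  "
          f"accuracy={edge_accuracy(t):.3f}  MSE={density_mse(budget, t):.5f}")
```

Under this toy model the minimum MSE falls at an intermediate time-per-edge: many cheap noisy checks beat a few near-perfect ones up to a point, mirroring the abstract's conclusion that faster, noisier edge identification can be inferentially superior.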
We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-D electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes (neural connectivity maps of the brain) using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems (reads to parallel disk arrays, writes to solid-state storage) to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization.
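Partitioning a spatial index across cluster nodes can be illustrated with a Z-order (Morton) curve, one common way to linearize 3-D coordinates; the abstract does not specify the system's actual key scheme, so `morton3` and `node_for_voxel` below are an illustrative sketch, not the production design.

```python
def morton3(x, y, z, bits=10):
    """Interleave the bits of (x, y, z) into one Z-order key. Nearby
    voxels get nearby keys, so range-partitioning the key space keeps
    spatially local data on the same cluster node."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((z >> i) & 1) << (3 * i + 2)
    return key

def node_for_voxel(x, y, z, n_nodes=8, bits=10):
    """Assign a voxel to one of n_nodes by range-partitioning Z-order keys."""
    return morton3(x, y, z, bits) * n_nodes // (1 << (3 * bits))

# Adjacent voxels usually land on the same node in this toy partition,
# so a local cutout touches few nodes:
print(node_for_voxel(100, 200, 30), node_for_voxel(101, 200, 30))
```

The design benefit is locality: a computer-vision worker reading a small 3-D cutout hits one node (or a handful) rather than scattering reads across the whole cluster.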
The last decade has seen a rapid increase in the number of tools to acquire volume electron microscopy (EM) data. Several new scanning EM (SEM) imaging methods have emerged, and classical transmission EM (TEM) methods are being scaled up and automated. Here we summarize the new methods for acquiring large EM volumes, and discuss the tradeoffs in terms of resolution, acquisition speed, and reliability. We then assess each method’s applicability to the problem of reconstructing anatomical connectivity between neurons, considering both the current capabilities and future prospects of the method. Finally, we argue that neuronal ‘wiring diagrams’ are likely necessary, but not sufficient, to understand the operation of most neuronal circuits: volume EM imaging will likely find its best application in combination with other methods in neuroscience, such as molecular biology, optogenetics, and physiology.
How does the brain compute? Answering this question necessitates neuronal connectomes, annotated graphs of all synaptic connections within defined brain areas. Further, understanding the energetics of the brain’s computations requires vascular graphs. The assembly of a connectome requires sensitive hardware tools to measure neuronal and neurovascular features in all three dimensions, as well as software and machine learning for data analysis and visualization. We present the state of the art on the reconstruction of circuits and vasculature that link brain anatomy and function. Analysis at the scale of tens of nanometers yields connections between identified neurons, while analysis at the micrometer scale yields probabilistic rules of connection between neurons and exact vascular connectivity.
In the cerebral cortex, local circuits consist of tens of thousands of neurons, each of which makes thousands of synaptic connections. Perhaps the biggest impediment to understanding these networks is that we have no wiring diagrams of their interconnections. Even if we had a partial or complete wiring diagram, however, understanding the network would also require information about each neuron’s function. Here we show that the relationship between structure and function can be studied in the cortex with a combination of in vivo physiology and network anatomy. We used two-photon calcium imaging to characterize a functional property, the preferred stimulus orientation, of a group of neurons in the mouse primary visual cortex. Large-scale electron microscopy of serial thin sections was then used to trace a portion of these neurons’ local network. Consistent with a prediction from recent physiological experiments, inhibitory interneurons received convergent anatomical input from nearby excitatory neurons with a broad range of preferred orientations, although weak biases could not be rejected.
We introduce an efficient search strategy that substantially accelerates feature-based registration. Previous feature-based registration algorithms often use truncated search strategies to achieve small computation times. Our accelerated strategy is based on the realization that the search for corresponding features can be sped up dramatically by Johnson-Lindenstrauss dimension reduction. Order-of-magnitude calculations indicate that the proposed algorithm is more than a million times faster than previously used naive search strategies, and this advantage in speed translates directly into an advantage in accuracy, as the higher speed enables more comparisons to be made in the same amount of time. We describe the accelerated scheme together with a full complexity analysis. The registration algorithm was applied to large transmission electron microscopy (TEM) images of neural ultrastructure. Our experiments demonstrate that it aligns TEM images with greater accuracy and efficiency than previous algorithms.
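The core idea can be sketched in a few lines, without claiming to reproduce the paper's implementation: a single Gaussian random matrix (the Johnson-Lindenstrauss construction) projects high-dimensional feature descriptors to a much lower dimension while approximately preserving pairwise distances, so nearest-neighbour matching runs with far fewer operations per comparison. The toy descriptors, dimensions, and noise level below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "descriptors": 500 reference features in 128-D; queries are noisy
# copies, standing in for the same features seen in an adjacent section.
ref = rng.normal(size=(500, 128))
query = ref + 0.05 * rng.normal(size=ref.shape)

# Johnson-Lindenstrauss: one shared d x k Gaussian matrix, scaled by
# 1/sqrt(k), approximately preserves pairwise distances, so matching
# can run in k = 16 dimensions instead of 128.
k = 16
R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(ref.shape[1], k))
ref_lo, query_lo = ref @ R, query @ R

# Brute-force nearest neighbour in the reduced space; each comparison
# costs ~8x fewer multiply-adds than in the original 128-D space.
d2 = ((query_lo[:, None, :] - ref_lo[None, :, :]) ** 2).sum(axis=-1)
matches = d2.argmin(axis=1)
accuracy = (matches == np.arange(len(ref))).mean()
print(f"fraction of correct matches: {accuracy:.2f}")
```

Because the noise separating a true match from a random descriptor is much smaller than the typical inter-descriptor distance, the reduced-dimension search recovers essentially all correspondences at a fraction of the cost.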
3D reconstruction from serial 2D microscopy images depends on the non-linear alignment of serial sections. For some structures, such as the neuronal circuitry of the brain, very large images at very high resolution are necessary to permit reconstruction, and their size prevents the direct use of classical registration methods. In this work we propose a method for the non-linear alignment of arbitrarily large 2D images that exploits the finite support of cubic B-splines. After an initial affine alignment, each large image is split into a grid of smaller overlapping sub-images, which are registered individually using cubic B-spline transformations. Inside the overlapping regions between neighboring sub-images, the coefficients of the knots controlling the B-spline deformations are blended to create a virtual large grid of knots for the whole image. The sub-images are resampled individually, using the new coefficients, and assembled into a final large aligned image. We evaluated the method on a series of large transmission electron microscopy images, and our results indicate significant improvements over both manual and affine alignment.
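The blending step can be illustrated with a minimal one-dimensional sketch: two neighbouring tiles each estimate a displacement, and linear weights across the overlap produce a single continuous field. Simple per-pixel displacements and linear weights stand in here for the paper's actual B-spline knot coefficients; `blend_overlap` and its arguments are hypothetical names.

```python
import numpy as np

def blend_overlap(left_disp, right_disp, overlap):
    """Blend two per-pixel displacement fields from neighbouring
    sub-images across an `overlap`-pixel-wide seam, using linear
    weights so the deformation is continuous across tiles (a 1-D
    analogue of blending B-spline knot coefficients)."""
    w = np.linspace(1.0, 0.0, overlap)          # weight for the left tile
    blended = w * left_disp[-overlap:] + (1.0 - w) * right_disp[:overlap]
    return np.concatenate([left_disp[:-overlap], blended, right_disp[overlap:]])

left = np.full(100, 2.0)    # left tile's estimated shift: +2 px
right = np.full(100, 3.0)   # right tile's estimated shift: +3 px
field = blend_overlap(left, right, overlap=20)
print(field[75:85])         # displacement ramps smoothly from 2 toward 3
```

Without blending, the two tiles would disagree by a full pixel at the seam and leave a visible discontinuity in the resampled image; the weighted average removes the jump while leaving each tile's interior untouched.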