3 Janelia Publications
Imaging large samples at the resolution offered by electron microscopy is typically achieved by sequentially recording overlapping tiles that are later combined into seamless mosaics. Mosaics of serial sections are aligned to reconstruct three-dimensional volumes. To achieve this, image distortions and artifacts introduced during sample preparation or imaging need to be removed. In this chapter, we discuss typical sources of artifacts and distortion and show how to use the open source software TrakEM2 to correct them.
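Combining overlapping tiles into a mosaic starts with estimating the translation between neighboring tiles. As an illustration only (not the method used by TrakEM2, whose alignment pipeline is feature-based and more sophisticated), here is a minimal sketch of FFT-based phase correlation, a common baseline for recovering the integer offset between two same-sized image patches:

```python
import numpy as np

def phase_correlation_shift(a, b):
    # Estimate the integer translation d such that b is (cyclically)
    # a shifted by d, via normalized cross-power spectrum.
    Fa = np.fft.fft2(a)
    Fb = np.fft.fft2(b)
    cross = np.conj(Fa) * Fb
    cross /= np.abs(cross) + 1e-12   # keep only phase information
    corr = np.fft.ifft2(cross).real  # a delta peak at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts.
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, corr.shape))

# Synthetic example: tile b is tile a cyclically shifted by (3, -5).
rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.roll(a, shift=(3, -5), axis=(0, 1))
print(phase_correlation_shift(a, b))  # (3, -5)
```

In a real stitching pipeline the correlation would be computed only on the overlap region of adjacent tiles, and the resulting pairwise offsets globally optimized before blending.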
Optical and electron microscopy have made tremendous inroads toward understanding the complexity of the brain. However, optical microscopy offers insufficient resolution to reveal subcellular details, and electron microscopy lacks the throughput and molecular contrast to visualize specific molecular constituents over millimeter-scale or larger dimensions. We combined expansion microscopy and lattice light-sheet microscopy to image the nanoscale spatial relationships between proteins across the thickness of the mouse cortex or the entire Drosophila brain. These included synaptic proteins at dendritic spines, myelination along axons, and presynaptic densities at dopaminergic neurons in every fly brain region. The technology should enable statistically rich, large-scale studies of neural development, sexual dimorphism, degree of stereotypy, and structural correlations to behavior or neural activity, all with molecular contrast.
We present a method combining affinity prediction with region agglomeration, which improves significantly upon the state of the art of neuron segmentation from electron microscopy (EM) in accuracy and scalability. Our method consists of a 3D U-Net, trained to predict affinities between voxels, followed by iterative region agglomeration. We train using a structured loss based on MALIS, encouraging topologically correct segmentations obtained from affinity thresholding. Our extension consists of two parts: First, we present a quasi-linear method to compute the loss gradient, improving over the original quadratic algorithm. Second, we compute the gradient in two separate passes to avoid spurious gradient contributions in early training stages. Our predictions are accurate enough that simple learning-free percentile-based agglomeration outperforms more involved methods used earlier on inferior predictions. We present results on three diverse EM datasets, achieving relative improvements over previous results of 27%, 15%, and 250%. Our findings suggest that a single method can be applied to both nearly isotropic block-face EM data and anisotropic serial-section EM data. The runtime of our method scales linearly with the size of the volume and achieves a throughput of ~2.6 seconds per megavoxel, qualifying our method for the processing of very large datasets.
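The post-processing pipeline described above — threshold predicted affinities to obtain fragments, then merge fragment pairs scored by a percentile of their boundary affinities — can be sketched in two dimensions. This is a simplified illustration under assumed conventions (a `(2, H, W)` affinity array, a single greedy merge pass rather than the iterative agglomeration with score updates used in the paper); the thresholds and percentile are hypothetical parameters:

```python
import numpy as np

def segment(aff, init_thresh=0.9, merge_thresh=0.5, q=75):
    """Toy 2D affinity segmentation with percentile agglomeration.

    aff: (2, H, W) array; aff[0, y, x] is the affinity between
    pixels (y, x) and (y+1, x), aff[1, y, x] between (y, x) and (y, x+1).
    """
    _, H, W = aff.shape
    parent = np.arange(H * W)

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Enumerate all (affinity, u, v) edges of the pixel grid.
    edges = []
    for y in range(H):
        for x in range(W):
            u = y * W + x
            if y + 1 < H:
                edges.append((aff[0, y, x], u, u + W))
            if x + 1 < W:
                edges.append((aff[1, y, x], u, u + 1))

    # Step 1: initial fragments from high-affinity thresholding.
    for a, u, v in edges:
        if a >= init_thresh:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[rv] = ru

    # Step 2: score each adjacent fragment pair by a percentile of the
    # affinities along their shared boundary, then greedily merge pairs
    # above merge_thresh, highest score first (single pass for brevity).
    boundary = {}
    for a, u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            boundary.setdefault((min(ru, rv), max(ru, rv)), []).append(a)
    scored = sorted(((np.percentile(v, q), k) for k, v in boundary.items()),
                    reverse=True)
    for score, (ru, rv) in scored:
        if score >= merge_thresh:
            ra, rb = find(ru), find(rv)
            if ra != rb:
                parent[rb] = ra

    return np.array([find(i) for i in range(H * W)]).reshape(H, W)

# Two high-affinity runs separated by a weak edge stay separate segments.
aff = np.zeros((2, 1, 4))
aff[1, 0, 0], aff[1, 0, 1], aff[1, 0, 2] = 0.95, 0.1, 0.95
print(segment(aff))  # two distinct labels: pixels {0,1} vs {2,3}
```

The actual method operates on 3D volumes and recomputes pair scores after every merge; the percentile scoring is what makes the agglomeration robust to a few noisy boundary affinities.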