38 Publications
Showing 11-20 of 38 results

Imaging neuronal networks provides a foundation for understanding the nervous system, but resolving dense nanometer-scale structures over large volumes remains challenging for light microscopy (LM) and electron microscopy (EM). Here we show that X-ray holographic nano-tomography (XNH) can image millimeter-scale volumes with sub-100-nm resolution, enabling reconstruction of dense wiring in Drosophila melanogaster and mouse nervous tissue. We performed correlative XNH and EM to reconstruct hundreds of cortical pyramidal cells and show that more superficial cells receive stronger synaptic inhibition on their apical dendrites. By combining multiple XNH scans, we imaged an adult Drosophila leg with sufficient resolution to comprehensively catalog mechanosensory neurons and trace individual motor axons from muscles to the central nervous system. To accelerate neuronal reconstructions, we trained a convolutional neural network to automatically segment neurons from XNH volumes. Thus, XNH bridges a key gap between LM and EM, providing a new avenue for neural circuit discovery.
With recent advances in high-throughput electron microscopy (EM) imaging it is now possible to image the entire nervous system of organisms like Drosophila melanogaster. One of the bottlenecks in reconstructing a connectome from these large volumes (~100 TiB) is the pixel-wise prediction of membranes. The time it would typically take to process such a volume using a convolutional neural network (CNN) with a sliding-window approach is on the order of years on a current GPU. With sliding windows, however, many redundant computations are carried out. In this paper, we present an extension to the Caffe library that increases throughput by predicting many pixels at once. On a sliding-window network successfully used for membrane classification, we show that our method achieves a speedup of up to 57× while producing identical prediction results.
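The redundancy removed by dense prediction can be seen in a toy 1D setting. The following sketch is illustrative only (it is not the Caffe extension from the paper): a two-layer "network" of valid convolutions, where sliding-window inference re-runs both layers on every output pixel's receptive field, while dense inference runs each layer once over the whole input.

```python
# Toy 1D illustration (not the paper's Caffe extension): a two-layer "network"
# of valid convolutions. Sliding-window inference recomputes layer 1 for every
# overlapping window; dense inference computes each layer once. Both produce
# identical outputs, but dense inference uses far fewer multiplications.

def conv1d(x, k):
    K = len(k)
    return [sum(x[i + j] * k[j] for j in range(K)) for i in range(len(x) - K + 1)]

def sliding_window(x, k1, k2):
    R = len(k1) + len(k2) - 1          # receptive field of one output pixel
    outs, mults = [], 0
    for i in range(len(x) - R + 1):
        h = conv1d(x[i:i + R], k1)     # layer 1, recomputed per window
        mults += len(h) * len(k1)
        outs.append(conv1d(h, k2)[0])  # layer 2
        mults += len(k2)
    return outs, mults

def dense(x, k1, k2):
    h = conv1d(x, k1)                  # layer 1, computed once
    mults = len(h) * len(k1)
    y = conv1d(h, k2)                  # layer 2, computed once
    mults += len(y) * len(k2)
    return y, mults
```

The gap between the two operation counts, and hence the attainable speedup, grows with network depth and kernel size, which is why realistic membrane-classification networks see much larger factors.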
Dopaminergic neurons with distinct projection patterns and physiological properties compose memory subsystems in the brain. However, it is poorly understood whether or how they interact during complex learning. Here, we identify a feedforward circuit formed between dopamine subsystems and show that it is essential for second-order conditioning, an ethologically important form of higher-order associative learning. The Drosophila mushroom body comprises a series of dopaminergic compartments, each of which exhibits distinct memory dynamics. We find that a slow and stable memory compartment can serve as an effective “teacher” by instructing other faster and transient memory compartments via a single key interneuron, which we identify by connectome analysis and neurotransmitter prediction. This excitatory interneuron acquires enhanced response to reward-predicting odor after first-order conditioning and, upon activation, evokes dopamine release in the “student” compartments. These hierarchical connections between dopamine subsystems explain distinct properties of first- and second-order memory long known to behavioral psychologists.
Deep neural networks trained to inpaint partially occluded images show a deep understanding of image composition and have even been shown to remove objects from images convincingly. In this work, we investigate how this implicit knowledge of image composition can be used to separate cells in densely populated microscopy images. We propose a measure for the independence of two image regions given a fully self-supervised inpainting network and separate objects by maximizing this independence. We evaluate our method on two cell segmentation datasets and show that cells can be separated completely unsupervised. Furthermore, combined with simple foreground detection, our method yields instance segmentation of similar quality to fully supervised methods.
We present a method combining affinity prediction with region agglomeration, which significantly improves upon the state of the art in accuracy and scalability for neuron segmentation from electron microscopy (EM). Our method consists of a 3D U-net, trained to predict affinities between voxels, followed by iterative region agglomeration. We train using a structured loss based on MALIS, encouraging topologically correct segmentations obtained from affinity thresholding. Our extension consists of two parts: First, we present a quasi-linear method to compute the loss gradient, improving over the original quadratic algorithm. Second, we compute the gradient in two separate passes to avoid spurious gradient contributions in early training stages. Our predictions are accurate enough that simple learning-free percentile-based agglomeration outperforms more involved methods used earlier on inferior predictions. We present results on three diverse EM datasets, achieving relative improvements over previous results of 27%, 15%, and 250%. Our findings suggest that a single method can be applied to both nearly isotropic block-face EM data and anisotropic serial-sectioned EM data. The runtime of our method scales linearly with the size of the volume and achieves a throughput of ~2.6 seconds per megavoxel, qualifying our method for the processing of very large datasets.
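The agglomeration stage can be sketched in a minimal 1D form. This is an assumed simplification (the paper agglomerates 3D fragments using percentiles of the affinities on each fragment boundary): predicted affinities between neighboring voxels are merged greedily from highest to lowest until a threshold, using a union-find structure.

```python
# Minimal 1D sketch (assumed simplification; the real pipeline works on 3D
# affinity graphs with percentile-based merge scores). affinities[i] is the
# predicted affinity between voxel i and voxel i + 1; edges are merged from
# highest to lowest affinity until the merge threshold is reached.

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def agglomerate(affinities, merge_threshold):
    n = len(affinities) + 1            # number of voxels
    uf = UnionFind(n)
    for a, i in sorted(((a, i) for i, a in enumerate(affinities)), reverse=True):
        if a < merge_threshold:
            break                      # all remaining affinities are lower
        uf.union(i, i + 1)
    ids = {}                           # relabel roots as consecutive segment ids
    return [ids.setdefault(uf.find(i), len(ids)) for i in range(n)]
```

For example, affinities [0.9, 0.8, 0.1, 0.95] with threshold 0.5 split five voxels into two segments at the low-affinity boundary.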
We present an auxiliary learning task for the problem of neuron segmentation in electron microscopy volumes. The auxiliary task consists of the prediction of local shape descriptors (LSDs), which we combine with conventional voxel-wise direct neighbor affinities for neuron boundary detection. The shape descriptors capture local statistics about the neuron to be segmented, such as diameter, elongation, and direction. In a study comparing several existing methods across various specimens, imaging techniques, and resolutions, auxiliary learning of LSDs consistently increases segmentation accuracy of affinity-based methods over a range of metrics. Furthermore, the addition of LSDs promotes affinity-based segmentation methods to be on par with the current state of the art for neuron segmentation (flood-filling networks), while being two orders of magnitude more efficient, a critical requirement for the processing of future petabyte-sized datasets.
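The flavor of such descriptors can be sketched in 2D. This is an illustrative simplification (the actual LSDs are computed in 3D with Gaussian-weighted neighborhoods and include covariance cross-terms; the window radius here is an arbitrary choice): for each pixel, same-segment pixels in a local window are summarized by their count, mean offset, and per-axis variance.

```python
# Illustrative 2D sketch of local shape statistics (hypothetical simplification
# of LSDs). For a pixel, same-segment pixels in its (2r+1)^2 window are
# summarized by count (a size/diameter proxy), mean offset (direction), and
# per-axis variance (elongation).

def local_shape_descriptor(seg, y, x, r=2):
    same = [(dy, dx)
            for dy in range(-r, r + 1) for dx in range(-r, r + 1)
            if 0 <= y + dy < len(seg) and 0 <= x + dx < len(seg[0])
            and seg[y + dy][x + dx] == seg[y][x]]
    n = len(same)
    my = sum(dy for dy, _ in same) / n          # mean offset, y component
    mx = sum(dx for _, dx in same) / n          # mean offset, x component
    vy = sum((dy - my) ** 2 for dy, _ in same) / n
    vx = sum((dx - mx) ** 2 for _, dx in same) / n
    return n, (my, mx), (vy, vx)
```

For a thin horizontal process the variance along x exceeds that along y, which is exactly the kind of local statistic an auxiliary network can be trained to predict alongside affinities.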
We present a method for microtubule tracking in electron microscopy volumes. Our method first identifies a sparse set of voxels that likely belong to microtubules. Similar to prior work, we then enumerate potential edges between these voxels, which we represent in a candidate graph. Tracks of microtubules are found by selecting nodes and edges in the candidate graph by solving a constrained optimization problem incorporating biological priors on microtubule structure. For this, we present a novel integer linear programming formulation, which results in speed-ups of three orders of magnitude and an increase of 53% in accuracy compared to prior art (evaluated on three 1.2 × 4 × 4 µm volumes of Drosophila neural tissue). We also propose a scheme to solve the optimization problem in a block-wise fashion, which allows distributed tracking and is necessary to process very large electron microscopy volumes. Finally, we release a benchmark dataset for microtubule tracking, here used for training, testing and validation, consisting of eight 30 × 1000 × 1000 voxel blocks (1.2 × 4 × 4 µm) of densely annotated microtubules in the CREMI data set (https://github.com/nilsec/micron).
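The structure of the optimization can be illustrated with a toy stand-in. Node names and scores below are hypothetical, and brute force replaces the ILP solver (the paper solves much larger candidate graphs with an integer linear program and additional geometric priors): select candidate edges with maximal total score such that every node has at most one selected incoming and one selected outgoing edge, so selections form disjoint chains.

```python
from itertools import combinations

# Toy stand-in for the paper's ILP formulation (hypothetical graph; the real
# method uses an ILP solver and biological priors on microtubule geometry).
# Maximize total edge score subject to in-degree <= 1 and out-degree <= 1 per
# node, so the selected edges form disjoint track-like chains.

def best_tracks(edges):
    # edges: list of ((u, v), score) candidate edges
    best, best_score = [], float("-inf")
    for r in range(len(edges) + 1):
        for subset in combinations(range(len(edges)), r):
            chosen = [edges[i] for i in subset]
            outs = [u for (u, _v), _ in chosen]
            ins = [v for (_u, v), _ in chosen]
            if len(set(outs)) < len(outs) or len(set(ins)) < len(ins):
                continue               # degree constraint violated
            score = sum(s for _, s in chosen)
            if score > best_score:
                best, best_score = [e for e, _ in chosen], score
    return best, best_score
```

Even at this toy scale the constraint matters: a high-scoring edge is rejected when it would give a node two incoming continuations, which is what the ILP enforces globally.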
Understanding learning through synaptic plasticity rules in the brain is a grand challenge for neuroscience. Here we introduce a novel computational framework for inferring plasticity rules from experimental data on neural activity trajectories and behavioral learning dynamics. Our methodology parameterizes the plasticity function to provide theoretical interpretability and facilitate gradient-based optimization. For instance, we use Taylor series expansions or multilayer perceptrons to approximate plasticity rules, and we adjust their parameters via gradient descent over entire trajectories to closely match observed neural activity and behavioral data. Notably, our approach can learn intricate rules that induce long nonlinear time-dependencies, such as those incorporating postsynaptic activity and current synaptic weights. We validate our method through simulations, accurately recovering established rules, like Oja’s, as well as more complex hypothetical rules incorporating reward-modulated terms. We assess the resilience of our technique to noise and, as a tangible application, apply it to behavioral data from Drosophila during a probabilistic reward-learning experiment. Remarkably, we identify an active forgetting component of reward learning in flies that enhances the predictive accuracy of previous models. Overall, our modeling framework provides an exciting new avenue to elucidate the computational principles governing synaptic plasticity and learning in the brain.
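A hedged toy version of this idea: generate a weight trajectory with Oja's rule, parameterize a candidate rule dw = a·x·y + b·y²·w (a small Taylor-style expansion, as in the paper's parameterization), and recover (a, b) from the trajectory. The paper fits parameters by gradient descent through whole trajectories; since this toy model is linear in (a, b), the sketch uses the closed-form least-squares solution on one-step updates instead.

```python
import random

# Toy sketch (not the paper's framework): infer a plasticity rule from an
# observed weight trajectory. The data are generated by Oja's rule
# dw = eta*(x*y - y^2*w); the candidate rule dw = a*x*y + b*y^2*w should
# recover a = eta and b = -eta. We use closed-form least squares on one-step
# updates for brevity instead of trajectory-level gradient descent.

def oja_step(w, x, eta=0.1):
    y = w * x                          # postsynaptic activity
    return w + eta * (x * y - y * y * w)

def fit_rule(xs, ws):
    # dw = a*f1 + b*f2 with f1 = x*y, f2 = y^2*w; solve the 2x2 normal equations
    S11 = S12 = S22 = t1 = t2 = 0.0
    for x, w, w_next in zip(xs, ws, ws[1:]):
        y = w * x
        f1, f2, dw = x * y, y * y * w, w_next - w
        S11 += f1 * f1; S12 += f1 * f2; S22 += f2 * f2
        t1 += f1 * dw; t2 += f2 * dw
    det = S11 * S22 - S12 * S12
    return (S22 * t1 - S12 * t2) / det, (S11 * t2 - S12 * t1) / det

random.seed(1)
xs = [random.uniform(-1, 1) for _ in range(500)]
ws = [0.5]
for x in xs:
    ws.append(oja_step(ws[-1], x))
a, b = fit_rule(xs, ws)                # recovers eta and -eta of Oja's rule
```

Because the generating rule lies exactly in the candidate family, the fit recovers it; the interesting regime in the paper is when the family must approximate unknown, possibly reward-modulated rules from noisy behavioral data.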
Brains comprise complex networks of neurons and connections, similar to the nodes and edges of artificial networks. Network analysis applied to the wiring diagrams of brains can offer insights into how they support computations and regulate the flow of information underlying perception and behaviour. The completion of the first whole-brain connectome of an adult fly, containing over 130,000 neurons and millions of synaptic connections, offers an opportunity to analyse the statistical properties and topological features of a complete brain. Here we computed the prevalence of two- and three-node motifs, examined their strengths, related this information to both neurotransmitter composition and cell type annotations, and compared these metrics with wiring diagrams of other animals. We found that the network of the fly brain displays rich-club organization, with a large population (30% of the connectome) of highly connected neurons. We identified subsets of rich-club neurons that may serve as integrators or broadcasters of signals. Finally, we examined subnetworks based on 78 anatomically defined brain regions or neuropils. These data products are shared within the FlyWire Codex (https://codex.flywire.ai) and should serve as a foundation for models and experiments exploring the relationship between neural activity and anatomical structure.
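The two-node motif census is simple to state concretely. A minimal sketch (the paper additionally weighs connection strengths, counts three-node motifs, and relates motifs to neurotransmitter and cell-type annotations): every connected pair of neurons in a directed wiring diagram is either unidirectional or reciprocal.

```python
# Minimal sketch of a two-node motif census on a directed wiring diagram.
# edges are (pre, post) neuron pairs; each connected unordered pair is
# classified as unidirectional (one direction only) or reciprocal (both).

def two_node_motifs(edges):
    es = {e for e in edges if e[0] != e[1]}     # ignore self-loops
    pairs = {tuple(sorted(e)) for e in es}      # unordered connected pairs
    uni = recip = 0
    for a, b in pairs:
        if (a, b) in es and (b, a) in es:
            recip += 1
        else:
            uni += 1
    return uni, recip
```

Comparing the reciprocal fraction against a degree-matched random graph is the usual way to decide whether a motif is over-represented, which is the kind of comparison the whole-brain connectome now makes possible at scale.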
Animals communicate using sounds in a wide range of contexts, and auditory systems must encode behaviorally relevant acoustic features to drive appropriate reactions. How feature detection emerges along auditory pathways has been difficult to solve due to challenges in mapping the underlying circuits and characterizing responses to behaviorally relevant features. Here, we study auditory activity in the Drosophila melanogaster brain and investigate feature selectivity for the two main modes of fly courtship song, sinusoids and pulse trains. We identify 24 new cell types of the intermediate layers of the auditory pathway, and using a new connectomic resource, FlyWire, we map all synaptic connections between these cell types, in addition to connections to known early and higher-order auditory neurons; this represents the first circuit-level map of the auditory pathway. We additionally determine the sign (excitatory or inhibitory) of most synapses in this auditory connectome. We find that auditory neurons display a continuum of preferences for courtship song modes and that neurons with different song-mode preferences and response timescales are highly interconnected in a network that lacks hierarchical structure. Nonetheless, we find that the response properties of individual cell types within the connectome are predictable from their inputs. Our study thus provides new insights into the organization of auditory coding within the Drosophila brain.