34 Janelia Publications
Showing 31-34 of 34 results

High-throughput electron microscopy allows recording of large stacks of neural tissue with sufficient resolution to extract the wiring diagram of the underlying neural network. Current efforts to automate this process focus mainly on the segmentation of neurons. However, in order to recover a wiring diagram, synaptic partners need to be identified as well. This is especially challenging in insect brains like Drosophila melanogaster, where one presynaptic site is associated with multiple postsynaptic elements. Here we propose a 3D U-Net architecture to directly identify pairs of voxels that are pre- and postsynaptic to each other. To that end, we formulate the problem of synaptic partner identification as a classification problem on long-range edges between voxels to encode both the presence of a synaptic pair and its direction. This formulation allows us to directly learn from synaptic point annotations instead of more expensive voxel-based synaptic cleft or vesicle annotations. We evaluate our method on the MICCAI 2016 CREMI challenge and improve over the current state of the art, producing 3% fewer errors than the next best method.
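To make the long-range-edge formulation concrete, here is a minimal PyTorch sketch; it is not the authors' implementation. The offset set, channel widths, and the small convolutional stand-in for the 3D U-Net backbone are all illustrative assumptions. Each output channel scores one fixed offset, so a high value at voxel v in channel k means the network predicts that (v, v + offset_k) is a pre/postsynaptic pair; paired channels for d and -d encode direction.

```python
# Illustrative sketch of edge classification for synaptic partners
# (assumed offsets and backbone; the paper uses a 3D U-Net).
import torch
import torch.nn as nn

# Assumed long-range offsets, in voxels; +d and -d channels encode direction.
OFFSETS = [(0, 0, 8), (0, 0, -8), (0, 8, 0), (0, -8, 0)]

class EdgeClassifier(nn.Module):
    """Small conv stack standing in for the 3D U-Net backbone."""
    def __init__(self, n_offsets):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, n_offsets, 1),  # one logit map per directed offset
        )

    def forward(self, raw):
        # raw: (B, 1, D, H, W) EM volume -> (B, n_offsets, D, H, W) edge logits
        return self.net(raw)

model = EdgeClassifier(len(OFFSETS))
raw = torch.randn(1, 1, 32, 64, 64)  # toy EM patch
# Binary edge labels would be rasterized from synaptic point annotations.
edge_labels = torch.randint(0, 2, (1, len(OFFSETS), 32, 64, 64)).float()
loss = nn.functional.binary_cross_entropy_with_logits(model(raw), edge_labels)
loss.backward()
```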
Single-molecule localization fluorescence microscopy constructs super-resolution images by sequential imaging and computational localization of sparsely activated fluorophores. Accurate and efficient fluorophore localization algorithms are key to the success of this computational microscopy method. We present a novel localization algorithm based on deep learning which significantly improves upon the state of the art. Our contributions are a novel network architecture for simultaneous detection and localization, and a new loss function that phrases detection and localization as a Bayesian inference problem, and thus allows the network to provide uncertainty estimates. In contrast to standard methods, which independently process imaging frames, our network architecture uses temporal context from multiple sequentially imaged frames to detect and localize molecules. We demonstrate the power of our method across a variety of datasets, imaging modalities, signal-to-noise ratios, and fluorophore densities. While existing localization algorithms can achieve optimal localization accuracy at low fluorophore densities, they are confounded by high densities. Our method is the first deep-learning-based approach to achieve state-of-the-art performance on the SMLM2016 challenge. It achieves the best scores on 12 out of 12 datasets when comparing both detection accuracy and precision, and excels at high densities. Finally, we investigate how unsupervised learning can be used to make the network robust against mismatch between simulated and real data. The lessons learned here are more generally relevant for the training of deep networks to solve challenging Bayesian inverse problems on spatially extended domains in biology and physics.
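As a hedged illustration of the multi-frame, uncertainty-aware idea, the sketch below feeds a short temporal window of frames to a small convolutional network that predicts, per pixel, a detection probability plus sub-pixel offsets with learned standard deviations. The loss combines a Bernoulli detection term with a (constant-free) Gaussian negative log-likelihood at true emitter pixels. All names, shapes, and the tiny backbone are assumptions, not the published architecture.

```python
# Assumed sketch of multi-frame detection + localization with uncertainty.
import torch
import torch.nn as nn

T = 3  # temporal context: number of consecutive frames per input

class Localizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(T, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, 5, 1)  # p_logit, dx, dy, log_sx, log_sy

    def forward(self, frames):
        # frames: (B, T, H, W) -> five per-pixel maps of shape (B, H, W)
        return self.head(self.features(frames)).unbind(1)

def nll_loss(p_logit, dx, dy, log_sx, log_sy, mask, gt_dx, gt_dy):
    """Bernoulli detection term + Gaussian NLL (constants dropped) on emitters."""
    det = nn.functional.binary_cross_entropy_with_logits(p_logit, mask)
    gauss = (log_sx + 0.5 * ((dx - gt_dx) / log_sx.exp()) ** 2
             + log_sy + 0.5 * ((dy - gt_dy) / log_sy.exp()) ** 2)
    return det + (gauss * mask).sum() / mask.sum().clamp(min=1)

net = Localizer()
frames = torch.randn(2, T, 64, 64)                    # toy image stack
mask = (torch.rand(2, 64, 64) < 0.01).float()         # toy emitter mask
loss = nll_loss(*net(frames), mask,
                torch.zeros_like(mask), torch.zeros_like(mask))
loss.backward()
```

The learned log-sigma channels are what let the network report per-molecule localization uncertainty rather than only point estimates.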
The body of an animal determines how the nervous system produces behavior. Therefore, detailed modeling of the neural control of sensorimotor behavior requires a detailed model of the body. Here we contribute an anatomically detailed biomechanical whole-body model of the fruit fly Drosophila melanogaster in the MuJoCo physics engine. Our model is general-purpose, enabling the simulation of diverse fly behaviors, both on land and in the air. We demonstrate the generality of our model by simulating realistic locomotion, both flight and walking. To support these behaviors, we have extended MuJoCo with phenomenological models of fluid forces and adhesion forces. Through data-driven end-to-end reinforcement learning, we demonstrate that these advances enable the training of neural network controllers capable of realistic locomotion along complex trajectories based on high-level steering control signals. With a visually guided flight task, we demonstrate a neural controller that can use the vision sensors of the body model to control and steer flight. Our project is an open-source platform for modeling neural control of sensorimotor behavior in an embodied context.

Competing Interest Statement: The authors have declared no competing interest.
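For readers who want to try the physics side, the sketch below shows a bare closed-loop rollout with the official MuJoCo Python bindings. The model filename is a placeholder, and the random-action "policy" merely stands in for the trained neural network controllers described above.

```python
# Minimal MuJoCo rollout sketch; "fruitfly.xml" is a hypothetical model path.
import numpy as np
import mujoco

model = mujoco.MjModel.from_xml_path("fruitfly.xml")  # placeholder model file
data = mujoco.MjData(model)

rng = np.random.default_rng(0)
for _ in range(1000):
    # A trained policy would map observations (proprioception, vision, and a
    # high-level steering command) to actuator targets; here: random actions
    # sampled uniformly within each actuator's control range.
    data.ctrl[:] = rng.uniform(model.actuator_ctrlrange[:, 0],
                               model.actuator_ctrlrange[:, 1])
    mujoco.mj_step(model, data)  # advance the physics by one timestep
```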