Funke Lab / Publications

38 Publications

Showing 11-20 of 38 results
12/01/20 | Dense neuronal reconstruction through X-ray holographic nano-tomography.
Kuan AT, Phelps JS, Thomas LA, Nguyen TM, Han J, Chen C, Azevedo AW, Tuthill JC, Funke J, Cloetens P, Pacureanu A, Lee WA
Nature Neuroscience. 2020 Dec;23(12):1637-43. doi: 10.1038/s41593-020-0704-9

Imaging neuronal networks provides a foundation for understanding the nervous system, but resolving dense nanometer-scale structures over large volumes remains challenging for light microscopy (LM) and electron microscopy (EM). Here we show that X-ray holographic nano-tomography (XNH) can image millimeter-scale volumes with sub-100-nm resolution, enabling reconstruction of dense wiring in Drosophila melanogaster and mouse nervous tissue. We performed correlative XNH and EM to reconstruct hundreds of cortical pyramidal cells and show that more superficial cells receive stronger synaptic inhibition on their apical dendrites. By combining multiple XNH scans, we imaged an adult Drosophila leg with sufficient resolution to comprehensively catalog mechanosensory neurons and trace individual motor axons from muscles to the central nervous system. To accelerate neuronal reconstructions, we trained a convolutional neural network to automatically segment neurons from XNH volumes. Thus, XNH bridges a key gap between LM and EM, providing a new avenue for neural circuit discovery.

06/15/16 | Efficient convolutional neural networks for pixelwise classification on heterogeneous hardware systems.
Tschopp F, Martel JN, Turaga SC, Cook M, Funke J
IEEE 13th International Symposium on Biomedical Imaging: From Nano to Macro. 2016 Jun 15. doi: 10.1109/ISBI.2016.7493487

With recent advances in high-throughput Electron Microscopy (EM) imaging, it is now possible to image the entire nervous system of organisms like Drosophila melanogaster. One of the bottlenecks in reconstructing a connectome from these large volumes (≈100 TiB) is the pixel-wise prediction of membranes. The time it would typically take to process such a volume using a convolutional neural network (CNN) with a sliding-window approach is on the order of years on a current GPU. With sliding windows, however, many redundant computations are carried out. In this paper, we present an extension to the Caffe library that increases throughput by predicting many pixels at once. On a sliding-window network successfully used for membrane classification, we show that our method achieves a speedup of up to 57× while maintaining identical prediction results.
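The redundancy argument above can be made concrete with a toy comparison. The sketch below is illustrative only (a single 5×5 filter stands in for a real membrane-classification network, and it is not the paper's Caffe extension); it checks that one dense correlation over the whole image reproduces the per-pixel sliding-window scores while sharing the overlapping work between neighbouring pixels.

```python
# A minimal sketch of sliding-window vs. dense prediction; the single-layer
# "network" (one 5x5 filter) and the image are made-up stand-ins.
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)
image = rng.random((64, 64))
kernel = rng.random((5, 5))

# Sliding-window approach: extract a 5x5 patch around every pixel and score it,
# recomputing the overlap between neighbouring patches every time.
pad = 2
padded = np.pad(image, pad, mode="reflect")
sliding = np.empty_like(image)
for y in range(image.shape[0]):
    for x in range(image.shape[1]):
        patch = padded[y:y + 5, x:x + 5]
        sliding[y, x] = np.sum(patch * kernel)      # 25 multiplications per pixel

# Dense approach: one correlation over the whole (padded) image yields the same
# scores for all pixels in a single pass.
dense = correlate2d(padded, kernel, mode="valid")

assert np.allclose(sliding, dense)
```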

01/24/23 | Hierarchical architecture of dopaminergic circuits enables second-order conditioning in Drosophila
Daichi Yamada, Daniel Bushey, Li Feng, Karen Hibbard, Megan Sammons, Jan Funke, Ashok Litwin-Kumar, Toshihide Hige, Yoshinori Aso
eLife. 2023 Jan 24. doi: 10.7554/eLife.79042

Dopaminergic neurons with distinct projection patterns and physiological properties compose memory subsystems in a brain. However, it is poorly understood whether or how they interact during complex learning. Here, we identify a feedforward circuit formed between dopamine subsystems and show that it is essential for second-order conditioning, an ethologically important form of higher-order associative learning. The Drosophila mushroom body comprises a series of dopaminergic compartments, each of which exhibits distinct memory dynamics. We find that a slow and stable memory compartment can serve as an effective “teacher” by instructing other faster and transient memory compartments via a single key interneuron, which we identify by connectome analysis and neurotransmitter prediction. This excitatory interneuron acquires enhanced response to reward-predicting odor after first-order conditioning and, upon activation, evokes dopamine release in the “student” compartments. These hierarchical connections between dopamine subsystems explain distinct properties of first- and second-order memory long known by behavioral psychologists.

09/10/20 | Inpainting Networks Learn to Separate Cells in Microscopy Images
Wolf S, Hamprecht FA, Funke J
British Machine Vision Conference. 2020 Sep.

Deep neural networks trained to inpaint partially occluded images show a deep understanding of image composition and have even been shown to remove objects from images convincingly. In this work, we investigate how this implicit knowledge of image composition can be used to separate cells in densely populated microscopy images. We propose a measure for the independence of two image regions given a fully self-supervised inpainting network and separate objects by maximizing this independence. We evaluate our method on two cell segmentation datasets and show that cells can be separated completely unsupervised. Furthermore, combined with simple foreground detection, our method yields instance segmentation of similar quality to fully supervised methods.
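To make the independence measure tangible, here is a heavily simplified toy sketch: a trivial local-mean "inpainter" stands in for the self-supervised network, the blob image is synthetic, and the gap scores are illustrative assumptions rather than the paper's actual criterion. The idea: hiding a second region should not change how well the first can be inpainted if the two belong to different objects.

```python
# Toy independence check: does hiding region B make it harder to inpaint region A?
# A local-mean predictor replaces the paper's inpainting network; illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic image: two bright blobs ("cells") on a dark background.
image = np.zeros((40, 40))
image[5:15, 5:15] = 1.0 + 0.1 * rng.standard_normal((10, 10))
image[22:32, 22:32] = 1.0 + 0.1 * rng.standard_normal((10, 10))

def region(y0, y1, x0, x1):
    m = np.zeros(image.shape, dtype=bool)
    m[y0:y1, x0:x1] = True
    return m

def inpaint_error(target, visible, radius=6):
    """Predict each target pixel from the mean of visible pixels in a local window."""
    errors = []
    for y, x in zip(*np.where(target)):
        y0, y1 = max(0, y - radius), min(image.shape[0], y + radius + 1)
        x0, x1 = max(0, x - radius), min(image.shape[1], x + radius + 1)
        window_visible = visible[y0:y1, x0:x1]
        prediction = image[y0:y1, x0:x1][window_visible].mean() if window_visible.any() else 0.0
        errors.append((image[y, x] - prediction) ** 2)
    return float(np.mean(errors))

A = region(5, 15, 5, 15)                                     # first blob
B = region(22, 32, 22, 32)                                   # second blob, far from A
left, right = region(5, 15, 5, 10), region(5, 15, 10, 15)    # two halves of the first blob

# Independence gap: extra inpainting error incurred by also hiding the other region.
print("gap A vs B       :", inpaint_error(A, ~A & ~B) - inpaint_error(A, ~A))                   # ~0: independent
print("gap left vs right:", inpaint_error(left, ~left & ~right) - inpaint_error(left, ~left))   # large: same object
```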

07/01/19 | Large scale image segmentation with structured loss based deep learning for connectome reconstruction.
Funke J, Tschopp FD, Grisaitis W, Sheridan A, Singh C, Saalfeld S, Turaga SC
IEEE Transactions on Pattern Analysis and Machine Intelligence. 2019 Jul 1;41(7):1669-80. doi: 10.1109/TPAMI.2018.2835450

We present a method combining affinity prediction with region agglomeration, which improves significantly upon the state of the art of neuron segmentation from electron microscopy (EM) in accuracy and scalability. Our method consists of a 3D U-net, trained to predict affinities between voxels, followed by iterative region agglomeration. We train using a structured loss based on MALIS, encouraging topologically correct segmentations obtained from affinity thresholding. Our extension consists of two parts: First, we present a quasi-linear method to compute the loss gradient, improving over the original quadratic algorithm. Second, we compute the gradient in two separate passes to avoid spurious gradient contributions in early training stages. Our predictions are accurate enough that simple learning-free percentile-based agglomeration outperforms more involved methods used earlier on inferior predictions. We present results on three diverse EM datasets, achieving relative improvements over previous results of 27%, 15%, and 250%. Our findings suggest that a single method can be applied to both nearly isotropic block-face EM data and anisotropic serial sectioned EM data. The runtime of our method scales linearly with the size of the volume and achieves a throughput of ~2.6 seconds per megavoxel, qualifying our method for the processing of very large datasets.
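The learning-free, percentile-based agglomeration mentioned at the end can be illustrated with a small 2D toy. In the sketch below, the affinities and the over-segmented fragments are synthetic stand-ins (there is no U-net or MALIS training here); touching fragments are merged whenever the 75th percentile of the affinities across their shared boundary is high, which repairs a deliberately introduced false split.

```python
# Toy percentile-based agglomeration of fragments using synthetic affinities.
# The affinity maps and fragment labels are made up; illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: two objects. Fragments: the first object is wrongly split in two.
truth = np.zeros((20, 20), dtype=int)
truth[2:18, 2:9] = 1
truth[2:18, 11:18] = 2
fragments = truth.copy()
fragments[10:18, 2:9] = 3                          # spurious split of object 1

# Toy affinities between horizontal (aff_x) and vertical (aff_y) neighbours:
# high inside an object, low across true boundaries and background, plus noise.
aff_x = (truth[:, :-1] == truth[:, 1:]) & (truth[:, :-1] > 0)
aff_y = (truth[:-1, :] == truth[1:, :]) & (truth[:-1, :] > 0)
aff_x = np.clip(aff_x + 0.1 * rng.standard_normal(aff_x.shape), 0, 1)
aff_y = np.clip(aff_y + 0.1 * rng.standard_normal(aff_y.shape), 0, 1)

# Collect the affinities along every boundary between two touching fragments.
boundary = {}
def record(a, b, value):
    if a != b and a > 0 and b > 0:
        boundary.setdefault(tuple(sorted((a, b))), []).append(value)

for y in range(20):
    for x in range(19):
        record(fragments[y, x], fragments[y, x + 1], aff_x[y, x])
for y in range(19):
    for x in range(20):
        record(fragments[y, x], fragments[y + 1, x], aff_y[y, x])

# Merge fragment pairs whose 75th-percentile boundary affinity exceeds a threshold.
parent = {int(f): int(f) for f in np.unique(fragments) if f > 0}
def find(f):
    while parent[f] != f:
        f = parent[f]
    return f

for (a, b), values in boundary.items():
    if np.percentile(values, 75) > 0.5:
        parent[find(int(a))] = find(int(b))

print({f: find(f) for f in parent})                # fragments 1 and 3 merge, 2 stays separate
```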

02/01/23 | Local shape descriptors for neuron segmentation.
Sheridan A, Nguyen TM, Deb D, Lee WA, Saalfeld S, Turaga SC, Manor U, Funke J
Nature Methods. 2023 Feb 01;20(2):295-303. doi: 10.1038/s41592-022-01711-z

We present an auxiliary learning task for the problem of neuron segmentation in electron microscopy volumes. The auxiliary task consists of the prediction of local shape descriptors (LSDs), which we combine with conventional voxel-wise direct neighbor affinities for neuron boundary detection. The shape descriptors capture local statistics about the neuron to be segmented, such as diameter, elongation, and direction. In a study comparing several existing methods across various specimens, imaging techniques, and resolutions, auxiliary learning of LSDs consistently increases the segmentation accuracy of affinity-based methods over a range of metrics. Furthermore, the addition of LSDs brings affinity-based segmentation methods on par with the current state of the art for neuron segmentation (flood-filling networks), while being two orders of magnitude more efficient, a critical requirement for the processing of future petabyte-sized datasets.
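A rough 2D sketch of what such local shape statistics can look like is given below. The window size, the descriptor components (local size, offset to the local centre of mass, and principal direction), and the brute-force per-pixel loop are illustrative assumptions, not the paper's LSD implementation.

```python
# Toy local shape descriptors for a 2D labelled object; illustrative only.
import numpy as np

labels = np.zeros((30, 30), dtype=int)
labels[10:20, 5:25] = 1                            # one elongated object

radius = 6
h, w = labels.shape
descriptors = np.zeros((h, w, 4))                  # size, offset_y, offset_x, orientation

for y in range(h):
    for x in range(w):
        if labels[y, x] == 0:
            continue
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        ys, xs = np.where(labels[y0:y1, x0:x1] == labels[y, x])
        ys, xs = ys + y0, xs + x0
        size = len(ys)                                        # local object size
        offset = (ys.mean() - y, xs.mean() - x)               # towards the local centre of mass
        eigvals, eigvecs = np.linalg.eigh(np.cov(np.stack([ys, xs])))
        direction = eigvecs[:, -1]                            # principal (elongation) axis
        descriptors[y, x] = [size, offset[0], offset[1], np.arctan2(direction[1], direction[0])]

print("descriptor at the object centre:", descriptors[15, 15])
```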

09/17/20 | Microtubule Tracking in Electron Microscopy Volumes
Nils Eckstein, Julia Buhmann, Matthew Cook, Jan Funke
International Conference on Medical Image Computing and Computer-Assisted Intervention. 2020 Sep 17.

We present a method for microtubule tracking in electron microscopy volumes. Our method first identifies a sparse set of voxels that likely belong to microtubules. Similar to prior work, we then enumerate potential edges between these voxels, which we represent in a candidate graph. Tracks of microtubules are found by selecting nodes and edges in the candidate graph by solving a constrained optimization problem incorporating biological priors on microtubule structure. For this, we present a novel integer linear programming formulation, which results in speed-ups of three orders of magnitude and an increase of 53% in accuracy compared to prior art (evaluated on three 1.2 × 4 × 4 µm volumes of Drosophila neural tissue). We also propose a scheme to solve the optimization problem in a block-wise fashion, which allows distributed tracking and is necessary to process very large electron microscopy volumes. Finally, we release a benchmark dataset for microtubule tracking, here used for training, testing and validation, consisting of eight 30 × 1000 × 1000 voxel blocks (1.2 × 4 × 4 µm) of densely annotated microtubules in the CREMI data set (https://github.com/nilsec/micron).
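The candidate-graph selection step can be illustrated with a deliberately tiny integer linear program. The sketch below is not the paper's formulation: the edges, scores, and the simplified "no branching" prior (at most two selected edges per node) are made-up assumptions, and an off-the-shelf solver (scipy.optimize.milp) replaces the distributed block-wise scheme.

```python
# Tiny ILP over a candidate graph: pick high-evidence edges subject to a
# simplified structural prior. Edges, scores, and constraints are illustrative.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Candidate edges (u, v, evidence score) between detected microtubule voxels.
edges = [(0, 1, 0.9), (1, 2, 0.8), (1, 3, 0.2), (3, 4, 0.7), (2, 4, 0.6)]
n_nodes, n_edges = 5, len(edges)

# Maximize the total score of selected edges == minimize the negated scores.
c = -np.array([score for _, _, score in edges])

# Simplified prior: a track passes through a node without branching, so at most
# two selected edges may be incident to any node.
A = np.zeros((n_nodes, n_edges))
for j, (u, v, _) in enumerate(edges):
    A[u, j] = 1
    A[v, j] = 1

result = milp(
    c,
    constraints=LinearConstraint(A, lb=0, ub=2),
    integrality=np.ones(n_edges),                  # all variables are integers ...
    bounds=Bounds(lb=0, ub=1),                     # ... constrained to {0, 1}
)
selected = [edges[j][:2] for j in range(n_edges) if result.x[j] > 0.5]
print("selected edges:", selected)                 # drops the weak edge (1, 3)
```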

Turner Lab / Fitzgerald Lab / Funke Lab
12/12/23 | Model-Based Inference of Synaptic Plasticity Rules
Yash Mehta, Danil Tyulmankov, Adithya E. Rajagopalan, Glenn C. Turner, James E. Fitzgerald, Jan Funke
bioRxiv. 2023 Dec 12. doi: 10.1101/2023.12.11.571103

Understanding learning through synaptic plasticity rules in the brain is a grand challenge for neuroscience. Here we introduce a novel computational framework for inferring plasticity rules from experimental data on neural activity trajectories and behavioral learning dynamics. Our methodology parameterizes the plasticity function to provide theoretical interpretability and facilitate gradient-based optimization. For instance, we use Taylor series expansions or multilayer perceptrons to approximate plasticity rules, and we adjust their parameters via gradient descent over entire trajectories to closely match observed neural activity and behavioral data. Notably, our approach can learn intricate rules that induce long nonlinear time-dependencies, such as those incorporating postsynaptic activity and current synaptic weights. We validate our method through simulations, accurately recovering established rules, like Oja’s, as well as more complex hypothetical rules incorporating reward-modulated terms. We assess the resilience of our technique to noise and, as a tangible application, apply it to behavioral data from Drosophila during a probabilistic reward-learning experiment. Remarkably, we identify an active forgetting component of reward learning in flies that enhances the predictive accuracy of previous models. Overall, our modeling framework provides an exciting new avenue to elucidate the computational principles governing synaptic plasticity and learning in the brain.
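The core idea of parameterizing a plasticity rule and fitting its coefficients to data can be sketched very compactly. The toy below is a heavy, assumption-laden simplification of the framework described above: per-step weight changes are fit by ordinary least squares on a small Taylor basis, rather than by gradient descent through entire activity and behavior trajectories, and the "data" are generated from Oja's rule.

```python
# Recover the coefficients of a plasticity rule from synthetic (pre, post, w, dw)
# data using a small Taylor-series basis; illustrative simplification only.
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth rule used to generate the data: Oja's rule,
# dw = eta * (pre * post - post**2 * w).
eta, w = 0.05, 0.1
records = []
for _ in range(500):
    pre = rng.random()
    post = w * pre                                 # toy linear neuron
    dw = eta * (pre * post - post ** 2 * w)
    records.append((pre, post, w, dw))
    w += dw

pre, post, w_t, dw = map(np.array, zip(*records))

# Candidate rule: dw = theta . [pre*post, post**2 * w, 1].
basis = np.stack([pre * post, post ** 2 * w_t, np.ones_like(pre)], axis=1)
theta, *_ = np.linalg.lstsq(basis, dw, rcond=None)
print("recovered coefficients:", theta)            # close to [0.05, -0.05, 0.0]
```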

10/02/24 | Network statistics of the whole-brain connectome of Drosophila
Albert Lin, Runzhe Yang, Sven Dorkenwald, Arie Matsliah, Amy R. Sterling, Philipp Schlegel, Szi-chieh Yu, Claire E. McKellar, Marta Costa, Katharina Eichler, Alexander Shakeel Bates, Nils Eckstein, Jan Funke, Gregory S.X.E. Jefferis, Mala Murthy
Nature. 2024 Oct 02;634(8032):153–165. doi: 10.1038/s41586-024-07968-y

Brains comprise complex networks of neurons and connections, similar to the nodes and edges of artificial networks. Network analysis applied to the wiring diagrams of brains can offer insights into how they support computations and regulate the flow of information underlying perception and behaviour. The completion of the first whole-brain connectome of an adult fly, containing over 130,000 neurons and millions of synaptic connections, offers an opportunity to analyse the statistical properties and topological features of a complete brain. Here we computed the prevalence of two- and three-node motifs, examined their strengths, related this information to both neurotransmitter composition and cell type annotations, and compared these metrics with wiring diagrams of other animals. We found that the network of the fly brain displays rich-club organization, with a large population (30% of the connectome) of highly connected neurons. We identified subsets of rich-club neurons that may serve as integrators or broadcasters of signals. Finally, we examined subnetworks based on 78 anatomically defined brain regions or neuropils. These data products are shared within the FlyWire Codex (https://codex.flywire.ai) and should serve as a foundation for models and experiments exploring the relationship between neural activity and anatomical structure.
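Two of the statistics mentioned above, two-node motif counts and a rich-club style density comparison, are straightforward to compute on any directed graph. The sketch below runs on a random graph rather than the FlyWire connectome, so the numbers carry no biological meaning; the graph size, edge probability, and the "top 10% by degree" hub definition are illustrative assumptions.

```python
# Two-node motif counts and a simple rich-club style density on a random digraph.
import networkx as nx

G = nx.gnp_random_graph(500, 0.02, directed=True, seed=0)

# Two-node motifs: edges connected in one direction only vs. reciprocally.
unidirectional = reciprocal = 0
for u, v in G.edges():
    if G.has_edge(v, u):
        reciprocal += 1                            # counted once per edge, twice per pair
    else:
        unidirectional += 1
print("unidirectional edges:", unidirectional, "reciprocal pairs:", reciprocal // 2)

# Rich-club flavour: connection density among the top 10% of nodes by total degree,
# compared with the density of the whole network.
hubs = sorted(G.nodes(), key=lambda n: G.degree(n), reverse=True)[: len(G) // 10]
print("overall density:", nx.density(G))
print("density among hubs:", nx.density(G.subgraph(hubs)))
```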

08/08/22 | Neural network organization for courtship-song feature detection in Drosophila.
Baker CA, McKellar C, Pang R, Nern A, Dorkenwald S, Pacheco DA, Eckstein N, Funke J, Dickson BJ, Murthy M
Current Biology. 2022 Aug 08;32(15):3317-3333.e7. doi: 10.1016/j.cub.2022.06.019

Animals communicate using sounds in a wide range of contexts, and auditory systems must encode behaviorally relevant acoustic features to drive appropriate reactions. How feature detection emerges along auditory pathways has been difficult to solve due to challenges in mapping the underlying circuits and characterizing responses to behaviorally relevant features. Here, we study auditory activity in the Drosophila melanogaster brain and investigate feature selectivity for the two main modes of fly courtship song, sinusoids and pulse trains. We identify 24 new cell types of the intermediate layers of the auditory pathway, and using a new connectomic resource, FlyWire, we map all synaptic connections between these cell types, in addition to connections to known early and higher-order auditory neurons; this represents the first circuit-level map of the auditory pathway. We additionally determine the sign (excitatory or inhibitory) of most synapses in this auditory connectome. We find that auditory neurons display a continuum of preferences for courtship song modes and that neurons with different song-mode preferences and response timescales are highly interconnected in a network that lacks hierarchical structure. Nonetheless, we find that the response properties of individual cell types within the connectome are predictable from their inputs. Our study thus provides new insights into the organization of auditory coding within the Drosophila brain.
