Koyama Lab / Publications
43 Publications

Showing 11-20 of 43 results
    04/21/21 | Model-based Bayesian inference of neural activity and connectivity from all-optical interrogation of a neural circuit
    Aitchison L, Russell L, Packer AM, Yan J, Castonguay P, Häusser M, Turaga SC
    In: Guyon I, Luxburg UV, Bengio S, Wallach H, Fergus R, Vishwanathan S, Garnett R, editors. Advances in Neural Information Processing Systems

    Population activity measurement by calcium imaging can be combined with cellular resolution optogenetic activity perturbations to enable the mapping of neural connectivity in vivo. This requires accurate inference of perturbed and unperturbed neural activity from calcium imaging measurements, which are noisy and indirect, and can also be contaminated by photostimulation artifacts. We have developed a new fully Bayesian approach to jointly inferring spiking activity and neural connectivity from in vivo all-optical perturbation experiments. In contrast to standard approaches that perform spike inference and analysis in two separate maximum-likelihood phases, our joint model is able to propagate uncertainty in spike inference to the inference of connectivity and vice versa. We use the framework of variational autoencoders to model spiking activity using discrete latent variables, low-dimensional latent common input, and sparse spike-and-slab generalized linear coupling between neurons. Additionally, we model two properties of the optogenetic perturbation: off-target photostimulation and photostimulation transients. Using this model, we were able to fit models on 30 minutes of data in just 10 minutes. We performed an all-optical circuit mapping experiment in primary visual cortex of the awake mouse, and use our approach to predict neural connectivity between excitatory neurons in layer 2/3. Predicted connectivity is sparse and consistent with known correlations with stimulus tuning, spontaneous correlation and distance.
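A minimal sketch of the generative structure described above (spike-and-slab coupling plus low-dimensional latent common input driving discrete Bernoulli spiking), written in NumPy with invented sizes and parameters; it illustrates the model class, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, D = 20, 500, 3  # neurons, timesteps, latent dimensions (invented sizes)

# Spike-and-slab coupling: a sparse binary mask (the "spike") times
# Gaussian weights (the "slab"), giving sparse coupling between neurons.
mask = rng.random((N, N)) < 0.1
W = mask * rng.normal(0.0, 1.0, (N, N))
np.fill_diagonal(W, 0.0)

# Low-dimensional latent common input shared across the population.
L = rng.normal(0.0, 0.3, (N, D))  # loading matrix
z = rng.normal(0.0, 1.0, (T, D))  # latent time series

# Generate discrete (Bernoulli) spiking via a generalized linear model:
# each neuron's rate depends on the previous spikes and the latent input.
s = np.zeros((T, N))
bias = -2.5
for t in range(1, T):
    drive = bias + s[t - 1] @ W.T + z[t] @ L.T
    p = 1.0 / (1.0 + np.exp(-drive))  # logistic link
    s[t] = (rng.random(N) < p).astype(float)

print(s.mean())  # overall spike probability per neuron per timestep
```

The paper's contribution is the joint Bayesian inference over all of these unknowns from noisy calcium measurements; the sketch only shows the forward (generative) direction.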


    04/21/21 | Programmable 3D snapshot microscopy with Fourier convolutional networks
    Deb D, Jiao Z, Chen AB, Broxton M, Ahrens MB, Podgorski K, Turaga SC

    3D snapshot microscopy enables fast volumetric imaging by capturing a 3D volume in a single 2D camera image and performing computational reconstruction. Fast volumetric imaging has a variety of biological applications such as whole brain imaging of rapid neural activity in larval zebrafish. The optimal microscope design for this optical 3D-to-2D encoding is both sample- and task-dependent, with no general solution known. Deep learning based decoders can be combined with a differentiable simulation of an optical encoder for end-to-end optimization of both the deep learning decoder and optical encoder. This technique has been used to engineer local optical encoders for other problems such as depth estimation, 3D particle localization, and lensless photography. However, 3D snapshot microscopy is known to require a highly non-local optical encoder which existing UNet-based decoders are not able to engineer. We show that a neural network architecture based on global kernel Fourier convolutional neural networks can efficiently decode information from multiple depths in a volume, globally encoded across a 3D snapshot image. We show in simulation that our proposed networks succeed in engineering and reconstructing optical encoders for 3D snapshot microscopy where the existing state-of-the-art UNet architecture fails. We also show that our networks outperform the state-of-the-art learned reconstruction algorithms for a computational photography dataset collected on a prototype lensless camera which also uses a highly non-local optical encoding.
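The core idea of a global-kernel Fourier convolution can be sketched in a few lines of NumPy: a kernel as large as the image is applied by pointwise multiplication in the Fourier domain, so every output pixel depends on every input pixel. This is an illustrative sketch, not the paper's network:

```python
import numpy as np

def fourier_conv2d(x, w):
    """Circular convolution of image x with a kernel w of the same size,
    computed in the Fourier domain. Because the kernel spans the whole
    image, every output pixel depends on every input pixel -- the non-local
    property that local UNet kernels lack."""
    return np.fft.irfft2(np.fft.rfft2(x) * np.fft.rfft2(w), s=x.shape)

rng = np.random.default_rng(1)
img = rng.normal(size=(64, 64))     # stand-in for a 3D snapshot image
kernel = rng.normal(size=(64, 64))  # a learnable global kernel (random here)
out = fourier_conv2d(img, kernel)
print(out.shape)  # (64, 64)
```

An FFT-based convolution costs O(n log n) rather than the O(n²) of a dense global operation, which is why a global kernel remains tractable at image scale.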

    03/26/21 | SongExplorer: A deep learning workflow for discovery and segmentation of animal acoustic communication signals
    Arthur BJ, Ding Y, Sosale M, Khalif F, Kim E, Waddell P, Turaga SC, Stern DL
    bioRxiv. 03/2021. doi: 10.1101/2021.03.26.437280

    Many animals produce distinct sounds or substrate-borne vibrations, but these signals have proved challenging to segment with automated algorithms. We have developed SongExplorer, a web-browser-based interface wrapped around a deep-learning algorithm that supports an interactive workflow for (1) discovery of animal sounds, (2) manual annotation, (3) supervised training of a deep convolutional neural network, and (4) automated segmentation of recordings. Raw data can be explored by simultaneously examining song events, both individually and in the context of the entire recording, watching synced video, and listening to song. We provide a simple way to visualize many song events from large datasets within an interactive low-dimensional visualization, which facilitates detection and correction of incorrectly labelled song events. The machine learning model we implemented displays higher accuracy than existing heuristic algorithms and accuracy similar to that of two expert human annotators. We show that SongExplorer allows rapid detection of all song types from new species and of novel song types in previously well-studied species.

    06/27/19 | Teaching deep neural networks to localize single molecules for super-resolution microscopy
    Speiser A, Müller L, Matti U, Obara CJ, Legant WR, Ries J, Macke JH, Turaga SC
    arXiv e-prints. 06/2019:arXiv:1907.00770

    Single-molecule localization fluorescence microscopy constructs super-resolution images by sequential imaging and computational localization of sparsely activated fluorophores. Accurate and efficient fluorophore localization algorithms are key to the success of this computational microscopy method. We present a novel localization algorithm based on deep learning which significantly improves upon the state of the art. Our contributions are a novel network architecture for simultaneous detection and localization, and a new loss function which phrases detection and localization as a Bayesian inference problem, thus allowing the network to provide uncertainty estimates. In contrast to standard methods, which independently process imaging frames, our network architecture uses temporal context from multiple sequentially imaged frames to detect and localize molecules. We demonstrate the power of our method across a variety of datasets, imaging modalities, signal-to-noise ratios, and fluorophore densities. While existing localization algorithms can achieve optimal localization accuracy at low fluorophore densities, they are confounded by high densities. Our method is the first deep-learning-based approach to achieve state-of-the-art performance on the SMLM2016 challenge, with the best scores on 12 out of 12 datasets when comparing both detection accuracy and precision, and it excels at high densities. Finally, we investigate how unsupervised learning can be used to make the network robust against mismatch between simulated and real data. The lessons learned here are more generally relevant for the training of deep networks to solve challenging Bayesian inverse problems on spatially extended domains in biology and physics.
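The Bayesian detection-plus-localization loss described above can be caricatured as a per-pixel Bernoulli likelihood for detection combined with a Gaussian likelihood (with a predicted standard deviation) for sub-pixel offsets. The function and shapes below are invented for illustration and are not the paper's actual loss:

```python
import numpy as np

def detection_localization_nll(p, mu, sigma, is_mol, xy):
    """Toy negative log-likelihood: a Bernoulli term for per-pixel detection
    probabilities p, plus a Gaussian term for sub-pixel offsets with
    predicted mean mu and standard deviation sigma. Predicting sigma is
    what yields per-molecule uncertainty estimates. Shapes (invented):
    p, is_mol are (H, W); mu, sigma, xy are (H, W, 2)."""
    eps = 1e-9
    bern = -(is_mol * np.log(p + eps) + (1 - is_mol) * np.log(1 - p + eps))
    gauss = 0.5 * ((xy - mu) / sigma) ** 2 + np.log(sigma)
    loc = is_mol[..., None] * gauss  # localization counts only where a molecule exists
    return bern.sum() + loc.sum()

rng = np.random.default_rng(2)
H = W = 8
p = np.full((H, W), 0.1)
p[3, 4] = 0.95                      # the network "detects" one molecule
mu = rng.normal(0.0, 0.1, (H, W, 2))
sigma = np.full((H, W, 2), 0.2)
is_mol = np.zeros((H, W))
is_mol[3, 4] = 1.0                  # ground truth: one molecule at (3, 4)
xy = np.zeros((H, W, 2))            # true sub-pixel offsets
print(detection_localization_nll(p, mu, sigma, is_mol, xy))
```

Minimizing the Gaussian term trades off accuracy against the predicted sigma, so the network is rewarded for honest uncertainty rather than overconfidence.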

    Turaga Lab / Sternson Lab
    10/16/20 | Behavioral state coding by molecularly defined paraventricular hypothalamic cell type ensembles.
    Xu S, Yang H, Menon V, Lemire AL, Wang L, Henry FE, Turaga SC, Sternson SM
    Science. 2020 Oct 16;370(6514). doi: 10.1126/science.abb2494

    Brains encode behaviors using neurons amenable to systematic classification by gene expression. The contribution of molecular identity to neural coding is not understood because of the challenges involved with measuring neural dynamics and molecular information from the same cells. We developed CaRMA (calcium and RNA multiplexed activity) imaging based on recording in vivo single-neuron calcium dynamics followed by gene expression analysis. We simultaneously monitored activity in hundreds of neurons in mouse paraventricular hypothalamus (PVH). Combinations of cell-type marker genes had predictive power for neuronal responses across 11 behavioral states. The PVH uses combinatorial assemblies of molecularly defined neuron populations for grouped-ensemble coding of survival behaviors. The neuropeptide receptor neuropeptide Y receptor type 1 (Npy1r) amalgamated multiple cell types with similar responses. Our results show that molecularly defined neurons are important processing units for brain function.

    08/27/19 | Constraining computational models using electron microscopy wiring diagrams.
    Litwin-Kumar A, Turaga SC
    Current Opinion in Neurobiology. 2019 Aug 27;58:94-100. doi: 10.1016/j.conb.2019.07.007

    Numerous efforts to generate "connectomes," or synaptic wiring diagrams, of large neural circuits or entire nervous systems are currently underway. These efforts promise an abundance of data to guide theoretical models of neural computation and test their predictions. However, there is not yet a standard set of tools for incorporating the connectivity constraints that these datasets provide into the models typically studied in theoretical neuroscience. This article surveys recent approaches to building models with constrained wiring diagrams and the insights they have provided. It also describes challenges and the need for new techniques to scale these approaches to ever more complex datasets.
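One common pattern for incorporating a wiring-diagram constraint is to mask a model's weight matrix with the measured connectivity, so that only anatomically permitted synapses carry nonzero weight. The toy rate network below (random stand-in "connectome", invented parameters) sketches that pattern; it is not drawn from the article:

```python
import numpy as np

rng = np.random.default_rng(3)
N, T = 50, 200  # neurons, integration steps (invented)

# A measured wiring diagram, here a random binary synapse matrix C for
# illustration. Model weights are constrained to its sparsity pattern.
C = (rng.random((N, N)) < 0.05).astype(float)
np.fill_diagonal(C, 0.0)
W = C * rng.normal(0.0, 1.0, (N, N)) / np.sqrt(C.sum(axis=1, keepdims=True) + 1.0)

# Standard rate dynamics: tau * dr/dt = -r + tanh(W r + input),
# integrated with forward Euler.
r = np.zeros(N)
tau, dt = 10.0, 1.0
inp = rng.normal(0.0, 0.5, N)
for _ in range(T):
    r = r + (dt / tau) * (-r + np.tanh(W @ r + inp))

# Every nonzero model weight corresponds to an anatomical synapse.
print(np.all((W != 0) <= (C != 0)))  # True
```

In practice the free parameters (here the Gaussian weight magnitudes) would be fit to functional data while the connectome fixes which entries may be nonzero, and possibly their signs.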

    07/01/19 | Large scale image segmentation with structured loss based deep learning for connectome reconstruction.
    Funke J, Tschopp FD, Grisaitis W, Sheridan A, Singh C, Saalfeld S, Turaga SC
    IEEE Transactions on Pattern Analysis and Machine Intelligence. 2019 Jul 1;41(7):1669-80. doi: 10.1109/TPAMI.2018.2835450

    We present a method combining affinity prediction with region agglomeration, which improves significantly upon the state of the art of neuron segmentation from electron microscopy (EM) in accuracy and scalability. Our method consists of a 3D U-net, trained to predict affinities between voxels, followed by iterative region agglomeration. We train using a structured loss based on MALIS, encouraging topologically correct segmentations obtained from affinity thresholding. Our extension consists of two parts: First, we present a quasi-linear method to compute the loss gradient, improving over the original quadratic algorithm. Second, we compute the gradient in two separate passes to avoid spurious gradient contributions in early training stages. Our predictions are accurate enough that simple learning-free percentile-based agglomeration outperforms more involved methods used earlier on inferior predictions. We present results on three diverse EM datasets, achieving relative improvements over previous results of 27%, 15%, and 250%. Our findings suggest that a single method can be applied to both nearly isotropic block-face EM data and anisotropic serial sectioned EM data. The runtime of our method scales linearly with the size of the volume and achieves a throughput of ~2.6 seconds per megavoxel, qualifying our method for the processing of very large datasets.
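The agglomeration step builds on simple affinity thresholding: merging neighboring voxels whose predicted affinity exceeds a threshold yields an initial segmentation via connected components. A 2D toy version using a hypothetical union-find helper (not the paper's code) looks like:

```python
import numpy as np

def segment_from_affinities(aff_x, aff_y, thresh=0.5):
    """Toy affinity thresholding: merge 4-connected pixels whose predicted
    affinity exceeds `thresh`, using union-find. aff_y[i, j] links pixel
    (i, j) to (i + 1, j); aff_x[i, j] links (i, j) to (i, j + 1)."""
    H, W = aff_y.shape[0] + 1, aff_x.shape[1] + 1
    parent = list(range(H * W))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for i in range(H):
        for j in range(W):
            if i + 1 < H and aff_y[i, j] > thresh:
                union(i * W + j, (i + 1) * W + j)
            if j + 1 < W and aff_x[i, j] > thresh:
                union(i * W + j, i * W + j + 1)

    roots = np.array([find(k) for k in range(H * W)])
    _, labels = np.unique(roots, return_inverse=True)
    return labels.reshape(H, W)

# Two segments: all vertical links present, horizontal links cut after column 0.
aff_y = np.ones((2, 3))
aff_x = np.ones((3, 2))
aff_x[:, 0] = 0.0
labels = segment_from_affinities(aff_x, aff_y)
print(labels)
```

The paper's percentile-based agglomeration then iteratively merges these initial fragments using statistics of the boundary affinities, rather than the single hard threshold shown here.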

    10/18/18 | In toto imaging and reconstruction of post-implantation mouse development at the single-cell level.
    McDole K, Guignard L, Amat F, Berger A, Malandain G, Royer LA, Turaga SC, Branson K, Keller PJ
    Cell. 2018 Oct 10;175(3):859-876. doi: 10.1016/j.cell.2018.09.031

    The mouse embryo has long been central to the study of mammalian development; however, elucidating the cell behaviors governing gastrulation and the formation of tissues and organs remains a fundamental challenge. A major obstacle is the lack of live imaging and image analysis technologies capable of systematically following cellular dynamics across the developing embryo. We developed a light-sheet microscope that adapts itself to the dramatic changes in size, shape, and optical properties of the post-implantation mouse embryo and captures its development from gastrulation to early organogenesis at the cellular level. We furthermore developed a computational framework for reconstructing long-term cell tracks, cell divisions, dynamic fate maps, and maps of tissue morphogenesis across the entire embryo. By jointly analyzing cellular dynamics in multiple embryos registered in space and time, we built a dynamic atlas of post-implantation mouse development that, together with our microscopy and computational methods, is provided as a resource.

    09/26/18 | Synaptic partner prediction from point annotations in insect brains.
    Buhmann J, Krause R, Lentini RC, Eckstein N, Cook M, Turaga SC, Funke J
    MICCAI 2018: Medical Image Computing and Computer Assisted Intervention. 2018 Sep 26. doi: 10.1007/978-3-030-00934-2_35

    High-throughput electron microscopy allows recording of large stacks of neural tissue with sufficient resolution to extract the wiring diagram of the underlying neural network. Current efforts to automate this process focus mainly on the segmentation of neurons. However, in order to recover a wiring diagram, synaptic partners need to be identified as well. This is especially challenging in insect brains like Drosophila melanogaster, where one presynaptic site is associated with multiple postsynaptic elements. Here we propose a 3D U-Net architecture to directly identify pairs of voxels that are pre- and postsynaptic to each other. To that end, we formulate the problem of synaptic partner identification as a classification problem on long-range edges between voxels to encode both the presence of a synaptic pair and its direction. This formulation allows us to directly learn from synaptic point annotations instead of more expensive voxel-based synaptic cleft or vesicle annotations. We evaluate our method on the MICCAI 2016 CREMI challenge and improve over the current state of the art, producing 3% fewer errors than the next best method.
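The long-range edge formulation can be sketched as a target encoding: for each annotated (pre, post) point pair, mark the postsynaptic voxel and store the offset vector back to its presynaptic partner, encoding both the presence of a synaptic pair and its direction. The helper below is hypothetical, for illustration only:

```python
import numpy as np

def long_range_edge_targets(pairs, shape):
    """Hypothetical target encoding for synaptic partner prediction from
    point annotations: for each (pre, post) voxel pair, mark the post voxel
    and store the offset vector back to its presynaptic partner, so the
    target encodes both the presence of a synaptic pair and its direction."""
    present = np.zeros(shape, dtype=bool)
    offset = np.zeros(shape + (3,), dtype=int)
    for pre, post in pairs:
        present[post] = True
        offset[post] = np.subtract(pre, post)
    return present, offset

# One presynaptic site with two postsynaptic partners, as in Drosophila.
pairs = [((2, 2, 2), (2, 2, 5)), ((2, 2, 2), (4, 2, 2))]
present, offset = long_range_edge_targets(pairs, (8, 8, 8))
```

Because only point annotations are needed to build such targets, training data is far cheaper to produce than voxel-wise cleft or vesicle masks.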
