2224 Janelia Publications
Showing 11-20 of 2224 results

We present a method to automatically identify and track nuclei in time-lapse microscopy recordings of entire developing embryos. The method combines deep learning and global optimization. On a mouse dataset, it reconstructs 75.8% of cell lineages spanning 1 h, as compared to 31.8% for the competing method. Our approach improves understanding of where and when cell fate decisions are made in developing embryos, tissues, and organs.
Daily experience suggests that we perceive distances near us linearly. However, the actual geometry of spatial representation in the brain is unknown. Here we report that neurons in the CA1 region of rat hippocampus that mediate spatial perception represent space according to a non-linear hyperbolic geometry. This geometry uses an exponential scale and yields greater positional information than a linear scale. We found that the size of the representation matches the optimal predictions for the number of CA1 neurons. The representations also dynamically expanded proportional to the logarithm of time that the animal spent exploring the environment, in correspondence with the maximal mutual information that can be received. The dynamic changes tracked even small variations due to changes in the running speed of the animal. These results demonstrate how neural circuits achieve efficient representations using dynamic hyperbolic geometry.
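The abstract's claim that a hyperbolic geometry "uses an exponential scale" can be illustrated with the standard Poincaré-disk distance formula. This is a generic geometric sketch, not the paper's analysis: equal Euclidean steps near the disk's boundary span far more hyperbolic distance than the same steps near the origin.

```python
import numpy as np

def poincare_distance(u, v):
    """Hyperbolic distance between two points inside the unit (Poincaré) disk."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    num = 2 * np.sum((u - v) ** 2)
    den = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return np.arccosh(1 + num / den)

# The same Euclidean step (0.1) covers much more hyperbolic distance near the
# boundary than near the origin -- the "exponential scale" of the geometry.
near_origin = poincare_distance([0.0, 0.0], [0.1, 0.0])
near_edge = poincare_distance([0.8, 0.0], [0.9, 0.0])
```

This expanding scale is what lets a hyperbolic representation pack more positional information into the same number of neurons than a linear one.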
The cerebellum is thought to help detect and correct errors between intended and executed commands and is critical for social behaviours, cognition and emotion. Computations for motor control must be performed quickly to correct errors in real time and should be sensitive to small differences between patterns for fine error correction while being resilient to noise. Influential theories of cerebellar information processing have largely assumed random network connectivity, which increases the encoding capacity of the network's first layer. However, maximizing encoding capacity reduces the resilience to noise. To understand how neuronal circuits address this fundamental trade-off, we mapped the feedforward connectivity in the mouse cerebellar cortex using automated large-scale transmission electron microscopy and convolutional neural network-based image segmentation. We found that both the input and output layers of the circuit exhibit redundant and selective connectivity motifs, which contrast with prevailing models. Numerical simulations suggest that these redundant, non-random connectivity motifs increase the resilience to noise at a negligible cost to the overall encoding capacity. This work reveals how neuronal network structure can support a trade-off between encoding capacity and redundancy, unveiling principles of biological network architecture with implications for the design of artificial neural networks.
We present an auxiliary learning task for the problem of neuron segmentation in electron microscopy volumes. The auxiliary task consists of the prediction of local shape descriptors (LSDs), which we combine with conventional voxel-wise direct neighbor affinities for neuron boundary detection. The shape descriptors capture local statistics about the neuron to be segmented, such as diameter, elongation, and direction. In a study comparing several existing methods across various specimens, imaging techniques, and resolutions, auxiliary learning of LSDs consistently increases segmentation accuracy of affinity-based methods over a range of metrics. Furthermore, the addition of LSDs brings affinity-based segmentation methods on par with the current state of the art for neuron segmentation (flood-filling networks), while being two orders of magnitude more efficient, a critical requirement for the processing of future petabyte-sized datasets.
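The local statistics the abstract names (size, offset to the local center of mass, and a covariance capturing elongation/direction) can be sketched for a toy 2D mask. This is an illustrative reimplementation of the idea, not the paper's reference code; the function name and descriptor set are assumptions.

```python
import numpy as np

def local_shape_descriptor(mask, center, radius):
    """Toy LSD-style statistics for a 2D boolean mask, in a window around `center`.

    Returns the in-mask voxel count, the offset from the window center to the
    local center of mass, and the coordinate covariance (elongation/direction).
    """
    r0 = max(center[0] - radius, 0); r1 = center[0] + radius + 1
    c0 = max(center[1] - radius, 0); c1 = center[1] + radius + 1
    coords = np.argwhere(mask[r0:r1, c0:c1])        # in-mask voxels in the window
    if len(coords) == 0:
        return None
    size = len(coords)
    com = coords.mean(axis=0)                       # local center of mass
    offset = com - (np.array(center) - [r0, c0])    # offset from window center
    cov = np.cov(coords.T) if size > 1 else np.zeros((2, 2))
    return {"size": size, "offset": offset, "cov": cov}

# A thin horizontal bar: variance is larger along axis 1, reflecting elongation.
bar = np.zeros((9, 9), dtype=bool)
bar[4, 1:8] = True
d = local_shape_descriptor(bar, (4, 4), 4)
```

In the actual method these statistics are predicted per voxel by a network as an auxiliary target alongside affinities, rather than computed from a known mask.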
Brains contain networks of interconnected neurons, so knowing the network architecture is essential for understanding brain function. We therefore mapped the synaptic-resolution connectome of an insect brain (Drosophila larva) with rich behavior, including learning, value-computation, and action-selection, comprising 3,013 neurons and 544,000 synapses. We characterized neuron-types, hubs, feedforward and feedback pathways, and cross-hemisphere and brain-nerve cord interactions. We found pervasive multisensory and interhemispheric integration, highly recurrent architecture, abundant feedback from descending neurons, and multiple novel circuit motifs. The brain’s most recurrent circuits comprised the input and output neurons of the learning center. Some structural features, including multilayer shortcuts and nested recurrent loops, resembled powerful machine learning architectures. The identified brain architecture provides a basis for future experimental and theoretical studies of neural circuits.
To accurately track self-location, animals need to integrate their movements through space. In amniotes, representations of self-location have been found in regions such as the hippocampus. It is unknown whether more ancient brain regions contain such representations and by which pathways they may drive locomotion. Fish displaced by water currents must prevent uncontrolled drift to potentially dangerous areas. We found that larval zebrafish track such movements and can later swim back to their earlier location. Whole-brain functional imaging revealed the circuit enabling this process of positional homeostasis. Position-encoding brainstem neurons integrate optic flow, then bias future swimming to correct for past displacements by modulating inferior olive and cerebellar activity. Manipulation of position-encoding or olivary neurons abolished positional homeostasis or evoked behavior as if animals had experienced positional shifts. These results reveal a multiregional hindbrain circuit in vertebrates for optic flow integration, memory of self-location, and its neural pathway to behavior. Competing Interest Statement: The authors have declared no competing interest.
The central amygdala (CEA) has been richly studied for interpreting function and behavior according to specific cell types and circuits. Such work has typically defined molecular cell types by classical inhibitory marker genes; consequently, whether marker-gene-defined cell types exhaustively cover the CEA and co-vary with connectivity remains unresolved. Here, we combined single-cell RNA sequencing, multiplexed fluorescent in situ hybridization, immunohistochemistry, and long-range projection mapping to derive a “bottom-up” understanding of CEA cell types. In doing so, we identify two major cell types, encompassing one-third of all CEA neurons, that have gone unresolved in previous studies. In spatially mapping these novel types, we identify a non-canonical CEA subdomain associated with Nr2f2 expression and uncover an Isl1-expressing medial cell type that accounts for many long-range CEA projections. Our results reveal new CEA organizational principles across cell types and spatial scales and provide a framework for future work examining cell-type-specific behavior and function.
Insulin signaling plays a pivotal role in metabolic control and aging, and insulin accordingly is a key factor in several human diseases. Despite this importance, the in vivo activity dynamics of insulin-producing cells (IPCs) are poorly understood. Here, we characterized the effects of locomotion on the activity of IPCs in Drosophila. Using in vivo electrophysiology and calcium imaging, we found that IPCs were strongly inhibited during walking and flight and that their activity rebounded and overshot after cessation of locomotion. Moreover, IPC activity changed rapidly during behavioral transitions, revealing that IPCs are modulated on fast timescales in behaving animals. Optogenetic activation of locomotor networks ex vivo, in the absence of actual locomotion or changes in hemolymph sugar levels, was sufficient to inhibit IPCs. This demonstrates that the behavioral state-dependent inhibition of IPCs is actively controlled by neuronal pathways and is independent of changes in glucose concentration. By contrast, the overshoot in IPC activity after locomotion was absent ex vivo and after starvation, indicating that it was not purely driven by feedforward signals but additionally required feedback derived from changes in hemolymph sugar concentration. We hypothesize that IPC inhibition during locomotion supports mobilization of fuel stores during metabolically demanding behaviors, while the rebound in IPC activity after locomotion contributes to replenishing muscle glycogen stores. In addition, the rapid dynamics of IPC modulation support a potential role of insulin in the state-dependent modulation of sensorimotor processing.
Foraging animals must use decision-making strategies that dynamically adapt to the changing availability of rewards in the environment. A wide diversity of animals do this by distributing their choices in proportion to the rewards received from each option, a behavior known as Herrnstein’s operant matching law. Theoretical work suggests an elegant mechanistic explanation for this ubiquitous behavior, as operant matching follows automatically from simple synaptic plasticity rules acting within behaviorally relevant neural circuits. However, no past work has mapped operant matching onto plasticity mechanisms in the brain, leaving the biological relevance of the theory unclear. Here we discovered operant matching in Drosophila and showed that it requires synaptic plasticity that acts in the mushroom body and incorporates the expectation of reward. We began by developing a novel behavioral paradigm to measure choices from individual flies as they learn to associate odor cues with probabilistic rewards. We then built a model of the fly mushroom body to explain each fly’s sequential choice behavior using a family of biologically realistic synaptic plasticity rules. As predicted by past theoretical work, we found that synaptic plasticity rules could explain fly matching behavior by incorporating stimulus expectations, reward expectations, or both. However, by optogenetically bypassing the representation of reward expectation, we abolished matching behavior and showed that the plasticity rule must specifically incorporate reward expectations. Altogether, these results reveal the first synaptic-level mechanisms of operant matching and provide compelling evidence for the role of reward expectation signals in the fly brain.
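The matching law itself is a simple quantitative statement: the fraction of choices allocated to an option equals the fraction of rewards earned from it, C_a / (C_a + C_b) = R_a / (R_a + R_b). A minimal sketch (illustrative, not the paper's model of the mushroom body):

```python
import numpy as np

def matching_prediction(rewards):
    """Choice fractions predicted by strict operant matching,
    given the total rewards earned from each option."""
    rewards = np.asarray(rewards, dtype=float)
    return rewards / rewards.sum()

# Example: an animal earns 30 rewards from option A and 10 from option B;
# matching predicts it allocates 75% of its choices to A.
pred = matching_prediction([30, 10])
```

The paper's contribution is showing which synaptic plasticity rules (specifically, ones incorporating reward expectation) make a circuit converge to this allocation.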
Many animals rely on vision to navigate through their environment. The pattern of changes in the visual scene induced by self-motion is the optic flow [1], which is first estimated in local patches by directionally selective (DS) neurons [2–4]. But how should the arrays of DS neurons, each responsive to motion in a preferred direction at a specific retinal position, be organized to support robust decoding of optic flow by downstream circuits? Understanding this global organization is challenging because it requires mapping fine, local features of neurons across the animal’s field of view [3]. In Drosophila, the asymmetric dendrites of the T4 and T5 DS neurons establish their preferred direction, making it possible to predict DS responses from anatomy [4,5]. Here we report that the preferred directions of fly DS neurons vary at different retinal positions and show that this spatial variation is established by the anatomy of the compound eye. To estimate the preferred directions across the visual field, we reconstructed hundreds of T4 neurons in a full brain EM volume [6] and discovered unexpectedly stereotypical dendritic arborizations that are independent of location. We then used whole-head μCT scans to map the viewing directions of all compound eye facets and found a non-uniform sampling of visual space that explains the spatial variation in preferred directions. Our findings show that the organization of preferred directions in the fly is largely determined by the compound eye, exposing an intimate and unexpected connection between the peripheral structure of the eye, functional properties of neurons deep in the brain, and the control of body movements.