2721 Janelia Publications
The medial entorhinal cortex is part of a neural system for mapping the position of an individual within a physical environment. Grid cells, a key component of this system, fire in a characteristic hexagonal pattern of locations, and are organized in modules that collectively form a population code for the animal's allocentric position. The invariance of the correlation structure of this population code across environments and behavioural states, independent of specific sensory inputs, has pointed to intrinsic, recurrently connected continuous attractor networks (CANs) as a possible substrate of the grid pattern. However, whether grid cell networks show continuous attractor dynamics, and how they interface with inputs from the environment, has remained unclear owing to the small samples of cells obtained so far. Here, using simultaneous recordings from many hundreds of grid cells and subsequent topological data analysis, we show that the joint activity of grid cells from an individual module resides on a toroidal manifold, as expected in a two-dimensional CAN. Positions on the torus correspond to positions of the moving animal in the environment. Individual cells are preferentially active at singular positions on the torus. Their positions are maintained between environments and from wakefulness to sleep, as predicted by CAN models for grid cells but not by alternative feedforward models. This demonstration of network dynamics on a toroidal manifold provides a population-level visualization of CAN dynamics in grid cells.
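The topological step can be illustrated with a short, hedged sketch: simulate a toy grid module whose population vectors lie on a torus, then run persistent homology (here via the third-party `ripser` package) and inspect the barcode lifetimes. A 2-torus should show Betti numbers 1, 2, 1 (one long H0 bar, two long H1 bars, one long H2 bar). The cosine tuning model, sample sizes, and package choice are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch: persistent homology recovers toroidal topology from
# simulated grid-cell population activity. Requires `pip install ripser`.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)

# Toy module: each cell's rate is a sum of cosine tunings over two circular
# phase variables, so population activity traces out a 2-torus in rate space.
n_cells, n_samples = 100, 400
phases = rng.uniform(0, 2 * np.pi, size=(n_cells, 2))  # preferred phases
traj = rng.uniform(0, 2 * np.pi, size=(n_samples, 2))  # sampled torus positions

rates = (np.cos(traj[:, None, 0] - phases[None, :, 0])
         + np.cos(traj[:, None, 1] - phases[None, :, 1]))

# Persistent homology up to dimension 2 on the population vectors; a torus
# should yield one long H0 bar, two long H1 bars, and one long H2 bar.
dgms = ripser(rates, maxdim=2)["dgms"]
for dim, dgm in enumerate(dgms):
    lifetimes = np.sort(dgm[:, 1] - dgm[:, 0])[::-1]
    print(f"H{dim} longest lifetimes: {np.round(lifetimes[:3], 2)}")
```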
Recent results have shown the possibility of both reconstructing connectomes of small but biologically interesting circuits and extracting from these connectomes insights into their function. However, these reconstructions were heroic proof-of-concept experiments, requiring person-months of effort per neuron reconstructed, and will not scale to larger circuits, much less the brains of entire animals. In this paper we examine what will be required to generate and use substantially larger connectomes, finding five areas that need increased attention: firstly, imaging better suited to automatic reconstruction, with excellent z-resolution; secondly, automatic detection, validation, and measurement of synapses; thirdly, reconstruction methods that keep and use uncertainty metrics for every object, from initial images, through segmentation, reconstruction, and connectome queries; fourthly, processes that are fully incremental, so that the connectome may be used before it is fully complete; and finally, better tools for analysis of connectomes, once they are obtained.
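As one concrete reading of the third point, here is a hypothetical Python sketch (not from the paper) of what it means to keep an uncertainty metric attached to every object, from segmentation through merges to connectome queries. The `Segment` class, confidence arithmetic, and query function are invented for illustration.

```python
# Hypothetical sketch: every reconstructed object carries a confidence score
# that propagates through merges and gates downstream connectome queries.
from dataclasses import dataclass, field

@dataclass
class Segment:
    segment_id: int
    confidence: float            # e.g. probability the segment is error-free
    parents: list = field(default_factory=list)

def merge(a: Segment, b: Segment, merge_confidence: float) -> Segment:
    """A merged segment is only as trustworthy as its parts and the merge
    decision itself (independence assumed here for simplicity)."""
    return Segment(
        segment_id=max(a.segment_id, b.segment_id),
        confidence=a.confidence * b.confidence * merge_confidence,
        parents=[a, b],
    )

def query_synapses(synapses, min_confidence=0.9):
    """Incremental use: return only connections whose pre- and post-synaptic
    segments currently meet the confidence bar; the rest await proofreading."""
    return [s for s in synapses
            if s["pre"].confidence * s["post"].confidence >= min_confidence]

a = Segment(1, 0.98)
b = Segment(2, 0.95)
ab = merge(a, b, merge_confidence=0.9)
print(f"merged confidence: {ab.confidence:.3f}")
```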
The Mushroom Body (MB) is the primary location of stored associative memories in the Drosophila brain. We discuss recent advances in understanding the MB's neuronal circuits, made possible by advanced light microscopy and cell-type-specific genetic tools. We also review how the compartmentalized nature of the MB's organization allows this brain area to form and store memories with widely different dynamics.
The growing size of EM volumes is a significant barrier to findable, accessible, interoperable, and reusable (FAIR) sharing. Storage, sharing, visualization, and processing are all challenging at this scale. Here we discuss a recent development toward the standardized storage of volume electron microscopy (vEM) data that addresses many of the issues researchers face. The OME-Zarr format splits data into more manageable, performant chunks, enabling streaming-based access, and unifies important metadata such as multiresolution pyramid descriptions. The file format is designed for both centralized and remote storage (e.g., cloud storage or a file system) and is therefore ideal for sharing large data. By coalescing around a common, community-wide format, these benefits will expand as ever more data is made available to the scientific community.
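As a rough illustration, the sketch below writes a synthetic volume as OME-Zarr using the `zarr` and `ome-zarr-py` packages. The output path, chunk shape, and array contents are placeholder assumptions, and the exact writer API may differ across package versions.

```python
# Sketch: write a vEM-style volume as chunked, multiscale OME-Zarr.
# Assumes `pip install zarr ome-zarr`; data here are synthetic.
import numpy as np
import zarr
from ome_zarr.io import parse_url
from ome_zarr.writer import write_image

# Synthetic 3D volume standing in for an EM acquisition (z, y, x).
volume = np.random.randint(0, 255, size=(64, 512, 512), dtype=np.uint8)

# OME-Zarr splits the array into chunks and records a multiresolution
# pyramid plus metadata, so clients can stream only the region they view.
store = parse_url("example_volume.ome.zarr", mode="w").store
root = zarr.group(store=store)
write_image(
    image=volume,
    group=root,
    axes="zyx",
    storage_options=dict(chunks=(32, 128, 128)),  # chunk size is a tuning choice
)
# The same layout works on cloud object stores, which is what suits the
# format to remote, FAIR sharing of large datasets.
```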
Despite significant advances in neuroscience, the neural bases of intelligence remain poorly understood. Arguably the most elusive aspect of intelligence is the ability to make robust inferences that go far beyond one's experience. Animals categorize objects, learn to vocalize and may even estimate causal relationships - all in the face of data that is often ambiguous and sparse. Such inductive leaps are thought to result from the brain's ability to infer latent structure that governs the environment. However, we know little about the neural computations that underlie this ability. Recent advances in developing computational frameworks that can support efficient structure learning and inductive inference may provide insight into the underlying component processes and help pave the path for uncovering their neural implementation.
This mini-symposium aims to provide an integrated perspective on recent developments in optogenetics. Research in this emerging field combines optical methods with targeted expression of genetically encoded, protein-based probes to achieve experimental manipulation and measurement of neural systems with superior temporal and spatial resolution. The essential components of the optogenetic toolbox consist of two kinds of molecular devices: actuators and reporters, which respectively enable light-mediated control or monitoring of molecular processes. The first generation of genetically encoded calcium reporters, fluorescent proteins, and neural activators has already had a great impact on neuroscience. Now, a second generation of voltage reporters, neural silencers, and functionally extended fluorescent proteins hold great promise for continuing this revolution. In this review, we will evaluate and highlight the limitations of presently available optogenetic tools and discuss where these technologies and their applications are headed in the future.
The early and accurate differential diagnosis of parkinsonian disorders is still a significant challenge for clinicians. In recent years, a number of studies have used magnetic resonance imaging data combined with machine learning and statistical classifiers to successfully differentiate between different forms of Parkinsonism. However, several questions and methodological issues remain to be addressed in order to minimize bias and artefact-driven classification. In this study, we compared different approaches for feature selection, as well as different magnetic resonance imaging modalities, with well-matched patient groups, while tightly controlling for data quality issues related to patient motion. Our sample was drawn from a cohort of 69 healthy controls and patients with idiopathic Parkinson's disease (n = 35), progressive supranuclear palsy Richardson's syndrome (n = 52) and corticobasal syndrome (n = 36). Participants underwent standardized T1-weighted and diffusion-weighted magnetic resonance imaging. Strict data quality control and group matching reduced the control and patient numbers to 43, 32, 33 and 26, respectively. We compared two different methods for feature selection and dimensionality reduction: whole-brain principal components analysis, and an anatomical region-of-interest based approach. In both cases, support vector machines were used to construct a statistical model for pairwise classification of healthy controls and patients. The accuracy of each model was estimated using a leave-two-out cross-validation approach, as well as an independent validation using a different set of subjects. Our cross-validation results suggest that using principal components analysis for feature extraction provides higher classification accuracies when compared to a region-of-interest based approach. However, the differences between the two feature extraction methods were significantly reduced when an independent sample was used for validation, suggesting that the principal components analysis approach may be more vulnerable to overfitting with cross-validation. Both T1-weighted and diffusion magnetic resonance imaging data could be used to successfully differentiate between subject groups, with neither modality outperforming the other across all pairwise comparisons in the cross-validation analysis. However, features obtained from diffusion magnetic resonance imaging data resulted in significantly higher classification accuracies when an independent validation cohort was used. Overall, our results support the use of statistical classification approaches for the differential diagnosis of parkinsonian disorders. However, classification accuracy can be affected by group size, age, sex and movement artefacts. With appropriate controls and out-of-sample cross-validation, diagnostic biomarker evaluation including magnetic resonance imaging based classifiers may be an important adjunct to clinical evaluation.
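The analysis pattern described, PCA feature extraction feeding a support vector machine evaluated with leave-two-out cross-validation, can be sketched with scikit-learn. The data below are synthetic stand-ins for the study's MRI features, and the dimensions and hyperparameters are illustrative assumptions.

```python
# Sketch: PCA + linear SVM with leave-two-out cross-validation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import LeavePOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2000))   # 60 subjects x 2000 voxel/ROI features
y = rng.integers(0, 2, size=60)   # labels for one pairwise comparison

# Fitting PCA inside the pipeline means each cross-validation fold derives
# its components from training subjects only, avoiding information leakage
# (leakage would inflate exactly the kind of optimism the abstract warns of).
model = make_pipeline(StandardScaler(), PCA(n_components=20),
                      SVC(kernel="linear"))

scores = cross_val_score(model, X, y, cv=LeavePOut(p=2))
print(f"leave-two-out accuracy: {scores.mean():.2f}")
```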
Digital light microscopy provides powerful tools for quantitatively probing the real-time dynamics of subcellular structures. While the power of modern microscopy techniques is undeniable, rigorous record-keeping and quality control are required to ensure that imaging data may be properly interpreted (quality), reproduced (reproducibility), and used to extract reliable information and scientific knowledge which can be shared for further analysis (value). Keeping notes on microscopy experiments and quality control procedures ought to be straightforward, as the microscope is a machine whose components are defined and whose performance is measurable. Nevertheless, to date, no universally adopted community-driven specifications exist that delineate the required information about the microscope hardware and acquisition settings (i.e., microscopy “data provenance” metadata) and the minimally accepted calibration metrics (i.e., microscopy quality control metadata) that should be automatically recorded by both commercial microscope manufacturers and customized microscope developers. In the absence of agreed guidelines, it is inherently difficult for scientists to create comprehensive records of imaging experiments and ensure the quality of the resulting image data, or for manufacturers to incorporate standardized reporting and performance metrics. To add to the confusion, microscopy experiments vary greatly in aim and complexity, ranging from purely descriptive work to complex, quantitative and even sub-resolution studies that require more detailed reporting and quality control measures.
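To make the two metadata categories concrete, here is a purely hypothetical record structure. As the text notes, no community standard yet exists, so every field name and value below is an invented illustration rather than a specification.

```python
# Hypothetical sketch of the two metadata categories discussed:
# data-provenance fields (hardware and acquisition settings) and
# quality-control fields (calibration metrics).
from dataclasses import dataclass

@dataclass
class ProvenanceMetadata:
    microscope_model: str
    objective: str            # e.g. "60x/1.40 oil"
    excitation_nm: float
    exposure_ms: float
    pixel_size_um: float

@dataclass
class QualityControlMetadata:
    field_illumination_uniformity: float  # 0-1, from a calibration slide
    stage_drift_nm_per_min: float
    psf_fwhm_um: float                    # measured point spread function

record = {
    "provenance": ProvenanceMetadata("Example-Scope", "60x/1.40 oil",
                                     488.0, 50.0, 0.108),
    "quality_control": QualityControlMetadata(0.92, 12.0, 0.25),
}
print(record["provenance"].pixel_size_um)
```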
Understanding the development of complex multicellular organisms as a function of the underlying cell behavior is one of the most fundamental goals of developmental biology. The ability to quantitatively follow cell dynamics in entire developing embryos is an indispensable step towards such a system-level understanding. In recent years, light-sheet fluorescence microscopy has emerged as a particularly promising strategy for recording the in vivo data required to realize this goal. Using light-sheet fluorescence microscopy, entire complex organisms can be rapidly imaged in three dimensions at sub-cellular resolution, achieving high temporal sampling and excellent signal-to-noise ratio without damaging the living specimen or bleaching fluorescent markers. The resulting datasets allow following individual cells in vertebrate and higher invertebrate embryos over up to several days of development. However, the complexity and size of these multi-terabyte recordings typically preclude comprehensive manual analyses. Thus, new computational approaches are required to automatically segment cell morphologies, accurately track cell identities and systematically analyze cell behavior throughout embryonic development. We review current efforts in light-sheet microscopy and bioimage informatics towards this goal, and argue that comprehensive cell lineage reconstructions are finally within reach for many key model organisms, including fruit fly, zebrafish and mouse.
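The tracking step can be illustrated with a hedged sketch of frame-to-frame linking by minimal centroid displacement. Real lineage reconstruction must additionally handle cell divisions, track gaps, and cells entering or leaving the volume; the function name, distances, and sizes here are arbitrary illustrative choices.

```python
# Sketch: link segmented cell centroids between consecutive time points by
# minimizing total displacement (Hungarian algorithm via SciPy).
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def link_frames(centroids_t, centroids_t1, max_distance=10.0):
    """Assign each cell at time t to at most one cell at t+1."""
    cost = cdist(centroids_t, centroids_t1)   # pairwise 3D distances
    rows, cols = linear_sum_assignment(cost)
    # Reject implausibly long links; in a full tracker these become
    # candidate division, death, or entry events.
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_distance]

t0 = np.random.rand(50, 3) * 100              # 50 cells, 3D positions (um)
t1 = t0 + np.random.normal(scale=1.0, size=t0.shape)
print(f"{len(link_frames(t0, t1))} of 50 cells linked")
```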