2605 Janelia Publications
Showing 171-180 of 2605 results

Pannexins are large-pore ion channels expressed throughout the mammalian brain that participate in various neuropathologies; however, their physiological roles remain obscure. Here, we report that pannexin1 channels (Panx1) can be synaptically activated under physiological recording conditions in rodent acute hippocampal slices. Specifically, NMDA receptor (NMDAR)-mediated responses at the mossy fiber to CA3 pyramidal cell synapse were followed by a slow postsynaptic inward current that could activate CA3 pyramidal cells but was absent in Panx1 knockout mice. Immunoelectron microscopy revealed that Panx1 was localized near the postsynaptic density. Further, Panx1-mediated currents were potentiated by metabotropic receptors and bidirectionally modulated by burst-timing-dependent plasticity of NMDAR-mediated transmission. Lastly, Panx1 channels were preferentially recruited when NMDAR activation enters a supralinear regime, resulting in temporally delayed burst-firing. Thus, Panx1 can contribute to synaptic amplification and broaden the temporal associativity window for co-activated pyramidal cells, thereby supporting the auto-associative functions of the CA3 region.
Sighted animals use visual signals to discern directional motion in their environment. Motion is not directly detected by visual neurons, and it must instead be computed from light signals that vary over space and time. This makes visual motion estimation a near universal neural computation, and decades of research have revealed much about the algorithms and mechanisms that generate directional signals. The idea that sensory systems are optimized for performance in natural environments has deeply impacted this research. In this article, we review the many ways that optimization has been used to quantitatively model visual motion estimation and reveal its underlying principles. We emphasize that no single optimization theory has dominated the literature. Instead, researchers have adeptly incorporated different computational demands and biological constraints that are pertinent to the specific brain system and animal model under study. The successes and failures of the resulting optimization models have thereby provided insights into how computational demands and biological constraints together shape neural computation.
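One of the oldest algorithms in this literature is the Hassenstein-Reichardt correlator, which recovers direction by multiplying a delayed signal from one point in space with an undelayed signal from its neighbor and subtracting the mirror-symmetric product. The sketch below is a minimal toy version in NumPy, not code from the article; the simple one-frame time shift standing in for a delay filter, and the sinusoidal test stimulus, are illustrative choices:

```python
import numpy as np

def reichardt_correlator(stimulus, delay=1):
    """Minimal Hassenstein-Reichardt correlator.

    Each output unit multiplies the delayed signal from one
    photoreceptor with the instantaneous signal from its neighbor,
    then subtracts the mirror-symmetric product, yielding a signed,
    direction-selective response.

    stimulus: 2-D array, shape (time, space).
    Returns an array of shape (time - delay, space - 1).
    """
    s = np.asarray(stimulus, dtype=float)
    left, right = s[:, :-1], s[:, 1:]
    # a plain time shift stands in for the biological delay filter
    left_d, right_d = left[:-delay], right[:-delay]
    left_i, right_i = left[delay:], right[delay:]
    return left_d * right_i - right_d * left_i

# a rightward-moving sinusoidal pattern yields a net positive response
t, x = np.meshgrid(np.arange(40), np.arange(20), indexing="ij")
rightward = np.sin(0.5 * (x - t))
print(reichardt_correlator(rightward).mean() > 0)
```

The opponent subtraction is what makes the output signed: motion in the preferred direction drives it positive, motion in the opposite direction drives it negative, and a stationary pattern averages to zero.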
In the dynamic landscape of scientific research, imaging core facilities are vital hubs propelling collaboration and innovation at the technology development and dissemination frontier. Here, we present a collaborative effort led by Global BioImaging (GBI), introducing international recommendations geared towards elevating the careers of Imaging Scientists in core facilities. Despite the critical role of Imaging Scientists in modern research ecosystems, challenges persist in recognising their value, aligning performance metrics and providing avenues for career progression and job security. The challenges encompass a mismatch between classic academic career paths and service-oriented roles, resulting in a lack of understanding regarding the value and impact of Imaging Scientists and core facilities and how to evaluate them properly. They further include challenges around sustainability, dedicated training opportunities and the recruitment and retention of talent. Structured across these interrelated sections, the recommendations within this publication aim to propose globally applicable solutions to navigate these challenges. These recommendations apply equally to colleagues working in other core facilities and research institutions through which access to technologies is facilitated and supported. This publication emphasises the pivotal role of Imaging Scientists in advancing research programs and presents a blueprint for fostering their career progression within institutions all around the world.
We address the problem of inferring the number of independently blinking fluorescent light emitters, when only their combined intensity contributions can be observed at each timepoint. This problem occurs regularly in light microscopy of objects that are smaller than the diffraction limit, where one wishes to count the number of fluorescently labelled subunits. Our proposed solution directly models the photo-physics of the system, as well as the blinking kinetics of the fluorescent emitters, as a fully differentiable hidden Markov model. Given a trace of intensity over time, our model jointly estimates the parameters of the intensity distribution per emitter, their blinking rates, as well as a posterior distribution of the total number of fluorescent emitters. We show that our model is consistently more accurate and increases the range of countable subunits by a factor of two compared to current state-of-the-art methods, which count based on autocorrelation and blinking frequency. Furthermore, we demonstrate that our model can be used to investigate the effect of blinking kinetics on counting ability, and therefore can inform experimental conditions that will maximize counting accuracy.
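The heart of this kind of approach, a forward pass over a hidden Markov model whose hidden state is the number of emitters currently "on", can be illustrated with a small non-differentiable toy. Everything below (the per-frame on/off switching probabilities, Gaussian camera model, uniform prior, and model-comparison loop) is an illustrative sketch under simplified assumptions, not the authors' model or API:

```python
import math
import numpy as np

def _binom_pmf(j, n, p):
    return math.comb(int(n), int(j)) * p ** j * (1 - p) ** (n - j)

def trace_log_likelihood(trace, n_emitters, p_on, p_off, mu, sigma):
    """Forward-algorithm log-likelihood of an intensity trace under a
    toy blinking model: each of `n_emitters` fluorophores independently
    switches on with probability p_on and off with probability p_off
    per frame, and the detector reads Normal(k * mu, sigma) when k
    emitters are on.
    """
    trace = np.asarray(trace, dtype=float)
    N = n_emitters
    states = np.arange(N + 1)            # hidden state: number of emitters on
    # transition matrix: from k on-emitters to k2, summing over how many
    # of the k switch off and how many of the remaining N - k switch on
    T = np.zeros((N + 1, N + 1))
    for k in states:
        for k2 in states:
            for off in range(k + 1):
                on = k2 - (k - off)
                if 0 <= on <= N - k:
                    T[k, k2] += _binom_pmf(off, k, p_off) * _binom_pmf(on, N - k, p_on)
    # Gaussian emission probabilities for every frame and state
    emis = np.exp(-0.5 * ((trace[:, None] - states * mu) / sigma) ** 2) \
        / (sigma * np.sqrt(2 * np.pi))
    # scaled forward pass with a uniform prior over states
    alpha = emis[0] / (N + 1)
    log_lik = np.log(alpha.sum()); alpha /= alpha.sum()
    for e in emis[1:]:
        alpha = (alpha @ T) * e
        log_lik += np.log(alpha.sum()); alpha /= alpha.sum()
    return log_lik

# model comparison: the true emitter count should maximize the likelihood
rng = np.random.default_rng(0)
true_on = (rng.random((200, 3)) < 0.5).sum(axis=1)   # 3 emitters, 50% duty cycle
trace = rng.normal(true_on * 10.0, 1.0)              # per-emitter step of 10, unit noise
lls = [trace_log_likelihood(trace, n, 0.5, 0.5, 10.0, 1.0) for n in (2, 3, 4)]
```

Evaluating `lls` for candidate counts of 2, 3, and 4 emitters lets the true count be picked by maximum likelihood; the model described in the abstract goes further by making the whole computation differentiable, so the intensity and blinking parameters can be fit jointly and a posterior over counts recovered.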
Cholecystokinin-expressing interneurons (CCKIs) are hypothesized to shape pyramidal cell-firing patterns and regulate network oscillations and related network state transitions. To directly probe their role in the CA1 region, we silenced their activity using optogenetic and chemogenetic tools in mice. Opto-tagged CCKIs revealed a heterogeneous population, and their optogenetic silencing triggered wide disinhibitory network changes affecting both pyramidal cells and other interneurons. CCKI silencing enhanced pyramidal cell burst firing and altered the temporal coding of place cells: theta phase precession was disrupted, whereas sequence reactivation was enhanced. Chemogenetic CCKI silencing did not alter the acquisition of spatial reference memories on the Morris water maze but enhanced the recall of contextual fear memories and enabled selective recall when similar environments were tested. This work suggests the key involvement of CCKIs in the control of place-cell temporal coding and the formation of contextual memories.
Vision provides animals with detailed information about their surroundings, conveying diverse features such as color, form, and movement across the visual scene. Computing these parallel spatial features requires a large and diverse network of neurons, such that in animals as distant as flies and humans, visual regions comprise half the brain’s volume. These visual brain regions often reveal remarkable structure-function relationships, with neurons organized along spatial maps with shapes that directly relate to their roles in visual processing. To unravel the stunning diversity of a complex visual system, a careful mapping of the neural architecture matched to tools for targeted exploration of that circuitry is essential. Here, we report a new connectome of the right optic lobe from a male Drosophila central nervous system FIB-SEM volume and a comprehensive inventory of the fly’s visual neurons. We developed a computational framework to quantify the anatomy of visual neurons, establishing a basis for interpreting how their shapes relate to spatial vision. By integrating this analysis with connectivity information, neurotransmitter identity, and expert curation, we classified the 53,000 neurons into 727 types, about half of which are systematically described and named for the first time. Finally, we share an extensive collection of split-GAL4 lines matched to our neuron type catalog. Together, this comprehensive set of tools and data unlocks new possibilities for systematic investigations of vision in Drosophila, a foundation for a deeper understanding of sensory processing.
The intestine is critical not only for processing nutrients but also for protecting the organism from the environment. These functions are mainly carried out by the epithelium, which is constantly self-renewed. Many genes and pathways can influence intestinal epithelial cell proliferation. Among them is mTORC1, whose activation increases cell proliferation. Here, we report the first intestinal epithelial cell (IEC)-specific knockout of an amino acid transporter capable of activating mTORC1. We show that the transporter, SLC7A5, is highly expressed in mouse intestinal crypts and that its loss reduces mTORC1 signaling. Surprisingly, adult knockout intestinal crypts have increased cell proliferation but reduced mature Paneth cells. Goblet cells, the other major secretory cell type in the small intestine, are increased in the crypts but reduced in the villi. Analyses with scRNA-seq and electron microscopy revealed dedifferentiation of Paneth cells in knockout mice, leading to markedly reduced secretory granules with little effect on Paneth cell number. Thus, SLC7A5 likely regulates secretory cell differentiation to affect the stem cell niche and indirectly regulate cell proliferation.
Leptin is an adipose tissue hormone that maintains homeostatic control of adipose tissue mass by regulating the activity of specific neural populations controlling appetite and metabolism [1]. Leptin regulates food intake by inhibiting orexigenic agouti-related protein (AGRP) neurons and activating anorexigenic pro-opiomelanocortin (POMC) neurons [2]. However, while AGRP neurons regulate food intake on a rapid time scale, acute activation of POMC neurons has only a minimal effect [3–5]. This has raised the possibility that there is a heretofore unidentified leptin-regulated neural population that suppresses appetite on a rapid time scale. Here, we report the discovery of a novel population of leptin-target neurons expressing basonuclin 2 (Bnc2) that acutely suppress appetite by directly inhibiting AGRP neurons. Opposite to the effect of AGRP activation, BNC2 neuronal activation elicited a place preference indicative of positive valence in hungry but not fed mice. The activity of BNC2 neurons is finely tuned by leptin, sensory food cues, and nutritional status. Finally, deleting leptin receptors in BNC2 neurons caused marked hyperphagia and obesity, similar to that observed in a leptin receptor knockout in AGRP neurons. These data indicate that BNC2-expressing neurons are a key component of the neural circuit that maintains energy balance, thus filling an important gap in our understanding of the regulation of food intake and leptin action.
Near-infrared (NIR) fluorescent reporters provide additional colors for highly multiplexed imaging of cells and organisms, and enable imaging with less toxic light and higher contrast and depth. Here, we present the engineering of nirFAST, a small tunable chemogenetic NIR fluorescent reporter that is brighter than top-performing NIR fluorescent proteins in cultured mammalian cells. nirFAST is a small genetically encoded protein of 14 kDa that binds and stabilizes the fluorescent state of synthetic, highly cell-permeant, fluorogenic chromophores (so-called fluorogens) that are otherwise dark when free. Engineered to emit NIR light, nirFAST can also emit far-red or red light through a change of chromophore. nirFAST allows the imaging of proteins in live cultured mammalian cells, chicken embryo tissues, and zebrafish larvae. Its near-infrared fluorescence provides an additional color for high spectral multiplexing. We showed that nirFAST is well suited for stimulated emission depletion (STED) nanoscopy, allowing the efficient imaging of proteins with subdiffraction resolution in live cells. nirFAST enabled the design of a chemogenetic green-NIR fluorescent ubiquitination-based cell cycle indicator (FUCCI) for monitoring the different phases of the cell cycle. Finally, bisection of nirFAST allowed the design of a fluorogenic chemically induced dimerization technology with NIR fluorescence readout, enabling the control and visualization of protein proximity.
Deep neural networks have been applied to improve the image quality of fluorescence microscopy imaging. Previous methods are based on convolutional neural networks (CNNs), which generally require time-consuming training of separate models for each new imaging experiment, impairing applicability and generalization. Once a model is trained (typically with tens to hundreds of image pairs), it can then be used to enhance new images that resemble the training data. In this study, we propose a novel imaging-transformer-based model, the Convolutional Neural Network Transformer (CNNT), to outperform CNN networks for image denoising. In our scheme, we trained a single CNNT-based backbone model from pairwise high-low SNR images for one type of fluorescence microscope (instant structured illumination microscopy, iSIM). Fast adaptation to new applications was achieved by fine-tuning the backbone on only 5-10 sample pairs per new experiment. Results show that the CNNT backbone and fine-tuning scheme significantly reduce training time and improve image quality, outperforming separate models trained with CNN approaches such as RCAN and Noise2Fast. Here we show three examples of the efficacy of this approach, denoising wide-field, two-photon, and confocal fluorescence data. In the confocal experiment, a 5-by-5 tiled acquisition, the fine-tuned CNNT model reduces the scan time from one hour to eight minutes, with improved quality.
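The pretrain-then-fine-tune scheme described above can be caricatured with a linear toy model: learn a denoising filter on many noisy/clean pairs from one instrument, then adapt it to a "new experiment" with only a handful of pairs and a few optimization steps, starting from the pretrained weights. The model, synthetic data, and parameters below are illustrative stand-ins, not the CNNT architecture:

```python
import numpy as np

def windows(noisy, k):
    """Extract length-k sliding windows (edge-padded) so that denoising
    becomes a single matrix product: pred = windows(noisy, k) @ kernel."""
    pad = k // 2
    p = np.pad(noisy, ((0, 0), (pad, pad)), mode="edge")
    idx = np.arange(noisy.shape[1])[:, None] + np.arange(k)[None, :]
    return p[:, idx]                       # shape (batch, length, k)

def fit(noisy, clean, kernel, lr=0.1, steps=300):
    """Plain gradient descent on mean-squared error; the learned kernel
    is a toy stand-in for a denoising network's weights."""
    W = windows(noisy, len(kernel))
    for _ in range(steps):
        pred = W @ kernel
        grad = 2 * np.einsum("blk,bl->k", W, pred - clean) / pred.size
        kernel = kernel - lr * grad
    return kernel

rng = np.random.default_rng(0)

def make_pairs(n, length=64, noise=0.3):
    """Synthetic high/low-SNR signal pairs (sine 'structures' + noise)."""
    t = np.linspace(0, 2 * np.pi, length)
    clean = np.sin(t)[None] * rng.uniform(0.5, 1.5, (n, 1))
    return clean + noise * rng.normal(size=(n, length)), clean

# "pretraining": many pairs from the first microscope
backbone = fit(*make_pairs(200), kernel=np.zeros(7))
# "fine-tuning": only 8 pairs from a new experiment, few steps,
# starting from the pretrained kernel rather than from scratch
tuned = fit(*make_pairs(8), kernel=backbone, steps=50)
```

Because fine-tuning starts from weights that already encode the denoising task, it needs far fewer pairs and steps than training from scratch, which is the practical payoff the abstract reports for the CNNT backbone.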