2274 Publications
Nearby neurons, sharing the same locations within the mouse whisker map, can have dramatically distinct response properties. To understand the significance of this diversity, we studied the relationship between the responses of individual neurons and their projection targets in the mouse barrel cortex. Neurons projecting to primary motor cortex (MI) or secondary somatosensory area (SII) were labeled with red fluorescent protein (RFP) using retrograde viral infection. We used in vivo two-photon Ca²⁺ imaging to map the responses of RFP-positive and neighboring L2/3 neurons to whisker deflections. Neurons projecting to MI displayed larger receptive fields compared with other neurons, including those projecting to SII. Our findings support the view that intermingled neurons in primary sensory areas send specific stimulus features to different parts of the brain.
Linking activity in specific cell types with perception, cognition, and action requires quantitative behavioral experiments in genetic model systems such as the mouse. In head-fixed primates, the combination of precise stimulus control, monitoring of motor output, and physiological recordings over large numbers of trials is the foundation on which many conceptually rich and quantitative studies have been built. Choice-based, quantitative behavioral paradigms for head-fixed mice have not been described previously. Here, we report a somatosensory absolute object localization task for head-fixed mice. Mice actively used their mystacial vibrissae (whiskers) to sense the location of a vertical pole presented to one side of the head and reported with licking whether the pole was in a target (go) or a distracter (no-go) location. Mice performed hundreds of trials with high performance (>90% correct) and localized to <0.95 mm (<6 degrees of azimuthal angle). Learning occurred over 1-2 weeks and was observed both within and across sessions. Mice could perform object localization with single whiskers. Silencing barrel cortex abolished performance to chance levels. We measured whisker movement and shape for thousands of trials. Mice moved their whiskers in a highly directed, asymmetric manner, focusing on the target location. Translation of the base of the whiskers along the face contributed substantially to whisker movements. Mice tended to maximize contact with the go (rewarded) stimulus while minimizing contact with the no-go stimulus. We conjecture that this may amplify differences in evoked neural activity between trial types.
Biological specimens are rife with optical inhomogeneities that seriously degrade imaging performance under all but the most ideal conditions. Measuring and then correcting for these inhomogeneities is the province of adaptive optics. Here we introduce an approach to adaptive optics in microscopy wherein the rear pupil of an objective lens is segmented into subregions, and light is directed individually to each subregion to measure, by image shift, the deflection faced by each group of rays as they emerge from the objective and travel through the specimen toward the focus. Applying our method to two-photon microscopy, we could recover near-diffraction-limited performance from a variety of biological and nonbiological samples exhibiting aberrations large or small and smoothly varying or abruptly changing. In particular, results from fixed mouse cortical slices illustrate our ability to improve signal and resolution to depths of 400 µm.
Commentary: Introduces a new, zonal approach to adaptive optics (AO) in microscopy suitable for highly inhomogeneous and/or scattering samples such as living tissue. The method is unique in its ability to handle large-amplitude aberrations (>20 wavelengths), including spatially complex aberrations involving high-order modes beyond the ability of most AO actuators to correct. As befits a technique designed for in vivo fluorescence imaging, it is also photon efficient.
Although used here in conjunction with two-photon microscopy to demonstrate correction deep into scattering tissue, the same principle of pupil segmentation might be profitably adapted to other point-scanning or widefield methods. For example, plane illumination microscopy of multicellular specimens is often beset by substantial aberrations, and all far-field superresolution methods are exquisitely sensitive to aberrations.
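The core measurement in this pupil-segmentation scheme is the lateral image shift produced by light passing through one pupil subregion, which is proportional to the local wavefront tilt. A minimal sketch of that step, using an FFT-based cross-correlation to estimate the shift (the estimator and function names are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def estimate_shift(reference, image):
    """Estimate the integer (row, col) translation of `image` relative to
    `reference` via FFT cross-correlation.  In a pupil-segmentation AO
    scheme, this shift is proportional to the wavefront tilt introduced
    by one pupil subregion."""
    xcorr = np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(reference))).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Peaks past half the image size wrap around to negative displacements.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, xcorr.shape))
```

Repeating this measurement for each subregion yields one tilt per pupil zone, from which a piecewise wavefront correction can be assembled.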
Neurons derived from the same progenitor may acquire different fates according to their birth timing/order. To reveal temporally guided cell fates, we must determine neuron types as well as their lineage relationships and times of birth. Recent advances in genetic lineage analysis and fate mapping are facilitating such studies. For example, high-resolution lineage analysis can identify each sequentially derived neuron of a lineage and has revealed abrupt temporal identity changes in diverse Drosophila neuronal lineages. In addition, fate mapping of mouse neurons made from the same pool of precursors shows production of specific neuron types in specific temporal patterns. The tools used in these analyses are helping to further our understanding of the genetics of neuronal temporal identity.
Automatic alignment (registration) of 3D images of adult fruit fly brains is often influenced by the significant displacement of the relative locations of the two optic lobes (OLs) and the center brain (CB). In one of our ongoing efforts to produce a better image alignment pipeline of adult fruit fly brains, we consider separating CB and OLs and aligning them independently. This paper reports our automatic method to segregate CB and OLs, in particular under conditions where the signal-to-noise ratio (SNR) is low, the variation in image intensity is large, and the relative displacement of OLs and CB is substantial. We design an algorithm to find a minimum-cost 3D surface in a 3D image stack to best separate an OL (of one side, either left or right) from CB. This surface is defined as an aggregation of the respective minimum-cost curves detected in each individual 2D image slice. Each curve is defined by a list of control points that best segregate OL and CB. To obtain the locations of these control points, we derive an energy function that includes an image energy term defined by local pixel intensities and two internal energy terms that constrain the curve's smoothness and length. A gradient-descent method is used to optimize this energy function. To improve both the speed and robustness of the method, for each stack, the optimized control-point locations in one slice are taken as the initialization for the next slice. We have tested this approach on simulated and real 3D fly brain image stacks and demonstrated that this method can reasonably segregate OLs from CBs despite the aforementioned difficulties.
Protein-protein interactions are challenging targets for modulation by small molecules. Here, we propose an approach that harnesses the increasing structural coverage of protein complexes to identify small molecules that may target protein interactions. Specifically, we identify ligand and protein binding sites that overlap upon alignment of homologous proteins. Of the 2,619 protein structure families observed to bind proteins, 1,028 also bind small molecules (250-1000 Da), and 197 exhibit a statistically significant (p<0.01) overlap between ligand and protein binding positions. These "bi-functional positions", which bind both ligands and proteins, are particularly enriched in tyrosine and tryptophan residues, similar to "energetic hotspots" described previously, and are significantly less conserved than mono-functional and solvent exposed positions. Homology transfer identifies ligands whose binding sites overlap at least 20% of the protein interface for 35% of domain-domain and 45% of domain-peptide mediated interactions. The analysis recovered known small-molecule modulators of protein interactions as well as predicted new interaction targets based on the sequence similarity of ligand binding sites. We illustrate the predictive utility of the method by suggesting structural mechanisms for the effects of sanglifehrin A on HIV virion production, bepridil on the cellular entry of anthrax edema factor, and fusicoccin on vertebrate developmental pathways. The results, available at http://pibase.janelia.org, represent a comprehensive collection of structurally characterized modulators of protein interactions, and suggest that homologous structures are a useful resource for the rational design of interaction modulators.
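The overlap statistic at the heart of this analysis reduces to a set intersection over aligned residue positions. A hedged sketch (the function and its inputs are illustrative, not the pibase implementation):

```python
def interface_overlap(ligand_positions, interface_positions):
    """Given aligned residue positions that bind a small-molecule ligand
    and positions that form a protein-protein interface, return the
    fraction of the interface covered by ligand-binding positions and
    the shared ('bi-functional') positions themselves."""
    bifunctional = set(ligand_positions) & set(interface_positions)
    return len(bifunctional) / len(interface_positions), sorted(bifunctional)
```

Under the paper's criterion, an interaction would count as structurally targetable when this fraction reaches at least 0.2 (20% of the interface).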
Full reconstruction of neuron morphology is of fundamental interest for the analysis and understanding of neuron function. We have developed a novel method capable of tracing neurons in three-dimensional microscopy data automatically. In contrast to template-based methods, the proposed approach makes no assumptions about the shape or appearance of the neuron's body. Instead, an efficient seeding approach is applied to find significant pixels almost certainly within complex neuronal structures, and the tracing problem is solved by computing a tree structure on a graph connecting these seeds. In addition, an automated neuron comparison method is introduced for performance evaluation and structure analysis. The proposed algorithm is computationally efficient. Experiments on different types of data show promising results.
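The seed-then-connect idea can be illustrated with thresholded seed detection followed by a minimum spanning tree over the seed coordinates. The sketch below uses intensity thresholding and Euclidean edge weights with Prim's algorithm; the actual method's seeding criterion and edge costs are more sophisticated, so treat this as a generic stand-in:

```python
import numpy as np

def find_seeds(volume, threshold):
    """Toy seeding: voxel coordinates whose intensity exceeds `threshold`."""
    return np.argwhere(volume > threshold)

def mst_edges(points):
    """Prim's algorithm: connect the seed points into a tree whose edges
    minimize total Euclidean length, returned as (parent, child) pairs."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = np.linalg.norm(points - points[0], axis=1)  # distance to tree
    parent = np.zeros(n, dtype=int)
    edges = []
    for _ in range(n - 1):
        j = np.argmin(np.where(in_tree, np.inf, best))  # nearest outside node
        edges.append((parent[j], j))
        in_tree[j] = True
        d = np.linalg.norm(points - points[j], axis=1)
        closer = d < best
        best = np.where(closer, d, best)
        parent = np.where(closer, j, parent)
    return edges
```

A real tracer would weight edges by path intensity rather than straight-line distance, but the tree-over-seeds structure is the same.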
This paper addresses the problem of jointly clustering two segmentations of closely correlated images. We focus in particular on the application of reconstructing neuronal structures in over-segmented electron microscopy images. We formulate the problem of co-clustering as a quadratic semi-assignment problem and investigate convex relaxations using semidefinite and linear programming. We further introduce a linear programming method with a manageable number of constraints and present an approach for learning the cost function. Our method increases computational efficiency by orders of magnitude while maintaining accuracy, automatically finds the optimal number of clusters, and empirically tends to produce binary assignment solutions. We illustrate our approach in simulations and in experiments with real EM data.
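One common way to relax such a clustering objective to a linear program is with pairwise "same cluster" variables constrained by transitivity. The sketch below is a generic correlation-clustering-style LP, not the authors' exact semi-assignment formulation; negative pairwise costs reward merging fragments:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def cocluster_lp(cost):
    """LP relaxation over pairwise merge variables x[i,j] in [0,1], where
    x[i,j] = 1 means fragments i and j share a cluster.  Minimizes
    sum(cost[i,j] * x[i,j]) subject to transitivity constraints
    x[i,j] + x[j,k] - x[i,k] <= 1 for every triple."""
    n = cost.shape[0]
    pairs = list(itertools.combinations(range(n), 2))
    idx = {p: k for k, p in enumerate(pairs)}
    c = np.array([cost[i, j] for i, j in pairs])
    A, b = [], []
    for i, j, k in itertools.combinations(range(n), 3):
        # Three rotations of the transitivity inequality per triple.
        for p1, p2, p3 in (((i, j), (j, k), (i, k)),
                           ((i, j), (i, k), (j, k)),
                           ((i, k), (j, k), (i, j))):
            row = np.zeros(len(pairs))
            row[idx[p1]] = 1
            row[idx[p2]] = 1
            row[idx[p3]] = -1
            A.append(row)
            b.append(1.0)
    A_ub = np.array(A) if A else None
    b_ub = np.array(b) if b else None
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, 1))
    return {p: res.x[idx[p]] for p in pairs}
```

As the abstract notes for its own relaxation, such LPs often return binary solutions in practice even though the variables are only box-constrained.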