191 Janelia Publications
Showing 91-100 of 191 results

Understanding how the brain operates requires understanding how large sets of neurons function together. Modern recording technology makes it possible to simultaneously record the activity of hundreds of neurons, and technological developments will soon allow recording of thousands or tens of thousands. As with all experimental techniques, these methods are subject to confounds that complicate the interpretation of such recordings, and could lead to erroneous scientific conclusions. Here we discuss methods for assessing and improving the quality of data from these techniques and outline likely future directions in this field.
The diffraction-limited resolution of two-photon and confocal microscopes can be recovered using adaptive optics to explore detailed neuronal networks in the brains of zebrafish and mice in vivo.
The endoplasmic reticulum (ER) is an expansive, membrane-enclosed organelle that plays crucial roles in numerous cellular functions. We used emerging superresolution imaging technologies to clarify the morphology and dynamics of the peripheral ER, which contacts and modulates most other intracellular organelles. Peripheral components of the ER have classically been described as comprising both tubules and flat sheets. We show that this system consists almost exclusively of tubules at varying densities, including structures that we term ER matrices. Conventional optical imaging technologies had led to misidentification of these structures as sheets because of the dense clustering of tubular junctions and a previously uncharacterized rapid form of ER motion. The existence of ER matrices explains previous confounding evidence that had indicated the occurrence of ER “sheet” proliferation after overexpression of tubular junction–forming proteins.
Primary cilia are ubiquitous, microtubule-based organelles that play diverse roles in sensory transduction in many eukaryotic cells. They interrogate the cellular environment through chemosensing, osmosensing, and mechanosensing using receptors and ion channels in the ciliary membrane. Little is known about the mechanical and structural properties of the cilium and how these properties contribute to ciliary perception. We probed the mechanical responses of primary cilia from kidney epithelial cells [Madin-Darby canine kidney-II (MDCK-II)], which sense fluid flow in renal ducts. We found that, on manipulation with an optical trap, cilia deflect by bending along their length and pivoting around an effective hinge located below the basal body. The calculated bending rigidity indicates weak microtubule doublet coupling. Primary cilia of MDCK cells lack interdoublet dynein motors. Nevertheless, we found that the organelles display active motility. 3D tracking showed correlated fluctuations of the cilium and basal body. These angular movements seemed random but were dependent on ATP and cytoplasmic myosin-II in the cell cortex. We conclude that force generation by the actin cytoskeleton surrounding the basal body results in active ciliary movement. We speculate that actin-driven ciliary movement might tune and calibrate ciliary sensory functions.
The application of green-to-red photoconvertible fluorescent proteins (PCFPs) for in vivo studies in complex 3D tissue structures has remained limited because traditional near-UV photoconversion is not confined in the axial dimension, and photomodulation using axially confined, pulsed near-IR (NIR) lasers has proven inefficient. Confined primed conversion is a dual-wavelength continuous-wave (CW) illumination method that is capable of axially confined green-to-red photoconversion. Here we present a protocol to implement this technique with a commercial confocal laser-scanning microscope (CLSM); evaluate its performance on an in vitro setup; and apply primed conversion for in vivo labeling of single cells in developing zebrafish and mouse preimplantation embryos expressing the green-to-red photoconvertible protein Dendra2. The implementation requires a basic understanding of laser-scanning microscopy, and it can be performed within a single day once the required filter cube is manufactured.
The emerging field of connectomics aims to unlock the mysteries of the brain by understanding the connectivity between neurons. To map this connectivity, we acquire thousands of electron microscopy (EM) images with nanometer-scale resolution. After aligning these images, the resulting dataset has the potential to reveal the shapes of neurons and the synaptic connections between them. However, imaging the brain of even a tiny organism like the fruit fly yields terabytes of data. It can take years of manual effort to examine such image volumes and trace their neuronal connections. One solution is to apply image segmentation algorithms to help automate the tracing tasks. In this paper, we propose a novel strategy to apply such segmentation on very large datasets that exceed the capacity of a single machine. Our solution is robust to potential segmentation errors which could otherwise severely compromise the quality of the overall segmentation, for example those due to poor classifier generalizability or anomalies in the image dataset. We implement our algorithms in a Spark application which minimizes disk I/O, and apply them to a few large EM datasets, revealing both their effectiveness and scalability. We hope this work will encourage external contributions to EM segmentation by providing 1) a flexible plugin architecture that deploys easily on different cluster environments and 2) an in-memory representation of segmentation that could be conducive to new advances.
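The key difficulty the abstract above describes is stitching independently segmented blocks back into one consistent labeling. As a minimal sketch of that idea (not the paper's actual Spark implementation; all function names and the 1-D setting are illustrative), each "worker" labels its own block, and labels that touch across block boundaries are then merged with a union-find structure:

```python
# Sketch of block-wise segmentation with boundary stitching, the core idea
# behind distributing segmentation across machines. Names are illustrative.

def label_block(mask):
    """Connected-component labels for a 1-D binary mask (run labeling)."""
    labels = [0] * len(mask)
    current = 0
    for i, v in enumerate(mask):
        if v:
            if i == 0 or not mask[i - 1]:
                current += 1
            labels[i] = current
    return labels, current

class UnionFind:
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def segment_distributed(mask, block_size):
    """Label each block independently (as separate workers would),
    then merge labels that touch across block boundaries."""
    blocks = [mask[i:i + block_size] for i in range(0, len(mask), block_size)]
    offset = 0
    global_labels = []
    for b in blocks:
        labels, n = label_block(b)
        global_labels.extend(l + offset if l else 0 for l in labels)
        offset += n
    uf = UnionFind()
    for i in range(block_size, len(mask), block_size):
        if global_labels[i] and global_labels[i - 1]:
            uf.union(global_labels[i], global_labels[i - 1])
    # Canonicalize labels so merged components share one id.
    return [uf.find(l) if l else 0 for l in global_labels]
```

In the real system each block is a 3-D subvolume processed as a Spark task, and only the thin boundary faces (plus the label-equivalence table) need to be shared between machines, which is what keeps disk I/O low.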
We rely on movement to explore the environment, for example, by palpating an object. In somatosensory cortex, activity related to movement of digits or whiskers is suppressed, which could facilitate detection of touch. Movement-related suppression is generally assumed to involve corollary discharges. Here we uncovered a thalamocortical mechanism in which cortical fast-spiking interneurons, driven by sensory input, suppress movement-related activity in layer 4 (L4) excitatory neurons. In mice locating objects with their whiskers, neurons in the ventral posteromedial nucleus (VPM) fired in response to touch and whisker movement. Cortical L4 fast-spiking interneurons inherited these responses from VPM. In contrast, L4 excitatory neurons responded mainly to touch. Optogenetic experiments revealed that fast-spiking interneurons reduced movement-related spiking in excitatory neurons, enhancing selectivity for touch-related information during active tactile sensation. These observations suggest a fundamental computation performed by the thalamocortical circuit to accentuate salient tactile information.
Naïve Bayes Nearest Neighbour (NBNN) is a simple and effective framework which addresses many of the pitfalls of K-Nearest Neighbour (KNN) classification. It has yielded competitive results on several computer vision benchmarks. Its central tenet is that, rather than comparing a query to every example in a database while ignoring class information, NN searches are performed within each class, generating a score per class. A key problem with NN techniques, including NBNN, is that they fail when the data representation does not capture perceptual (e.g. class-based) similarity. NBNN circumvents this by using independent engineered descriptors (e.g. SIFT). To extend its applicability outside of image-based domains, we propose to learn a metric which captures perceptual similarity. Similar to how Neighbourhood Components Analysis optimizes a differentiable form of KNN classification, we propose 'Class Conditional' metric learning (CCML), which optimizes a soft form of the NBNN selection rule. Typical metric learning algorithms learn either a global or local metric. However, our proposed method can be adjusted to a particular level of locality by tuning a single parameter. An empirical evaluation on classification and retrieval tasks demonstrates that our proposed method clearly outperforms existing learned distance metrics across a variety of image and non-image datasets.
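The NBNN selection rule and its soft relaxation can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the class score sums, over query descriptors, the squared distance to that class's nearest descriptor, and the "soft" form replaces the hard arg-min over classes with a temperature-controlled softmin (the differentiable quantity a CCML-style objective would optimize):

```python
import numpy as np

def nbnn_scores(query_descs, class_descs):
    """NBNN: per-class sum over query descriptors of the squared distance
    to that class's nearest descriptor; lower is better."""
    scores = {}
    for c, descs in class_descs.items():
        d2 = ((query_descs[:, None, :] - descs[None, :, :]) ** 2).sum(-1)
        scores[c] = d2.min(axis=1).sum()
    return scores

def soft_class_posterior(scores, temperature=1.0):
    """Soft (differentiable) form of the NBNN selection rule:
    a numerically stable softmin over the class scores."""
    classes = list(scores)
    z = -np.array([scores[c] for c in classes]) / temperature
    w = np.exp(z - z.max())
    w /= w.sum()
    return dict(zip(classes, w))
```

Lowering the temperature sharpens the posterior toward the hard NBNN decision; metric learning would then backpropagate through these scores to shape the descriptor space.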
We propose a framework for detecting action patterns from motion sequences and modeling the sensory-motor relationship of animals, using a generative recurrent neural network. The network has a discriminative part (classifying actions) and a generative part (predicting motion), whose recurrent cells are laterally connected, allowing higher levels of the network to represent high level phenomena. We test our framework on two types of data, fruit fly behavior and online handwriting. Our results show that 1) taking advantage of unlabeled sequences, by predicting future motion, significantly improves action detection performance when training labels are scarce, 2) the network learns to represent high level phenomena such as writer identity and fly gender, without supervision, and 3) simulated motion trajectories, generated by treating motion prediction as input to the network, look realistic and may be used to qualitatively evaluate whether the model has learnt generative control rules.
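The architecture described above, a shared recurrent core with a discriminative head (action labels) and a generative head (next-frame motion), can be sketched as a forward pass. This is a minimal stand-in with made-up dimensions, not the paper's network; the closed-loop `rollout` mirrors how simulated trajectories are generated by feeding motion predictions back in as input:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions; none of these come from the paper.
D_IN, D_HID, N_ACTIONS = 4, 8, 3

# Shared recurrent core with two heads.
W_xh = rng.normal(0, 0.1, (D_HID, D_IN))
W_hh = rng.normal(0, 0.1, (D_HID, D_HID))
W_act = rng.normal(0, 0.1, (N_ACTIONS, D_HID))
W_mot = rng.normal(0, 0.1, (D_IN, D_HID))

def step(x, h):
    """One timestep: update hidden state, emit both heads' outputs."""
    h = np.tanh(W_xh @ x + W_hh @ h)
    action_logits = W_act @ h   # discriminative head (action classification)
    next_motion = W_mot @ h     # generative head (motion prediction)
    return h, action_logits, next_motion

def rollout(seed_frame, n_steps):
    """Closed-loop simulation: feed the predicted motion back in as the
    next input to generate a trajectory."""
    h = np.zeros(D_HID)
    x, frames = seed_frame, []
    for _ in range(n_steps):
        h, _, x = step(x, h)
        frames.append(x)
    return np.stack(frames)
```

Because the motion-prediction loss needs no labels, the generative head can be trained on unlabeled sequences, which is what improves action detection when labeled data are scarce.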
Capturing dynamic processes in live samples is a nontrivial task in biological imaging. Although fluorescence provides high specificity and contrast compared to other light microscopy techniques, the photophysical principles of this method can have a harmful effect on the sample. Current advances in light sheet microscopy have created a novel imaging toolbox that allows for rapid acquisition of high-resolution fluorescent images with minimal perturbation of the processes of interest. Each unique design has its own advantages and limitations. In this review, we describe several cutting-edge light sheet microscopes and their optimal applications.