15 Janelia Publications
For most model organisms in neuroscience, research into visual processing in the brain is difficult because of a lack of high-resolution maps that capture complex neuronal circuitry. The microinsect Megaphragma viggianii, because of its small size and non-trivial behavior, provides a unique opportunity for tractable whole-organism connectomics. We image its whole head using serial electron microscopy. We reconstruct its compound eye and analyze the optical properties of the ommatidia as well as the connectome of the first visual neuropil, the lamina. Compared with the fruit fly and the honeybee, the Megaphragma visual system is highly simplified: it has 29 ommatidia per eye and 6 lamina neuron types. We report features that are both stereotypical among most ommatidia and specialized to some. By identifying the "barebones" circuits critical for flying insects, our results will facilitate constructing computational models of visual processing in insects.
Medial frontal cortical areas are thought to play a critical role in the brain's ability to flexibly deploy strategies that are effective in complex settings. Still, the specific circuit computations that underpin this foundational aspect of intelligence remain unclear. Here, by examining neural ensemble activity in rats that sample different strategies in a self-guided search for latent task structure, we demonstrate robust tracking of individual strategy prevalence in the anterior cingulate cortex (ACC), especially in an area homologous to primate area 32D. Prevalence encoding in the ACC is wide-scale, independent of reward delivery, and persists through a substantial ensemble reorganization that tags ACC representations with contextual content. Our findings argue that ACC ensemble dynamics are structured by a summary statistic of recent behavioral choices, raising the possibility that the ACC plays a role in estimating, through statistical learning, which actions promote the occurrence of events in the environment.
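A minimal illustrative sketch of the kind of "summary statistic of recent behavioral choices" described above (not the estimator used in the study): an exponentially weighted running prevalence of each strategy across trials. The function name, smoothing constant, and data shapes are hypothetical.

```python
# Illustrative sketch, not the paper's estimator: a running, exponentially
# weighted prevalence of each strategy over recent trials.
import numpy as np

def strategy_prevalence(choices, n_strategies, tau=10.0):
    """Running prevalence of each strategy, smoothed over ~tau trials.

    choices: 1-D array of strategy indices (one per trial).
    Returns an array of shape (n_trials, n_strategies); rows sum to 1.
    """
    alpha = 1.0 / tau                                   # update rate per trial
    prevalence = np.full(n_strategies, 1.0 / n_strategies)
    out = np.empty((len(choices), n_strategies))
    for t, choice in enumerate(choices):
        onehot = np.zeros(n_strategies)
        onehot[choice] = 1.0
        prevalence = (1 - alpha) * prevalence + alpha * onehot
        out[t] = prevalence
    return out

# Example: prevalence of 3 hypothetical strategies over 200 trials.
rng = np.random.default_rng(0)
trace = strategy_prevalence(rng.integers(0, 3, size=200), n_strategies=3)
```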
Ionic driving forces provide the net electromotive force for ion movement across receptors, channels, and transporters, and are a fundamental property of all cells. In the brain, for example, fast synaptic inhibition is mediated by chloride-permeable GABAA receptors, and single-cell intracellular recordings have been the only method for estimating driving forces across these receptors (DFGABAA). Here we present a new tool for quantifying inhibitory receptor driving force, named ORCHID: all-Optical Reporting of CHloride Ion Driving force. We demonstrate ORCHID's ability to provide accurate, high-throughput measurements of resting and dynamic DFGABAA from genetically targeted cell types over multiple timescales. ORCHID confirms theoretical predictions about the biophysical mechanisms that establish DFGABAA, reveals novel differences in DFGABAA between neurons and astrocytes, and affords the first in vivo measurements of intact DFGABAA. This work extends our understanding of inhibitory synaptic transmission and establishes a precedent for all-optical methods to assess ionic driving forces.
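For orientation, the quantity being measured follows the textbook relation DFGABAA = Vm − EGABAA. The sketch below is not the ORCHID method itself (which reports these quantities optically); it simply approximates EGABAA by the Nernst potential for chloride, with hypothetical example concentrations and membrane potential.

```python
# Textbook driving-force calculation, assuming a purely chloride-permeable
# GABA-A receptor. All numeric values are hypothetical examples.
import math

def nernst_cl(cl_in_mM, cl_out_mM, temp_c=37.0):
    """Nernst potential (mV) for Cl- (valence -1): E = (RT/F) * ln([Cl]in/[Cl]out)."""
    RT_over_F = 8.314 * (273.15 + temp_c) / 96485.0 * 1000.0  # mV
    return RT_over_F * math.log(cl_in_mM / cl_out_mM)          # in/out ratio handles z = -1

E_gaba = nernst_cl(cl_in_mM=10.0, cl_out_mM=130.0)  # about -69 mV
Vm = -60.0                                           # example resting potential, mV
DF_gaba = Vm - E_gaba                                # about +9 mV
# A positive DF here means GABA-A activation pulls Vm down toward E_GABAA
# (hyperpolarizing); a negative DF would make the same conductance depolarizing.
print(f"E_GABAA = {E_gaba:.1f} mV, DF_GABAA = {DF_gaba:.1f} mV")
```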
Neural representations of information are shaped by local network interactions. Previous studies linking neural coding and cortical connectivity focused on stimulus selectivity in the sensory cortex [1–4]. Here we study neural activity in the motor cortex during naturalistic behavior in which mice gathered rewards with multidirectional tongue reaching. This behavior does not require training and thus allowed us to probe neural coding and connectivity in motor cortex before its activity is shaped by learning a specific task. Neurons typically responded during and after reaching movements and exhibited conjunctive tuning to target location and reward outcome. We used an all-optical method [5,4,6,7] for large-scale causal functional connectivity mapping in vivo. Mapping connectivity between >20,000,000 excitatory neuronal pairs revealed fine-scale columnar architecture in layer 2/3 of the motor cortex. Neurons displayed local (<100 µm) like-to-like connectivity according to target-location tuning, and inhibition over longer spatial scales. Connectivity patterns comprised a continuum, with abundant weakly connected neurons and sparse strongly connected neurons that function as network hubs. Hub neurons were weakly tuned to target location and reward outcome but strongly influenced neighboring neurons. This network of neurons, encoding location and outcome of movements to different motor goals, may be a general substrate for rapid learning of complex, goal-directed behaviors.
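A minimal sketch of the analysis logic implied above, not the authors' pipeline: build an influence matrix from single-neuron photostimulation trials, then compare influence between nearby pairs with the same versus different tuning ("like-to-like" connectivity). It assumes every neuron is photostimulated on at least one trial, so the matrix is square; all names and thresholds are hypothetical.

```python
# Sketch of causal influence mapping from photostimulation trials.
import numpy as np

def influence_matrix(responses, stim_id, n_neurons):
    """responses: (n_trials, n_neurons) change in activity on each trial;
    stim_id: (n_trials,) index of the neuron photostimulated on that trial.
    Returns an (n_neurons, n_neurons) matrix of mean responses to each
    neuron's stimulation (assumes every neuron is stimulated at least once)."""
    influence = np.zeros((n_neurons, n_neurons))
    for target in range(n_neurons):
        influence[target] = responses[stim_id == target].mean(axis=0)
    return influence

def like_to_like(influence, tuning, positions, max_dist=100.0):
    """Mean influence for nearby same-tuned vs. differently tuned pairs (µm)."""
    n = len(tuning)
    same, diff = [], []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if np.linalg.norm(positions[i] - positions[j]) < max_dist:
                (same if tuning[i] == tuning[j] else diff).append(influence[i, j])
    return np.mean(same), np.mean(diff)
```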
This special feature, titled 'Advances in Quantitative Bioimaging', presents an overview of the latest advances in quantitative bioimaging techniques and their wide-ranging applications. The articles cover various topics, including modern imaging methods that enable visualization at the nanoscale, such as super-resolution microscopy and single-particle analysis. These techniques offer unparalleled insights into complex molecular structures and dynamic cellular processes, such as mapping nuclear pore proteins or tracking single histone deposition events throughout the cell cycle. The articles presented in this edition showcase cutting-edge quantitative imaging techniques coupled with advanced computational analysis capable of precisely measuring biological structures and processes. Examples range from correlating calcium release events with underlying protein organization in heart cells to pioneering tools for categorizing changes in microglia morphology under various conditions. This editorial highlights how these advancements are revolutionizing our understanding of living systems, while acknowledging challenges that must be addressed to fully exploit the potential of these emerging technologies, such as improving molecular probes, algorithms, and correlation protocols.
Color and motion are used by many species to identify salient objects. They are processed largely independently, but color contributes to motion processing in humans, for example, enabling moving colored objects to be detected when their luminance matches the background. Here, we demonstrate an unexpected, additional contribution of color to motion vision in Drosophila. We show that behavioral ON-motion responses are more sensitive to UV than OFF-motion responses are, and we identify cellular pathways connecting UV-sensitive R7 photoreceptors to ON- and OFF-motion-sensitive T4 and T5 cells using neurogenetics and calcium imaging. Remarkably, this contribution of color circuitry to motion vision enhances the detection of approaching UV discs, but not green discs with the same chromatic contrast, and we show how this could generalize for systems with ON- and OFF-motion pathways. Our results provide a computational and circuit basis for how color enhances motion vision to favor the detection of saliently colored objects.
Survival behaviors are orchestrated by hardwired circuits located in deep subcortical brain regions, most prominently the hypothalamus. Artificial activation of spatially localized, genetically defined hypothalamic cell populations is known to trigger distinct behaviors, suggesting a nucleus-centered organization of behavioral control. However, no study has investigated the hypothalamic representation of innate behaviors using unbiased, large-scale single neuron recordings. Here, using custom silicon probes, we performed recordings across the rostro-caudal extent of the medial hypothalamus in freely moving animals engaged in a diverse array of social and predator defense (“fear”) behaviors. Nucleus-averaged activity revealed spatially distributed generic “ignition signals” that occurred at the onset of each behavior, and did not identify sparse, nucleus-specific behavioral representations. Single-unit analysis revealed that social and fear behavior classes are encoded by activity in distinct sets of spatially distributed neuronal ensembles spanning the entire hypothalamic rostro-caudal axis. Individual ensemble membership, however, was drawn from neurons in 3-4 adjacent nuclei. Mixed selectivity was identified as the most prevalent mode of behavior representation by individual hypothalamic neurons. Encoding models indicated that a significant fraction of the variance in single neuron activity is explained by behavior. This work reveals that innate behaviors are encoded in the hypothalamus by activity in spatially distributed neural ensembles that each span multiple neighboring nuclei, complementing the prevailing view of hypothalamic behavioral control by single nucleus-restricted cell types derived from perturbational studies.
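A hedged illustration of a behavior encoding model in the spirit of the analysis above, not the authors' exact model: regress each neuron's binned activity on one-hot behavior regressors and report cross-validated variance explained. The design matrix, regularization choice, and data shapes are assumptions for illustration only.

```python
# Simple behavior encoding model: how much single-neuron variance is
# explained by which behavior the animal is performing in each time bin?
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

def behavior_explained_variance(activity, behavior_labels, n_behaviors):
    """activity: (n_bins, n_neurons) binned activity;
    behavior_labels: (n_bins,) integer behavior codes.
    Returns cross-validated R^2 per neuron."""
    X = np.eye(n_behaviors)[behavior_labels]          # one-hot design matrix
    model = RidgeCV(alphas=np.logspace(-2, 3, 10))
    return np.array([
        cross_val_score(model, X, activity[:, i], cv=5, scoring="r2").mean()
        for i in range(activity.shape[1])
    ])
```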
Recent studies in mice have shown that orofacial behaviors drive a large fraction of neural activity across the brain. To understand the nature and function of these signals, we need better computational models to characterize the behaviors and relate them to neural activity. Here we developed Facemap, a framework consisting of a keypoint tracking algorithm and a deep neural network encoder for predicting neural activity. We used the Facemap keypoints as input for the deep neural network to predict the activity of ∼50,000 simultaneously recorded neurons, and in visual cortex we doubled the amount of explained variance compared to previous methods. Our keypoint tracking algorithm was more accurate than existing pose estimation tools, while the inference speed was several times faster, making it a powerful tool for closed-loop behavioral experiments. The Facemap tracker was easy to adapt to data from new labs, requiring as few as 10 annotated frames for near-optimal performance. We used Facemap to find that the neuronal activity clusters that were highly driven by behaviors were more spatially spread out across cortex. We also found that the deep keypoint features inferred by the model had time-asymmetrical state dynamics that were not apparent in the raw keypoint data. In summary, Facemap provides a stepping stone towards understanding the function of brain-wide neural signals and their relation to behavior.
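As a schematic of the keypoints-to-neural-activity prediction step, the simplified stand-in below maps a window of keypoint trajectories to neural activity with a small 1-D convolutional encoder. The architecture, layer sizes, and tensor shapes are illustrative assumptions, not Facemap's actual network.

```python
# Simplified stand-in for a keypoints -> neural activity encoder.
import torch
import torch.nn as nn

class KeypointEncoder(nn.Module):
    def __init__(self, n_keypoints, n_neurons, hidden=128):
        super().__init__()
        # input: (batch, 2 * n_keypoints, time) -- x/y coordinate traces
        self.conv = nn.Sequential(
            nn.Conv1d(2 * n_keypoints, hidden, kernel_size=15, padding=7),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=15, padding=7),
            nn.ReLU(),
        )
        self.readout = nn.Linear(hidden, n_neurons)    # per-time-bin readout

    def forward(self, keypoints):
        features = self.conv(keypoints)                # (batch, hidden, time)
        return self.readout(features.transpose(1, 2))  # (batch, time, n_neurons)

# Example shapes: 13 tracked keypoints, 500 neurons, a 64-bin window.
model = KeypointEncoder(n_keypoints=13, n_neurons=500)
pred = model(torch.randn(8, 26, 64))                   # -> (8, 64, 500)
```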
Live-cell super-resolution microscopy enables the imaging of biological structure dynamics below the diffraction limit. Here we present enhanced super-resolution radial fluctuations (eSRRF), substantially improving image fidelity and resolution compared to the original SRRF method. eSRRF incorporates automated parameter optimization based on the data itself, giving insight into the trade-off between resolution and fidelity. We demonstrate eSRRF across a range of imaging modalities and biological systems. Notably, we extend eSRRF to three dimensions by combining it with multifocus microscopy. This realizes live-cell volumetric super-resolution imaging with an acquisition speed of ~1 volume per second. eSRRF provides an accessible super-resolution approach, maximizing information extraction across varied experimental conditions while minimizing artifacts. Its optimal parameter prediction strategy is generalizable, moving toward unbiased and optimized analyses in super-resolution microscopy.
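A conceptual sketch of automated parameter selection in the spirit of eSRRF, not its actual implementation: scan reconstruction parameters, score each candidate image for resolution and fidelity, and keep the best trade-off. The reconstruction and metric functions are placeholders supplied by the caller (standing in for, e.g., a radial-fluctuation reconstruction, a Fourier ring correlation resolution estimate, and an error-map fidelity score).

```python
# Parameter scan balancing resolution against fidelity; metrics are caller-supplied.
import itertools
import numpy as np

def pick_parameters(frames, radii, sensitivities,
                    reconstruct, resolution_score, fidelity_score):
    """Return the (radius, sensitivity) pair maximizing resolution * fidelity."""
    best, best_score = None, -np.inf
    for radius, sensitivity in itertools.product(radii, sensitivities):
        sr_image = reconstruct(frames, radius=radius, sensitivity=sensitivity)
        score = resolution_score(sr_image) * fidelity_score(sr_image, frames)
        if score > best_score:
            best, best_score = (radius, sensitivity), score
    return best, best_score

# Dummy usage with stand-in reconstruction and metrics (illustration only).
frames = np.random.rand(50, 64, 64)
params, score = pick_parameters(
    frames, radii=[1.0, 1.5, 2.0], sensitivities=[1, 2, 4],
    reconstruct=lambda f, radius, sensitivity: f.mean(axis=0),
    resolution_score=lambda img: img.std(),
    fidelity_score=lambda img, f: 1.0,
)
```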