43 Publications
What can we learn from a connectome? We constructed a simplified model of the first two stages of the fly visual system, the lamina and medulla. The resulting hexagonal lattice convolutional network was trained using backpropagation through time to perform object tracking in natural scene videos. Networks initialized with weights from connectome reconstructions automatically discovered well-known orientation and direction selectivity properties in T4 neurons and their inputs, while networks initialized at random did not. Our work is the first demonstration that knowledge of the connectome can enable in silico predictions of the functional properties of individual neurons in a circuit, leading to an understanding of circuit function from structure alone.
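To make the connectome-initialization idea above concrete, here is a minimal sketch assuming a small recurrent cell-type model in PyTorch: the recurrent weights are seeded either from a signed, scaled synapse-count matrix or at random, and the explicit loop over video frames is what backpropagation through time unrolls. The sizes, names, and single-column framing are illustrative assumptions, not the paper's hexagonal lattice model.

```python
# Illustrative sketch only: connectome-seeded vs. randomly initialized
# recurrent cell-type model, trained identically with BPTT.
import torch
import torch.nn as nn

N_CELL_TYPES = 8   # hypothetical number of modeled cell types per column
T_STEPS = 20       # video frames unrolled for backpropagation through time

class CellTypeRNN(nn.Module):
    def __init__(self, connectome_weights=None):
        super().__init__()
        self.recurrent = nn.Linear(N_CELL_TYPES, N_CELL_TYPES)
        self.readout = nn.Linear(N_CELL_TYPES, 1)
        if connectome_weights is not None:
            # Scaled, signed synapse counts become the initial weight matrix,
            # so training refines the wiring rather than discovering it.
            with torch.no_grad():
                self.recurrent.weight.copy_(connectome_weights)

    def forward(self, stimulus):
        # stimulus: (T_STEPS, batch, N_CELL_TYPES); the frame-by-frame loop
        # is what backpropagation through time differentiates through.
        state = torch.zeros(stimulus.shape[1], N_CELL_TYPES)
        outputs = []
        for frame in stimulus:
            state = torch.tanh(self.recurrent(state) + frame)
            outputs.append(self.readout(state))
        return torch.stack(outputs)

signed_counts = 0.1 * torch.randn(N_CELL_TYPES, N_CELL_TYPES)  # stand-in for EM counts
model_connectome = CellTypeRNN(connectome_weights=signed_counts)
model_random = CellTypeRNN()
```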
A recent study reports a novel form of lateral inhibition between photoreceptors supporting colour vision in the vinegar fly, Drosophila melanogaster.
A neuron that extracts directionally selective motion information from upstream signals lacking this selectivity must compare visual responses from spatially offset inputs. Distinguishing among prevailing algorithmic models for this computation requires measuring fast neuronal activity and inhibition. In the Drosophila melanogaster visual system, a fourth-order neuron, T4, is the first cell type in the ON pathway to exhibit directionally selective signals. Here we use in vivo whole-cell recordings of T4 to show that directional selectivity originates from simple integration of spatially offset fast excitatory and slow inhibitory inputs, resulting in a suppression of responses to the nonpreferred motion direction. We constructed a passive, conductance-based model of a T4 cell that accurately predicts the neuron's response to moving stimuli. These results connect the known circuit anatomy of the motion pathway to the algorithmic mechanism by which the direction of motion is computed.
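A minimal single-compartment sketch of the kind of passive, conductance-based integration described above, with assumed parameter values rather than the paper's fitted model: a fast excitatory conductance and a slower inhibitory conductance arrive in a different order for the two motion directions, and shunting inhibition suppresses the null-direction response.

```python
# Toy passive conductance model; all parameters are illustrative assumptions.
import numpy as np

dt, T = 0.1, 400.0                         # time step and duration, ms
t = np.arange(0, T, dt)
C, g_leak = 20.0, 1.0                      # membrane capacitance (pF), leak (nS)
E_leak, E_exc, E_inh = -60.0, 0.0, -70.0   # assumed reversal potentials, mV

def alpha(t, onset, tau):
    """Alpha-function conductance transient starting at `onset` (peak at onset+tau)."""
    s = np.clip(t - onset, 0.0, None)
    return (s / tau) * np.exp(1.0 - s / tau)

def simulate(exc_onset, inh_onset):
    # Fast excitation and slower inhibition, offset in time as a moving edge
    # crosses the two spatially offset inputs.
    g_e = 3.0 * alpha(t, exc_onset, tau=10.0)
    g_i = 3.0 * alpha(t, inh_onset, tau=40.0)
    V = np.full_like(t, E_leak)
    for i in range(1, len(t)):
        I = (g_leak * (E_leak - V[i-1]) + g_e[i] * (E_exc - V[i-1])
             + g_i[i] * (E_inh - V[i-1]))
        V[i] = V[i-1] + dt * I / C           # forward-Euler integration
    return V

# Preferred direction: excitation leads inhibition -> large depolarization.
# Null direction: inhibition leads -> shunting suppresses the response.
v_pref = simulate(exc_onset=100.0, inh_onset=150.0)
v_null = simulate(exc_onset=150.0, inh_onset=100.0)
print(v_pref.max() - E_leak, v_null.max() - E_leak)
```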
The behavioral state of an animal can dynamically modulate visual processing. In flies, the behavioral state is known to alter the temporal tuning of neurons that carry visual motion information into the central brain. However, where this modulation occurs and how it tunes the properties of this neural circuit are not well understood. Here, we show that the behavioral state alters the baseline activity levels and the temporal tuning of the first directionally selective neuron in the ON motion pathway (T4) as well as its primary input neurons (Mi1, Tm3, Mi4, Mi9). These effects are especially prominent in the inhibitory neuron Mi4, and we show that central octopaminergic neurons provide input to Mi4 and increase its excitability. We further show that octopamine neurons are required for sustained behavioral responses to fast-moving, but not slow-moving, visual stimuli in walking flies. These results indicate that behavioral-state modulation acts directly on the inputs to the directionally selective neurons and supports efficient neural coding of motion stimuli.
Nervous systems combine lower-level sensory signals to detect higher-order stimulus features critical to survival, such as the visual looming motion created by an imminent collision or approaching predator. Looming-sensitive neurons have been identified in diverse animal species. Different large-scale visual features such as looming often share local cues, which means loom-detecting neurons face the challenge of rejecting confounding stimuli. Here we report the discovery of an ultra-selective looming-detecting neuron, lobula plate/lobula columnar, type II (LPLC2) in Drosophila, and show how its selectivity is established by radial motion opponency. In the fly visual system, directionally selective small-field neurons called T4 and T5 form a spatial map in the lobula plate, where they each terminate in one of four retinotopic layers, such that each layer responds to motion in a different cardinal direction. Single-cell anatomical analysis reveals that each arm of the LPLC2 cross-shaped primary dendrites ramifies in one of these layers and extends along that layer's preferred motion direction. In vivo calcium imaging demonstrates that, as their shape predicts, individual LPLC2 neurons respond strongly to outward motion emanating from the centre of the neuron's receptive field. Each dendritic arm also receives local inhibitory inputs directionally selective for inward motion opposing the excitation. This radial motion opponency generates a balance of excitation and inhibition that makes LPLC2 non-responsive to related patterns of motion such as contraction, wide-field rotation or luminance change. As a population, LPLC2 neurons densely cover visual space and terminate onto the giant fibre descending neurons, which drive the jump muscle motor neuron to trigger an escape take-off. Our findings provide a mechanistic description of the selective feature detection that flies use to discern and escape looming threats.
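The radial motion opponency described above can be caricatured in a few lines: each of four dendritic arms contributes excitation for outward motion along its layer's preferred direction and inhibition for inward motion, and the pooled opponent signal is rectified. This is a toy model under assumed conventions, not the paper's analysis.

```python
# Toy radial-opponency model: looming excites all four arms, while translation
# (or contraction) recruits matching inhibition and is cancelled.
import numpy as np

ARM_DIRECTIONS = {                     # unit vectors, one per lobula plate layer
    "front": np.array([1.0, 0.0]),
    "back":  np.array([-1.0, 0.0]),
    "up":    np.array([0.0, 1.0]),
    "down":  np.array([0.0, -1.0]),
}

def lplc2_response(local_motion):
    """local_motion maps each arm to the motion vector it measures locally."""
    excitation = inhibition = 0.0
    for arm, preferred in ARM_DIRECTIONS.items():
        drive = float(np.dot(local_motion[arm], preferred))
        excitation += max(drive, 0.0)      # outward motion along the arm excites
        inhibition += max(-drive, 0.0)     # inward motion recruits inhibition
    return max(excitation - inhibition, 0.0)   # opponency before the output

# Looming: motion flows outward on every arm -> all arms excited.
loom = {arm: vec.copy() for arm, vec in ARM_DIRECTIONS.items()}
# Wide-field rightward translation: outward on one arm, inward on the opposite.
translation = {arm: np.array([1.0, 0.0]) for arm in ARM_DIRECTIONS}
print(lplc2_response(loom), lplc2_response(translation))   # strong vs. zero
```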
Assigning behavioral functions to neural structures has long been a central goal in neuroscience and is a necessary first step toward a circuit-level understanding of how the brain generates behavior. Here, we map the neural substrates of locomotion and social behaviors for Drosophila melanogaster using automated machine-vision and machine-learning techniques. From videos of 400,000 flies, we quantified the behavioral effects of activating 2,204 genetically targeted populations of neurons. We combined a novel quantification of anatomy with our behavioral analysis to create brain-behavior correlation maps, which are shared as browsable web pages and interactive software. Based on these maps, we generated hypotheses of regions of the brain causally related to sensory processing, locomotor control, courtship, aggression, and sleep. Our maps directly specify genetic tools to target these regions, which we used to identify a small population of neurons with a role in the control of walking.

- We developed machine-vision methods to broadly and precisely quantify fly behavior
- We measured effects of activating 2,204 genetically targeted neuronal populations
- We created whole-brain maps of neural substrates of locomotor and social behaviors
- We created resources for exploring our results and enabling further investigation

Machine-vision analyses of large behavior and neuroanatomy data reveal whole-brain maps of regions associated with numerous complex behaviors.
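As a rough illustration of what a brain-behavior correlation map is, the sketch below correlates, across driver lines, each region's anatomical expression with each behavior's effect size. The data, region count, and behavior count are random stand-ins and assumed values, not the study's pipeline.

```python
# Illustrative brain-behavior correlation map; all data are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_lines, n_regions, n_behaviors = 2204, 68, 14            # assumed counts
expression = rng.random((n_lines, n_regions))             # per-line expression per region
behavior_effect = rng.normal(size=(n_lines, n_behaviors)) # per-line behavioral effect sizes

def correlation_map(expression, behavior_effect):
    """Pearson correlation between each region column and each behavior column."""
    x = (expression - expression.mean(0)) / expression.std(0)
    y = (behavior_effect - behavior_effect.mean(0)) / behavior_effect.std(0)
    return x.T @ y / len(x)                               # shape: regions x behaviors

corr = correlation_map(expression, behavior_effect)
print(corr.shape)                                         # (68, 14)
```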
The perception of visual motion is critical for animal navigation, and flies are a prominent model system for exploring this neural computation. In Drosophila, the T4 cells of the medulla are directionally selective and necessary for ON motion behavioral responses. To examine the emergence of directional selectivity, we developed genetic driver lines for the neuron types with the most synapses onto T4 cells. Using calcium imaging, we found that these neuron types are not directionally selective and that selectivity arises in the T4 dendrites. By silencing each input neuron type, we identified which neurons are necessary for T4 directional selectivity and ON motion behavioral responses. We then determined the sign of the connections between these neurons and T4 cells using neuronal photoactivation. Our results indicate a computational architecture for motion detection that is a hybrid of classic theoretical models.
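The classic theoretical models referenced above are usually taken to be the correlation-type (Hassenstein-Reichardt) and suppression-type (Barlow-Levick) elementary motion detectors. The sketch below implements simple versions of both on a two-point edge stimulus; the filters, time constants, and stimuli are illustrative stand-ins, not the paper's model.

```python
# Minimal elementary motion detector sketches; parameters are illustrative.
import numpy as np

def lowpass(x, tau=15.0):
    """First-order low-pass filter, the 'delay' element of classic EMD models."""
    y = np.zeros_like(x)
    coeff = 1.0 / (tau + 1.0)
    for i in range(1, len(x)):
        y[i] = y[i - 1] + coeff * (x[i] - y[i - 1])
    return y

def hassenstein_reichardt(a, b):
    # Opponent correlator: delayed A multiplied by B, minus the mirror term.
    return lowpass(a) * b - a * lowpass(b)

def barlow_levick(a, b):
    # Excitation from A vetoed by delayed inhibition from B
    # (null-direction suppression), half-wave rectified.
    return np.clip(a - lowpass(b), 0.0, None)

# A brightness edge crossing two neighbouring inputs 30 time steps apart.
t = np.arange(400)
first, second = (t > 100).astype(float), (t > 130).astype(float)
rightward = (first, second)    # edge reaches input A before input B
leftward = (second, first)     # edge reaches input B first

for name, model in [("HR", hassenstein_reichardt), ("BL", barlow_levick)]:
    print(name, model(*rightward).max(), model(*leftward).max())
```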
Visual motion sensing neurons in the fly also encode a range of behavior-related signals. These nonvisual inputs appear to be used to correct some of the challenges of visually guided locomotion.
The delivery of tracers into populations of neurons is essential to visualize their anatomy and analyze their function. In some model systems, genetically targeted expression of fluorescent proteins is the method of choice; however, these genetic tools are not available for most organisms and alternative labeling methods are very limited. Here we describe a new method for neuronal labelling by electrophoretic dye delivery from a suction electrode directly through the neuronal sheath of nerves and ganglia in insects. Polar tracer molecules were delivered into the locust auditory nerve without destroying its function, simultaneously staining peripheral sensory structures and central axonal projections. Local neuron populations could be labelled directly through the surface of the brain, and in vivo optical imaging of sound-evoked activity was achieved through the electrophoretic delivery of calcium indicators. The method provides a new tool for studying how stimuli are processed in peripheral and central sensory pathways and is a significant advance for the study of nervous systems in non-model organisms.
Visual projection neurons (VPNs) provide an anatomical connection between early visual processing and higher brain regions. Here we characterize lobula columnar (LC) cells, a class of Drosophila VPNs that project to distinct central brain structures called optic glomeruli. We anatomically describe 22 different LC types and show that, for several types, optogenetic activation in freely moving flies evokes specific behaviors. The activation phenotypes of two LC types closely resemble natural avoidance behaviors triggered by a visual loom. In vivo two-photon calcium imaging reveals that these LC types respond to looming stimuli, while another type does not, but instead responds to the motion of a small object. Activation of LC neurons on only one side of the brain can result in attractive or aversive turning behaviors depending on the cell type. Our results indicate that LC neurons convey information on the presence and location of visual features relevant for specific behaviors.