Skillful control of movement is central to our ability to sense and manipulate the world. A large body of work in nonhuman primates has demonstrated that motor cortex provides flexible, time-varying activity patterns that control the arm during reaching and grasping. Previous studies have suggested that these patterns are generated by strong local recurrent dynamics operating autonomously from inputs during movement execution. An alternative possibility is that motor cortex requires coordination with upstream brain regions throughout the entire movement in order to yield these patterns. Here, we developed an experimental preparation in the mouse to directly test these possibilities using optogenetics and electrophysiology during a skilled reach-to-grab-to-eat task. To validate this preparation, we first established that a specific, time-varying pattern of motor cortical activity was required to produce coordinated movement. Next, in order to disentangle the contribution of local recurrent motor cortical dynamics from external input, we optogenetically held the recurrent contribution constant, then observed how motor cortical activity recovered following the end of this perturbation. Both the neural responses and hand trajectory varied from trial to trial, and this variability reflected variability in external inputs. To directly probe the role of these inputs, we used optogenetics to perturb activity in the thalamus. Thalamic perturbation at the start of the trial prevented movement initiation, and perturbation at any stage of the movement prevented progression of the hand to the target; this demonstrates that input is required throughout the movement. By comparing motor cortical activity with and without thalamic perturbation, we were able to estimate the effects of external inputs on motor cortical population activity. Thus, unlike pattern-generating circuits that are local and autonomous, such as those in the spinal cord that generate left-right alternation during locomotion, the pattern generator for reaching and grasping is distributed across multiple, strongly interacting brain regions.
The mouse embryo has long been central to the study of mammalian development; however, elucidating the cell behaviors governing gastrulation and the formation of tissues and organs remains a fundamental challenge. A major obstacle is the lack of live imaging and image analysis technologies capable of systematically following cellular dynamics across the developing embryo. We developed a light-sheet microscope that adapts itself to the dramatic changes in size, shape, and optical properties of the post-implantation mouse embryo and captures its development from gastrulation to early organogenesis at the cellular level. We furthermore developed a computational framework for reconstructing long-term cell tracks, cell divisions, dynamic fate maps, and maps of tissue morphogenesis across the entire embryo. By jointly analyzing cellular dynamics in multiple embryos registered in space and time, we built a dynamic atlas of post-implantation mouse development that, together with our microscopy and computational methods, is provided as a resource.
In this work, we address the problem of pose detection and tracking of multiple individuals for the study of behavior in insects and other animals. Using a deep neural network architecture, we perform precise detection and association of body parts. The models are learned from user-annotated training videos, which gives the approach flexibility. We illustrate the method on two different animals, honeybees and mice, and observe very good performance in part recognition and association despite the presence of multiple interacting individuals.
The ability to automate the analysis of video for monitoring animals and insects is of great interest for behavioral science and ecology [1]. In particular, honeybees play a crucial role in agriculture as natural pollinators. However, recent studies have shown that phenomena such as colony collapse disorder are causing the loss of many colonies [2]. Because a high number of interacting factors could explain these events, a multi-faceted analysis of the bees in their environment is required. Our work focuses on developing tools to help model and understand the behavior of bees as individuals, in relation to the health and performance of the colony. In this paper, we report the development of a new system for the detection, localization, and tracking of honeybee body parts from video recorded at the entrance ramp of the colony. The proposed system builds on recent advances in Convolutional Neural Networks (CNNs) for human pose estimation and evaluates their suitability for the detection of honeybee pose, as shown in Figure 1. This opens the door to novel animal behavior analysis systems that take advantage of precise detection and tracking of insect pose.
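To make the general approach concrete, here is a minimal, hypothetical PyTorch sketch of heatmap-based body-part detection in the spirit of CNN pose estimators: a fully convolutional network outputs one confidence map per body part, and the peak of each map gives that part's location. The layer sizes, part count, and all names are illustrative assumptions, not the architecture of the system described above.

```python
# Hypothetical sketch of heatmap-based part detection (not the authors' model).
import torch
import torch.nn as nn

NUM_PARTS = 5  # assumed part count, e.g. head, thorax, abdomen, antennae

class PartDetector(nn.Module):
    """Fully convolutional net: image -> one confidence map per body part."""
    def __init__(self, num_parts: int = NUM_PARTS):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # 1x1 convolution maps features to per-part confidence maps
        self.head = nn.Conv2d(64, num_parts, 1)

    def forward(self, x):
        return self.head(self.backbone(x))

def peak_locations(heatmaps):
    """Return the (row, col) of the maximum of each part's confidence map."""
    b, k, h, w = heatmaps.shape
    flat = heatmaps.view(b, k, -1).argmax(dim=-1)
    rows = torch.div(flat, w, rounding_mode="floor")
    return torch.stack((rows, flat % w), dim=-1)

if __name__ == "__main__":
    net = PartDetector()
    frame = torch.rand(1, 3, 128, 128)  # stand-in for a video frame
    maps = net(frame)                   # (1, NUM_PARTS, 128, 128)
    print(peak_locations(maps))         # one (row, col) per part
```

In a full system, detected parts would additionally be associated across individuals and linked over frames; this sketch covers only the per-frame detection step.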
We give a covering number bound for deep learning networks that is independent of the size of the network. The key to the simple analysis is that, for linear classifiers, rotating the data does not affect the covering number. We can therefore ignore the rotation part of each layer's linear transformation and obtain the covering number bound by concentrating on the scaling part.
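A hedged sketch of how that decomposition might be written, with assumed notation that may differ from the paper's actual definitions:

```latex
% Sketch of the rotation argument (notation assumed, not taken from the paper).
% Write each layer's weight matrix via its singular value decomposition:
\[
  W_\ell = U_\ell \, \Sigma_\ell \, V_\ell^{\top},
  \qquad U_\ell,\ V_\ell \ \text{orthogonal (rotations)},\quad
  \Sigma_\ell \ \text{diagonal (scaling)}.
\]
% For a linear classifier, rotating the inputs does not change the covering
% number: the class $\{x \mapsto \langle w, Rx \rangle : \|w\| \le B\}$ has
% the same covering number for every rotation $R$, since rotations preserve
% norms. Applying this layer by layer, a bound can depend on the network only
% through the scales $\Sigma_\ell$, which is how a bound independent of the
% network's size can arise.
```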
Assigning behavioral functions to neural structures has long been a central goal in neuroscience and is a necessary first step toward a circuit-level understanding of how the brain generates behavior. Here, we map the neural substrates of locomotion and social behaviors for Drosophila melanogaster using automated machine-vision and machine-learning techniques. From videos of 400,000 flies, we quantified the behavioral effects of activating 2,204 genetically targeted populations of neurons. We combined a novel quantification of anatomy with our behavioral analysis to create brain-behavior correlation maps, which are shared as browsable web pages and interactive software. Based on these maps, we generated hypotheses of regions of the brain causally related to sensory processing, locomotor control, courtship, aggression, and sleep. Our maps directly specify genetic tools to target these regions, which we used to identify a small population of neurons with a role in the control of walking.

Highlights:
- We developed machine-vision methods to broadly and precisely quantify fly behavior
- We measured effects of activating 2,204 genetically targeted neuronal populations
- We created whole-brain maps of neural substrates of locomotor and social behaviors
- We created resources for exploring our results and enabling further investigation

Machine-vision analyses of large behavior and neuroanatomy data reveal whole-brain maps of regions associated with numerous complex behaviors.
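As a rough illustration of what computing a brain-behavior correlation map could look like, here is a hypothetical NumPy sketch that correlates each voxel's expression intensity with a per-line behavioral effect score. The array names, shapes, and random placeholder data are assumptions for illustration, not the paper's pipeline.

```python
# Hypothetical brain-behavior correlation map: correlate, voxel by voxel,
# each line's anatomical expression with its behavioral effect score.
import numpy as np

n_lines, n_voxels = 2204, 1_000          # lines x (flattened) brain voxels
rng = np.random.default_rng(0)
expression = rng.random((n_lines, n_voxels))  # per-line expression intensity
behavior = rng.random(n_lines)                # per-line behavior effect score

# Pearson correlation of each voxel's expression with the behavior score
e = expression - expression.mean(axis=0)
b = behavior - behavior.mean()
corr = (e * b[:, None]).sum(axis=0) / (
    np.linalg.norm(e, axis=0) * np.linalg.norm(b) + 1e-12
)
print(corr.shape)  # one correlation value per voxel -> a brain map
```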
New work on innate escape behavior shows that mice spontaneously form a spatially precise memory of the location of shelter, which is laid down quickly and updated continuously.
Kernel regression and classification (also referred to as weighted ε-NN methods in machine learning) are appealing for their simplicity and therefore ubiquitous in data analysis. However, practical implementations of kernel regression or classification consist of quantizing or sub-sampling data to improve time efficiency, often at the cost of prediction quality. While such tradeoffs are necessary in practice, their statistical implications are generally not well understood; hence practical implementations come with few performance guarantees. In particular, it is unclear whether it is possible to maintain the statistical accuracy of kernel prediction, crucial in some applications, while improving prediction time. The present work provides guiding principles for combining kernel prediction with data quantization so as to guarantee good tradeoffs between prediction time and accuracy, and in particular so as to approximately maintain the good accuracy of vanilla kernel prediction. Furthermore, our tradeoff guarantees are worked out explicitly in terms of a tuning parameter that acts as a knob favoring either time or accuracy depending on practical needs. At one end of the knob, prediction time is of the same order as that of single-nearest-neighbor prediction (which is statistically inconsistent) while consistency is maintained; at the other end, the prediction risk is nearly minimax-optimal (in terms of the original data size) while time complexity is still reduced. The analysis thus reveals the interaction between the data-quantization approach and the kernel prediction method and, most importantly, gives explicit control of the tradeoff to the practitioner rather than fixing it in advance or leaving it opaque. The theoretical results are validated on data from a range of real-world application domains; in particular, we demonstrate that the theoretical knob performs as expected.
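The following Python sketch illustrates the flavor of such a time/accuracy knob; it is not the paper's actual algorithm or guarantee. Kernel regression is evaluated on m k-means centers instead of all n training points, and m plays the role of the knob: small m is fast and crude, while m approaching n recovers vanilla kernel regression.

```python
# Illustrative quantized kernel regression (not the paper's algorithm).
import numpy as np
from sklearn.cluster import KMeans

def quantized_kernel_regressor(X, y, m, bandwidth=0.3):
    """Quantize (X, y) to m centers; predict with a Gaussian kernel on them."""
    km = KMeans(n_clusters=m, n_init=10, random_state=0).fit(X)
    centers = km.cluster_centers_
    # each center carries the mean label and the point count of its cell
    y_bar = np.array([y[km.labels_ == j].mean() for j in range(m)])
    w_cell = np.bincount(km.labels_, minlength=m).astype(float)

    def predict(Xq):
        d2 = ((Xq[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        k = w_cell * np.exp(-d2 / (2 * bandwidth**2))
        return (k * y_bar).sum(1) / (k.sum(1) + 1e-12)

    return predict

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (2000, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(2000)
for m in (5, 50, 500):                  # turning the knob
    f = quantized_kernel_regressor(X, y, m)
    print(m, np.mean((f(X) - y) ** 2))  # error typically shrinks as m grows
```

Prediction cost scales with the number of centers m rather than the full sample size n, which is the tradeoff the abstract's knob makes explicit and quantifies.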
Insects, like most animals, tend to steer away from imminent threats [1-7]. Drosophila melanogaster, for example, generally initiate an escape take-off in response to a looming visual stimulus, mimicking a potential predator [8]. The escape response to a visual threat is, however, flexible [9-12] and can alternatively consist of walking backward away from the perceived threat [11], which may be a more effective response to ambush predators such as nymphal praying mantids [7]. Flexibility in escape behavior may also add an element of unpredictability that makes it difficult for predators to anticipate or learn the prey's likely response [3-6]. Whereas the fly's escape jump has been well studied [8, 9, 13-18], the neuronal underpinnings of evasive walking remain largely unexplored. We previously reported the identification of a cluster of descending neurons, the moonwalker descending neurons (MDNs), whose activity is necessary and sufficient to trigger backward walking [19], as well as a population of visual projection neurons, the lobula columnar 16 (LC16) cells, which respond to looming visual stimuli and elicit backward walking and turning [11]. Given the similarity of their activation phenotypes, we hypothesized that LC16 neurons induce backward walking via MDNs and that turning while walking backward might reflect asymmetric activation of the left and right MDNs. Here, we present data from functional imaging, behavioral epistasis, and unilateral activation experiments that support these hypotheses. We conclude that LC16 and MDNs are critical components of the neural circuit that transduces threatening visual stimuli into directional locomotor output.