We present a camera-based method for automatically quantifying the individual and social behaviors of fruit flies, Drosophila melanogaster, interacting in a planar arena. Our system includes machine-vision algorithms that accurately track many individuals without swapping identities and classification algorithms that detect behaviors. The data may be represented as an ethogram that plots the time course of behaviors exhibited by each fly or as a vector that concisely captures the statistical properties of all behaviors displayed in a given period. We found that behavioral differences between individuals were consistent over time and were sufficient to accurately predict gender and genotype. In addition, we found that the relative positions of flies during social interactions vary according to gender, genotype and social environment. We expect that our software, which permits high-throughput screening, will complement existing molecular methods available in Drosophila, facilitating new investigations into the genetic and cellular basis of behavior.
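The two data representations the abstract describes can be illustrated with a minimal sketch. The behavior names and labels below are hypothetical, not taken from the paper: per-frame behavior labels for one fly are expanded into an ethogram (a behavior-by-frame boolean matrix) and collapsed into a summary vector of time fractions.

```python
import numpy as np

# Hypothetical per-frame labels for one fly: an integer behavior ID per frame.
BEHAVIORS = ["walk", "stop", "chase", "touch"]
frame_labels = np.array([0, 0, 1, 1, 2, 0, 3, 3, 3, 1])

# Ethogram: a boolean (behavior x frame) matrix marking when each behavior occurs.
ethogram = np.zeros((len(BEHAVIORS), frame_labels.size), dtype=bool)
ethogram[frame_labels, np.arange(frame_labels.size)] = True

# Behavior vector: fraction of frames spent in each behavior -- a concise
# statistical summary of the kind used to predict gender and genotype.
behavior_vector = ethogram.mean(axis=1)
print(behavior_vector)  # -> [0.3 0.3 0.1 0.3]
```

Each frame contributes to exactly one behavior, so the vector sums to one; a real pipeline would allow overlapping behavior classifiers, in which case the rows are filled independently.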
The ability to automate video analysis for monitoring animals and insects is of great interest for behavioral science and ecology [1]. In particular, honeybees play a crucial role in agriculture as natural pollinators. However, recent studies have shown that phenomena such as colony collapse disorder are causing the loss of many colonies [2]. Because many interacting factors may explain these events, a multi-faceted analysis of the bees in their environment is required. Our work focuses on developing tools that help model and understand their behavior as individuals, in relation to the health and performance of the colony. In this paper, we report the development of a new system for the detection, localization and tracking of honeybee body parts from video on the entrance ramp of the colony. The proposed system builds on recent advances in Convolutional Neural Networks (CNNs) for human pose estimation and evaluates their suitability for the detection of honeybee pose, as shown in Figure 1. This opens the door to novel animal behavior analysis systems that take advantage of precise detection and tracking of insect pose.
To investigate the fundamental question of how nervous systems encode, organize, and sequence behaviors, Kato et al. imaged neural activity with cellular resolution across the brain of the worm Caenorhabditis elegans. Locomotion behavior seems to be continuously represented by cyclical patterns of distributed neural activity that are present even in immobilized animals.
The Importance Weighted Auto Encoder (IWAE) objective has been shown to improve the training of generative models over the standard Variational Auto Encoder (VAE) objective. Here, we derive importance weighted extensions to Adversarial Variational Bayes (AVB) and Adversarial Autoencoder (AAE). These latent variable models use implicitly defined inference networks whose approximate posterior density qφ(z|x) cannot be directly evaluated, an essential ingredient for importance weighting. We show improved training and inference in latent variable models with our adversarially trained importance weighting method, and derive new theoretical connections between adversarial generative model training criteria and marginal likelihood based methods. We apply these methods to the important problem of inferring spiking neural activity from calcium imaging data, a challenging posterior inference problem in neuroscience, and show that posterior samples from the adversarial methods outperform factorized posteriors used in VAEs.
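The IWAE objective the abstract builds on can be sketched concretely. This is a toy model, not the paper's adversarial method: here both p and q have tractable densities (the abstract's point is precisely that implicit inference networks do not, which is what the adversarial estimate addresses). With z ~ N(0,1), x|z ~ N(z,1), and q chosen as the exact posterior N(x/2, 1/2), the K-sample importance weighted bound log (1/K) Σ_k p(x,z_k)/q(z_k|x) recovers the true log marginal log N(x; 0, 2) exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_normal(x, mean, var):
    # log density of a univariate Gaussian
    return -0.5 * np.log(2 * np.pi * var) - (x - mean) ** 2 / (2 * var)

def iwae_bound(x, K=64):
    # Toy model: z ~ N(0,1), x|z ~ N(z,1); exact posterior q(z|x) = N(x/2, 1/2).
    z = rng.normal(x / 2, np.sqrt(0.5), size=K)      # K samples from q
    log_w = (log_normal(z, 0.0, 1.0)                 # log p(z)
             + log_normal(x, z, 1.0)                 # + log p(x|z)
             - log_normal(z, x / 2, 0.5))            # - log q(z|x)
    # log (1/K) sum_k w_k, computed stably via log-sum-exp
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))

x = 1.3
print(iwae_bound(x), log_normal(x, 0.0, 2.0))  # bound vs. exact log p(x)
```

With an exact posterior every importance weight equals p(x), so the bound is tight with zero variance; with an approximate q the bound is still valid and tightens as K grows, which is what makes the importance weighted objective attractive for training.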
The mouse embryo has long been central to the study of mammalian development; however, elucidating the cell behaviors governing gastrulation and the formation of tissues and organs remains a fundamental challenge. A major obstacle is the lack of live imaging and image analysis technologies capable of systematically following cellular dynamics across the developing embryo. We developed a light-sheet microscope that adapts itself to the dramatic changes in size, shape, and optical properties of the post-implantation mouse embryo and captures its development from gastrulation to early organogenesis at the cellular level. We furthermore developed a computational framework for reconstructing long-term cell tracks, cell divisions, dynamic fate maps, and maps of tissue morphogenesis across the entire embryo. By jointly analyzing cellular dynamics in multiple embryos registered in space and time, we built a dynamic atlas of post-implantation mouse development that, together with our microscopy and computational methods, is provided as a resource.
We present a machine learning–based system for automatically computing interpretable, quantitative measures of animal behavior. Through our interactive system, users encode their intuition about behavior by annotating a small set of video frames. These manual labels are converted into classifiers that can automatically annotate behaviors in screen-scale data sets. Our general-purpose system can create a variety of accurate individual and social behavior classifiers for different organisms, including mice and adult and larval Drosophila.
Naïve Bayes Nearest Neighbour (NBNN) is a simple and effective framework which addresses many of the pitfalls of K-Nearest Neighbour (KNN) classification. It has yielded competitive results on several computer vision benchmarks. Its central tenet is that, during NN search, a query should not be compared against every example in the database while ignoring class information; instead, NN searches are performed within each class, generating a score per class. A key problem with NN techniques, including NBNN, is that they fail when the data representation does not capture perceptual (e.g. class-based) similarity. NBNN circumvents this by using independent engineered descriptors (e.g. SIFT). To extend its applicability outside of image-based domains, we propose to learn a metric which captures perceptual similarity. Similar to how Neighbourhood Components Analysis optimizes a differentiable form of KNN classification, we propose 'Class Conditional' metric learning (CCML), which optimizes a soft form of the NBNN selection rule. Typical metric learning algorithms learn either a global or a local metric. However, our proposed method can be adjusted to a particular level of locality by tuning a single parameter. An empirical evaluation on classification and retrieval tasks demonstrates that our proposed method clearly outperforms existing learned distance metrics across a variety of image and non-image datasets.
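The NBNN selection rule described above can be sketched in a few lines. This is a minimal illustration with toy 2-D descriptors, not the paper's CCML method (which learns the metric; here the metric is fixed squared Euclidean distance): each query descriptor is matched to its nearest neighbour within each class, the per-class distances are summed, and the class with the smallest total wins.

```python
import numpy as np

def nbnn_classify(query_descriptors, class_descriptors):
    """NBNN rule: for each class, sum each query descriptor's squared distance
    to its nearest descriptor *in that class*; predict the class with the
    smallest total. Real NBNN uses engineered local descriptors such as SIFT."""
    scores = {}
    for label, descs in class_descriptors.items():
        # pairwise squared distances, shape (n_query, n_class_descriptors)
        d2 = ((query_descriptors[:, None, :] - descs[None, :, :]) ** 2).sum(-1)
        scores[label] = d2.min(axis=1).sum()  # nearest neighbour per descriptor
    return min(scores, key=scores.get)

# Toy 2-D descriptors for two well-separated classes
classes = {
    "A": np.array([[0.0, 0.0], [0.1, 0.1]]),
    "B": np.array([[5.0, 5.0], [5.1, 4.9]]),
}
query = np.array([[0.2, 0.0], [0.0, 0.3]])
print(nbnn_classify(query, classes))  # -> A
```

CCML replaces the hard `min` over classes with a soft, differentiable form so that the distance metric itself can be optimized, analogous to how NCA softens the KNN rule.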
We propose a framework for detecting action patterns from motion sequences and modeling the sensory-motor relationship of animals, using a generative recurrent neural network. The network has a discriminative part (classifying actions) and a generative part (predicting motion), whose recurrent cells are laterally connected, allowing higher levels of the network to represent high level phenomena. We test our framework on two types of data, fruit fly behavior and online handwriting. Our results show that 1) taking advantage of unlabeled sequences, by predicting future motion, significantly improves action detection performance when training labels are scarce, 2) the network learns to represent high level phenomena such as writer identity and fly gender, without supervision, and 3) simulated motion trajectories, generated by treating motion prediction as input to the network, look realistic and may be used to qualitatively evaluate whether the model has learnt generative control rules.
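The two-headed architecture and the closed-loop simulation described above can be sketched schematically. This is a minimal illustration, not the paper's laterally connected network: one shared recurrent state feeds a discriminative head (action logits) and a generative head (next-motion prediction), and feeding the prediction back in as the next input generates a trajectory. All dimensions and weights are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative dimensions: 2-D motion features, 8 hidden units, 3 action classes.
D_MOTION, D_HIDDEN, N_ACTIONS = 2, 8, 3
W_in  = rng.normal(scale=0.3, size=(D_HIDDEN, D_MOTION))
W_rec = rng.normal(scale=0.3, size=(D_HIDDEN, D_HIDDEN))
W_act = rng.normal(scale=0.3, size=(N_ACTIONS, D_HIDDEN))
W_gen = rng.normal(scale=0.3, size=(D_MOTION, D_HIDDEN))

def step(h, motion):
    h = np.tanh(W_in @ motion + W_rec @ h)  # shared recurrent update
    action_logits = W_act @ h               # discriminative part: classify actions
    next_motion   = W_gen @ h               # generative part: predict motion
    return h, action_logits, next_motion

# Closed-loop simulation: the predicted motion becomes the next input, the
# mechanism the abstract uses to generate simulated trajectories.
h, motion = np.zeros(D_HIDDEN), np.array([1.0, -0.5])
trajectory = []
for _ in range(5):
    h, logits, motion = step(h, motion)
    trajectory.append(motion)
print(np.array(trajectory).shape)  # -> (5, 2)
```

In training, the generative head gives a motion-prediction loss on unlabeled sequences while the discriminative head is trained on the scarce action labels, which is how unlabeled data improves detection performance.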
Recent developments in machine vision methods for automatic, quantitative analysis of social behavior have immensely improved both the scale and level of resolution with which we can dissect interactions between members of the same species. In this paper, we review these methods, with a particular focus on how biologists can apply them to their own work. We discuss several components of machine vision-based analyses: methods to record high-quality video for automated analyses, video-based tracking algorithms for estimating the positions of interacting animals, and machine learning methods for recognizing patterns of interactions. These methods are extremely general in their applicability, and we review a subset of successful applications of them to biological questions in several model systems with very different types of social behaviors.