The Myers Lab is developing algorithms and software for the automatic interpretation of images produced by light and electron microscopy of stained samples, with an emphasis on building 3D and 4D "atlases" of brains, developing organisms, and cellular processes.
While biologists have long been interested in development and anatomy, three recent shifts have transformed the experimental landscape: (1) the sequencing of all the common model organisms, (2) rapid advances in light microscopy, and (3) an expanding panoply of genetically encoded reagents. Together these open up the possibility of mapping anatomy and developmental trajectories in terms of molecular agents on a comprehensive scale. Given such data sets, we further believe it may be possible to begin to make real breakthroughs in understanding how the genomic program creates specific shapes and patterns reproducibly.
While there are significant obstacles to be overcome in terms of imaging technology and molecular reagents, the most significant bottleneck in realizing this vision is the development of robust and reliable algorithms and software for interpreting the information present in the tens of thousands of 3D stacks and time-series movies that such projects produce. The Myers lab began to focus on this informatics specialty after Gene left Celera in 2002, based on the belief that it holds the greatest potential for big breakthroughs in molecular and cellular biology. Today our entire research focus is along these lines, and we have even begun to foray into the development of microscopes and robotics to realize high-throughput pipelines.
The lab is highly collaborative and works with many investigators on computational problems for specific investigations. The list below gives only the broad directions and themes of the group.
A Light-Based Map of Every Neuron in the Fly Brain
Fly brains, after deformable registration to the pattern of neuropil, are stereotypic to within 1-2 microns. By using promoter driver lines and Cre-recombinase constructs, we plan to capture 3D stacks of 100,000 randomly sampled neurons from the fly core brain, which contains about 20,000 neurons. In effect, we are performing a 5X shotgun sampling of the neurons in the fly brain, and with our algorithms and software we expect to be able to provide a model of almost every neuron in the brain, along with information about variance in the structure of the brain.
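The 5X shotgun figure can be sanity-checked with a standard occupancy calculation (the same reasoning behind Lander-Waterman coverage estimates); the function below is purely illustrative and not part of the lab's pipeline:

```python
import math

def expected_coverage(n_neurons, n_samples):
    """Expected fraction of neurons imaged at least once when stacks
    are drawn uniformly at random with replacement."""
    return 1.0 - (1.0 - 1.0 / n_neurons) ** n_samples

# 100,000 random draws from a core brain of ~20,000 neurons (5X):
# approximately 1 - e^-5, i.e. about 99.3% of neurons seen at least once.
frac = expected_coverage(20_000, 100_000)
```

This is why a 5X sampling suffices to cover "almost every neuron": the expected fraction missed decays exponentially with coverage depth.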
Perfect Cell Lineage Tracking Through the First 24 Hours of Development of a Fly or Zebrafish
Next generation SPIM microscopy portends a sampling rate and resolution that may make it possible, with sufficiently sophisticated software, to trace the lineage of every single cell in a developing embryo for about 24 hours. For a fly, this would imply one could monitor development up to the point where the embryo is about to hatch and become a larva. With such a platform in hand, one can then imagine an incredible variety of markers (e.g. every transcription factor) that one might like to monitor through this important developmental arc.
A Comprehensive Library For In-Vivo Monitoring of Intra-Cellular Processes
Monitoring intracellular processes is difficult because of the scale involved. Nonetheless, we have had significant success monitoring things such as centrosome formation, microtubule growth, nuclear envelope breakdown, etc. We continue to solve these kinds of problems on a case-by-case basis with our collaborators, but I still imagine a project in which a set of systematic and comprehensive assays for various cellular functions are realized and widely employed to gain a deeper understanding of the biophysical and signaling processes within a cell.
Diffraction-Limited Imaging of a Mouse Brain Volume in a Day
We have built a multi-photon microscope that is 60 times faster than a conventional multi-photon and have begun engineering an onboard microtome that will enable us to image the entire volume of a mouse brain in six days with no supervision after the initial set up. A next-generation version that we are contemplating may do this in a day. We are currently exploring with collaborators the important questions of (a) how to best prepare samples histologically and (b) what range of experiments one can perform with this capability.
The mechanical variables underlying object localization along the axis of the whisker. The Journal of Neuroscience, 2013
L. Pammer, D. H. O'Connor, A. S. Hires, N. G. Clack, D. Huber, E. W. Myers, and K. Svoboda. The Journal of Neuroscience, 33:6726-41 (2013)
Rodents move their whiskers to locate objects in space. Here we used psychophysical methods to show that head-fixed mice can localize objects along the axis of a single whisker, the radial dimension, with one-millimeter precision. High-speed videography allowed us to estimate the forces and bending moments at the base of the whisker, which underlie radial distance measurement. Mice judged radial object location based on multiple touches. Both the number of touches (1-17) and the forces exerted by the pole on the whisker (up to 573 μN; typical peak amplitude, 100 μN) varied greatly across trials. We manipulated the bending moment and lateral force pressing the whisker against the sides of the follicle and the axial force pushing the whisker into the follicle by varying the compliance of the object during behavior. The behavioral responses suggest that mice use multiple variables (bending moment, axial force, lateral force) to extract radial object localization. Characterization of whisker mechanics revealed that whisker bending stiffness decreases gradually with distance from the face over five orders of magnitude. As a result, the relative amplitudes of different stress variables change dramatically with radial object distance. Our data suggest that mice use distance-dependent whisker mechanics to estimate radial object location using an algorithm that does not rely on precise control of whisking, is robust to variability in whisker forces, and is independent of object compliance and object movement. More generally, our data imply that mice can measure the amplitudes of forces in the sensory follicles for tactile sensation.
The GFP reconstitution across synaptic partners (GRASP) technique, based on functional complementation between two nonfluorescent GFP fragments, can be used to detect the location of synapses quickly, accurately and with high spatial resolution. The method has been previously applied in the nematode and the fruit fly but requires substantial modification for use in the mammalian brain. We developed mammalian GRASP (mGRASP) by optimizing transmembrane split-GFP carriers for mammalian synapses. Using in silico protein design, we engineered chimeric synaptic mGRASP fragments that were efficiently delivered to synaptic locations and reconstituted GFP fluorescence in vivo. Furthermore, by integrating molecular and cellular approaches with a computational strategy for the three-dimensional reconstruction of neurons, we applied mGRASP to both long-range circuits and local microcircuits in the mouse hippocampus and thalamocortical regions, analyzing synaptic distribution in single neurons and in dendritic compartments.
Digital reconstruction of neurons from microscope images is an important and challenging problem in neuroscience. In this paper, we propose a model-based method to tackle this problem. We first formulate a model structure, then develop an algorithm for computing it by carefully taking into account morphological characteristics of neurons, as well as the image properties under typical imaging protocols. The method has been tested on the data sets used in the DIADEM competition and produced promising results for four out of the five data sets.
MOTIVATION: Automatic recognition of cell identities is critical for quantitative measurement, targeting, and manipulation of cells of model animals at single-cell resolution. It has been shown to be a powerful tool for studying gene expression and regulation, cell lineages, and cell fates. Existing methods first segment cells and then apply a recognition algorithm in a second step. As a result, the segmentation errors in the first step directly affect and complicate the subsequent cell recognition step. Moreover, in new experimental settings, some of the image features that have been previously relied upon to recognize cells may not be easy to reproduce, due to limitations on the number of color channels available for fluorescent imaging or to the cost of building transgenic animals. An approach that is more accurate and relies on only a single signal channel is clearly desirable. RESULTS: We have developed a new method, called SRS (for Simultaneous Recognition and Segmentation of cells), and applied it to 3D image stacks of the model organism C. elegans. Given a 3D image stack of the animal and a 3D atlas of target cells, SRS is effectively an atlas-guided voxel classification process: cell recognition is realized by smoothly deforming the atlas to best fit the image, where the segmentation is obtained naturally via classification of all image voxels. The method achieved a 97.7% overall recognition accuracy in recognizing a key class of marker cells, the body wall muscle (BWM) cells, on a data set of 175 C. elegans image stacks containing 14,118 manually curated BWM cells providing the "ground truth" for accuracy. This result was achieved without any additional fiducial image features. SRS also automatically identified 14 of the image stacks as involving ±90-degree rotations. With these stacks excluded from the data set, the recognition accuracy rose to 99.1%. We also show SRS is generally applicable to other cell types, e.g. intestinal cells.
AVAILABILITY: The supplementary movies can be downloaded from our website http://penglab.janelia.org/proj/celegans_seganno. The method has been implemented as a plug-in program within the V3D system (http://penglab.janelia.org/proj/v3d) and will be released in the V3D plugin source code repository.
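The "atlas-guided voxel classification" idea can be illustrated with a deliberately minimal sketch. This is not the SRS implementation: the atlas deformation step is assumed to have already happened, and each voxel simply takes the label of the nearest deformed atlas cell center (the names and coordinates below are hypothetical):

```python
import math

def classify_voxels(voxels, atlas_centers):
    """Toy atlas-guided classification: assign each voxel the label of
    the nearest atlas cell center. The deformable registration that
    places the centers onto the image is not shown."""
    return {v: min(atlas_centers, key=lambda c: math.dist(v, atlas_centers[c]))
            for v in voxels}

# Two hypothetical body-wall-muscle cell centers after deformation:
centers = {"BWM_A": (0.0, 0.0, 0.0), "BWM_B": (10.0, 0.0, 0.0)}
labels = classify_voxels([(1.0, 0.0, 0.0), (9.0, 0.0, 0.0)], centers)
```

In SRS the classification and deformation are coupled and solved jointly; the sketch only shows why a fitted atlas yields a segmentation "for free" once every voxel is classified.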
The V3D system provides three-dimensional (3D) visualization of gigabyte-sized microscopy image stacks in real time on current laptops and desktops. V3D streamlines the online analysis, measurement and proofreading of complicated image patterns by combining ergonomic functions for selecting a location in an image directly in 3D space and for displaying biological measurements, such as from fluorescent probes, using the overlaid surface objects. V3D runs on all major computer platforms and can be enhanced by software plug-ins to address specific biological problems. To demonstrate this extensibility, we built a V3D-based application, V3D-Neuron, to reconstruct complex 3D neuronal structures from high-resolution brain images. V3D-Neuron can precisely digitize the morphology of a single neuron in a fruitfly brain in minutes, with about a 17-fold improvement in reliability and tenfold savings in time compared with other neuron reconstruction tools. Using V3D-Neuron, we demonstrate the feasibility of building a 3D digital atlas of neurite tracts in the fruitfly brain.
Vibrissa-based object localization in head-fixed mice. The Journal of Neuroscience, 2010
D. H. O'Connor, N. G. Clack, D. Huber, T. Komiyama, E. W. Myers, and K. Svoboda. The Journal of Neuroscience, 30:1947-67 (2010)
Linking activity in specific cell types with perception, cognition, and action requires quantitative behavioral experiments in genetic model systems such as the mouse. In head-fixed primates, the combination of precise stimulus control, monitoring of motor output, and physiological recordings over large numbers of trials are the foundation on which many conceptually rich and quantitative studies have been built. Choice-based, quantitative behavioral paradigms for head-fixed mice have not been described previously. Here, we report a somatosensory absolute object localization task for head-fixed mice. Mice actively used their mystacial vibrissae (whiskers) to sense the location of a vertical pole presented to one side of the head and reported with licking whether the pole was in a target (go) or a distracter (no-go) location. Mice performed hundreds of trials with high performance (>90% correct) and localized to <0.95 mm (<6 degrees of azimuthal angle). Learning occurred over 1-2 weeks and was observed both within and across sessions. Mice could perform object localization with single whiskers. Silencing barrel cortex abolished performance to chance levels. We measured whisker movement and shape for thousands of trials. Mice moved their whiskers in a highly directed, asymmetric manner, focusing on the target location. Translation of the base of the whiskers along the face contributed substantially to whisker movements. Mice tended to maximize contact with the go (rewarded) stimulus while minimizing contact with the no-go stimulus. We conjecture that this may amplify differences in evoked neural activity between trial types.
Automatic neuron tracing in volumetric microscopy images with anisotropic path searching. Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2010
J. Xie, T. Zhao, T. Lee, E. Myers, and H. Peng. Medical Image Computing and Computer-Assisted Intervention (MICCAI), 13:472-9 (2010)
Full reconstruction of neuron morphology is of fundamental interest for the analysis and understanding of neuron function. We have developed a novel method capable of tracing neurons in three-dimensional microscopy data automatically. In contrast to template-based methods, the proposed approach makes no assumptions about the shape or appearance of the neuron's body. Instead, an efficient seeding approach is applied to find significant pixels almost certainly within complex neuronal structures, and the tracing problem is solved by computing a graph tree structure connecting these seeds. In addition, an automated neuron comparison method is introduced for performance evaluation and structure analysis. The proposed algorithm is computationally efficient. Experiments on different types of data show promising results.
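The "graph tree structure connecting these seeds" step can be sketched as a minimum spanning tree over seed points. This is a simplification: the paper's tracing uses anisotropic path costs through the image volume, whereas the toy version below substitutes plain Euclidean distance between seeds:

```python
import math

def mst_over_seeds(seeds):
    """Prim's algorithm: connect 3D seed points into a tree, using
    Euclidean edge cost as a stand-in for the paper's path cost.
    Returns a list of (parent_index, child_index) edges."""
    n = len(seeds)
    in_tree = [False] * n
    in_tree[0] = True
    # best[i] = (cheapest known cost to reach seed i, its tree-side endpoint)
    best = [(math.dist(seeds[0], s), 0) for s in seeds]
    edges = []
    for _ in range(n - 1):
        j = min((i for i in range(n) if not in_tree[i]),
                key=lambda i: best[i][0])
        edges.append((best[j][1], j))
        in_tree[j] = True
        for i in range(n):
            if not in_tree[i]:
                d = math.dist(seeds[j], seeds[i])
                if d < best[i][0]:
                    best[i] = (d, j)
    return edges
```

Replacing `math.dist` with a shortest-path cost computed through image intensities would recover the flavor of the actual method: seeds inside bright neurites become cheap to connect, and the resulting tree follows the neuron.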
Automatic alignment (registration) of 3D images of adult fruit fly brains is often influenced by the significant displacement of the relative locations of the two optic lobes (OLs) and the center brain (CB). In one of our ongoing efforts to produce a better image alignment pipeline of adult fruit fly brains, we consider separating CB and OLs and aligning them independently. This paper reports our automatic method to segregate CB and OLs, in particular under conditions where the signal-to-noise ratio (SNR) is low, the variation of the image intensity is large, and the relative displacement of OLs and CB is substantial. We design an algorithm to find a minimum-cost 3D surface in a 3D image stack to best separate an OL (of one side, either left or right) from the CB. This surface is defined as an aggregation of the respective minimum-cost curves detected in each individual 2D image slice. Each curve is defined by a list of control points that best segregate OL and CB. To obtain the locations of these control points, we derive an energy function that includes an image energy term defined by local pixel intensities and two internal energy terms that constrain the curve's smoothness and length. A gradient descent method is used to optimize this energy function. To improve both the speed and robustness of the method, for each stack, the locations of optimized control points in a slice are taken as the initialization prior for the next slice. We have tested this approach on simulated and real 3D fly brain image stacks and demonstrated that this method can reasonably segregate OLs from CBs despite the aforementioned difficulties.
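The energy-minimizing curve described above is in the spirit of a classic active contour (snake). The sketch below is a deliberately reduced version, assuming a callable `img(row, x)` in place of real pixel intensities and including only the image and smoothness terms (the paper also has a length term); the gradient is taken by finite differences:

```python
def segregation_curve(img, n_rows, x0, alpha=1.0, step=0.05, iters=2000):
    """Gradient descent on a toy curve energy: one control point x[r]
    per image row r, pulled toward low values of img(r, x) while a
    smoothness term penalizes jumps between neighboring rows."""
    x = list(x0)
    eps = 1e-4

    def energy(xs):
        e = sum(img(r, xs[r]) for r in range(n_rows))            # image term
        e += alpha * sum((xs[r + 1] - xs[r]) ** 2                # smoothness
                         for r in range(n_rows - 1))
        return e

    for _ in range(iters):
        grad = []
        for r in range(n_rows):                                  # numeric gradient
            xp, xm = x[:], x[:]
            xp[r] += eps
            xm[r] -= eps
            grad.append((energy(xp) - energy(xm)) / (2 * eps))
        x = [xi - step * gi for xi, gi in zip(x, grad)]
    return x

# Synthetic "image": an intensity valley at x = 5 in every row.
valley = lambda r, x: (x - 5.0) ** 2
curve = segregation_curve(valley, 5, [0.0] * 5)
```

The slice-to-slice initialization trick in the paper corresponds to passing the converged `curve` of one slice as `x0` for the next, which both speeds up convergence and keeps the 3D surface coherent.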
The centrosome is a dynamic structure in animal cells that serves as a microtubule organizing center during mitosis and also regulates cell-cycle progression and sets polarity cues. Automated and reliable tracking of centrosomes is essential for genetic screens that study the process of centrosome assembly and maturation in the nematode Caenorhabditis elegans.
The C. elegans cell lineage provides a unique opportunity to look at how cell lineage affects patterns of gene expression. We developed an automatic cell lineage analyzer that converts high-resolution images of worms into a data table showing fluorescence expression with single-cell resolution. We generated expression profiles of 93 genes in 363 specific cells from L1 stage larvae and found that cells with identical fates can be formed by different gene regulatory pathways. Molecular signatures identified repeating cell fate modules within the cell lineage and enabled the generation of a molecular differentiation map that reveals points in the cell lineage when developmental fates of daughter cells begin to diverge. These results demonstrate insights that become possible using computational approaches to analyze quantitative expression from many genes in parallel using a digital gene expression atlas.
Volume-object annotation system (VANO) is a cross-platform image annotation system that enables one to conveniently visualize and annotate 3D volume objects including nuclei and cells. An application of VANO typically starts with an initial collection of objects produced by a segmentation computation. The objects can then be labeled, categorized, deleted, added, split, merged and redefined. VANO has been used to build high-resolution digital atlases of the nuclei of Caenorhabditis elegans at the L1 stage and the nuclei of Drosophila melanogaster's ventral nerve cord at the late embryonic stage. AVAILABILITY: Platform independent executables of VANO, a sample dataset, and a detailed description of both its design and usage are available at research.janelia.org/peng/proj/vano. VANO is open-source for co-development.
MOTIVATION: Caenorhabditis elegans, a roundworm found in soil, is a widely studied model organism with about 1000 cells in the adult. Producing high-resolution fluorescence images of C. elegans to reveal biological insights is becoming routine, motivating the development of advanced computational tools for analyzing the resulting image stacks. For example, worm bodies usually curve significantly in images. Thus one must 'straighten' the worms if they are to be compared under a canonical coordinate system. RESULTS: We develop a worm straightening algorithm (WSA) that restacks cutting planes orthogonal to a 'backbone' that models the anterior-posterior axis of the worm. We formulate the backbone as a parametric cubic spline defined by a series of control points, and develop two methods for automatically determining the locations of the control points. Our experiments show that our approach effectively straightens both 2D and 3D worm images.
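The backbone-and-orthogonal-planes idea can be sketched with a Catmull-Rom segment, a common choice of parametric cubic through control points (the paper's exact spline basis and control-point fitting are not reproduced here):

```python
import math

def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate one parametric cubic (Catmull-Rom) segment at t in [0,1].
    The segment interpolates p1 at t=0 and p2 at t=1; p0 and p3 shape
    the tangents. Works for points of any dimension."""
    t2, t3 = t * t, t * t * t
    return tuple(
        0.5 * (2 * b + (-a + c) * t
               + (2 * a - 5 * b + 4 * c - d) * t2
               + (-a + 3 * b - 3 * c + d) * t3)
        for a, b, c, d in zip(p0, p1, p2, p3))

def backbone_tangent(p0, p1, p2, p3, t, h=1e-5):
    """Finite-difference unit tangent along the backbone; straightening
    restacks cutting planes orthogonal to this direction."""
    a = catmull_rom(p0, p1, p2, p3, t + h)
    b = catmull_rom(p0, p1, p2, p3, t - h)
    v = [(x - y) / (2 * h) for x, y in zip(a, b)]
    norm = math.sqrt(sum(c * c for c in v))
    return tuple(c / norm for c in v)
```

Sampling t densely along consecutive segments traces out the anterior-posterior axis; at each sample the image is resliced in the plane perpendicular to the tangent, and stacking those slices yields the straightened worm.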
Prior Publications (2)
The genome sequence of Drosophila melanogaster. Science (New York, N.Y.), 2000
M D. Adams, S E. Celniker, R A. Holt, C A. Evans, J D. Gocayne, P G. Amanatides, S E. Scherer, P W. Li, R A. Hoskins, R F. Galle, R A. George, S E. Lewis, S. Richards, M. Ashburner, S N. Henderson, G G. Sutton, J R. Wortman, M D. Yandell, Q. Zhang, L X. Chen, R C. Brandon, Y H. Rogers, R G. Blazej, M. Champe, B D. Pfeiffer, K H. Wan, C. Doyle, E G. Baxter, G. Helt, C R. Nelson, G L. Gabor, J F. Abril, A. Agbayani, H J. An, C. Andrews-Pfannkoch, D. Baldwin, R M. Ballew, A. Basu, J. Baxendale, L. Bayraktaroglu, E M. Beasley, K Y. Beeson, P V. Benos, B P. Berman, D. Bhandari, S. Bolshakov, D. Borkova, M R. Botchan, J. Bouck, P. Brokstein, P. Brottier, K C. Burtis, D A. Busam, H. Butler, E. Cadieu, A. Center, I. Chandra, J M. Cherry, S. Cawley, C. Dahlke, L B. Davenport, P. Davies, B. Pablos, A. Delcher, Z. Deng, A D. Mays, I. Dew, S M. Dietz, K. Dodson, L E. Doup, M. Downes, S. Dugan-Rocha, B C. Dunkov, P. Dunn, K J. Durbin, C C. Evangelista, C. Ferraz, S. Ferriera, W. Fleischmann, C. Fosler, A E. Gabrielian, N S. Garg, W M. Gelbart, K. Glasser, A. Glodek, F. Gong, J H. Gorrell, Z. Gu, P. Guan, M. Harris, N L. Harris, D. Harvey, T J. Heiman, J R. Hernandez, J. Houck, D. Hostin, K A. Houston, T J. Howland, M H. Wei, C. Ibegwam, M. Jalali, F. Kalush, G H. Karpen, Z. Ke, J A. Kennison, K A. Ketchum, B E. Kimmel, C D. Kodira, C. Kraft, S. Kravitz, D. Kulp, Z. Lai, P. Lasko, Y. Lei, A A. Levitsky, J. Li, Z. Li, Y. Liang, X. Lin, X. Liu, B. Mattei, T C. McIntosh, M P. McLeod, D. McPherson, G. Merkulov, N V. Milshina, C. Mobarry, J. Morris, A. Moshrefi, S M. Mount, M. Moy, B. Murphy, L. Murphy, D M. Muzny, D L. Nelson, D R. Nelson, K A. Nelson, K. Nixon, D R. Nusskern, J M. Pacleb, M. Palazzolo, G S. Pittman, S. Pan, J. Pollard, V. Puri, M G. Reese, K. Reinert, K. Remington, R D. Saunders, F. Scheeler, H. Shen, B C. Shue, I. Sidén-Kiamos, M. Simpson, M P. Skupski, T. Smith, E. Spier, A C. Spradling, M. Stapleton, R. Strong, E. Sun, R. Svirskas, C. Tector, R. Turner, E. 
Venter, A H. Wang, X. Wang, Z Y. Wang, D A. Wassarman, G M. Weinstock, J. Weissenbach, S M. Williams, S M. Williams, K C. Worley, D. Wu, S. Yang, Q A. Yao, J. Ye, R F. Yeh, J S. Zaveri, M. Zhan, G. Zhang, Q. Zhao, L. Zheng, X H. Zheng, F N. Zhong, W. Zhong, X. Zhou, S. Zhu, X. Zhu, H O. Smith, R A. Gibbs, E W. Myers, G M. Rubin, and J C. Venter Science (New York, N.Y.), 287:2185-95 (2000)
The fly Drosophila melanogaster is one of the most intensively studied organisms in biology and serves as a model system for the investigation of many developmental and cellular processes common to higher eukaryotes, including humans. We have determined the nucleotide sequence of nearly all of the approximately 120-megabase euchromatic portion of the Drosophila genome using a whole-genome shotgun sequencing strategy supported by extensive clone-based sequence and a high-quality bacterial artificial chromosome physical map. Efforts are under way to close the remaining gaps; however, the sequence is of sufficient accuracy and contiguity to be declared substantially complete and to support an initial analysis of genome structure and preliminary gene annotation and interpretation. The genome encodes approximately 13,600 genes, somewhat fewer than the smaller Caenorhabditis elegans genome, but with comparable functional diversity.