How does electrical activity in neuronal circuits give rise to intelligent behavior? To answer this question, we are pursuing two synergistic research directions.
First, we are reconstructing vertebrate and invertebrate wiring diagrams from electron microscopy data. Second, we are developing a theory of neuronal computation. Our interdisciplinary approach takes advantage of recent advances in applied mathematics, statistical learning theory, and optimization theory. We believe that progress in both directions will enable us to map the components of neuronal circuits onto the mathematical steps of computational algorithms.
How does such a distributed system of relatively simple components compute? We address this question by reconstructing the network's wiring diagram and developing the theory of neuronal computation.
Wiring diagram reconstruction, also known as connectomics, is a challenging problem because identifying synapses requires few-nanometer resolution, resulting in terapixel-scale data sets. As manual analysis of such data sets is infeasible, we are developing automated algorithms for circuit reconstruction using computer vision and machine learning. Our reconstruction effort focuses on the fly circuits underlying vision.
What kind of mathematics is needed to describe neuronal computation? Many aspects of brain architecture, such as overcompleteness and sparseness, point to sparse redundant representations as the lingua franca of the brain. We are mapping mathematical algorithms involving sparse representations onto known brain circuits.
The placement of neuronal cell bodies relative to the neuropil differs among species and brain areas. Cell bodies can be either embedded, as in mammalian cortex, or segregated, as in invertebrates and some other vertebrate brain areas. Why are there such different arrangements? Here we suggest that the observed arrangements may simply be a reflection of wiring economy, a general principle that tends to reduce the total volume of the neuropil and hence the volume of the inclusions in it. Specifically, we suggest that the choice of embedded versus segregated arrangement is determined by which neuronal component, the cell body or the neurite connecting the cell body to the arbor, has the smaller volume. Our quantitative predictions are in agreement with existing and new measurements.
Recent results have shown the possibility of both reconstructing connectomes of small but biologically interesting circuits and extracting from these connectomes insights into their function. However, these reconstructions were heroic proof-of-concept experiments, requiring person-months of effort per neuron reconstructed, and will not scale to larger circuits, much less the brains of entire animals. In this paper we examine what will be required to generate and use substantially larger connectomes, finding five areas that need increased attention: firstly, imaging better suited to automatic reconstruction, with excellent z-resolution; secondly, automatic detection, validation, and measurement of synapses; thirdly, reconstruction methods that keep and use uncertainty metrics for every object, from initial images, through segmentation, reconstruction, and connectome queries; fourthly, processes that are fully incremental, so that the connectome may be used before it is fully complete; and finally, better tools for analysis of connectomes, once they are obtained.
We aim to improve segmentation through the use of machine learning tools during region agglomeration. We propose an active learning approach for performing hierarchical agglomerative segmentation from superpixels. Our method combines multiple features at all scales of the agglomerative process, works for data with an arbitrary number of dimensions, and scales to very large datasets. We advocate the use of variation of information to measure segmentation accuracy, particularly in 3D electron microscopy (EM) images of neural tissue, and using this metric demonstrate an improvement over competing algorithms in EM and natural images.
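The variation of information score advocated above as a segmentation accuracy metric can be computed directly from label counts. Below is a minimal sketch with toy labelings (not EM data), assuming only that both segmentations label the same set of pixels:

```python
import math
from collections import Counter

def variation_of_information(seg_a, seg_b):
    """Variation of information between two labelings of the same pixels.

    VI(A, B) = H(A|B) + H(B|A) = H(A) + H(B) - 2 I(A; B); 0 means the
    segmentations agree up to a renaming of labels.
    """
    n = len(seg_a)
    assert n == len(seg_b) and n > 0
    p_a = Counter(seg_a)                   # marginal label counts
    p_b = Counter(seg_b)
    p_ab = Counter(zip(seg_a, seg_b))      # joint label counts
    h_a = -sum(c / n * math.log2(c / n) for c in p_a.values())
    h_b = -sum(c / n * math.log2(c / n) for c in p_b.values())
    # mutual information from the joint distribution
    mi = sum(c / n * math.log2(c * n / (p_a[a] * p_b[b]))
             for (a, b), c in p_ab.items())
    return h_a + h_b - 2 * mi

# identical segmentations (up to label names) have VI = 0
print(variation_of_information([1, 1, 2, 2], [5, 5, 7, 7]))
```

In practice the labelings would be flattened 3D segment-ID volumes; the metric penalizes both splits and merges, in bits.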
We are interested in establishing the correspondence between neuronal activity and body curvature during various movements of C. elegans worms. Given long sequences of images, recorded so that neurons glow when active, the task is to track all identifiable neurons in each frame. The characteristics of the neuron data, e.g., the uninformative nature of neuron appearance and the sequential ordering of neurons, render standard single- and multi-object tracking methods either ineffective or unnecessary for our task. In this paper, we propose a multi-target tracking algorithm that correctly assigns each neuron to one of several candidate locations in the next frame while preserving a shape constraint. The results demonstrate that the proposed method can robustly track more neurons than several existing methods in long image sequences.
A visual motion detection circuit suggested by Drosophila connectomics. Nature, 2013
S. Takemura, A. Bharioke, Z. Lu, A. Nern, S. Vitaladevuni, P. K. Rivlin, W. T. Katz, D. J. Olbris, S. M. Plaza, P. Winston, T. Zhao, J. Horne, R. D. Fetter, S. Takemura, K. Blazek, L. Chang, O. Ogundeyi, M. A. Saunders, V. Shapiro, C. Sigmund, G. M. Rubin, L. K. Scheffer, I. A. Meinertzhagen, and D. B. Chklovskii. Nature, 500:175–181 (2013)
Animal behaviour arises from computations in neuronal circuits, but our understanding of these computations has been frustrated by the lack of detailed synaptic connection maps, or connectomes. For example, despite intensive investigations over half a century, the neuronal implementation of local motion detection in the insect visual system remains elusive. Here we develop a semi-automated pipeline using electron microscopy to reconstruct a connectome, containing 379 neurons and 8,637 chemical synaptic contacts, within the Drosophila optic medulla. By matching reconstructed neurons to examples from light microscopy, we assigned neurons to cell types and assembled a connectome of the repeating module of the medulla. Within this module, we identified cell types constituting a motion detection circuit, and showed that the connections onto individual motion-sensitive neurons in this circuit were consistent with their direction selectivity. Our results identify cellular targets for future functional investigations, and demonstrate that connectomes can provide key insights into neuronal computations.
Proprioceptive Coupling within Motor Neurons Drives C. elegans Forward Locomotion. Neuron, 2012
Q. Wen, M. D. Po, E. Hulme, S. Chen, X. Liu, S. Kwok, M. Gershow, A. M. Leifer, V. Butler, C. Fang-Yen, T. Kawano, W. R. Schafer, G. Whitesides, M. Wyart, D. B. Chklovskii, and A. D. T. Samuel. Neuron, 76:750–761 (2012)
Our brains are capable of remarkably stable stimulus representations despite time-varying neural activity. For instance, during the delay periods of working memory tasks, while stimuli are held in working memory, neurons in the prefrontal cortex thought to support the memory representation exhibit time-varying activity. Since neuronal activity encodes the stimulus, its time-varying dynamics appears paradoxical and incompatible with stable network stimulus representations. Indeed, this finding raises a fundamental question: can stable representations only be encoded with stable neural activity, or, as a corollary, is every change in activity a sign of a change in stimulus representation?
Computing sparse redundant representations is an important problem in both applied mathematics and neuroscience. In many applications, this problem must be solved in an energy-efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating via low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, the operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We show that the numerical performance of HDA is on par with that of existing algorithms. In the asymptotic regime, the representation error of HDA decays with time, t, as 1/t. HDA is stable against time-varying noise; specifically, the representation error decays as 1/√t for Gaussian white noise.
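HDA itself interleaves analog gradient steps with quantized spike communication; as a simpler, fully analog stand-in, the sketch below solves the same sparse-coding objective, min ½‖x − Φa‖² + λ‖a‖₁, with the classic ISTA iteration. The dictionary, stimulus, and parameters are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy overcomplete dictionary (50 atoms in 20 dimensions), unit-norm columns
Phi = rng.standard_normal((20, 50))
Phi /= np.linalg.norm(Phi, axis=0)

# a stimulus with a 3-sparse ground-truth code
x = Phi[:, :3] @ np.array([1.0, 0.8, 0.6])

lam = 0.05                          # sparsity penalty
L = np.linalg.norm(Phi, 2) ** 2     # Lipschitz constant of the gradient
a = np.zeros(50)
for _ in range(2000):
    a = a + Phi.T @ (x - Phi @ a) / L                      # gradient step on the quadratic term
    a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold

print(np.count_nonzero(np.abs(a) > 1e-3), np.linalg.norm(x - Phi @ a))
```

The soft-threshold step drives most coefficients to exactly zero, so the recovered code is sparse while the reconstruction error stays small.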
Early stages of sensory systems face the challenge of compressing information from numerous receptors onto a much smaller number of projection neurons, a so-called communication bottleneck. To make more efficient use of limited bandwidth, compression may be achieved using predictive coding, whereby predictable, or redundant, components of the stimulus are removed. In the case of the retina, Srinivasan et al. (1982) suggested that feedforward inhibitory connections subtracting a linear prediction generated from nearby receptors implement such compression, resulting in biphasic center-surround receptive fields. However, feedback inhibitory circuits are common in early sensory circuits and, furthermore, their dynamics may be nonlinear. Can such circuits implement predictive coding as well? Here, by solving the transient dynamics of nonlinear reciprocal feedback circuits through an analogy to a signal-processing algorithm called linearized Bregman iteration, we show that nonlinear predictive coding can be implemented in an inhibitory feedback circuit. In response to a step stimulus, interneuron activity in time constructs progressively less sparse but more accurate representations of the stimulus, a temporally evolving prediction. This analysis provides a powerful theoretical framework to interpret and understand the dynamics of early sensory processing in a variety of physiological experiments and yields novel predictions regarding the relation between activity and stimulus statistics.
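The linearized Bregman iteration mentioned above can be sketched in a few lines: an internal accumulator integrates the coding residual and a shrinkage nonlinearity produces the output, so early iterates are sparse and late iterates are accurate. All dimensions, weights, and parameters below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
Phi = rng.standard_normal((20, 50))       # hypothetical receptor-to-interneuron weights
Phi /= np.linalg.norm(Phi, axis=0)
x = Phi[:, [2, 7]] @ np.array([1.5, 1.0])  # step stimulus with a 2-atom code

lam, step = 0.5, 0.05
v = np.zeros(50)          # internal accumulators (interneuron "membranes")
history = []
for _ in range(500):
    # output: shrinkage (soft threshold) of the accumulated drive
    a = step * np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
    history.append((np.count_nonzero(a), np.linalg.norm(x - Phi @ a)))
    v += Phi.T @ (x - Phi @ a)            # accumulate the coding residual
```

Tracking `history` shows the behavior described in the abstract: the representation starts empty (all accumulators below threshold), then recruits units and shrinks the residual over time.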
Large-scale automated histology in the pursuit of connectomes. The Journal of Neuroscience, 2011
D. Kleinfeld, A. Bharioke, P. Blinder, D. D. Bock, K. L. Briggman, D. B. Chklovskii, W. Denk, M. Helmstaedter, J. P. Kaufhold, W. Lee, H. S. Meyer, K. D. Micheva, M. Oberlaender, S. Prohaska, C. R. Reid, S. J. Smith, S. Takemura, P. S. Tsai, and B. Sakmann. The Journal of Neuroscience, 31:16125–16138 (2011)
How does the brain compute? Answering this question necessitates neuronal connectomes, annotated graphs of all synaptic connections within defined brain areas. Further, understanding the energetics of the brain's computations requires vascular graphs. The assembly of a connectome requires sensitive hardware tools to measure neuronal and neurovascular features in all three dimensions, as well as software and machine learning for data analysis and visualization. We present the state of the art on the reconstruction of circuits and vasculature that link brain anatomy and function. Analysis at the scale of tens of nanometers yields connections between identified neurons, while analysis at the micrometer scale yields probabilistic rules of connection between neurons and exact vascular connectivity.
High-resolution segmentation of neuronal tissues from low depth-resolution EM imagery. 8th International Conference on Energy Minimization Methods in Computer Vision and Pattern Recognition (EMMCVPR), 2011
D. Glasner, T. Hu, J. Nunez-Iglesias, L. Scheffer, S. Xu, H. Hess, R. Fetter, D. Chklovskii, and R. Basri. 8th International Conference on Energy Minimization Methods in Computer Vision and Pattern Recognition (EMMCVPR), 2011
Despite recent interest in reconstructing neuronal networks, complete wiring diagrams on the level of individual synapses remain scarce, and the insights into function they can provide remain unclear. Even for Caenorhabditis elegans, whose neuronal network is relatively small and stereotypical from animal to animal, published wiring diagrams are neither accurate, complete, nor self-consistent. Using materials from White et al. and new electron micrographs, we assemble whole, self-consistent gap junction and chemical synapse networks of hermaphrodite C. elegans. We propose a method to visualize the wiring diagram, which reflects network signal flow. We calculate statistical and topological properties of the network, such as degree distributions, synaptic multiplicities, and small-world properties, that help in understanding network signal propagation. We identify neurons that may play central roles in information processing, and network motifs that could serve as functional modules of the network. We explore propagation of neuronal activity in response to sensory or artificial stimulation using linear systems theory and find several activity patterns that could serve as substrates of previously described behaviors. Finally, we analyze the interaction between the gap junction and the chemical synapse networks. Since several statistical properties of the C. elegans network, such as multiplicity and motif distributions, are similar to those found in mammalian neocortex, they likely point to general principles of neuronal networks. The wiring diagram reported here can help in understanding the mechanistic basis of behavior by generating predictions about future experiments involving genetic perturbations, laser ablations, or monitoring propagation of neuronal activity in response to stimulation.
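Several of the statistics above, such as synaptic multiplicities and degree distributions, reduce to simple counting over a list of synaptic contacts. A minimal sketch: the neuron names are real C. elegans identifiers, but the contact list itself is invented for illustration:

```python
from collections import Counter

# a toy directed chemical-synapse list: one (pre, post) entry per synaptic contact
contacts = [("AVAL", "DA1"), ("AVAL", "DA1"), ("AVAL", "DA2"),
            ("AVBR", "DB3"), ("PVCL", "DB3"), ("AVAL", "DA1")]

multiplicity = Counter(contacts)        # synapses per connected pair
connections = set(contacts)             # binary wiring diagram (pair connected or not)
out_degree = Counter(pre for pre, _ in connections)
in_degree = Counter(post for _, post in connections)

print(dict(multiplicity))               # e.g. ('AVAL', 'DA1') appears 3 times
print(out_degree["AVAL"], in_degree["DB3"])
```

The distinction between `multiplicity` and `connections` mirrors the abstract's distinction between synaptic multiplicities and the underlying binary wiring diagram.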
Reconstructing neuronal circuits at the level of synapses is a central problem in neuroscience, and the focus of the nascent field of connectomics. Previously used to reconstruct the C. elegans wiring diagram, serial-section transmission electron microscopy (ssTEM) is a proven technique for the task. However, to reconstruct more complex circuits, ssTEM will require the automation of image processing. We review progress in the processing of electron microscopy images and, in particular, a semi-automated reconstruction pipeline deployed at Janelia Farm. Drosophila circuits underlying identified behaviors are being reconstructed in the pipeline with the goal of generating a complete Drosophila connectome.
Increasing depth resolution of electron microscopy of neural circuits using sparse tomographic reconstruction. Computer Vision and Pattern Recognition (CVPR), 2010
A. Veeraraghavan, A. V. Genkin, S. Vitaladevuni, L. Scheffer, S. Xu, H. Hess, R. Fetter, M. Cantoni, G. Knott, and D. B. Chklovskii. Computer Vision and Pattern Recognition (CVPR), 2010
Complete reconstructions of vertebrate neuronal circuits on the synaptic level require new approaches. Here, serial section transmission electron microscopy was automated to densely reconstruct four volumes, totaling 670 μm³, from the rat hippocampus as proving grounds to determine when axo-dendritic proximities predict synapses. First, in contrast with Peters' rule, the density of axons within reach of dendritic spines did not predict synaptic density along dendrites because the fraction of axons making synapses was variable. Second, an axo-dendritic touch did not predict a synapse; nevertheless, the density of synapses along a hippocampal dendrite appeared to be a universal fraction, 0.2, of the density of touches. Finally, the largest touch between an axonal bouton and spine indicated the site of actual synapses with about 80% precision but would miss about half of all synapses. Thus, it will be difficult to predict synaptic connectivity using data sets missing ultrastructural details that distinguish between axo-dendritic touches and bona fide synapses.
Maximization of the connectivity repertoire as a statistical principle governing the shapes of dendritic arbors. Proceedings of the National Academy of Sciences of the United States of America, 2009
Q. Wen, A. Stepanyants, G. N. Elston, A. Y. Grosberg, and D. B. Chklovskii. Proceedings of the National Academy of Sciences of the United States of America, 106:12536–12541 (2009)
The shapes of dendritic arbors are fascinating and important, yet the principles underlying these complex and diverse structures remain unclear. Here, we analyzed basal dendritic arbors of 2,171 pyramidal neurons sampled from mammalian brains and discovered 3 statistical properties: the dendritic arbor size scales with the total dendritic length, the spatial correlation of dendritic branches within an arbor has a universal functional form, and small parts of an arbor are self-similar. We proposed that these properties result from maximizing the repertoire of possible connectivity patterns between dendrites and surrounding axons while keeping the cost of dendrites low. We solved this optimization problem by drawing an analogy with maximization of the entropy for a given energy in statistical physics. The solution is consistent with the above observations and predicts scaling relations that can be tested experimentally. In addition, our theory explains why dendritic branches of pyramidal cells are distributed more sparsely than those of Purkinje cells. Our results represent a step toward a unifying view of the relationship between neuronal morphology and function.
Over hundreds of millions of years, evolution has optimized brain design to maximize its functionality while minimizing costs associated with building and maintenance. This observation suggests that one can use optimization theory to rationalize various features of brain design. Here, we attempt to explain the dimensions and branching structure of dendritic arbors by minimizing dendritic cost for given potential synaptic connectivity. Assuming only that dendritic cost increases with total dendritic length and path length from synapses to soma, we find that branching, planar, and compact dendritic arbors, such as those belonging to Purkinje cells in the cerebellum, are optimal. The theory predicts that adjacent Purkinje dendritic arbors should spatially segregate. In addition, we propose two explicit cost function expressions, falsifiable by measuring dendritic caliber near bifurcations.
Prior Publications (20)
How different is local cortical circuitry from a random network? To answer this question, we probed synaptic connections with several hundred simultaneous quadruple whole-cell recordings from layer 5 pyramidal neurons in the rat visual cortex. Analysis of this dataset revealed several nonrandom features in synaptic connectivity. We confirmed previous reports that bidirectional connections are more common than expected in a random network. We found that several highly clustered three-neuron connectivity patterns are overrepresented, suggesting that connections tend to cluster together. We also analyzed synaptic connection strength as defined by the peak excitatory postsynaptic potential amplitude. We found that the distribution of synaptic connection strength differs significantly from the Poisson distribution and can be fitted by a lognormal distribution. Such a distribution has a heavier tail and implies that synaptic weight is concentrated among few synaptic connections. In addition, the strengths of synaptic connections sharing pre- or postsynaptic neurons are correlated, implying that strong connections are even more clustered than the weak ones. Therefore, the local cortical network structure can be viewed as a skeleton of stronger connections in a sea of weaker ones. Such a skeleton is likely to play an important role in network dynamics and should be investigated further.
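The practical consequence of a lognormal strength distribution, that synaptic weight concentrates in a few strong connections, is easy to illustrate numerically. In the toy sample below the lognormal parameters are illustrative, not fitted to the recordings described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical lognormal EPSP amplitudes (mV); parameters illustrative only
epsp = rng.lognormal(mean=-0.7, sigma=0.9, size=10_000)

w = np.sort(epsp)[::-1]
top20_share = w[:2_000].sum() / w.sum()   # weight carried by the strongest 20%
print(f"top 20% of connections carry {top20_share:.0%} of total synaptic weight")
```

With these parameters the strongest fifth of connections carries roughly half of the total weight, the "skeleton of stronger connections in a sea of weaker ones" described above; a Poisson-like distribution would show no such concentration.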
The advent of high-quality 3D reconstructions of neuronal arbors has revived the hope of inferring synaptic connectivity from the geometric shapes of axons and dendrites, or 'neurogeometry'. A quantitative description of connectivity must be built on a sound theoretical framework. Here, we review recent developments in neurogeometry that can provide such a framework. We base the geometric description of connectivity on the concept of a 'potential synapse'--the close apposition between axons and dendrites necessary to form an actual synapse. In addition to describing potential synaptic connectivity in neuronal circuits, neurogeometry provides insight into basic features of functional connectivity, such as specificity and plasticity.
Can neuronal morphology predict functional synaptic circuits? In the rat barrel cortex, 'barrels' and 'septa' delineate an orderly matrix of cortical columns. Using quantitative laser scanning photostimulation we measured the strength of excitatory projections from layer 4 (L4) and L5A to L2/3 pyramidal cells in barrel- and septum-related columns. From morphological reconstructions of excitatory neurons we computed the geometric circuit predicted by axodendritic overlap. Within most individual projections, functional inputs were predicted by geometry and a single scale factor, the synaptic strength per potential synapse. This factor, however, varied between projections and, in one case, even within a projection, up to 20-fold. Relationships between geometric overlap and synaptic strength thus depend on the laminar and columnar locations of both the pre- and postsynaptic neurons, even for neurons of the same type. A large plasticity potential appears to be incorporated into these circuits, allowing for functional 'tuning' with fixed axonal and dendritic arbor geometry.
A ubiquitous feature of vertebrate anatomy is the segregation of the brain into white and gray matter. Assuming that evolution maximized brain functionality, what is the reason for such segregation? To answer this question, we posit that brain functionality requires high interconnectivity and short conduction delays. Based on this assumption, we searched for the optimal brain architecture by comparing different candidate designs. We found that the optimal design depends on the number of neurons, interneuronal connectivity, and axon diameter. In particular, the requirement to connect neurons with many fast axons drives the segregation of the brain into white and gray matter. These results provide a possible explanation for the structure of various regions of the vertebrate brain, such as the mammalian neocortex and neostriatum, the avian telencephalon, and the spinal cord.
Brain function relies on specificity of synaptic connectivity patterns among different classes of neurons. Yet, the substrates of specificity in complex neuropil remain largely unknown. We search for imprints of specificity in the layout of axonal and dendritic arbors from the rat neocortex. An analysis of 3D reconstructions of pairs consisting of pyramidal cells (PCs) and GABAergic interneurons (GIs) revealed that the layout of GI axons is specific. This specificity is manifested in a relatively high tortuosity, small branch length of these axons, and correlations of their trajectories with the positions of postsynaptic neuron dendrites. Axons of PCs show no such specificity, usually taking a relatively straight course through neuropil. However, wiring patterns among PCs hold a large potential for circuit remodeling and specificity through growth and retraction of dendritic spines. Our results define distinct class-specific rules in establishing synaptic connectivity, which could be crucial in formulating a canonical cortical circuit.
Neurons often possess elaborate axonal and dendritic arbors. Why do these arbors exist and what determines their form and dimensions? To answer these questions, I consider the wiring up of a large highly interconnected neuronal network, such as the cortical column. Implementation of such a network in the allotted volume requires all the salient features of neuronal morphology: the existence of branching dendrites and axons and the presence of dendritic spines. Therefore, the requirement of high interconnectivity is, in itself, sufficient to account for the existence of these features. Moreover, the actual lengths of axons and dendrites are close to the smallest possible length for a given interconnectivity, arguing that high interconnectivity is essential for cortical function.
Does the C. elegans nervous system contain multi-neuron computational modules that perform stereotypical functions? We attempt to answer this question by searching for recurring multi-neuron inter-connectivity patterns in the C. elegans nervous system's wiring diagram.
In mammalian visual cortex, neurons are organized according to their functional properties into multiple maps such as retinotopic, ocular dominance, orientation preference, direction of motion, and others. What determines the organization of cortical maps? We argue that cortical maps reflect neuronal connectivity in intracortical circuits. Because connecting distant neurons requires costly wiring (i.e., axons and dendrites), there is an evolutionary pressure to place connected neurons as close to each other as possible. Then, cortical maps may be viewed as solutions that minimize wiring cost for given intracortical connectivity. These solutions can help us in inferring intracortical connectivity and, ultimately, in understanding the function of the visual system.
Evolution perfected brain design by maximizing its functionality while minimizing the costs associated with building and maintaining it. The assumption that brain functionality is specified by neuronal connectivity, implemented by costly biological wiring, leads to the following optimal design problem: for a given neuronal connectivity, find a spatial layout of neurons that minimizes the wiring cost. Unfortunately, this problem is difficult to solve because the number of possible layouts is often astronomically large. We argue that the wiring cost may scale as wire length squared, reducing the optimal layout problem to a constrained minimization of a quadratic form. For biologically plausible constraints, this problem has exact analytical solutions, which give reasonable approximations to actual layouts in the brain. These solutions make the inverse problem of inferring neuronal connectivity from neuronal layout more tractable.
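The reduction to a quadratic form can be made concrete. With cost Σᵢⱼ Wᵢⱼ(xᵢ − xⱼ)², minimizing over 1-D layouts subject to zero-mean, unit-norm constraints picks out the eigenvector of the second-smallest eigenvalue of the graph Laplacian. A toy sketch, with connectivity invented for illustration (a path whose node labels are scrambled):

```python
import numpy as np

# toy symmetric wiring: a path 0-2-4-1-3 written with scrambled labels
edges = [(0, 2), (2, 4), (4, 1), (1, 3)]
W = np.zeros((5, 5))
for i, j in edges:
    W[i, j] = W[j, i] = 1.0

# wiring cost sum_ij W_ij (x_i - x_j)^2 = 2 x^T L x, with L the graph Laplacian
L = np.diag(W.sum(axis=1)) - W

# minimizing x^T L x subject to sum(x) = 0 and ||x|| = 1 yields the
# eigenvector of the second-smallest eigenvalue (the Fiedler vector)
eigvals, eigvecs = np.linalg.eigh(L)
layout = eigvecs[:, 1]

# sorting neurons by their 1-D coordinate recovers the hidden path order
print(np.argsort(layout))
```

Connected neurons end up adjacent in the optimal layout, which is the sense in which such analytical solutions approximate actual layouts and make connectivity inferable from position.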
Axon calibers vary widely among different animals, neuron classes, and even within the same neuron. What determines the diameter of axon branches?
Wiring a brain presents a formidable problem because neural circuits require an enormous number of fast and durable connections. We propose that evolution was likely to have optimized neural circuits to minimize conduction delays in axons, passive cable attenuation in dendrites, and the length of "wire" used to construct circuits, and to have maximized the density of synapses. Here we ask the question: "What fraction of the volume should be taken up by axons and dendrites (i.e., wire) when these variables are at their optimal values?" The biophysical properties of axons and dendrites dictate that wire should occupy 3/5 of the volume in an optimally wired gray matter. We have measured the fraction of the volume occupied by each cellular component and find that the volume of wire is close to the predicted optimal value.
Changes in synaptic connectivity patterns through the formation and elimination of dendritic spines may contribute to structural plasticity in the brain. We characterize this contribution quantitatively by estimating the number of different synaptic connectivity patterns attainable without major arbor remodeling. This number depends on the ratio of the synapses on a dendrite to the axons that pass within a spine length of that dendrite. We call this ratio the filling fraction and calculate it from geometrical analysis and anatomical data. The filling fraction is 0.26 in mouse neocortex, 0.22-0.34 in rat hippocampus. In the macaque visual cortex, the filling fraction increases by a factor of 1.6-1.8 from area V1 to areas V2, V4, and 7a. Since the filling fraction is much smaller than 1, spine remodeling can make a large contribution to structural plasticity.
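The count behind this estimate is, to a first approximation, a binomial coefficient: the number of ways to choose the actual synapses from the potential ones. A sketch with hypothetical counts; only the filling fraction 0.26 comes from the text, and the 100 potential synapses are an assumed round number:

```python
import math

def log2_binom(n, k):
    """log2 of C(n, k) via log-gamma, stable for large n."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1)
            - math.lgamma(n - k + 1)) / math.log(2)

# hypothetical numbers: 100 potential synapses (axons within spine reach of a
# dendritic segment), of which a filling fraction f = 0.26 are actual synapses
potential = 100
f = 0.26
actual = round(f * potential)

patterns_bits = log2_binom(potential, actual)   # pattern repertoire, in bits
print(f"~2^{patterns_bits:.0f} distinct connectivity patterns")
```

Because f is well below 1, the repertoire is close to its maximum of one binary-entropy bit, H(f) ≈ 0.83, per potential synapse, which is why spine remodeling can contribute so much structural plasticity.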
A position is available on an exciting interdisciplinary project to reconstruct brain circuits from electron microscopy data. The successful candidate will work with computer scientists, biologists, and other members of the Chklovskii Lab to develop and implement computer vision and machine learning algorithms for segmentation and recognition. The candidate must have a solid understanding of machine learning and image processing. Prior experience with biomedical image analysis is a big plus; a track record of publications in peer-reviewed computer vision conferences and journals is strongly preferred. The ideal candidate should have a PhD in computer science or electrical engineering and experience working with groups from other fields. On the software side, the job requires developing sophisticated and reliable programs in MATLAB and C/C++, and skills with these languages are essential. Proven skills in software engineering, particularly implementing sizeable scientific computing software in a group development environment, are very helpful. Familiarity with Python and graphics/user interface toolkits such as VTK and OpenGL is useful.
The Howard Hughes Medical Institute's Janelia Farm Research Campus is a unique, world-class research community in the Washington, D.C. area. Over the next four years, Janelia Farm Research Campus (JFRC) will grow to over 400 employees, to include top researchers: biologists, physicists, chemists, engineers, and computer scientists focused on brain research in a uniquely supportive campus environment. HHMI offers a competitive salary and excellent benefits package. For consideration, please forward your resume in confidence to firstname.lastname@example.org. Please include a cover letter detailing previous research experience and three references. Please also include job title in the subject line. To learn more about HHMI and Janelia Farm visit www.janelia.org. HHMI is an Equal Opportunity Employer.
Contact: Dmitri Chklovskii
A position is available in the Chklovskii group to work on the function of recently reconstructed brain circuits involved in vision, other sensory modalities, and locomotion. The ideal candidate should have a PhD in theoretical neuroscience, statistics, applied math, computer science, electrical engineering, or physics.
HHMI offers a competitive salary and excellent benefits package, and is an equal opportunity employer. For consideration, please forward your CV to Dmitri Chklovskii.