I was born in St. Petersburg, Russia, and studied physics and engineering in college. I moved to the US in 1989 and obtained a PhD in theoretical physics from MIT in 1994. After serving as a Junior Fellow in the Harvard Society of Fellows, I switched to theoretical neurobiology and was a Sloan Fellow at the Salk Institute. In 1999, I founded a theoretical neuroscience group at Cold Spring Harbor Laboratory, where I was an Assistant and then Associate Professor. I moved to Janelia in 2007. My main interest is in building simple but powerful theories of brain structure and function.
We are interested in establishing the correspondence between neuronal activity and body curvature during various movements of C. elegans worms. Given long sequences of images in which neurons glow when active, we must track all identifiable neurons in each frame. The characteristics of the neuron data, e.g., the uninformative appearance of individual neurons and their sequential ordering along the body, render standard single- and multi-object tracking methods either ineffective or unnecessary for our task. In this paper, we propose a multi-target tracking algorithm that correctly assigns each neuron to one of several candidate locations in the next frame while preserving shape constraints. The results demonstrate that the proposed method robustly tracks more neurons than several existing methods in long image sequences.
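The order-preserving flavor of this assignment problem can be illustrated with a simple dynamic program. This is a sketch only — the paper's actual algorithm and cost model are richer — but it shows how exploiting the sequential ordering of neurons turns assignment into a tractable monotone-matching problem: neurons ordered along the body must be matched to candidate detections ordered along the centerline, with candidate indices strictly increasing.

```python
def monotone_assign(cost):
    """Assign ordered neurons (rows) to ordered candidate locations (columns)
    so that candidate indices strictly increase along the body, minimizing
    total cost by dynamic programming. Returns (total cost, assignment)."""
    n, m = len(cost), len(cost[0])
    INF = float("inf")
    # dp[i][j]: best cost of assigning the first i neurons among the first j candidates
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    for j in range(m + 1):
        dp[0][j] = 0.0
    for i in range(1, n + 1):
        for j in range(i, m + 1):
            dp[i][j] = min(dp[i][j - 1], dp[i - 1][j - 1] + cost[i - 1][j - 1])
    assign, i, j = [], n, m
    while i > 0:                       # backtrack the chosen candidates
        if j > i and dp[i][j] == dp[i][j - 1]:
            j -= 1                     # candidate j-1 was skipped
        else:
            assign.append(j - 1)       # neuron i-1 took candidate j-1
            i, j = i - 1, j - 1
    return dp[n][m], assign[::-1]

# Hypothetical 1D body positions: three neurons, four candidate detections.
neurons = [0.0, 1.0, 2.0]
candidates = [0.1, 0.6, 1.1, 2.2]
cost = [[abs(p - c) for c in candidates] for p in neurons]
total, assignment = monotone_assign(cost)  # ordering-preserving best match
```

A greedy nearest-neighbor match could cross assignments and scramble the neuron order; the DP cannot, which is the point of the shape constraint.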
A visual motion detection circuit suggested by Drosophila connectomics. Nature, 2013
S. Takemura, A. Bharioke, Z. Lu, A. Nern, S. Vitaladevuni, P. K. Rivlin, W. T. Katz, D. J. Olbris, S. M. Plaza, P. Winston, T. Zhao, J. Horne, R. D. Fetter, S. Takemura, K. Blazek, L. Chang, O. Ogundeyi, M. A. Saunders, V. Shapiro, C. Sigmund, G. M. Rubin, L. K. Scheffer, I. A. Meinertzhagen, and D. B. Chklovskii Nature, 500:175–181 (2013)
Animal behaviour arises from computations in neuronal circuits, but our understanding of these computations has been frustrated by the lack of detailed synaptic connection maps, or connectomes. For example, despite intensive investigations over half a century, the neuronal implementation of local motion detection in the insect visual system remains elusive. Here we develop a semi-automated pipeline using electron microscopy to reconstruct a connectome, containing 379 neurons and 8,637 chemical synaptic contacts, within the Drosophila optic medulla. By matching reconstructed neurons to examples from light microscopy, we assigned neurons to cell types and assembled a connectome of the repeating module of the medulla. Within this module, we identified cell types constituting a motion detection circuit, and showed that the connections onto individual motion-sensitive neurons in this circuit were consistent with their direction selectivity. Our results identify cellular targets for future functional investigations, and demonstrate that connectomes can provide key insights into neuronal computations.
We aim to improve segmentation through the use of machine learning tools during region agglomeration. We propose an active learning approach for performing hierarchical agglomerative segmentation from superpixels. Our method combines multiple features at all scales of the agglomerative process, works for data with an arbitrary number of dimensions, and scales to very large datasets. We advocate the use of variation of information to measure segmentation accuracy, particularly in 3D electron microscopy (EM) images of neural tissue, and using this metric demonstrate an improvement over competing algorithms in EM and natural images.
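The variation of information advocated here has a compact definition: VI(A, B) = H(A) + H(B) − 2I(A; B) = 2H(A, B) − H(A) − H(B), where H is entropy over segment labels. A minimal self-contained sketch (not the paper's implementation) for two flat label arrays:

```python
from collections import Counter
from math import log2

def variation_of_information(seg_a, seg_b):
    """VI(A, B) = 2 H(A, B) - H(A) - H(B), computed from two flat label
    arrays over the same voxels. Lower is better; 0 means the two
    segmentations agree up to a relabeling of segments."""
    n = len(seg_a)
    def entropy(counts):
        return -sum(c / n * log2(c / n) for c in counts.values())
    h_a = entropy(Counter(seg_a))            # marginal entropy of A
    h_b = entropy(Counter(seg_b))            # marginal entropy of B
    h_ab = entropy(Counter(zip(seg_a, seg_b)))  # joint entropy
    return 2 * h_ab - h_a - h_b
```

For example, `variation_of_information([0, 0, 1, 1], [1, 1, 0, 0])` is 0 (a pure relabeling), while two independent two-way splits of four voxels give 2 bits.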
Computing sparse redundant representations is an important problem in both applied mathematics and neuroscience. In many applications, this problem must be solved in an energy-efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating by low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, its operation is equivalent to that of a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We show that the numerical performance of HDA is on par with existing algorithms. In the asymptotic regime, the representation error of HDA decays with time, t, as 1/t. HDA is stable against time-varying noise; specifically, the representation error decays as 1/√t for Gaussian white noise.
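The integrate-and-fire picture can be sketched numerically. The toy below is illustrative only — the dimensions, step size, and threshold are invented here, it handles nonnegative coefficients only, and the threshold acts as a small ridge-like regularizer that slightly shrinks the estimate — but it shows the key mechanism: analog membrane variables integrate a gradient-like drive, quantized spikes are the only outputs, and the rate-coded estimate (spike count divided by elapsed time) becomes more precise as t grows.

```python
import math, random

random.seed(1)
n, m = 12, 6                        # signal dimension, number of dictionary atoms
# Random dictionary with unit-norm columns (atoms).
D = [[random.gauss(0, 1) for _ in range(m)] for _ in range(n)]
for j in range(m):
    norm = math.sqrt(sum(D[i][j] ** 2 for i in range(n)))
    for i in range(n):
        D[i][j] /= norm
a_true = [0.0] * m
a_true[1], a_true[4] = 1.0, 0.5     # sparse, nonnegative coefficients
x = [sum(D[i][j] * a_true[j] for j in range(m)) for i in range(n)]
norm_x = math.sqrt(sum(xi * xi for xi in x))

u = [0.0] * m                       # analog internal (membrane) variables
spikes = [0] * m                    # quantized external outputs (spike counts)
dt, theta, T = 0.1, 0.1, 4000       # step size, firing threshold, iterations
for t in range(1, T + 1):
    a_hat = [s / (t * dt) for s in spikes]        # rate-coded estimate
    r = [x[i] - sum(D[i][j] * a_hat[j] for j in range(m)) for i in range(n)]
    for j in range(m):
        u[j] += dt * sum(D[i][j] * r[i] for i in range(n))  # gradient-like drive
        while u[j] >= theta:        # integrate-and-fire, reset by subtraction
            spikes[j] += 1
            u[j] -= theta
a_hat = [s / (T * dt) for s in spikes]
residual = math.sqrt(sum((x[i] - sum(D[i][j] * a_hat[j] for j in range(m))) ** 2
                         for i in range(n)))
```

Because the output is an integer spike count divided by t, the quantization error of the estimate shrinks as 1/t, mirroring the asymptotic decay described above.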
Early stages of sensory systems face the challenge of compressing information from numerous receptors onto a much smaller number of projection neurons, a so-called communication bottleneck. To make more efficient use of limited bandwidth, compression may be achieved using predictive coding, whereby predictable, or redundant, components of the stimulus are removed. In the case of the retina, Srinivasan et al. (1982) suggested that feedforward inhibitory connections subtracting a linear prediction generated from nearby receptors implement such compression, resulting in biphasic center-surround receptive fields. However, feedback inhibitory circuits are common in early sensory circuits, and furthermore their dynamics may be nonlinear. Can such circuits implement predictive coding as well? Here, by solving the transient dynamics of nonlinear reciprocal feedback circuits through analogy to a signal-processing algorithm called linearized Bregman iteration, we show that nonlinear predictive coding can be implemented in an inhibitory feedback circuit. In response to a step stimulus, interneuron activity in time constructs progressively less sparse but more accurate representations of the stimulus, a temporally evolving prediction. This analysis provides a powerful theoretical framework to interpret and understand the dynamics of early sensory processing in a variety of physiological experiments and yields novel predictions regarding the relation between activity and stimulus statistics.
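The linearized Bregman iteration invoked here has a compact form. The sketch below is a toy sparse-recovery instance with made-up sizes and constants — not the circuit model itself — but it exhibits the signature behavior described above: over iterations, the representation grows progressively less sparse while the reconstruction error falls.

```python
import math, random

random.seed(0)
m_rows, n_cols = 8, 20
A = [[random.gauss(0, 1) / math.sqrt(m_rows) for _ in range(n_cols)]
     for _ in range(m_rows)]
x_true = [0.0] * n_cols
x_true[3], x_true[11] = 1.0, -0.7          # sparse ground truth
b = [sum(A[i][j] * x_true[j] for j in range(n_cols)) for i in range(m_rows)]
norm_b = math.sqrt(sum(bi * bi for bi in b))

lam, delta, step, iters = 0.2, 5.0, 0.02, 4000
shrink = lambda v: math.copysign(max(abs(v) - lam, 0.0), v)  # soft threshold
y = [0.0] * m_rows                         # dual variable ("interneuron activity")
history = []
for k in range(iters):
    v = [sum(A[i][j] * y[i] for i in range(m_rows)) for j in range(n_cols)]
    x = [delta * shrink(vj) for vj in v]   # current (sparse) representation
    r = [b[i] - sum(A[i][j] * x[j] for j in range(n_cols)) for i in range(m_rows)]
    for i in range(m_rows):
        y[i] += step * r[i]                # gradient step on the residual
    if k in (199, iters - 1):
        nnz = sum(1 for xj in x if abs(xj) > 1e-6)
        err = math.sqrt(sum(ri * ri for ri in r))
        history.append((nnz, err))         # (sparsity, error) snapshots
```

Early on, only the strongest components cross the shrinkage threshold, so the representation is very sparse and crude; as the dual variable accumulates the residual, more components activate and the error shrinks.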
Proprioceptive Coupling within Motor Neurons Drives C. elegans Forward Locomotion. Neuron, 2012
Q. Wen, M. D. Po, E. Hulme, S. Chen, X. Liu, S. Kwok, M. Gershow, A. M. Leifer, V. Butler, C. Fang-Yen, T. Kawano, W. R. Schafer, G. Whitesides, M. Wyart, D. B. Chklovskii, and A. D. T. Samuel Neuron, 76:750-761 (2012)
Our brains are capable of remarkably stable stimulus representations despite time-varying neural activity. For instance, during the delay periods of working memory tasks, while stimuli are held in memory, neurons in the prefrontal cortex, thought to support the memory representation, exhibit time-varying activity. Since neuronal activity encodes the stimulus, its time-varying dynamics appears paradoxical and incompatible with stable network stimulus representations. Indeed, this finding raises a fundamental question: can stable representations be encoded only with stable neural activity, or, as its corollary, is every change in activity a sign of a change in stimulus representation?
High resolution segmentation of neuronal tissues from low depth-resolution EM imagery. 8th International Conference on Energy Minimization Methods in Computer Vision and Pattern Recognition (EMMCVPR), 2011
D. Glasner, T. Hu, J. Nunez-Iglesias, L. Scheffer, S. Xu, H. Hess, R. Fetter, D. Chklovskii, and R. Basri 8th International Conference on Energy Minimization Methods in Computer Vision and Pattern Recognition (EMMCVPR), 2011
Large-scale automated histology in the pursuit of connectomes. The Journal of Neuroscience: the Official Journal of the Society for Neuroscience, 2011
D. Kleinfeld, A. Bharioke, P. Blinder, D. D. Bock, K. L. Briggman, D. B. Chklovskii, W. Denk, M. Helmstaedter, J. P. Kaufhold, W. Lee, H. S. Meyer, K. D. Micheva, M. Oberlaender, S. Prohaska, C. R. Reid, S. J. Smith, S. Takemura, P. S. Tsai, and B. Sakmann The Journal of Neuroscience, 31:16125-16138 (2011)
How does the brain compute? Answering this question necessitates neuronal connectomes, annotated graphs of all synaptic connections within defined brain areas. Further, understanding the energetics of the brain's computations requires vascular graphs. The assembly of a connectome requires sensitive hardware tools to measure neuronal and neurovascular features in all three dimensions, as well as software and machine learning for data analysis and visualization. We present the state of the art on the reconstruction of circuits and vasculature that link brain anatomy and function. Analysis at the scale of tens of nanometers yields connections between identified neurons, while analysis at the micrometer scale yields probabilistic rules of connection between neurons and exact vascular connectivity.
Despite recent interest in reconstructing neuronal networks, complete wiring diagrams on the level of individual synapses remain scarce and the insights into function they can provide remain unclear. Even for Caenorhabditis elegans, whose neuronal network is relatively small and stereotypical from animal to animal, published wiring diagrams are neither accurate nor complete and self-consistent. Using materials from White et al. and new electron micrographs we assemble whole, self-consistent gap junction and chemical synapse networks of hermaphrodite C. elegans. We propose a method to visualize the wiring diagram, which reflects network signal flow. We calculate statistical and topological properties of the network, such as degree distributions, synaptic multiplicities, and small-world properties, that help in understanding network signal propagation. We identify neurons that may play central roles in information processing, and network motifs that could serve as functional modules of the network. We explore propagation of neuronal activity in response to sensory or artificial stimulation using linear systems theory and find several activity patterns that could serve as substrates of previously described behaviors. Finally, we analyze the interaction between the gap junction and the chemical synapse networks. Since several statistical properties of the C. elegans network, such as multiplicity and motif distributions are similar to those found in mammalian neocortex, they likely point to general principles of neuronal networks. The wiring diagram reported here can help in understanding the mechanistic basis of behavior by generating predictions about future experiments involving genetic perturbations, laser ablations, or monitoring propagation of neuronal activity in response to stimulation.
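The small-world statistics mentioned above — clustering and characteristic path length — are easy to state concretely. A self-contained sketch on a toy ring lattice (not the C. elegans data) illustrating both metrics:

```python
from collections import deque

def avg_clustering(adj):
    """Average clustering coefficient of an undirected graph given as
    {node: set(neighbors)}: fraction of each node's neighbor pairs linked."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def avg_path_length(adj):
    """Mean shortest-path length over all ordered node pairs (connected graph),
    via breadth-first search from every node."""
    total = pairs = 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

# Toy example: a ring of 20 nodes, each linked to its 4 nearest neighbors.
n = 20
ring = {v: {(v + d) % n for d in (-2, -1, 1, 2)} for v in range(n)}
cc, apl = avg_clustering(ring), avg_path_length(ring)  # 0.5 and 55/19 ≈ 2.89
```

A small-world network keeps clustering near this lattice-like value while a few long-range connections sharply shorten path lengths, which is the regime reported for the C. elegans wiring diagram.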
Reconstructing neuronal circuits at the level of synapses is a central problem in neuroscience, and the focus of the nascent field of connectomics. Previously used to reconstruct the C. elegans wiring diagram, serial-section transmission electron microscopy (ssTEM) is a proven technique for the task. However, to reconstruct more complex circuits, ssTEM will require the automation of image processing. We review progress in the processing of electron microscopy images and, in particular, a semi-automated reconstruction pipeline deployed at Janelia Farm. Drosophila circuits underlying identified behaviors are being reconstructed in the pipeline with the goal of generating a complete Drosophila connectome.
Increasing depth resolution of electron microscopy of neural circuits using sparse tomographic reconstruction. Computer Vision and Pattern Recognition (CVPR), 2010
A. Veeraraghavan, A. V. Genkin, S. Vitaladevuni, L. Scheffer, S. Xu, H. Hess, R. Fetter, M. Cantoni, G. Knott, and D. B. Chklovskii Computer Vision and Pattern Recognition (CVPR), 2010
Complete reconstructions of vertebrate neuronal circuits on the synaptic level require new approaches. Here, serial section transmission electron microscopy was automated to densely reconstruct four volumes, totaling 670 μm³, from the rat hippocampus as proving grounds to determine when axo-dendritic proximities predict synapses. First, in contrast with Peters' rule, the density of axons within reach of dendritic spines did not predict synaptic density along dendrites because the fraction of axons making synapses was variable. Second, an axo-dendritic touch did not predict a synapse; nevertheless, the density of synapses along a hippocampal dendrite appeared to be a universal fraction, 0.2, of the density of touches. Finally, the largest touch between an axonal bouton and spine indicated the site of actual synapses with about 80% precision but would miss about half of all synapses. Thus, it will be difficult to predict synaptic connectivity using data sets missing ultrastructural details that distinguish between axo-dendritic touches and bona fide synapses.
Maximization of the connectivity repertoire as a statistical principle governing the shapes of dendritic arbors. Proceedings of the National Academy of Sciences of the United States of America, 2009
Q. Wen, A. Stepanyants, G. N. Elston, A. Y. Grosberg, and D. B. Chklovskii Proceedings of the National Academy of Sciences of the United States of America, 106:12536-41 (2009)
The shapes of dendritic arbors are fascinating and important, yet the principles underlying these complex and diverse structures remain unclear. Here, we analyzed basal dendritic arbors of 2,171 pyramidal neurons sampled from mammalian brains and discovered 3 statistical properties: the dendritic arbor size scales with the total dendritic length, the spatial correlation of dendritic branches within an arbor has a universal functional form, and small parts of an arbor are self-similar. We proposed that these properties result from maximizing the repertoire of possible connectivity patterns between dendrites and surrounding axons while keeping the cost of dendrites low. We solved this optimization problem by drawing an analogy with maximization of the entropy for a given energy in statistical physics. The solution is consistent with the above observations and predicts scaling relations that can be tested experimentally. In addition, our theory explains why dendritic branches of pyramidal cells are distributed more sparsely than those of Purkinje cells. Our results represent a step toward a unifying view of the relationship between neuronal morphology and function.
Time invariant description of synaptic connectivity in cortical circuits may be precluded by the ongoing growth and retraction of dendritic spines accompanied by the formation and elimination of synapses. On the other hand, the spatial arrangement of axonal and dendritic branches appears stable. This suggests that an invariant description of connectivity can be cast in terms of potential synapses, which are locations in the neuropil where an axon branch of one neuron is proximal to a dendritic branch of another neuron. In this paper, we attempt to reconstruct the potential connectivity in local cortical circuits of the cat primary visual cortex (V1). Based on multiple single-neuron reconstructions of axonal and dendritic arbors in 3 dimensions, we evaluate the expected number of potential synapses and the probability of potential connectivity among excitatory (pyramidal and spiny stellate) neurons and inhibitory basket cells. The results provide a quantitative description of structural organization of local cortical circuits. For excitatory neurons from different cortical layers, we compute local domains, which contain their potentially pre- and postsynaptic excitatory partners. These domains have columnar shapes with laminar specific radii and are roughly of the size of the ocular dominance column. Therefore, connections between most excitatory neurons in the ocular dominance column can be implemented by local synaptogenesis. Structural connectivity involving inhibitory basket cells is generally weaker than excitatory connectivity. Here, only nearby neurons are capable of establishing more than one potential synapse, implying that within the ocular dominance column these connections have more limited potential for circuit remodeling.
Over hundreds of millions of years, evolution has optimized brain design to maximize its functionality while minimizing costs associated with building and maintenance. This observation suggests that one can use optimization theory to rationalize various features of brain design. Here, we attempt to explain the dimensions and branching structure of dendritic arbors by minimizing dendritic cost for given potential synaptic connectivity. Assuming only that dendritic cost increases with total dendritic length and path length from synapses to soma, we find that branching, planar, and compact dendritic arbors, such as those belonging to Purkinje cells in the cerebellum, are optimal. The theory predicts that adjacent Purkinje dendritic arbors should spatially segregate. In addition, we propose two explicit cost function expressions, falsifiable by measuring dendritic caliber near bifurcations.
Wiring optimization can relate neuronal structure and function. Proceedings of the National Academy of Sciences of the United States of America, 2006
B. L. Chen, D. H. Hall, and D. B. Chklovskii Proceedings of the National Academy of Sciences of the United States of America, 103:4723-8 (2006)
We pursue the hypothesis that neuronal placement in animals minimizes wiring costs for given functional constraints, as specified by synaptic connectivity. Using a newly compiled version of the Caenorhabditis elegans wiring diagram, we solve for the optimal layout of 279 nonpharyngeal neurons. In the optimal layout, most neurons are located close to their actual positions, suggesting that wiring minimization is an important factor. Yet some neurons exhibit strong deviations from "optimal" position. We propose that biological factors relating to axonal guidance and command neuron functions contribute to these deviations. We capture these factors by proposing a modified wiring cost function.
Experimental investigations have revealed that synapses possess interesting and, in some cases, unexpected properties. We propose a theoretical framework that accounts for three of these properties: typical central synapses are noisy, the distribution of synaptic weights among central synapses is wide, and synaptic connectivity between neurons is sparse. We also comment on the possibility that synaptic weights may vary in discrete steps. Our approach is based on maximizing information storage capacity of neural tissue under resource constraints. Based on previous experimental and theoretical work, we use volume as a limited resource and utilize the empirical relationship between volume and synaptic weight. Solutions of our constrained optimization problems are not only consistent with existing experimental measurements but also make nontrivial predictions.
Prior Publications (20)
Can neuronal morphology predict functional synaptic circuits? In the rat barrel cortex, 'barrels' and 'septa' delineate an orderly matrix of cortical columns. Using quantitative laser scanning photostimulation we measured the strength of excitatory projections from layer 4 (L4) and L5A to L2/3 pyramidal cells in barrel- and septum-related columns. From morphological reconstructions of excitatory neurons we computed the geometric circuit predicted by axodendritic overlap. Within most individual projections, functional inputs were predicted by geometry and a single scale factor, the synaptic strength per potential synapse. This factor, however, varied between projections and, in one case, even within a projection, up to 20-fold. Relationships between geometric overlap and synaptic strength thus depend on the laminar and columnar locations of both the pre- and postsynaptic neurons, even for neurons of the same type. A large plasticity potential appears to be incorporated into these circuits, allowing for functional 'tuning' with fixed axonal and dendritic arbor geometry.
A ubiquitous feature of the vertebrate anatomy is the segregation of the brain into white and gray matter. Assuming that evolution maximized brain functionality, what is the reason for such segregation? To answer this question, we posit that brain functionality requires high interconnectivity and short conduction delays. Based on this assumption we searched for the optimal brain architecture by comparing different candidate designs. We found that the optimal design depends on the number of neurons, interneuronal connectivity, and axon diameter. In particular, the requirement to connect neurons with many fast axons drives the segregation of the brain into white and gray matter. These results provide a possible explanation for the structure of various regions of the vertebrate brain, such as the mammalian neocortex and neostriatum, the avian telencephalon, and the spinal cord.
How different is local cortical circuitry from a random network? To answer this question, we probed synaptic connections with several hundred simultaneous quadruple whole-cell recordings from layer 5 pyramidal neurons in the rat visual cortex. Analysis of this dataset revealed several nonrandom features in synaptic connectivity. We confirmed previous reports that bidirectional connections are more common than expected in a random network. We found that several highly clustered three-neuron connectivity patterns are overrepresented, suggesting that connections tend to cluster together. We also analyzed synaptic connection strength as defined by the peak excitatory postsynaptic potential amplitude. We found that the distribution of synaptic connection strength differs significantly from the Poisson distribution and can be fitted by a lognormal distribution. Such a distribution has a heavier tail and implies that synaptic weight is concentrated among few synaptic connections. In addition, the strengths of synaptic connections sharing pre- or postsynaptic neurons are correlated, implying that strong connections are even more clustered than the weak ones. Therefore, the local cortical network structure can be viewed as a skeleton of stronger connections in a sea of weaker ones. Such a skeleton is likely to play an important role in network dynamics and should be investigated further.
The advent of high-quality 3D reconstructions of neuronal arbors has revived the hope of inferring synaptic connectivity from the geometric shapes of axons and dendrites, or 'neurogeometry'. A quantitative description of connectivity must be built on a sound theoretical framework. Here, we review recent developments in neurogeometry that can provide such a framework. We base the geometric description of connectivity on the concept of a 'potential synapse'--the close apposition between axons and dendrites necessary to form an actual synapse. In addition to describing potential synaptic connectivity in neuronal circuits, neurogeometry provides insight into basic features of functional connectivity, such as specificity and plasticity.
In mammalian visual cortex, neurons are organized according to their functional properties into multiple maps such as retinotopic, ocular dominance, orientation preference, direction of motion, and others. What determines the organization of cortical maps? We argue that cortical maps reflect neuronal connectivity in intracortical circuits. Because connecting distant neurons requires costly wiring (i.e., axons and dendrites), there is an evolutionary pressure to place connected neurons as close to each other as possible. Then, cortical maps may be viewed as solutions that minimize wiring cost for given intracortical connectivity. These solutions can help us in inferring intracortical connectivity and, ultimately, in understanding the function of the visual system.
Evolution perfected brain design by maximizing its functionality while minimizing the costs associated with building and maintaining it. The assumption that brain functionality is specified by neuronal connectivity, implemented by costly biological wiring, leads to the following optimal design problem: for a given neuronal connectivity, find a spatial layout of neurons that minimizes the wiring cost. Unfortunately, this problem is difficult to solve because the number of possible layouts is often astronomically large. We argue that the wiring cost may scale as wire length squared, reducing the optimal layout problem to a constrained minimization of a quadratic form. For biologically plausible constraints, this problem has exact analytical solutions, which give reasonable approximations to actual layouts in the brain. These solutions make the inverse problem of inferring neuronal connectivity from neuronal layout more tractable.
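The quadratic-cost reduction can be made concrete: minimizing Σᵢⱼ Wᵢⱼ(xᵢ − xⱼ)² with some neuron positions held fixed reduces to a Laplacian linear system for the free positions. A minimal 1D sketch with toy connectivity (invented here, not the paper's data):

```python
def solve(M, rhs):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(M)
    aug = [row[:] + [r] for row, r in zip(M, rhs)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(aug[r][c]))
        aug[c], aug[p] = aug[p], aug[c]
        for r in range(c + 1, n):
            f = aug[r][c] / aug[c][c]
            for k in range(c, n + 1):
                aug[r][k] -= f * aug[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (aug[r][n] - sum(aug[r][k] * x[k] for k in range(r + 1, n))) / aug[r][r]
    return x

def optimal_layout(W, pinned):
    """Minimize sum_ij W[i][j] * (x_i - x_j)^2 with positions in `pinned`
    held fixed. Zeroing the gradient gives L_ff x_f = -L_fc x_c, L = D - W."""
    n = len(W)
    free = [i for i in range(n) if i not in pinned]
    L = [[sum(W[i]) if i == j else -W[i][j] for j in range(n)] for i in range(n)]
    A = [[L[i][j] for j in free] for i in free]
    b = [-sum(L[i][j] * v for j, v in pinned.items()) for i in free]
    xf = solve(A, b)
    x = [0.0] * n
    for i, v in pinned.items():
        x[i] = v
    for i, v in zip(free, xf):
        x[i] = v
    return x

# Toy example: a 5-neuron chain with its two ends pinned at 0 and 1.
W = [[0, 1, 0, 0, 0],
     [1, 0, 1, 0, 0],
     [0, 1, 0, 1, 0],
     [0, 0, 1, 0, 1],
     [0, 0, 0, 1, 0]]
layout = optimal_layout(W, {0: 0.0, 4: 1.0})  # [0.0, 0.25, 0.5, 0.75, 1.0]
```

Pinned positions play the role of biologically constrained neurons (e.g., sensory endings and muscles); the interior neurons then settle at the quadratic-cost optimum in one linear solve.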
Brain function relies on specificity of synaptic connectivity patterns among different classes of neurons. Yet, the substrates of specificity in complex neuropil remain largely unknown. We search for imprints of specificity in the layout of axonal and dendritic arbors from the rat neocortex. An analysis of 3D reconstructions of pairs consisting of pyramidal cells (PCs) and GABAergic interneurons (GIs) revealed that the layout of GI axons is specific. This specificity is manifested in a relatively high tortuosity, small branch length of these axons, and correlations of their trajectories with the positions of postsynaptic neuron dendrites. Axons of PCs show no such specificity, usually taking a relatively straight course through neuropil. However, wiring patterns among PCs hold a large potential for circuit remodeling and specificity through growth and retraction of dendritic spines. Our results define distinct class-specific rules in establishing synaptic connectivity, which could be crucial in formulating a canonical cortical circuit.
Neurons often possess elaborate axonal and dendritic arbors. Why do these arbors exist and what determines their form and dimensions? To answer these questions, I consider the wiring up of a large highly interconnected neuronal network, such as the cortical column. Implementation of such a network in the allotted volume requires all the salient features of neuronal morphology: the existence of branching dendrites and axons and the presence of dendritic spines. Therefore, the requirement of high interconnectivity is, in itself, sufficient to account for the existence of these features. Moreover, the actual lengths of axons and dendrites are close to the smallest possible length for a given interconnectivity, arguing that high interconnectivity is essential for cortical function.
Does the C. elegans nervous system contain multi-neuron computational modules that perform stereotypical functions? We attempt to answer this question by searching for recurring multi-neuron interconnectivity patterns in the C. elegans nervous system's wiring diagram.
Axon calibers vary widely among different animals, neuron classes, and even within the same neuron. What determines the diameter of axon branches?
Wiring a brain presents a formidable problem because neural circuits require an enormous number of fast and durable connections. We propose that evolution was likely to have optimized neural circuits to minimize conduction delays in axons, passive cable attenuation in dendrites, and the length of "wire" used to construct circuits, and to have maximized the density of synapses. Here we ask the question: "What fraction of the volume should be taken up by axons and dendrites (i.e., wire) when these variables are at their optimal values?" The biophysical properties of axons and dendrites dictate that wire should occupy 3/5 of the volume in an optimally wired gray matter. We have measured the fraction of the volume occupied by each cellular component and find that the volume of wire is close to the predicted optimal value.
Changes in synaptic connectivity patterns through the formation and elimination of dendritic spines may contribute to structural plasticity in the brain. We characterize this contribution quantitatively by estimating the number of different synaptic connectivity patterns attainable without major arbor remodeling. This number depends on the ratio of the synapses on a dendrite to the axons that pass within a spine length of that dendrite. We call this ratio the filling fraction and calculate it from geometrical analysis and anatomical data. The filling fraction is 0.26 in mouse neocortex, 0.22-0.34 in rat hippocampus. In the macaque visual cortex, the filling fraction increases by a factor of 1.6-1.8 from area V1 to areas V2, V4, and 7a. Since the filling fraction is much smaller than 1, spine remodeling can make a large contribution to structural plasticity.
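The combinatorial consequence of a small filling fraction can be illustrated with a back-of-the-envelope count (illustrative only, not the paper's exact calculation, and the 100 candidate axons are a hypothetical number): if a dendrite forms s synapses out of A potential axonal partners, spine remodeling can reach C(A, s) connectivity patterns, i.e., roughly log₂C(A, s)/s bits of structural information per synapse.

```python
from math import comb, log2

def bits_per_synapse(axons, filling_fraction):
    """log2 of the number of attainable connectivity patterns C(axons, synapses),
    normalized per synapse, with synapses = filling_fraction * axons."""
    synapses = round(filling_fraction * axons)
    return log2(comb(axons, synapses)) / synapses

# Hypothetical dendrite with 100 candidate axons at the mouse neocortex
# filling fraction of 0.26: roughly 3 bits of structural information per synapse.
neocortex = bits_per_synapse(100, 0.26)
# At a filling fraction near 1, almost every candidate is used and the
# repertoire per synapse collapses.
saturated = bits_per_synapse(100, 0.9)
```

This is why a filling fraction well below 1 matters: the sparser the choice, the more information each realized synapse carries about which circuit was selected.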