13 Publications
Although undulatory swimming is observed in many organisms, the neuromuscular basis for undulatory movement patterns is not well understood. To better understand the basis for the generation of these movement patterns, we studied muscle activity in the nematode Caenorhabditis elegans. Caenorhabditis elegans exhibits a range of locomotion patterns: in low viscosity fluids the undulation has a wavelength longer than the body and propagates rapidly, while in high viscosity fluids or on agar media the undulatory waves are shorter and slower. Theoretical treatment of observed behaviour has suggested a large change in force-posture relationships at different viscosities, but analysis of bend propagation suggests that short-range proprioceptive feedback is used to control and generate body bends. How muscles could be activated in a way consistent with both these results is unclear. We therefore combined automated worm tracking with calcium imaging to determine muscle activation strategy in a variety of external substrates. Remarkably, we observed that across locomotion patterns spanning a threefold change in wavelength, peak muscle activation occurs approximately 45° (1/8th of a cycle) ahead of peak midline curvature. Although the location of peak force is predicted to vary widely, the activation pattern is consistent with required force in a model incorporating putative length- and velocity-dependence of muscle strength. Furthermore, a linear combination of local curvature and velocity can match the pattern of activation. This suggests that proprioception can enable the worm to swim effectively while working within the limitations of muscle biomechanics and neural control.
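The phase lead described in this abstract follows directly from a linear combination of curvature and its time derivative: for a sinusoidal curvature wave, equal weighting of the two terms yields activation leading curvature by exactly 45°. The sketch below illustrates this trigonometric identity numerically; the weights and frequency are illustrative assumptions, not values fitted to the paper's data.

```python
import numpy as np

# Hypothetical illustration: a linear combination of midline curvature and
# curvature velocity produces activation that leads curvature in phase.
omega = 1.0                           # angular frequency of the undulation
t = np.linspace(0, 2 * np.pi, 10000)
kappa = np.sin(omega * t)             # midline curvature
dkappa = omega * np.cos(omega * t)    # curvature velocity (d kappa / dt)

c1, c2 = 1.0, 1.0                     # illustrative weights, not fitted
activation = c1 * kappa + c2 * dkappa

# Phase lead of activation relative to curvature, from the identity
# c1*sin(wt) + c2*w*cos(wt) = A*sin(wt + phi), phi = arctan(c2*w / c1):
lead = np.arctan2(c2 * omega, c1)
print(np.degrees(lead))               # 45.0 for equal weights at omega = 1
```

With equal weights the 45° (1/8th-cycle) lead reported in the abstract emerges with no tuning; unequal weights or a different frequency shift the phase continuously between 0° and 90°.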
A neuron is a basic physiological and computational unit of the brain. While much is known about the physiological properties of a neuron, its computational role is poorly understood. Here we propose to view a neuron as a signal processing device that represents the incoming streaming data matrix as a sparse vector of synaptic weights scaled by an outgoing sparse activity vector. Formally, a neuron minimizes a cost function comprising a cumulative squared representation error and regularization terms. We derive an online algorithm that minimizes this cost function by alternating between minimization with respect to activity and with respect to synaptic weights. The steps of this algorithm reproduce well-known physiological properties of a neuron, such as weighted summation and leaky integration of synaptic inputs, as well as an Oja-like, but parameter-free, synaptic learning rule. Our theoretical framework makes several predictions, some of which can be verified with existing data, while others require further experiments. Such a framework should allow modeling of the function of neuronal circuits without necessarily measuring all the microscopic biophysical parameters, as well as facilitate the design of neuromorphic electronics.
We describe an approach for automation of the process of reconstruction of neural tissue from serial section transmission electron micrographs. Such reconstructions require 3D segmentation of individual neuronal processes (axons and dendrites) performed in densely packed neuropil. We first detect neuronal cell profiles in each image in a stack of serial micrographs with a multi-scale ridge detector. Short breaks in detected boundaries are interpolated using anisotropic contour completion formulated in a fuzzy-logic framework. Detected profiles from adjacent sections are linked together based on cues such as shape similarity and image texture. The 3D segmentation thus obtained is validated by human operators in a computer-guided proofreading process. Our approach makes possible reconstructions of neural tissue at a final rate of about 5 μm³ per man-hour, as determined primarily by the speed of proofreading. To date we have applied this approach to reconstruct a few blocks of neural tissue from different regions of rat brain totaling over 1,000 μm³, and used these to evaluate reconstruction speed, quality, error rates, and presence of ambiguous locations in neuropil ssTEM imaging data.
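The linking step, matching detected 2D profiles in adjacent sections by shape similarity, can be sketched with a toy overlap criterion. The sketch below is a hypothetical simplification (greedy matching on pixel-set intersection-over-union only), not the paper's pipeline, which also uses image-texture cues.

```python
# Hypothetical sketch: link detected 2D profiles across adjacent serial
# sections by shape overlap. Profiles are sets of (x, y) pixel coordinates.
def iou(profile_a, profile_b):
    """Intersection-over-union of two pixel sets."""
    inter = len(profile_a & profile_b)
    union = len(profile_a | profile_b)
    return inter / union if union else 0.0

def link_sections(section_k, section_k1, threshold=0.5):
    """Greedily link each profile in section k to its best match in k+1."""
    links = []
    for i, a in enumerate(section_k):
        scores = [(iou(a, b), j) for j, b in enumerate(section_k1)]
        best, j = max(scores)
        if best >= threshold:             # unmatched profiles terminate here
            links.append((i, j))
    return links

# Two toy sections: profile 0 persists across sections, profile 1 terminates.
sec0 = [{(0, 0), (0, 1), (1, 0), (1, 1)}, {(5, 5), (5, 6)}]
sec1 = [{(0, 0), (0, 1), (1, 0)}, {(9, 9)}]
print(link_sections(sec0, sec1))          # [(0, 0)]
```

Chains of such links across the whole stack yield the candidate 3D segments that the human proofreading stage then validates or corrects.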
Sample size is a critical component in the design of any high-throughput genetic screening approach. Sample size determination from assumptions or limited data at the planning stages, though standard practice, may at times be unreliable because of the difficulty of a priori modeling of effect sizes and variance. Methods to update the sample size estimate during the course of the study could improve statistical power. In this article, we introduce an approach to estimate the power and update it continuously during the screen. We use this estimate to decide where to sample next to achieve maximum overall statistical power. Finally, in simulations, we demonstrate significant gains in study recall over the naive strategy of equal sample sizes while maintaining the same total number of samples.
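The allocation idea, updating power estimates during the screen and sampling where the gain is largest, can be sketched with a textbook power approximation. Everything below is an illustrative assumption: the one-sample z-test power formula stands in for whatever test a real screen uses, and the assay names and running estimates are made up.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def power(effect, sigma, n, alpha=0.05):
    """Approximate power of a two-sided one-sample z-test given the current
    running estimates of effect size and noise (a textbook sketch)."""
    z_crit = 1.959963984540054            # Phi^{-1}(1 - alpha/2)
    return norm_cdf(math.sqrt(n) * abs(effect) / sigma - z_crit)

def next_sample(assays):
    """Pick the assay whose estimated power gains most from one more sample.
    `assays` maps name -> (estimated effect, estimated sigma, current n)."""
    def gain(item):
        _, (eff, sig, n) = item
        return power(eff, sig, n + 1) - power(eff, sig, n)
    return max(assays.items(), key=gain)[0]

# Hypothetical running estimates partway through a screen:
assays = {"geneA": (0.5, 1.0, 8), "geneB": (0.2, 1.0, 8), "geneC": (1.5, 1.0, 8)}
print(next_sample(assays))                # geneA: largest marginal power gain
```

The example shows why equal allocation is wasteful: the near-saturated assay (geneC) and the hopeless one (geneB) both gain little from another sample, so the next observation goes to the intermediate case.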
In the primate primary visual area (V1), the ocular dominance pattern consists of alternating monocular stripes. Stripe orientation follows systematic trends preserved across several species. I propose that these trends result from minimizing the length of intra-cortical wiring needed to recombine information from the two eyes in order to achieve the perception of depth. I argue that the stripe orientation at any point of V1 should follow the direction of binocular disparity in the corresponding point of the visual field. The optimal pattern of stripes determined from this argument agrees with the ocular dominance pattern of macaque and Cebus monkeys. This theory predicts that for any point in the visual field the limits of depth perception are greatest in the direction along the ocular dominance stripes at that point.
The excitability of individual dendritic branches is a plastic property of neurons. We found that experience in an enriched environment increased propagation of dendritic Na⁺ spikes in a subset of individual dendritic branches in rat hippocampal CA1 pyramidal neurons and that this effect was mainly mediated by localized downregulation of A-type K⁺ channel function. Thus, dendritic plasticity might be used to store recent experience in individual branches of the dendritic arbor.
The shapes of dendritic arbors are fascinating and important, yet the principles underlying these complex and diverse structures remain unclear. Here, we analyzed basal dendritic arbors of 2,171 pyramidal neurons sampled from mammalian brains and discovered 3 statistical properties: the dendritic arbor size scales with the total dendritic length, the spatial correlation of dendritic branches within an arbor has a universal functional form, and small parts of an arbor are self-similar. We proposed that these properties result from maximizing the repertoire of possible connectivity patterns between dendrites and surrounding axons while keeping the cost of dendrites low. We solved this optimization problem by drawing an analogy with maximization of the entropy for a given energy in statistical physics. The solution is consistent with the above observations and predicts scaling relations that can be tested experimentally. In addition, our theory explains why dendritic branches of pyramidal cells are distributed more sparsely than those of Purkinje cells. Our results represent a step toward a unifying view of the relationship between neuronal morphology and function.
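The analogy the abstract invokes is the standard constrained-entropy problem of statistical physics: maximizing the number of accessible configurations (here, dendrite-axon connectivity patterns) at fixed cost (here, dendritic wiring) has the same form as maximizing entropy at fixed energy. As a hedged sketch of that generic derivation, with $p_i$ the probability of configuration $i$ and $E_i$ its cost:

\[
\max_{\{p_i\}} \; S = -\sum_i p_i \ln p_i
\quad \text{subject to} \quad
\sum_i p_i E_i = E, \qquad \sum_i p_i = 1 .
\]

Introducing Lagrange multipliers $\beta$ and $\lambda$ for the two constraints and setting $\partial/\partial p_i = 0$ gives

\[
-\ln p_i - 1 - \beta E_i - \lambda = 0
\quad \Longrightarrow \quad
p_i = \frac{e^{-\beta E_i}}{Z}, \qquad Z = \sum_i e^{-\beta E_i},
\]

the Boltzmann distribution, with $\beta$ fixed by the cost constraint. The paper's specific scaling predictions follow from applying this framework to the geometry of dendritic arbors, which the sketch above does not attempt to reproduce.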
I consider a topographic projection between two neuronal layers with different densities of neurons. Given the number of output neurons connected to each input neuron (divergence) and the number of input neurons synapsing on each output neuron (convergence), I determine the widths of axonal and dendritic arbors which minimize the total volume of axons and dendrites. Analytical results for one-dimensional and two-dimensional projections can be summarized qualitatively in the following rule: neurons of the sparser layer should have arbors wider than those of the denser layer. This agrees with the anatomic data for retinal, cerebellar, olfactory bulb, and neocortical neurons whose morphology and connectivity are known. The rule may be used to infer connectivity of neurons from their morphology.
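The intuition behind the rule can be seen in a deliberately crude 1-D toy model (my simplification, not the paper's volume calculation): each input-output pair must be bridged by an axonal arbor plus a dendritic arbor whose widths sum to a fixed span, and total wire scales as each layer's density times its arbor width. Minimizing that cost pushes the span onto the sparser layer.

```python
# Hypothetical 1-D toy model of the wiring rule: the span D between a
# connected input-output pair is split between the axonal arbor (carried by
# the denser input layer) and the dendritic arbor (carried by the sparser
# output layer); total wire per unit length is density-weighted width.
def total_wire(axon_width, span, n_in, n_out):
    dendrite_width = span - axon_width    # remainder carried by dendrites
    return n_in * axon_width + n_out * dendrite_width

span = 1.0
n_in, n_out = 10.0, 2.0                   # inputs 5x denser than outputs
costs = {w: total_wire(w, span, n_in, n_out)
         for w in (0.0, 0.25, 0.5, 0.75, 1.0)}
best = min(costs, key=costs.get)
print(best)   # 0.0: narrow axonal arbors, wide dendritic arbors win
```

Because this toy cost is linear, the optimum sits at the boundary: the sparser output layer takes the entire span with wide dendritic arbors, matching the qualitative rule. The paper's actual analysis minimizes total wire volume under divergence and convergence constraints and yields quantitative optimal widths, not just the boundary case.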