The activity of individual neurons and networks fluctuates on a scale of tens of milliseconds or less, yet the outside world it presumably represents changes far more slowly, on a typical scale of seconds. How can this discrepancy be reconciled? Are these dynamics merely the result of biological messiness, or are they the hard-to-interpret substrate of circuit computations?
My research goal is to understand how the dynamics of neuronal circuits relate to, and constrain, the representation of information and the computations performed upon it. Traditionally, neural dynamics have been studied by dynamical-systems physicists, but mostly as a phenomenon in its own right, separate from representation. Similarly, representation and coding have been studied in the context of statistics, information theory, and machine learning, but mostly in the static regime. Yet the nature of the brain is the union of both aspects: its structure and biophysics give rise to strong dynamical activity, and this activity in turn must ultimately represent information. My lab therefore aims to bring together these two disparate approaches and further our understanding of the unique form of computation that occurs in neural circuits.
How does one determine the role of dynamics in neural computation? In the lab, we adopt three synergistic strategies. First, we directly analyze the dynamics of neural circuits to better understand their relation to behavior in well-understood tasks such as sensory discrimination or working memory. Second, we theoretically explore the types of dynamics that could be associated with particular network computations. Third, we analyze the structural properties of neural circuits in an effort to understand how these properties constrain circuit dynamics and the types of computations circuits can perform.
For instance, in previous work we have shown the consequences, for the relation between dynamics and representation, of two ubiquitous properties of cortical structures: the large ratio of cortical to thalamic neurons, and the numerous lateral connections between cortical neurons. The former means that different neurons must encode partially overlapping properties, technically referred to as an "overcomplete" representation. The latter implies that neurons have the opportunity to shape each other's activity. We have shown that, taken together, these properties raise the possibility of network architectures in which the activity of each neuron is constantly changing, yet the representation carried by the network as a whole remains constant. Such networks may explain the highly diverse activity of neurons in working memory tasks, in which the representation must remain stable.
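To make the argument concrete (the notation below is ours, for illustration, and not taken from the original papers): write the activity of the N cortical neurons as x(t), and let D be the M-by-N readout mapping activity to the represented percept, with M < N because the representation is overcomplete. Then

```latex
\[
  s(t) = D\,x(t), \qquad
  \dot{s}(t) = D\,\dot{x}(t) = 0
  \quad \text{whenever} \quad \dot{x}(t) \in \ker D .
\]
```

Because the null space of D is (N - M)-dimensional, activity can change continuously along directions invisible to the readout while the percept s(t) stays fixed.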
Now is an exciting time to be in theoretical neuroscience. The convergence of multiple experimental techniques is making it increasingly realistic to map the detailed dynamics and connectivity of neural circuits. These new data inspire novel theories and allow us to confront both new and long-standing theoretical ideas about the behavior of neural circuits in unprecedented detail.
Our brains maintain remarkably stable stimulus representations despite time-varying neural activity. For instance, during the delay periods of working memory tasks, while a stimulus is held in memory, neurons in the prefrontal cortex, thought to support the memory representation, exhibit strongly time-varying activity. Since neuronal activity encodes the stimulus, these dynamics appear paradoxical, seemingly incompatible with a stable network representation. Indeed, this finding raises a fundamental question: can stable representations be encoded only by stable neural activity, or, as a corollary, is every change in activity a sign of a change in the stimulus representation?
Early stages of sensory systems face the challenge of compressing information from numerous receptors onto a much smaller number of projection neurons, a so-called communication bottleneck. To make more efficient use of limited bandwidth, compression may be achieved through predictive coding, whereby predictable, or redundant, components of the stimulus are removed. In the case of the retina, Srinivasan et al. (1982) suggested that feedforward inhibitory connections, subtracting a linear prediction generated from nearby receptors, implement such compression, resulting in biphasic center-surround receptive fields. However, feedback inhibitory circuits are common in early sensory circuits, and their dynamics may be nonlinear. Can such circuits implement predictive coding as well? Here, by solving the transient dynamics of nonlinear reciprocal feedback circuits through an analogy to a signal-processing algorithm called linearized Bregman iteration, we show that nonlinear predictive coding can be implemented in an inhibitory feedback circuit. In response to a step stimulus, interneuron activity constructs, over time, progressively less sparse but more accurate representations of the stimulus: a temporally evolving prediction. This analysis provides a powerful theoretical framework for interpreting the dynamics of early sensory processing across a variety of physiological experiments and yields novel predictions regarding the relation between activity and stimulus statistics.
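A minimal numerical sketch of linearized Bregman iteration illustrates the behavior described above; the dimensions, dictionary, and step sizes here are our own illustrative choices, not the paper's circuit model. Across iterations, the representation starts maximally sparse and becomes progressively denser and more accurate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: recover a sparse representation u of a step
# stimulus from measurements f = A @ u_true through an underdetermined
# dictionary A.
m, n, k = 30, 100, 5                        # channels, units, true sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
u_true = np.zeros(n)
u_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
f = A @ u_true                              # stimulus drive (step onset)

def shrink(v, lam):
    """Soft thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# Linearized Bregman iteration: v integrates the prediction error (an
# interneuron-like internal variable); u is its thresholded readout.
lam = 0.5
delta = 0.9 / np.linalg.norm(A, 2) ** 2     # step size chosen for stability
v = np.zeros(n)
u = np.zeros(n)
for t in range(2000):
    v += A.T @ (f - A @ u)                  # accumulate residual drive
    u = delta * shrink(v, lam)              # sparse nonlinear output
    if t % 400 == 0:
        print(f"iter {t:4d}: active units = {np.count_nonzero(u):3d}, "
              f"residual = {np.linalg.norm(f - A @ u):.3f}")
```

The printed trajectory shows the number of active units growing while the residual shrinks, which is the "progressively less sparse but more accurate" prediction the abstract describes.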
A striking aspect of cortical neural networks is the divergence of a relatively small number of input channels from the peripheral sensory apparatus onto a large number of cortical neurons, an overcomplete representation strategy. Cortical neurons are then connected by a sparse network of lateral synapses. Here we propose that such an architecture may increase the persistence of the representation of an incoming stimulus, or percept. We demonstrate that, for a family of networks in which the receptive field of each neuron is re-expressed by its outgoing connections, a represented percept can remain constant despite changing activity. We term this choice of connectivity REceptive FIeld REcombination (REFIRE) networks. The sparse REFIRE network may serve as a high-dimensional integrator and a biologically plausible model of the local cortical circuit.
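A minimal simulation of the principle, under our own toy assumptions (the dimensions and the pseudoinverse-based connectivity below are illustrative, not the published REFIRE construction): recurrent weights are chosen so that the readout of the network stays fixed while single-neuron activity keeps changing.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy overcomplete code: N neurons carry an M-dimensional percept
# (M < N) through the readout D; each column of D is one neuron's
# receptive field.
M, N = 5, 40
D = rng.standard_normal((M, N))

# Recurrent weights chosen so each receptive field is re-expressed by
# the outgoing connections, i.e. D @ W = D. One simple choice is the
# projector W = D^+ D (Moore-Penrose pseudoinverse).
W = np.linalg.pinv(D) @ D

# Leaky rate dynamics dx/dt = -x + W x. Since D (W - I) = 0, the
# percept s = D x is conserved even though x itself keeps moving.
x0 = rng.standard_normal(N)
x, dt = x0.copy(), 0.01
s0 = D @ x0
for _ in range(5000):
    x += dt * (W @ x - x)

print("percept drift  :", np.linalg.norm(D @ x - s0))  # ~0 (constant)
print("activity change:", np.linalg.norm(x - x0))      # order 1 (changing)
```

Here the activity evolves within the null space of the readout, so the represented percept persists exactly while individual firing rates relax along unobserved directions.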
Prior Publications (3)
Although the diversity of cortical interneuron electrical properties is well recognized, the number of distinct electrical types (e-types) is still a matter of debate. Recently, descriptions of interneuron variability were standardized by multiple laboratories on the basis of a subjective classification scheme set out by the Petilla convention (Petilla Interneuron Nomenclature Group, PING). Here, we present a quantitative, statistical analysis of a database of nearly five hundred neurons manually annotated according to the PING nomenclature. For each cell, 38 features were extracted from responses to suprathreshold current stimuli and statistically analyzed to examine whether cortical interneurons subdivide into e-types. We show that the partitioning into different e-types is indeed the major component of data variability. The analysis suggests refining the PING e-type classification into a hierarchy, whereby most variability is first captured by a coarse partition, which is then divided into finer subpartitions. The coarse partition matches the well-known division of interneurons into fast-spiking and adapting cells; the finer subpartitions match the burst, continuous, and delayed subtypes. Additionally, our analysis enabled ranking features by their ability to differentiate among e-types. Our quantitative e-type assignment is more than 90% accurate and even catches several human classification errors.
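As a schematic of this kind of analysis (the feature matrix, linkage choice, and cluster counts below are placeholders, not the study's actual pipeline), a hierarchical partition over standardized electrophysiological features could be sketched as:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

rng = np.random.default_rng(2)

# Placeholder data standing in for the real database: ~500 interneurons,
# 38 features extracted from suprathreshold current responses.
n_cells, n_features = 466, 38
X = rng.standard_normal((n_cells, n_features))

Xz = zscore(X, axis=0)                     # put features on a common scale

# Coarse partition first (e.g., fast-spiking vs. adapting), then finer
# subpartitions (e.g., burst / continuous / delayed) within it.
Z = linkage(Xz, method="ward")
coarse = fcluster(Z, t=2, criterion="maxclust")
fine = fcluster(Z, t=5, criterion="maxclust")
print("coarse sizes:", np.bincount(coarse)[1:])
print("fine sizes  :", np.bincount(fine)[1:])
```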
Neuron models, in particular conductance-based compartmental models, often have numerous parameters that cannot be directly determined experimentally and must instead be constrained by an optimization procedure. A common practice in evaluating such procedures is to use a previously developed model to generate surrogate data (e.g., traces of spikes following step current pulses) and then challenge the algorithm to recover the original parameters (e.g., the values of the maximal ion channel conductances) that were used to generate the data. In this fashion, the success or failure of the fitting procedure in finding the original parameters can be easily determined. Here we show that some model fitting procedures that provide an excellent fit in such model-to-model comparisons give ill-balanced results when applied to experimental data. The main reason is that surrogate and experimental data test different aspects of the algorithm's function. With model-generated surrogate data, the algorithm is required to locate a perfect solution that is known to exist. With experimental target data, in contrast, there is no guarantee that a perfect solution is part of the search space; the optimization procedure must rank all imperfect approximations and ultimately select the best one. This aspect is not tested at all by surrogate data, since at least one perfect solution (the original parameters) is known to exist, making approximations unnecessary. Furthermore, we demonstrate that distance functions based on extracting a set of features from the target data (such as time to first spike, spike width, spike frequency, etc.), rather than using the original data (e.g., the whole spike trace) as the target for fitting, are capable of finding imperfect solutions that are good approximations of the experimental data.
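To make "feature-based distance" concrete, here is a minimal sketch (the feature set and trace format are our illustrative choices, not the paper's) of extracting a few spike features from a voltage trace and comparing model to target in feature space:

```python
import numpy as np

def spike_features(v, dt, threshold=0.0):
    """A few features of a voltage trace v (mV) sampled every dt (ms):
    time to first spike, spike count, and mean inter-spike interval.
    Spikes are detected as upward threshold crossings."""
    idx = np.flatnonzero((v[:-1] < threshold) & (v[1:] >= threshold))
    t_spk = idx * dt
    if t_spk.size == 0:
        return np.array([np.inf, 0.0, np.inf])
    isis = np.diff(t_spk)
    mean_isi = isis.mean() if isis.size else np.inf
    return np.array([t_spk[0], float(t_spk.size), mean_isi])

def feature_distance(v_model, v_target, dt):
    """Compare traces in feature space rather than point by point, so an
    imperfect model that captures the right firing pattern still scores
    well even if individual spike times differ."""
    d = spike_features(v_model, dt) - spike_features(v_target, dt)
    return np.linalg.norm(np.nan_to_num(d, nan=0.0, posinf=1e6, neginf=-1e6))
```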
We present a novel framework for automatically constraining the parameters of compartmental models of neurons, given a large set of experimentally measured responses of these neurons. In experiments, intrinsic noise gives rise to large variability (e.g., in firing pattern) of the voltage responses to repetitions of the exact same input. Thus, the common approach of fitting models by attempting to perfectly replicate, point by point, a single chosen trace out of the spectrum of variable responses does not do justice to the data. In addition, finding a single error function that faithfully characterizes the distance between two spiking traces is not a trivial pursuit. To address these issues, one can adopt a multiple-objective optimization approach that allows several error functions to be used jointly. With more than one error function available, the comparison between experimental voltage traces and model responses can be performed on the basis of individual features of interest (e.g., spike rate, spike width). Each feature can be compared between the model and the experimental mean, in units of its experimental variability, thereby incorporating this variability into the fit. We demonstrate the success of this approach, used in conjunction with genetic-algorithm optimization, in generating an excellent fit between model behavior and the firing patterns of two distinct electrical classes of cortical interneurons, accommodating and fast-spiking. We argue that the multiple, diverse models generated by this method could serve as building blocks for the realistic simulation of large neuronal networks.
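A sketch of the normalization described above (the feature names and numbers are made up for illustration): each feature's model-vs-experiment discrepancy is expressed in units of the experimental standard deviation, yielding one objective per feature for the genetic algorithm to minimize jointly.

```python
import numpy as np

def feature_errors(model_features, exp_feature_samples):
    """model_features: dict mapping feature name -> model value.
    exp_feature_samples: dict mapping feature name -> array of values
    measured across repetitions of the exact same input (the intrinsic
    variability). Returns one error per feature, in units of the
    experimental SD: a model within ~1 SD of the experimental mean is
    as close as the data's own variability allows."""
    errors = {}
    for name, samples in exp_feature_samples.items():
        mu, sd = np.mean(samples), np.std(samples)
        errors[name] = abs(model_features[name] - mu) / sd
    return errors

# Example with made-up numbers:
exp = {"spike_rate_hz": np.array([14.0, 16.5, 15.2, 13.8]),
       "spike_width_ms": np.array([0.9, 1.1, 1.0, 0.95])}
model = {"spike_rate_hz": 15.0, "spike_width_ms": 1.4}
print(feature_errors(model, exp))  # one objective per feature
```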
We are now accepting applications in theoretical neuroscience at the postdoctoral and graduate levels.
If you have specific salary requirements, please include them in your e-mail; all information is confidential. HHMI is an equal opportunity employer.