Druckmann Lab / Publications

25 Publications

Showing 21-25 of 25 results
Druckmann Lab
01/01/12 | A mechanistic model of early sensory processing based on subtracting sparse representations.
Druckmann S, Hu T, Chklovskii D
Advances in Neural Information Processing Systems. 2012;25:1979-87

Early stages of sensory systems face the challenge of compressing information from numerous receptors onto a much smaller number of projection neurons, a so-called communication bottleneck. To make more efficient use of limited bandwidth, compression may be achieved using predictive coding, whereby predictable, or redundant, components of the stimulus are removed. In the case of the retina, Srinivasan et al. (1982) suggested that feedforward inhibitory connections subtracting a linear prediction generated from nearby receptors implement such compression, resulting in biphasic center-surround receptive fields. However, feedback inhibitory circuits are common in early sensory circuits and furthermore their dynamics may be nonlinear. Can such circuits implement predictive coding as well? Here, solving the transient dynamics of nonlinear reciprocal feedback circuits through analogy to a signal-processing algorithm called linearized Bregman iteration, we show that nonlinear predictive coding can be implemented in an inhibitory feedback circuit. In response to a step stimulus, interneuron activity constructs, over time, progressively less sparse but more accurate representations of the stimulus, a temporally evolving prediction. This analysis provides a powerful theoretical framework to interpret and understand the dynamics of early sensory processing in a variety of physiological experiments and yields novel predictions regarding the relation between activity and stimulus statistics.
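The core iteration named in this abstract can be sketched in a few lines. This is a generic linearized Bregman iteration for a linear dictionary, not the paper's circuit model; the dictionary `A`, the thresholds, and the stimulus are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, mu):
    # Elementwise shrinkage: the sparsifying nonlinearity of the iteration
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearized_bregman(A, b, mu, delta, n_iter):
    """Build a sparse representation u with A @ u ~ b.

    Early iterates are sparse and coarse; later iterates are denser and
    more accurate -- a temporally evolving prediction of the stimulus b,
    in the spirit of the abstract's description.
    """
    v = np.zeros(A.shape[1])   # accumulated (dual) variable
    u = np.zeros(A.shape[1])   # current sparse representation
    for _ in range(n_iter):
        v = v + A.T @ (b - A @ u)          # accumulate the residual
        u = delta * soft_threshold(v, mu)  # shrink to a sparse code
    return u
```

Convergence requires the step size `delta` to be small relative to `1 / ||A||^2` (spectral norm squared); the values above are placeholders to be tuned per problem.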

08/01/11 | Effective stimuli for constructing reliable neuron models.
Druckmann S, Berger TK, Schürmann F, Hill S, Markram H, Segev I
PLoS Computational Biology. 2011 Aug;7(8):e1002133. doi: 10.1371/journal.pcbi.1002133

The rich dynamical nature of neurons poses major conceptual and technical challenges for unraveling their nonlinear membrane properties. Traditionally, various current waveforms have been injected at the soma to probe neuron dynamics, but the rationale for selecting specific stimuli has never been rigorously justified. The present experimental and theoretical study proposes a novel framework, inspired by learning theory, for objectively selecting the stimuli that best unravel the neuron’s dynamics. The efficacy of stimuli is assessed in terms of their ability to constrain the parameter space of biophysically detailed conductance-based models that faithfully replicate the neuron’s dynamics, as attested by their ability to generalize well to the neuron’s response to novel experimental stimuli. We used this framework to evaluate a variety of stimuli in different types of cortical neurons, at different ages, and in different animals. Despite their simplicity, a set of stimuli consisting of step and ramp current pulses outperforms synaptic-like noisy stimuli in revealing the dynamics of these neurons. The general framework that we propose paves a new way for defining, evaluating and standardizing effective electrical probing of neurons and will thus lay the foundation for a much deeper understanding of the electrical nature of these highly sophisticated and non-linear devices and of the neuronal networks that they compose.
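The idea of scoring a stimulus by how tightly it constrains the parameter space can be illustrated with a toy model. The two-parameter "neuron" below is not a conductance-based model, and the grid, tolerance, and stimuli are all invented for illustration: a stimulus whose responses pin down the parameters uniquely leaves few acceptable parameter sets, while an uninformative stimulus leaves many.

```python
import numpy as np

def response(theta, stim):
    # Toy two-parameter model standing in for a detailed neuron model
    a, b = theta
    return a * stim + b * stim ** 2

def n_acceptable(stim, true_theta=(1.0, 0.5), tol=1e-3):
    """Count grid parameter sets whose response matches the target.

    Fewer acceptable sets means the stimulus constrains the parameter
    space more effectively -- the abstract's criterion for efficacy.
    """
    target = response(true_theta, stim)
    grid = np.linspace(-2, 2, 41)
    count = 0
    for a in grid:
        for b in grid:
            if np.mean((response((a, b), stim) - target) ** 2) < tol:
                count += 1
    return count
```

A constant stimulus makes the two model terms collinear, so a whole line of parameter pairs fits the target; a stimulus taking two distinct values disambiguates them.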

01/01/10 | Over-complete representations on recurrent neural networks can support persistent percepts.
Druckmann S, Chklovskii D
Advances in Neural Information Processing Systems. 2010;23:541-9

A striking aspect of cortical neural networks is the divergence of a relatively small number of input channels from the peripheral sensory apparatus into a large number of cortical neurons, an over-complete representation strategy. Cortical neurons are then connected by a sparse network of lateral synapses. Here we propose that such architecture may increase the persistence of the representation of an incoming stimulus, or a percept. We demonstrate that for a family of networks in which the receptive field of each neuron is re-expressed by its outgoing connections, a represented percept can remain constant despite changing activity. We term this choice of connectivity REceptive FIeld REcombination (REFIRE) networks. The sparse REFIRE network may serve as a high-dimensional integrator and a biologically plausible model of the local cortical circuit.
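The linear-algebra fact underlying the claim that "a represented percept can remain constant despite changing activity" is easy to demonstrate. This sketch is not the paper's REFIRE connectivity, only the observation it builds on: in an over-complete code, activity changes lying in the null space of the read-out matrix leave the percept unchanged.

```python
import numpy as np

def null_space_step(D, x, size=5.0):
    """Move activity x along a direction that D maps to zero, so the
    percept D @ x stays fixed while the activity pattern changes."""
    _, _, Vt = np.linalg.svd(D)
    # For a wide D, the trailing right singular vectors span the null space
    return x + size * Vt[-1]

# 10 input channels fanned out onto 40 neurons: an over-complete code;
# columns of D play the role of (hypothetical) receptive fields
rng = np.random.default_rng(1)
D = rng.standard_normal((10, 40))
x0 = rng.standard_normal(40)
x1 = null_space_step(D, x0)
```

With 10 rows and 40 columns, `D` has a 30-dimensional null space, so there is plenty of room for activity to drift without disturbing the percept.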

11/01/08 | Evaluating automated parameter constraining procedures of neuron models by experimental and surrogate data.
Druckmann S, Berger TK, Hill S, Schürmann F, Markram H, Segev I
Biological Cybernetics. 2008 Nov;99(4-5):371-9. doi: 10.1007/s00422-008-0269-2

Neuron models, in particular conductance-based compartmental models, often have numerous parameters that cannot be directly determined experimentally and must be constrained by an optimization procedure. A common practice in evaluating the utility of such procedures is using a previously developed model to generate surrogate data (e.g., traces of spikes following step current pulses) and then challenging the algorithm to recover the original parameters (e.g., the value of maximal ion channel conductances) that were used to generate the data. In this fashion, the success or failure of the model fitting procedure to find the original parameters can be easily determined. Here we show that some model fitting procedures that provide an excellent fit in the case of such model-to-model comparisons provide ill-balanced results when applied to experimental data. The main reason is that surrogate and experimental data test different aspects of the algorithm’s function. When considering model-generated surrogate data, the algorithm is required to locate a perfect solution that is known to exist. In contrast, when considering experimental target data, there is no guarantee that a perfect solution is part of the search space. In this case, the optimization procedure must rank all imperfect approximations and ultimately select the best approximation. This aspect is not tested at all when considering surrogate data, since at least one perfect solution is known to exist (the original parameters), making all approximations unnecessary. Furthermore, we demonstrate that distance functions based on extracting a set of features from the target data (such as time-to-first-spike, spike width, spike frequency, etc.), rather than using the original data (e.g., the whole spike trace) as the target for fitting, are capable of finding imperfect solutions that are good approximations of the experimental data.
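A feature-based distance function of the kind described here can be sketched as follows. The feature set, the threshold-crossing spike detector, and the units are illustrative assumptions, not the paper's actual implementation; the key point is that model and experiment are compared on extracted features, each scored in units of the experimental variability.

```python
import numpy as np

def spike_times(trace, dt, threshold=0.0):
    """Times of upward threshold crossings of a voltage trace."""
    above = trace > threshold
    crossings = np.flatnonzero(~above[:-1] & above[1:])
    return crossings * dt

def feature_distance(model_trace, exp_traces, dt):
    """Score a model trace against repeated experimental trials.

    Each feature's error is expressed in units of the experimental
    standard deviation across trials, so the fit respects the data's
    intrinsic variability rather than demanding a point-by-point match.
    """
    def features(trace):
        t = spike_times(trace, dt)
        rate = len(t) / (len(trace) * dt)        # mean spike rate
        first = t[0] if len(t) else np.inf       # time to first spike
        return np.array([rate, first])

    exp_feats = np.array([features(tr) for tr in exp_traces])
    mean = exp_feats.mean(axis=0)
    std = exp_feats.std(axis=0) + 1e-12          # guard against zero variance
    return np.abs(features(model_trace) - mean) / std
```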

11/01/07 | A novel multiple objective optimization framework for constraining conductance-based neuron models by experimental data.
Druckmann S, Banitt Y, Gidon A, Schürmann F, Markram H, Segev I
Frontiers in Neuroscience. 2007 Nov;1(1):7-18. doi: 10.3389/neuro.01.1.1.001.2007

We present a novel framework for automatically constraining parameters of compartmental models of neurons, given a large set of experimentally measured responses of these neurons. In experiments, intrinsic noise gives rise to a large variability (e.g., in firing pattern) in the voltage responses to repetitions of the exact same input. Thus, the common approach of fitting models by attempting to perfectly replicate, point by point, a single chosen trace out of the spectrum of variable responses does not seem to do justice to the data. In addition, finding a single error function that faithfully characterizes the distance between two spiking traces is not a trivial pursuit. To address these issues, one can adopt a multiple objective optimization approach that allows the use of several error functions jointly. When more than one error function is available, the comparison between experimental voltage traces and model response can be performed on the basis of individual features of interest (e.g., spike rate, spike width). Each feature can be compared between model and experimental mean, in units of its experimental variability, thereby incorporating this variability into the fitting. We demonstrate the success of this approach, when used in conjunction with genetic algorithm optimization, in generating an excellent fit between model behavior and the firing pattern of two distinct electrical classes of cortical interneurons, accommodating and fast-spiking. We argue that the multiple, diverse models generated by this method could serve as the building blocks for the realistic simulation of large neuronal networks.
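One ingredient of a multiple objective optimization approach is ranking candidate models by Pareto dominance across several error functions, rather than collapsing them into one score. A minimal sketch (the genetic-algorithm machinery and the actual feature-based error functions are omitted):

```python
def dominates(a, b):
    """Pareto dominance for minimization: a is at least as good as b on
    every error function and strictly better on at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(scores):
    """Non-dominated models in a population, where each entry is a tuple
    of error-function values for one candidate model."""
    return [s for s in scores
            if not any(dominates(t, s) for t in scores if t is not s)]
```

Models on the Pareto front represent different trade-offs between the error functions; the diversity of these non-dominated models is what the abstract proposes to exploit when building networks.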
