Darshan Lab / Publications

7 Publications

06/02/21 | Learning to represent continuous variables in heterogeneous neural networks
Ran Darshan, Alexander Rivkind
bioRxiv. 2021 Jun 02. doi: 10.1101/2021.06.01.446635

Manifold attractors are a key framework for understanding how continuous variables, such as position or head direction, are encoded in the brain. In this framework, the variable is represented along a continuum of persistent neuronal states that forms a manifold attractor. Neural networks with symmetric synaptic connectivity that can implement manifold attractors have become the dominant model in this framework. In addition to a symmetric connectome, these networks imply homogeneity of individual-neuron tuning curves and symmetry of the representational space; these features are largely inconsistent with neurobiological data. Here, we developed a theory for computations based on manifold attractors in trained neural networks and show how these manifolds can cope with diverse neuronal responses, imperfections in the geometry of the manifold, and a high level of synaptic heterogeneity. In such heterogeneous trained networks, a continuous representational space emerges from a small set of stimuli used for training. Furthermore, we find that the network response to external inputs depends on the geometry of the representation and on the level of synaptic heterogeneity in an analytically tractable and interpretable way. Finally, we show that an overly complex geometry of the neuronal representation impairs the attractiveness of the manifold and may lead to its destabilization. Our framework reveals that continuous features can be represented in the recurrent dynamics of heterogeneous networks without assuming unrealistic symmetry. It suggests that the representational space of putative manifold attractors in the brain dictates the dynamics in their vicinity.
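
To make the baseline concrete, here is a minimal sketch of the classical symmetric ring-attractor network that this paper takes as its starting point and whose assumptions it relaxes. The network size, the connectivity constants J0 and J1, and the simulate helper are illustrative choices, not the authors' code; the parameters are picked in a regime where a self-sustained activity bump exists.

```python
# Classical ring attractor: symmetric, translation-invariant connectivity
# (the homogeneity assumption the paper argues is biologically unrealistic).
import numpy as np

N = 256                                    # neurons on the ring
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)

# J_ij depends only on the distance between preferred angles.
J0, J1 = -2.1, 7.0                         # uniform inhibition + cosine tuning
W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N

def simulate(r0, steps=2000, dt=0.1):
    """Threshold-linear rate dynamics: dr/dt = -r + [W r]_+."""
    r = r0.copy()
    for _ in range(steps):
        r += dt * (-r + np.maximum(W @ r, 0.0))
    return r

# A localized bump relaxes onto the attractor manifold and persists without
# input; its peak position encodes the continuous variable (e.g., heading).
r_final = simulate(np.maximum(np.cos(theta - 1.0), 0.0))
print("bump peak at angle:", theta[np.argmax(r_final)])
```

Replacing W with learned, heterogeneous connectivity breaks this translation invariance, which is exactly the regime the paper's theory addresses.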

09/02/19 | Idiosyncratic choice bias in decision tasks naturally emerges from neuronal network dynamics.
Lebovich L, Darshan R, Lavi Y, Hansel D, Loewenstein Y
Nature Human Behaviour. 2019 Sep 02;3(11):1190-1202. doi: 10.1101/284877

An idiosyncratic tendency to choose one alternative over the others in the absence of an identified reason is a common observation in two-alternative forced-choice experiments. It is tempting to attribute it to the (unknown) participant-specific history and thus treat it as measurement noise. Indeed, idiosyncratic choice biases are typically considered a nuisance: care is taken to account for them by adding an ad hoc bias parameter or by counterbalancing the choices to average them out. Here we quantify idiosyncratic choice biases in a perceptual discrimination task and a motor task. We report substantial and significant biases in both cases. Then, we present theoretical evidence that even in idealized experiments, in which the settings are symmetric, idiosyncratic choice bias is expected to emerge from the dynamics of competing neuronal networks. We thus argue that idiosyncratic choice bias reflects the microscopic dynamics of choice and is therefore virtually inevitable in any comparison or decision task.
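
As a toy illustration of the theoretical point, the sketch below simulates two mutually inhibiting populations poised at an unstable symmetric state. A tiny frozen asymmetry eps, a stand-in for unknown participant-specific connectivity, is converted by the winner-take-all dynamics into a reproducible choice bias. The two-unit reduction and all parameter values are illustrative assumptions, not the authors' network model.

```python
# Winner-take-all competition between two populations: fast noise picks a
# winner on each trial, but a frozen connectivity asymmetry biases the odds.
import numpy as np

rng = np.random.default_rng(0)
w_inh = 2.0      # mutual inhibition strength
eps = 0.02       # idiosyncratic asymmetry, fixed across trials
noise = 0.3      # fast within-trial fluctuations
dt = 0.05

def run_trial():
    r = np.array([0.1, 0.1])                       # symmetric initial state
    for _ in range(500):
        drive = np.array([1.0 + eps - w_inh * r[1],
                          1.0 - w_inh * r[0]])
        r += dt * (-r + np.maximum(drive, 0.0))
        r += np.sqrt(dt) * noise * rng.standard_normal(2)
        r = np.maximum(r, 0.0)
    return r[0] > r[1]                             # True = choose A

p_A = np.mean([run_trial() for _ in range(1000)])
print("P(choose A):", p_A)   # reliably above 0.5 despite a 'symmetric' task
```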

02/01/19 | Neuronal activity and learning in local cortical networks are modulated by the action-perception state
Ben Engelhard, Ran Darshan, Nofar Ozeri-Engelhard, Zvi Israel, Uri Werner-Reiss, David Hansel, Hagai Bergman, Eilon Vaadia
bioRxiv. 2019 Feb 01. doi: 10.1101/537613

During sensorimotor learning, neuronal networks change to optimize the associations between action and perception. In this study, we examine how the brain harnesses neuronal patterns that correspond to the current action-perception state during learning. To this end, we recorded activity from motor cortex while monkeys either performed a familiar motor task (movement-state) or learned to control the firing rate of a target neuron using a brain-machine interface (BMI-state). Before learning, monkeys were placed in an observation-state, where no action was required. We found that neuronal patterns during the BMI-state were markedly different from the movement-state patterns. BMI-state patterns were initially similar to those in the observation-state and evolved to produce an increase in the firing rate of the target neuron. The overall activity of the non-target neurons remained similar after learning, suggesting that excitatory-inhibitory balance was maintained. Indeed, a novel neural-level reinforcement-learning network model operating in a chaotic regime of balanced excitation and inhibition predicts our results in detail. We conclude that during BMI learning, the brain can adapt patterns corresponding to the current action-perception state to gain rewards. Moreover, our results show that we can predict activity changes that occur during learning based on the pre-learning activity. This new finding may serve as a key step toward clinical brain-machine interface applications to modify impaired brain activity.
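
The sketch below is a crude stand-in for the reinforcement-learning network model described here: a random recurrent rate network in the chaotic regime (gain g > 1), with a reward-gated rule that consolidates a random perturbation of the weights onto the target unit only when it raises that unit's mean rate. The network size, gain, perturbation scale, and the hill-climbing rule itself are assumptions for illustration, not the authors' learning rule.

```python
# Reward-gated weight search in a chaotic random rate network: only
# perturbations that increase the "BMI" target unit's rate are kept.
import numpy as np

rng = np.random.default_rng(1)
N, g = 200, 1.5                          # g > 1: chaotic regime
J = g * rng.standard_normal((N, N)) / np.sqrt(N)
x0 = rng.standard_normal(N)              # fixed initial state for evaluation
target = 0                               # unit whose rate the reward tracks

def target_rate(J, steps=500, dt=0.1):
    """Mean rate of the target unit under deterministic rate dynamics."""
    x = x0.copy()
    rates = []
    for _ in range(steps):
        x += dt * (-x + J @ np.tanh(x))
        rates.append(np.tanh(x[target]))
    return float(np.mean(rates[100:]))   # discard the initial transient

best = target_rate(J)
for trial in range(200):
    J_try = J.copy()
    J_try[target] += 0.02 * rng.standard_normal(N) / np.sqrt(N)
    r = target_rate(J_try)
    if r > best:                         # binary reward gates consolidation
        J, best = J_try, r
print("target unit mean rate after learning:", best)
```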

09/17/18 | Strength of correlations in strongly recurrent neuronal networks.
Darshan R, van Vreeswijk C, Hansel D
Physical Review X. 2018 Sep 17;8:031072. doi: 10.1103/PhysRevX.8.031072

Spatiotemporal correlations in brain activity are functionally important and have been implicated in perception, learning and plasticity, exploratory behavior, and various aspects of cognition. Neurons in the cerebral cortex interact strongly. Their activity is temporally irregular and can exhibit substantial correlations. However, how the collective dynamics of highly recurrent and strongly interacting neurons can evolve into a state in which the activity of individual cells is highly irregular yet macroscopically correlated is an open question. Here, we develop a general theory that relates the strength of pairwise correlations to the anatomical features of networks of strongly coupled neurons. To this end, we investigate networks of binary units. When interactions are strong, the activity is irregular in a large region of parameter space. We find that, despite the strong interactions, the correlations are generally very weak. Nevertheless, we identify architectural features which, if present, give rise to strong correlations without destroying the irregularity of the activity. For networks with such features, we determine how correlations scale with the network size and the number of connections. Our work shows the mechanism by which strong correlations can be consistent with highly irregular activity, two hallmarks of neuronal dynamics in the central nervous system.
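
A minimal sketch of the kind of binary-unit network analyzed in this paper: strong synapses scaling as 1/sqrt(K) on a sparse random graph, asynchronous single-unit updates, and a direct estimate of the mean pairwise covariance. The network sizes, the ±1 synapse distribution, and the zero effective threshold (mean input sitting at threshold, so activity stays irregular) are illustrative assumptions.

```python
# Binary network with strong O(1/sqrt(K)) synapses: activity is irregular,
# yet the measured pairwise covariances come out weak.
import numpy as np

rng = np.random.default_rng(2)
N, K = 1000, 100                  # neurons, average connections per neuron

C = rng.random((N, N)) < K / N    # sparse random directed graph
J = np.where(C, rng.choice([-1.0, 1.0], size=(N, N)) / np.sqrt(K), 0.0)
np.fill_diagonal(J, 0.0)

s = (rng.random(N) < 0.5).astype(float)
samples = []
for t in range(200_000):          # asynchronous single-unit updates
    i = rng.integers(N)
    s[i] = float(J[i] @ s > 0.0)  # effective threshold 0 (see lead-in)
    if t > 50_000 and t % 1_000 == 0:
        samples.append(s.copy())

S = np.array(samples)
off_diag = np.cov(S.T)[~np.eye(N, dtype=bool)]
print("mean rate:", S.mean())
print("mean |pairwise covariance|:", np.abs(off_diag).mean())
# Small relative to the single-unit variance (~0.25), despite strong coupling.
```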

05/22/17 | A canonical neural mechanism for behavioral variability.
Darshan R, Wood WE, Peters S, Leblois A, Hansel D
Nature Communications. 2017 May 22;8:15415. doi: 10.1038/ncomms15415

The ability to generate variable movements is essential for learning and adjusting complex behaviours. This variability has been linked to the temporal irregularity of neuronal activity in the central nervous system. However, how neuronal irregularity actually translates into behavioural variability is unclear. Here we combine modelling, electrophysiological and behavioural studies to address this issue. We demonstrate that a model circuit comprising topographically organized and strongly recurrent neural networks can autonomously generate irregular motor behaviours. Simultaneous recordings of neurons in singing finches reveal that neural correlations increase across the circuit driving song variability, in agreement with the model predictions. Analysing behavioural data, we find remarkable similarities in the babbling statistics of 5-6-month-old human infants and juveniles from three songbird species and show that our model naturally accounts for these 'universal' statistics.
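
The sketch below isolates one ingredient of the circuit's logic: a downstream readout that pools many irregular premotor neurons averages away independent fluctuations, so only correlated fluctuations survive to drive behavioral variability. The Gaussian shared-plus-private rate model and all parameter values are illustrative assumptions, not the paper's topographic network.

```python
# Why correlations matter for variability: pooled output variance is
# c + (1 - c) / N, so for large N the correlated part dominates.
import numpy as np

rng = np.random.default_rng(3)
N, T = 500, 10_000
c = 0.1                                 # pairwise correlation of fluctuations

shared = rng.standard_normal(T)         # fluctuation common to all neurons
private = rng.standard_normal((N, T))   # independent fluctuations
activity = np.sqrt(c) * shared + np.sqrt(1 - c) * private

output = activity.mean(axis=0)          # readout pools the population
print("output variance, correlated drive:", output.var())                # ~ c
print("output variance, independent only:", private.mean(axis=0).var())  # ~ 1/N
```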

03/24/15 | Basal ganglia: songbird models
Leblois A, Darshan R
Encyclopedia of Computational Neuroscience. 2015:356-61.

Songbirds produce complex vocalizations, a behavior that depends on the ability of juveniles to imitate the song of an adult. Song learning relies on a specialized basal ganglia-thalamocortical loop. Several computational models have examined the role of this circuit in song learning, shedding light on the neurobiological mechanisms underlying sensorimotor learning.

01/09/14 | Interference and shaping in sensorimotor adaptations with rewards.
Darshan R, Leblois A, Hansel D
PLoS Computational Biology. 2014 Jan;10(1):e1003377. doi: 10.1371/journal.pcbi.1003377

When a perturbation is applied in a sensorimotor transformation task, subjects can adapt and maintain performance by either relying on sensory feedback, or, in the absence of such feedback, on information provided by rewards. For example, in a classical rotation task where movement endpoints must be rotated to reach a fixed target, human subjects can successfully adapt their reaching movements solely on the basis of binary rewards, although this proves much more difficult than with visual feedback. Here, we investigate such a reward-driven sensorimotor adaptation process in a minimal computational model of the task. The key assumption of the model is that synaptic plasticity is gated by the reward. We study how the learning dynamics depend on the target size, the movement variability, the rotation angle and the number of targets. We show that when the movement is perturbed for multiple targets, the adaptation process for the different targets can interfere destructively or constructively depending on the similarities between the sensory stimuli (the targets) and the overlap in their neuronal representations. Destructive interferences can result in a drastic slowdown of the adaptation. As a result of interference, the time to adapt varies non-linearly with the number of targets. Our analysis shows that these interferences are weaker if the reward varies smoothly with the subject's performance instead of being binary. We demonstrate how shaping the reward or shaping the task can accelerate the adaptation dramatically by reducing the destructive interferences. We argue that experimentally investigating the dynamics of reward-driven sensorimotor adaptation for more than one sensory stimulus can shed light on the underlying learning rules.
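
The model's key assumption, synaptic plasticity gated by reward, can be sketched in a few lines for a single target: an aim direction is updated only on rewarded trials, reinforcing whatever motor exploration produced the reward. The reduction to a single scalar aim and all parameter values are illustrative; the paper's model uses a neuronal population, whose overlapping representations of different targets are what produce the interference discussed above.

```python
# Reward-gated adaptation to a visuomotor rotation with binary reward.
import numpy as np

rng = np.random.default_rng(4)
rotation = np.deg2rad(30.0)      # imposed perturbation of the endpoint
half_width = np.deg2rad(10.0)    # reward window around the target
sigma = np.deg2rad(8.0)          # motor variability (exploration)
eta = 0.3                        # learning rate

aim = 0.0                        # aim direction relative to the target
for trial in range(2000):
    explore = sigma * rng.standard_normal()
    endpoint = aim + explore + rotation       # where the movement lands
    if abs(endpoint) < half_width:            # binary reward: on target?
        aim += eta * explore                  # plasticity gated by reward
print("adapted aim (deg):", np.rad2deg(aim))  # approaches -30, countering it
```

With several targets, the corresponding aim updates would overlap through shared neural representations, which is where the destructive and constructive interference described above comes from.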
