12 Publications

Showing 1-10 of 12 results
Your Criteria:
    Darshan Lab
    05/22/17 | A canonical neural mechanism for behavioral variability.
    Darshan R, Wood WE, Peters S, Leblois A, Hansel D
    Nature Communications. 2017 May 22;8:15415. doi: 10.1038/ncomms15415

    The ability to generate variable movements is essential for learning and adjusting complex behaviours. This variability has been linked to the temporal irregularity of neuronal activity in the central nervous system. However, how neuronal irregularity actually translates into behavioural variability is unclear. Here we combine modelling, electrophysiological and behavioural studies to address this issue. We demonstrate that a model circuit comprising topographically organized and strongly recurrent neural networks can autonomously generate irregular motor behaviours. Simultaneous recordings of neurons in singing finches reveal that neural correlations increase across the circuit driving song variability, in agreement with the model predictions. Analysing behavioural data, we find remarkable similarities in the babbling statistics of 5-6-month-old human infants and juveniles from three songbird species and show that our model naturally accounts for these 'universal' statistics.
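
For intuition, the core modelling ingredient, a strongly recurrent network that generates irregular activity on its own, can be sketched in a few lines of NumPy. This is a generic rate-chaos toy (network size, gain g, and the choice of readout unit are illustrative assumptions, not the paper's actual circuit):

```python
import numpy as np

# Strongly recurrent random network (gain g > 1): activity is
# self-sustained and irregular without any external noise.
rng = np.random.default_rng(6)
N, g, dt = 300, 1.5, 0.05
J = g * rng.standard_normal((N, N)) / np.sqrt(N)
x = rng.standard_normal(N)

readout = []
for _ in range(4000):
    x += dt * (-x + J @ np.tanh(x))    # rate dynamics, tau = 1
    readout.append(np.tanh(x[0]))      # one unit as a "motor" readout
readout = np.array(readout)
```

After a transient, the readout keeps fluctuating irregularly even though the dynamics are deterministic, the kind of internally generated variability such models exploit.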

    Darshan Lab
    06/27/23 | A scalable implementation of the recursive least-squares algorithm for training spiking neural networks
    Benjamin J. Arthur, Christopher M. Kim, Susu Chen, Stephan Preibisch, Ran Darshan
    Frontiers in Neuroinformatics. 2023 Jun 27. doi: 10.3389/fninf.2023.1099510

    Training spiking recurrent neural networks on neuronal recordings or behavioral tasks has become a prominent tool to study computations in the brain. With the increasing size and complexity of neural recordings, there is a need for fast algorithms that can scale to large datasets. We present optimized CPU and GPU implementations of the recursive least-squares algorithm in spiking neural networks. The GPU implementation allows training networks with on the order of millions of neurons an order of magnitude faster than the CPU implementation. We demonstrate this by applying our algorithm to reproduce the activity of >66,000 recorded neurons of a mouse performing a decision-making task. The fast implementation enables efficient training of large-scale spiking models, thus allowing for in silico study of the dynamics and connectivity underlying multi-area computations.
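
The recursive least-squares update at the heart of such implementations can be sketched as follows. This is a toy single-readout NumPy version (variable names, sizes, and the implicit regularization P = I are illustrative, not the paper's optimized code):

```python
import numpy as np

def rls_step(w, P, r, target):
    """One recursive least-squares (FORCE-style) update of a readout.

    w: (N,) trained weights; P: (N, N) running inverse correlation
    matrix; r: (N,) presynaptic rates; target: desired output.
    """
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)      # gain vector
    err = w @ r - target         # error before the update
    w = w - err * k
    P = P - np.outer(k, Pr)      # rank-1 downdate of P
    return w, P, err

# Toy usage: recover a hidden readout from random rate patterns.
rng = np.random.default_rng(0)
N, T = 50, 300
rates = rng.standard_normal((T, N))
w_true = rng.standard_normal(N) / np.sqrt(N)
target = rates @ w_true

w, P = np.zeros(N), np.eye(N)
errs = []
for t in range(T):
    w, P, err = rls_step(w, P, rates[t], target[t])
    errs.append(err)
```

The per-step error shrinks rapidly once the number of samples exceeds the number of weights; the GPU work described above parallelizes exactly this rank-1 update across many trained neurons.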

    Darshan Lab
    03/24/15 | Basal ganglia: songbird models
    Leblois A, Darshan R
    Encyclopedia of Computational Neuroscience: 356-61.

    Songbirds produce complex vocalizations, a behavior that depends on the ability of juveniles to imitate the song of an adult. Song learning relies on a specialized basal ganglia-thalamocortical loop. Several computational models have examined the role of this circuit in song learning, shedding light on the neurobiological mechanisms underlying sensorimotor learning.

    Darshan Lab, Svoboda Lab
    11/26/23 | Connectivity underlying motor cortex activity during naturalistic goal-directed behavior.
    Arseny Finkelstein, Kayvon Daie, Márton Rózsa, Ran Darshan, Karel Svoboda
    bioRxiv. 2023 Nov 26. doi: 10.1101/2023.11.25.568673

    Neural representations of information are shaped by local network interactions. Previous studies linking neural coding and cortical connectivity focused on stimulus selectivity in the sensory cortex. Here we study neural activity in the motor cortex during naturalistic behavior in which mice gathered rewards with multidirectional tongue reaching. This behavior does not require training and thus allowed us to probe neural coding and connectivity in motor cortex before its activity is shaped by learning a specific task. Neurons typically responded during and after reaching movements and exhibited conjunctive tuning to target location and reward outcome. We used an all-optical method for large-scale causal functional connectivity mapping in vivo. Mapping connectivity between >20,000,000 excitatory neuronal pairs revealed fine-scale columnar architecture in layer 2/3 of the motor cortex. Neurons displayed local (<100 µm) like-to-like connectivity according to target-location tuning, and inhibition over longer spatial scales. Connectivity patterns comprised a continuum, with abundant weakly connected neurons and sparse strongly connected neurons that function as network hubs. Hub neurons were weakly tuned to target location and reward outcome but strongly influenced neighboring neurons. This network of neurons, encoding location and outcome of movements to different motor goals, may be a general substrate for rapid learning of complex, goal-directed behaviors.
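
The logic of perturbation-based ("causal") connectivity mapping can be illustrated with a toy simulation in which each neuron is stimulated in turn and the steady-state responses of all the others are recorded. The linear dynamics, sparsity, and coupling scale here are assumptions for illustration, not the paper's analysis:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 60
# Sparse "ground-truth" connectivity: 10% of pairs connected.
J_true = (rng.random((N, N)) < 0.1) * rng.normal(0.5, 0.1, (N, N))
np.fill_diagonal(J_true, 0.0)

# Steady-state responses of the linear network x = 0.2*J_true @ x + u.
A = np.linalg.inv(np.eye(N) - 0.2 * J_true)
influence = np.zeros((N, N))
for j in range(N):
    u = np.zeros(N)
    u[j] = 1.0                  # "photostimulate" neuron j
    influence[:, j] = A @ u     # record everyone's response
np.fill_diagonal(influence, 0.0)

# At weak coupling the influence map is dominated by direct connections.
corr = np.corrcoef(J_true.ravel(), influence.ravel())[0, 1]
```

The measured influence matrix correlates strongly with the true connectivity, which is why single-cell perturbation experiments can reveal local circuit structure.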

    Svoboda Lab, Darshan Lab
    06/18/22 | Distributing task-related neural activity across a cortical network through task-independent connections
    Christopher M. Kim, Arseny Finkelstein, Carson C. Chow, Karel Svoboda, Ran Darshan
    bioRxiv. 2022 Jun 18. doi: 10.1101/2022.06.17.496618

    Task-related neural activity is widespread across populations of neurons during goal-directed behaviors. However, little is known about the synaptic reorganization and circuit mechanisms that lead to broad activity changes. Here we trained a limited subset of neurons in a spiking network with strong synaptic interactions to reproduce the activity of neurons in the motor cortex during a decision-making task. We found that task-related activity, resembling the neural data, emerged across the network, even in the untrained neurons. Analysis of trained networks showed that strong untrained synapses, which were independent of the task and determined the dynamical state of the network, mediated the spread of task-related activity. Optogenetic perturbations suggest that the motor cortex is strongly coupled, supporting the applicability of the mechanism to cortical networks. Our results reveal a cortical mechanism that facilitates distributed representations of task variables by spreading the activity from a subset of plastic neurons to the entire network through task-independent strong synapses.

    Svoboda Lab, Darshan Lab
    05/18/23 | Distributing task-related neural activity across a cortical network through task-independent connections.
    Kim CM, Finkelstein A, Chow CC, Svoboda K, Darshan R
    Nature Communications. 2023 May 18;14(1):2851. doi: 10.1038/s41467-023-38529-y

    Task-related neural activity is widespread across populations of neurons during goal-directed behaviors. However, little is known about the synaptic reorganization and circuit mechanisms that lead to broad activity changes. Here we trained a subset of neurons in a spiking network with strong synaptic interactions to reproduce the activity of neurons in the motor cortex during a decision-making task. Task-related activity, resembling the neural data, emerged across the network, even in the untrained neurons. Analysis of trained networks showed that strong untrained synapses, which were independent of the task and determined the dynamical state of the network, mediated the spread of task-related activity. Optogenetic perturbations suggest that the motor cortex is strongly coupled, supporting the applicability of the mechanism to cortical networks. Our results reveal a cortical mechanism that facilitates distributed representations of task variables by spreading the activity from a subset of plastic neurons to the entire network through task-independent strong synapses.
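
The key mechanism, task input to a small trained subset spreading through strong task-independent connections, can be sketched with a linear rate network. Sizes, the coupling strength g, and the linear steady-state simplification are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N, trained = 200, 20
g = 0.8                                            # strong but stable coupling
J = g * rng.standard_normal((N, N)) / np.sqrt(N)   # task-independent weights

u = np.zeros(N)
u[:trained] = rng.standard_normal(trained)   # task input only to trained subset

# Steady state of dx/dt = -x + J x + u
x = np.linalg.solve(np.eye(N) - J, u)

# Untrained neurons inherit task-related activity through J alone.
untrained_activity = np.abs(x[trained:]).mean()
```

Even though only 10% of neurons receive the task input, the untrained 90% develop substantial task-related activity purely through the random recurrent coupling.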

    Darshan Lab
    09/02/19 | Idiosyncratic choice bias in decision tasks naturally emerges from neuronal network dynamics.
    Lebovich L, Darshan R, Lavi Y, Hansel D, Loewenstein Y
    Nature Human Behaviour. 2019 Sep 02;3(11):1190-1202. doi: 10.1101/284877

    An idiosyncratic tendency to choose one alternative over others in the absence of an identifiable reason is a common observation in two-alternative forced-choice experiments. It is tempting to account for it as resulting from the (unknown) participant-specific history and thus to treat it as measurement noise. Indeed, idiosyncratic choice biases are typically considered a nuisance. Care is taken to account for them by adding an ad-hoc bias parameter or by counterbalancing the choices to average them out. Here we quantify idiosyncratic choice biases in a perceptual discrimination task and a motor task. We report substantial and significant biases in both cases. Then, we present theoretical evidence that even in idealized experiments, in which the settings are symmetric, idiosyncratic choice bias is expected to emerge from the dynamics of competing neuronal networks. We thus argue that idiosyncratic choice bias reflects the microscopic dynamics of choice and therefore is virtually inevitable in any comparison or decision task.
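
The theoretical point, that a fixed asymmetry in competing networks produces a reproducible choice bias even under fresh trial-to-trial noise, can be illustrated with a toy winner-take-all race model (all parameters here are assumptions, not the paper's fitted model):

```python
import numpy as np

def run_trial(rng, bias, noise=0.2, dt=0.01, steps=2000):
    """Race between two mutually inhibiting populations.

    `bias` is a fixed asymmetry standing in for idiosyncratic
    differences in connectivity; the noise is fresh on every trial.
    """
    x = np.zeros(2)
    drive = np.array([1.0 + bias, 1.0 - bias])
    for _ in range(steps):
        inp = drive - 2.0 * x[::-1]                  # mutual inhibition
        x += dt * (-x + np.maximum(inp, 0.0))
        x += np.sqrt(dt) * noise * rng.standard_normal(2)
        x = np.maximum(x, 0.0)
        if x.max() > 0.8:                            # decision threshold
            break
    return int(np.argmax(x))

rng = np.random.default_rng(2)
choices = np.array([run_trial(rng, bias=0.2) for _ in range(200)])
p_choose_0 = (choices == 0).mean()
```

The winner-take-all dynamics amplify the small fixed asymmetry, so the same alternative wins on well over half of otherwise symmetric trials.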

    Darshan Lab
    01/09/14 | Interference and shaping in sensorimotor adaptations with rewards.
    Darshan R, Leblois A, Hansel D
    PLoS Computational Biology. 2014 Jan;10(1):e1003377. doi: 10.1371/journal.pcbi.1003377

    When a perturbation is applied in a sensorimotor transformation task, subjects can adapt and maintain performance by either relying on sensory feedback, or, in the absence of such feedback, on information provided by rewards. For example, in a classical rotation task where movement endpoints must be rotated to reach a fixed target, human subjects can successfully adapt their reaching movements solely on the basis of binary rewards, although this proves much more difficult than with visual feedback. Here, we investigate such a reward-driven sensorimotor adaptation process in a minimal computational model of the task. The key assumption of the model is that synaptic plasticity is gated by the reward. We study how the learning dynamics depend on the target size, the movement variability, the rotation angle and the number of targets. We show that when the movement is perturbed for multiple targets, the adaptation process for the different targets can interfere destructively or constructively depending on the similarities between the sensory stimuli (the targets) and the overlap in their neuronal representations. Destructive interferences can result in a drastic slowdown of the adaptation. As a result of interference, the time to adapt varies non-linearly with the number of targets. Our analysis shows that these interferences are weaker if the reward varies smoothly with the subject's performance instead of being binary. We demonstrate how shaping the reward or shaping the task can accelerate the adaptation dramatically by reducing the destructive interferences. We argue that experimentally investigating the dynamics of reward-driven sensorimotor adaptation for more than one sensory stimulus can shed light on the underlying learning rules.
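
The model's key assumption, synaptic plasticity gated by a binary reward, can be illustrated in a one-dimensional toy version of the rotation task with a single target. All parameters here are illustrative; the paper's model works with neuronal representations and multiple targets:

```python
import numpy as np

def adapt_to_rotation(rotation=30.0, target_size=8.0, noise=12.0,
                      lr=0.5, trials=400, seed=3):
    """Reward-gated adaptation to a visuomotor rotation, single target.

    Motor exploration is consolidated into the correction only on
    rewarded (on-target) trials: plasticity gated by a binary reward.
    """
    rng = np.random.default_rng(seed)
    correction = 0.0
    for _ in range(trials):
        explore = noise * rng.standard_normal()    # motor variability
        error = rotation + correction + explore    # endpoint error (deg)
        if abs(error) < target_size:               # binary reward
            correction += lr * explore             # keep what worked
    return correction

final_correction = adapt_to_rotation()
```

The learned correction drifts toward cancelling the imposed rotation, and only the rewarded exploration ever modifies the "synapse"; enlarging the target or smoothing the reward speeds up this process, as analyzed in the paper.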

    Darshan Lab
    04/05/22 | Learning to represent continuous variables in heterogeneous neural networks
    Ran Darshan, Alexander Rivkind
    Cell Reports. 2022 Apr 05;39(1):110612. doi: 10.1016/j.celrep.2022.110612

    Manifold attractors are a key framework for understanding how continuous variables, such as position or head direction, are encoded in the brain. In this framework, the variable is represented along a continuum of persistent neuronal states which forms a manifold attractor. Neural networks with symmetric synaptic connectivity that can implement manifold attractors have become the dominant model in this framework. In addition to a symmetric connectome, these networks imply homogeneity of individual-neuron tuning curves and symmetry of the representational space; these features are largely inconsistent with neurobiological data. Here, we developed a theory for computations based on manifold attractors in trained neural networks and show how these manifolds can cope with diverse neuronal responses, imperfections in the geometry of the manifold and a high level of synaptic heterogeneity. In such heterogeneous trained networks, a continuous representational space emerges from a small set of stimuli used for training. Furthermore, we find that the network response to external inputs depends on the geometry of the representation and on the level of synaptic heterogeneity in an analytically tractable and interpretable way. Finally, we show that an overly complex geometry of the neuronal representation impairs the attractiveness of the manifold and may lead to its destabilization. Our framework reveals that continuous features can be represented in the recurrent dynamics of heterogeneous networks without assuming unrealistic symmetry. It suggests that the representational space of putative manifold attractors in the brain dictates the dynamics in their vicinity.
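
For intuition, the idealized symmetric ring network that this work generalizes can be simulated in a few lines: a bump of activity forms and then persists at an arbitrary position along the manifold. The cosine connectivity and parameters below are the textbook symmetric case, not the paper's heterogeneous trained networks:

```python
import numpy as np

N = 100
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
# Idealized symmetric ring connectivity (cosine kernel).
J = 3.0 * np.cos(theta[:, None] - theta[None, :]) / N

rng = np.random.default_rng(4)
x = 0.1 * rng.standard_normal(N)       # weak random initial condition
dt = 0.1
for _ in range(1000):
    x += dt * (-x + J @ np.tanh(x))    # relax onto the attractor
bump_at = int(np.argmax(x))

for _ in range(1000):                  # no input: the bump persists
    x += dt * (-x + J @ np.tanh(x))
bump_later = int(np.argmax(x))
```

The bump's position, which encodes the continuous variable, is set by the initial condition and stays put; the paper asks how far this picture survives when the perfect symmetry of J is replaced by trained, heterogeneous weights.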

    Darshan Lab
    02/01/19 | Neuronal activity and learning in local cortical networks are modulated by the action-perception state
    Ben Engelhard, Ran Darshan, Nofar Ozeri-Engelhard, Zvi Israel, Uri Werner-Reiss, David Hansel, Hagai Bergman, Eilon Vaadia
    bioRxiv. 2019 Feb 01. doi: 10.1101/537613

    During sensorimotor learning, neuronal networks change to optimize the associations between action and perception. In this study, we examine how the brain harnesses neuronal patterns that correspond to the current action-perception state during learning. To this end, we recorded activity from motor cortex while monkeys either performed a familiar motor task (movement-state) or learned to control the firing rate of a target neuron using a brain-machine interface (BMI-state). Before learning, monkeys were placed in an observation-state, where no action was required. We found that neuronal patterns during the BMI-state were markedly different from the movement-state patterns. BMI-state patterns were initially similar to those in the observation-state and evolved to produce an increase in the firing rate of the target neuron. The overall activity of the non-target neurons remained similar after learning, suggesting that excitatory-inhibitory balance was maintained. Indeed, a novel neural-level reinforcement-learning network model operating in a chaotic regime of balanced excitation and inhibition predicts our results in detail. We conclude that during BMI learning, the brain can adapt patterns corresponding to the current action-perception state to gain rewards. Moreover, our results show that we can predict activity changes that occur during learning based on the pre-learning activity. This new finding may serve as a key step toward clinical brain-machine interface applications to modify impaired brain activity.
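
The flavor of such reinforcement learning can be sketched as a minimal reward-modulated (node-perturbation) rule that raises a target neuron's firing rate. This is a schematic stand-in with made-up parameters, not the paper's full balanced-network model:

```python
import numpy as np

def bmi_learning(trials=300, N=50, lr=0.05, seed=5):
    """Raise a target neuron's rate with a reward-modulated rule.

    Exploratory weight fluctuations xi are consolidated in proportion
    to the reward-prediction error (R - R_bar): node perturbation.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(N)
    r_pre = rng.random(N)                 # fixed presynaptic rates
    R_bar = 0.0
    rates = []
    for _ in range(trials):
        xi = 0.1 * rng.standard_normal(N)   # exploration
        rate = (w + xi) @ r_pre             # target neuron's rate
        R = rate                            # reward = rate (BMI objective)
        w += lr * (R - R_bar) * xi          # reward-gated consolidation
        R_bar += 0.3 * (R - R_bar)          # running reward baseline
        rates.append(rate)
    return np.array(rates)

rates = bmi_learning()
```

The target neuron's rate climbs over trials because weight fluctuations that happened to increase the reward are preferentially retained, the same logic the paper embeds in a chaotic balanced network.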
