Koyama Lab / Publications

10 Publications

    09/27/22 | A scalable implementation of the recursive least-squares algorithm for training spiking neural networks
    Benjamin J. Arthur, Christopher M. Kim, Susu Chen, Stephan Preibisch, Ran Darshan
    bioRxiv. 2022 Sep 27. doi: 10.1101/2022.09.26.509578

    Training spiking recurrent neural networks on neuronal recordings or behavioral tasks has become a prominent tool to study computations in the brain. With the increasing size and complexity of neural recordings, there is a need for fast algorithms that can scale to large datasets. We present optimized CPU and GPU implementations of the recursive least-squares algorithm in spiking neural networks. The GPU implementation allows training networks that reproduce the neural activity of on the order of a million neurons, an order of magnitude faster than the CPU implementation. We demonstrate this by applying our algorithm to reproduce the activity of >66,000 recorded neurons of a mouse performing a decision-making task. The fast implementation enables efficient training of large-scale spiking models, allowing for in silico study of the dynamics and connectivity underlying multi-area computations.
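
The recursive least-squares (FORCE-style) update at the core of this algorithm fits in a few lines. Below is a minimal NumPy sketch, using a rate-based readout and a toy driving signal for clarity; the variable names and setup are illustrative assumptions, not the paper's spiking implementation.

```python
import numpy as np

def rls_step(w, P, r, target):
    """One recursive least-squares update of readout weights w.

    w: (N,) readout weights; P: (N, N) running inverse correlation
    matrix of the rates; r: (N,) rates; target: scalar teaching signal.
    """
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)      # gain vector (equals P_new @ r)
    err = w @ r - target         # readout error before the update
    w = w - err * k              # error-corrected weights
    P = P - np.outer(k, Pr)      # rank-1 downdate of P
    return w, P

# toy usage: adapt a readout of N fluctuating rates toward a sine wave
N, T, dt = 200, 2000, 1e-3
rng = np.random.default_rng(0)
w, P = np.zeros(N), np.eye(N)
for t in range(T):
    r = np.tanh(rng.standard_normal(N))   # stand-in for network rates
    w, P = rls_step(w, P, r, np.sin(2 * np.pi * t * dt))
```

Each update costs O(N^2) per trained unit, which is what makes the parallel GPU implementation attractive at the scales the paper targets.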

    Svoboda Lab / Darshan Lab
    06/18/22 | Distributing task-related neural activity across a cortical network through task-independent connections
    Christopher M. Kim, Arseny Finkelstein, Carson C. Chow, Karel Svoboda, Ran Darshan
    bioRxiv. 2022 Jun 18. doi: 10.1101/2022.06.17.496618

    Task-related neural activity is widespread across populations of neurons during goal-directed behaviors. However, little is known about the synaptic reorganization and circuit mechanisms that lead to broad activity changes. Here we trained a limited subset of neurons in a spiking network with strong synaptic interactions to reproduce the activity of neurons in the motor cortex during a decision-making task. We found that task-related activity, resembling the neural data, emerged across the network, even in the untrained neurons. Analysis of trained networks showed that strong untrained synapses, which were independent of the task and determined the dynamical state of the network, mediated the spread of task-related activity. Optogenetic perturbations suggest that the motor cortex is strongly coupled, supporting the applicability of the mechanism to cortical networks. Our results reveal a cortical mechanism that facilitates distributed representations of task variables by spreading the activity from a subset of plastic neurons to the entire network through task-independent strong synapses.
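
As a rough schematic of this setup (not the paper's code), one can hold a strong random coupling matrix fixed and restrict plasticity to synapses onto a small trained subset; all names and parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, N_trained, dt, g = 500, 50, 1e-3, 2.0
trained = np.arange(N_trained)          # indices of the trained subset

# strong, task-independent connectivity (fixed during training)
J_fixed = g * rng.standard_normal((N, N)) / np.sqrt(N)
# learned component: only rows belonging to `trained` are ever updated
J_plastic = np.zeros((N, N))

x = rng.standard_normal(N)
for step in range(1000):
    r = np.tanh(x)
    x += dt * (-x + (J_fixed + J_plastic) @ r)
    # a learning rule (e.g. RLS, as sketched above) would update only
    # J_plastic[trained, :] to match the recorded activity of the trained
    # neurons; task-related activity reaches the untrained neurons only
    # through the fixed strong synapses J_fixed.
```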

    05/25/22 | Expectation-based learning rules underlie dynamic foraging in Drosophila
    Adithya E. Rajagopalan, Ran Darshan, James E. Fitzgerald, Glenn C. Turner
    bioRxiv. 2022 May 25. doi: 10.1101/2022.05.24.493252

    Foraging animals must use decision-making strategies that dynamically account for uncertainty in the world. To cope with this uncertainty, animals have developed strikingly convergent strategies that use information about multiple past choices and rewards to learn representations of the current state of the world. However, the underlying learning rules that drive this learning have remained unclear. Here, working in the relatively simple nervous system of Drosophila, we combine behavioral measurements, mathematical modeling, and neural circuit perturbations to show that dynamic foraging depends on a learning rule incorporating reward expectation. Using a novel olfactory dynamic foraging task, we characterize the behavioral strategies used by individual flies when faced with unpredictable rewards and show, for the first time, that they perform operant matching. We build on past theoretical work and demonstrate that this strategy requires a covariance-based learning rule in the mushroom body, a hub for learning in the fly. In particular, the behavioral consequences of optogenetic perturbation experiments suggest that this learning rule incorporates reward expectation. Our results identify a key element of the algorithm underlying dynamic foraging in flies and suggest a comprehensive mechanism that could be fundamental to these behaviors across species.
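
A covariance-based rule with reward expectation, of the kind the abstract points to, can be illustrated with a toy two-option foraging simulation. The softmax policy, learning rates, and reward schedule below are assumptions for illustration, not the fly circuit model.

```python
import numpy as np

rng = np.random.default_rng(2)
p_reward = np.array([0.8, 0.3])   # baited reward probabilities per option
w = np.zeros(2)                   # option values / synaptic weights
r_bar = 0.0                       # running reward expectation
eta, tau = 0.1, 0.05

choices, rewards = [], []
for trial in range(5000):
    p_choose = np.exp(w) / np.exp(w).sum()     # softmax policy
    c = rng.choice(2, p=p_choose)
    R = float(rng.random() < p_reward[c])
    x = np.eye(2)[c]                           # one-hot choice vector
    # covariance-style update: weight change is driven by reward relative
    # to its running expectation (the "reward expectation" term)
    w += eta * (R - r_bar) * (x - p_choose)
    r_bar += tau * (R - r_bar)                 # track expected reward
    choices.append(c); rewards.append(R)

# under a covariance rule, choice fractions approach income fractions
# (operant matching): compare np.mean(np.array(choices) == 0) with the
# fraction of total reward earned from option 0.
```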

    04/05/22 | Learning to represent continuous variables in heterogeneous neural networks
    Ran Darshan, Alexander Rivkind
    Cell Reports. 2022 Apr 05;39(1):110612. doi: 10.1016/j.celrep.2022.110612

    Manifold attractors are a key framework for understanding how continuous variables, such as position or head direction, are encoded in the brain. In this framework, the variable is represented along a continuum of persistent neuronal states which forms a manifold attractor. Neural networks with symmetric synaptic connectivity that can implement manifold attractors have become the dominant model in this framework. In addition to a symmetric connectome, these networks imply homogeneity of individual-neuron tuning curves and symmetry of the representational space; these features are largely inconsistent with neurobiological data. Here, we developed a theory for computations based on manifold attractors in trained neural networks and show how these manifolds can cope with diverse neuronal responses, imperfections in the geometry of the manifold and a high level of synaptic heterogeneity. In such heterogeneous trained networks, a continuous representational space emerges from a small set of stimuli used for training. Furthermore, we find that the network response to external inputs depends on the geometry of the representation and on the level of synaptic heterogeneity in an analytically tractable and interpretable way. Finally, we show that an overly complex geometry of the neuronal representation impairs the attractiveness of the manifold and may lead to its destabilization. Our framework reveals that continuous features can be represented in the recurrent dynamics of heterogeneous networks without assuming unrealistic symmetry. It suggests that the representational space of putative manifold attractors in the brain dictates the dynamics in their vicinity.
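
For intuition, a classic ring-attractor simulation shows both the idealized symmetric case and the effect of the synaptic heterogeneity analyzed here. The parameters are illustrative, and this is not the trained-network construction used in the paper.

```python
import numpy as np

N = 256
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
# symmetric ring connectivity (the idealized model the paper relaxes)
J0, J1, eps = -1.0, 2.0, 0.1
J = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N
# synaptic heterogeneity: random perturbation of the symmetric connectome
J += eps * np.random.default_rng(3).standard_normal((N, N)) / np.sqrt(N)

x = np.cos(theta - 1.0)          # activity bump initialized at angle 1.0
dt = 0.1
for _ in range(2000):
    x += dt * (-x + J @ np.maximum(x, 0.0))   # rate dynamics, ReLU transfer

# with eps = 0 the bump persists at any angle (a continuum of fixed
# points); with eps > 0 the bump drifts toward a few stable angles,
# illustrating how heterogeneity degrades the manifold.
```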

    09/02/19 | Idiosyncratic choice bias in decision tasks naturally emerges from neuronal network dynamics.
    Lebovich L, Darshan R, Lavi Y, Hansel D, Loewenstein Y
    Nature Human Behaviour. 2019 Sep 02;3(11):1190-1202. doi: 10.1038/s41562-019-0682-7

    An idiosyncratic tendency to choose one alternative over others in the absence of an identified reason is a common observation in two-alternative forced-choice experiments. It is tempting to account for it as resulting from the (unknown) participant-specific history and thus to treat it as measurement noise. Indeed, idiosyncratic choice biases are typically considered a nuisance. Care is taken to account for them by adding an ad hoc bias parameter or by counterbalancing the choices to average them out. Here we quantify idiosyncratic choice biases in a perceptual discrimination task and a motor task. We report substantial and significant biases in both cases. Then, we present theoretical evidence that even in idealized experiments, in which the settings are symmetric, idiosyncratic choice bias is expected to emerge from the dynamics of competing neuronal networks. We thus argue that idiosyncratic choice bias reflects the microscopic dynamics of choice and is therefore virtually inevitable in any comparison or decision task.
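
A toy competing-populations model illustrates the claim: freezing a tiny random asymmetry in an otherwise symmetric mutual-inhibition circuit yields a reproducible choice bias across trials. The circuit and parameters are a minimal sketch, not the network studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
w_exc, w_inh = 2.0, 2.5
bias = 0.02 * rng.standard_normal()   # frozen, "participant"-specific asymmetry

def one_trial(noise=0.3, dt=0.01, thresh=1.0, max_steps=10_000):
    """Race between two populations; the first to threshold wins the choice."""
    r = np.zeros(2)
    stim = np.array([0.5 + bias, 0.5])        # nominally symmetric stimulus
    for _ in range(max_steps):
        drift = -r + stim + w_exc * r - w_inh * r[::-1]   # mutual inhibition
        r = np.maximum(
            r + dt * drift + np.sqrt(dt) * noise * rng.standard_normal(2), 0.0
        )
        if r.max() >= thresh:
            break
    return int(np.argmax(r))

choices = np.array([one_trial() for _ in range(1000)])
print(choices.mean())   # deviates consistently from 0.5 for a given frozen bias
```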

    02/01/19 | Neuronal activity and learning in local cortical networks are modulated by the action-perception state
    Ben Engelhard, Ran Darshan, Nofar Ozeri-Engelhard, Zvi Israel, Uri Werner-Reiss, David Hansel, Hagai Bergman, Eilon Vaadia
    bioRxiv. 2019 Feb 01. doi: 10.1101/537613

    During sensorimotor learning, neuronal networks change to optimize the associations between action and perception. In this study, we examine how the brain harnesses neuronal patterns that correspond to the current action-perception state during learning. To this end, we recorded activity from motor cortex while monkeys either performed a familiar motor task (movement-state) or learned to control the firing rate of a target neuron using a brain-machine interface (BMI-state). Before learning, monkeys were placed in an observation-state, where no action was required. We found that neuronal patterns during the BMI-state were markedly different from the movement-state patterns. BMI-state patterns were initially similar to those in the observation-state and evolved to produce an increase in the firing rate of the target neuron. The overall activity of the non-target neurons remained similar after learning, suggesting that excitatory-inhibitory balance was maintained. Indeed, a novel neural-level reinforcement-learning network model operating in a chaotic regime of balanced excitation and inhibition predicts our results in detail. We conclude that during BMI learning, the brain can adapt patterns corresponding to the current action-perception state to gain rewards. Moreover, our results show that we can predict activity changes that occur during learning based on the pre-learning activity. This new finding may serve as a key step toward clinical brain-machine interface applications to modify impaired brain activity.
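
The reward-gated exploration idea can be caricatured with a weight-perturbation rule that raises the firing rate of a designated target neuron. This is a generic REINFORCE-style sketch under stated assumptions, not the balanced chaotic network model of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
N, eta = 100, 0.01
W = rng.standard_normal((N, N)) / np.sqrt(N)   # random recurrent-like weights
u = rng.standard_normal(N)                     # fixed input pattern
target = 0                                     # neuron whose rate is rewarded

R_bar = 0.0
for episode in range(500):
    xi = 0.1 * rng.standard_normal((N, N))     # exploratory perturbation
    r = np.tanh((W + xi) @ u)
    R = r[target]                              # "reward": target neuron's rate
    W += eta * (R - R_bar) * xi                # consolidate helpful perturbations
    R_bar += 0.1 * (R - R_bar)                 # running reward expectation
```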

    09/17/18 | Strength of correlations in strongly recurrent neuronal networks.
    Darshan R, van Vreeswijk C, Hansel D
    Physical Review X. 2018 Sep 17;8:031072. doi: 10.1103/PhysRevX.8.031072

    Spatiotemporal correlations in brain activity are functionally important and have been implicated in perception, learning and plasticity, exploratory behavior, and various aspects of cognition. Neurons in the cerebral cortex interact strongly. Their activity is temporally irregular and can exhibit substantial correlations. However, how the collective dynamics of highly recurrent and strongly interacting neurons can evolve into a state in which the activity of individual cells is highly irregular yet macroscopically correlated is an open question. Here, we develop a general theory that relates the strength of pairwise correlations to the anatomical features of networks of strongly coupled neurons. To this end, we investigate networks of binary units. When interactions are strong, the activity is irregular in a large region of parameter space. We find that despite the strong interactions, the correlations are generally very weak. Nevertheless, we identify architectural features which, if present, give rise to strong correlations without destroying the irregularity of the activity. For networks with such features, we determine how correlations scale with the network size and the number of connections. Our work shows the mechanism by which strong correlations can be consistent with highly irregular activity, two hallmarks of neuronal dynamics in the central nervous system.
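
The basic numerical experiment is easy to reproduce in miniature: simulate a strongly coupled binary network with asynchronous updates and measure pairwise correlations. The purely inhibitory architecture and parameters below are a simplified stand-in for the networks analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
N, K, T = 1000, 100, 200_000
m0, theta = 0.3, 1.0
# each unit receives K random inhibitory inputs of strength ~ 1/sqrt(K),
# i.e. "strong" coupling in the sense used in the paper
conn = np.array([rng.choice(N, K, replace=False) for _ in range(N)])
s = rng.integers(0, 2, N)

samples = []
for t in range(T):
    i = rng.integers(N)                               # asynchronous update
    h = m0 * np.sqrt(K) - s[conn[i]].sum() / np.sqrt(K)
    s[i] = int(h > theta)                             # binary unit
    if t % 500 == 0:
        samples.append(s.copy())

C = np.corrcoef(np.array(samples).T)                  # (N, N) correlations
off = C[~np.eye(N, dtype=bool)]
print(np.nanmean(np.abs(off)))   # typically small despite strong coupling
```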

    05/22/17 | A canonical neural mechanism for behavioral variability.
    Darshan R, Wood WE, Peters S, Leblois A, Hansel D
    Nature Communications. 2017 May 22;8:15415. doi: 10.1038/ncomms15415

    The ability to generate variable movements is essential for learning and adjusting complex behaviours. This variability has been linked to the temporal irregularity of neuronal activity in the central nervous system. However, how neuronal irregularity actually translates into behavioural variability is unclear. Here we combine modelling, electrophysiological and behavioural studies to address this issue. We demonstrate that a model circuit comprising topographically organized and strongly recurrent neural networks can autonomously generate irregular motor behaviours. Simultaneous recordings of neurons in singing finches reveal that neural correlations increase across the circuit driving song variability, in agreement with the model predictions. Analysing behavioural data, we find remarkable similarities in the babbling statistics of 5-6-month-old human infants and juveniles from three songbird species and show that our model naturally accounts for these 'universal' statistics.

    03/24/15 | Basal ganglia: songbird models
    Leblois A, Darshan R
    Encyclopedia of Computational Neuroscience. 2015:356-61

    Songbirds produce complex vocalizations, a behavior that depends on the ability of juveniles to imitate the song of an adult. Song learning relies on a specialized basal ganglia-thalamocortical loop. Several computational models have examined the role of this circuit in song learning, shedding light on the neurobiological mechanisms underlying sensorimotor learning.

    01/09/14 | Interference and shaping in sensorimotor adaptations with rewards.
    Darshan R, Leblois A, Hansel D
    PLoS Computational Biology. 2014 Jan;10(1):e1003377. doi: 10.1371/journal.pcbi.1003377

    When a perturbation is applied in a sensorimotor transformation task, subjects can adapt and maintain performance by either relying on sensory feedback, or, in the absence of such feedback, on information provided by rewards. For example, in a classical rotation task where movement endpoints must be rotated to reach a fixed target, human subjects can successfully adapt their reaching movements solely on the basis of binary rewards, although this proves much more difficult than with visual feedback. Here, we investigate such a reward-driven sensorimotor adaptation process in a minimal computational model of the task. The key assumption of the model is that synaptic plasticity is gated by the reward. We study how the learning dynamics depend on the target size, the movement variability, the rotation angle and the number of targets. We show that when the movement is perturbed for multiple targets, the adaptation process for the different targets can interfere destructively or constructively depending on the similarities between the sensory stimuli (the targets) and the overlap in their neuronal representations. Destructive interferences can result in a drastic slowdown of the adaptation. As a result of interference, the time to adapt varies non-linearly with the number of targets. Our analysis shows that these interferences are weaker if the reward varies smoothly with the subject's performance instead of being binary. We demonstrate how shaping the reward or shaping the task can accelerate the adaptation dramatically by reducing the destructive interferences. We argue that experimentally investigating the dynamics of reward-driven sensorimotor adaptation for more than one sensory stimulus can shed light on the underlying learning rules.
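
The model's key assumption, reward-gated plasticity, can be sketched as a hill-climbing rule on a population-vector readout. The tuning curves, noise model, and parameters below are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100
pref = np.linspace(0, 2 * np.pi, N, endpoint=False)   # preferred directions
targets = np.array([0.0, np.pi / 2])                  # two targets (stimuli)
rotation = np.deg2rad(30)                             # imposed perturbation
W = np.ones(N)                                        # shared motor weights
eta, sigma, tol = 0.5, 0.2, np.deg2rad(15)

def tuning(t_idx):
    return np.exp(np.cos(pref - targets[t_idx]) - 1)  # stimulus-driven activity

for trial in range(5000):
    t_idx = rng.integers(len(targets))
    noise = sigma * rng.standard_normal(N)            # exploratory variability
    act = (W + noise) * tuning(t_idx)
    # population-vector readout of movement direction, then the rotation
    ang = np.angle(np.sum(act * np.exp(1j * pref))) + rotation
    hit = np.abs(np.angle(np.exp(1j * (ang - targets[t_idx])))) < tol
    # binary reward gates plasticity; because the weights are shared and
    # the tuning curves overlap, updates for the two targets interfere
    W += eta * float(hit) * noise * tuning(t_idx)
```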
