8 Janelia Publications

Showing 1-8 of 8 results
Your Criteria:
    Darshan Lab, Svoboda Lab
    11/26/23 | Connectivity underlying motor cortex activity during naturalistic goal-directed behavior.
    Arseny Finkelstein, Kayvon Daie, Márton Rózsa, Ran Darshan, Karel Svoboda
    bioRxiv. 2023 Nov 26. doi: 10.1101/2023.11.25.568673

    Neural representations of information are shaped by local network interactions. Previous studies linking neural coding and cortical connectivity focused on stimulus selectivity in the sensory cortex. Here we study neural activity in the motor cortex during naturalistic behavior in which mice gathered rewards with multidirectional tongue reaching. This behavior does not require training and thus allowed us to probe neural coding and connectivity in motor cortex before its activity is shaped by learning a specific task. Neurons typically responded during and after reaching movements and exhibited conjunctive tuning to target location and reward outcome. We used an all-optical method for large-scale causal functional connectivity mapping in vivo. Mapping connectivity between > 20,000,000 excitatory neuronal pairs revealed fine-scale columnar architecture in layer 2/3 of the motor cortex. Neurons displayed local (< 100 µm) like-to-like connectivity according to target-location tuning, and inhibition over longer spatial scales. Connectivity patterns comprised a continuum, with abundant weakly connected neurons and sparse strongly connected neurons that function as network hubs. Hub neurons were weakly tuned to target location and reward outcome but strongly influenced neighboring neurons. This network of neurons, encoding location and outcome of movements to different motor goals, may be a general substrate for rapid learning of complex, goal-directed behaviors.
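
    In outline, the like-to-like analysis described above compares causal-influence estimates across neuron pairs grouped by distance and by tuning similarity. The Python/NumPy sketch below illustrates that logic on hypothetical random arrays standing in for the measured influence matrix, preferred target locations, and soma positions; it is a minimal illustration, not the paper's analysis pipeline.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stand-ins for measured quantities: a preferred target
    # direction per neuron, 2D soma positions (um), and a stim-by-readout
    # matrix of causal influence estimates from photostimulation.
    n = 500
    pref = rng.uniform(0, 2 * np.pi, n)
    pos = rng.uniform(0, 300, (n, 2))
    influence = rng.normal(0, 1, (n, n))

    # Pairwise distance and tuning similarity for all ordered pairs.
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    similarity = np.cos(pref[:, None] - pref[None, :])
    off_diag = ~np.eye(n, dtype=bool)

    # Like-to-like test: among nearby pairs (< 100 um), is the influence
    # between similarly tuned neurons larger than between dissimilar ones?
    local = (dist < 100) & off_diag
    like = similarity > 0.5
    print("local, like-tuned:  ", influence[local & like].mean())
    print("local, unlike-tuned:", influence[local & ~like].mean())

    # Longer-range pairs, where the abstract reports net inhibition.
    print("long-range mean:    ", influence[(dist >= 100) & off_diag].mean())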

    09/26/23 | Reward expectations direct learning and drive operant matching in Drosophila
    Adithya E. Rajagopalan, Ran Darshan, Karen L. Hibbard, James E. Fitzgerald, Glenn C. Turner
    Proceedings of the National Academy of Sciences of the U.S.A. 2023 Sep 26;120(39):e2221415120. doi: 10.1073/pnas.2221415120

    Foraging animals must use decision-making strategies that dynamically adapt to the changing availability of rewards in the environment. A wide diversity of animals do this by distributing their choices in proportion to the rewards received from each option, a strategy known as Herrnstein’s operant matching law. Theoretical work suggests an elegant mechanistic explanation for this ubiquitous behavior, as operant matching follows automatically from simple synaptic plasticity rules acting within behaviorally relevant neural circuits. However, no past work has mapped operant matching onto plasticity mechanisms in the brain, leaving the biological relevance of the theory unclear. Here we discovered operant matching in Drosophila and showed that it requires synaptic plasticity that acts in the mushroom body and incorporates the expectation of reward. We began by developing a novel behavioral paradigm to measure choices from individual flies as they learn to associate odor cues with probabilistic rewards. We then built a model of the fly mushroom body to explain each fly’s sequential choice behavior using a family of biologically realistic synaptic plasticity rules. As predicted by past theoretical work, we found that synaptic plasticity rules could explain fly matching behavior by incorporating stimulus expectations, reward expectations, or both. However, by optogenetically bypassing the representation of reward expectation, we abolished matching behavior and showed that the plasticity rule must specifically incorporate reward expectations. Altogether, these results reveal the first synaptic-level mechanisms of operant matching and provide compelling evidence for the role of reward expectation signals in the fly brain.
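
    For intuition about the matching law itself, a deliberately minimal two-alternative agent can be simulated. The sketch below is a toy stand-in, not the paper's mushroom-body model: a covariance-style rule in which weight changes are gated by the deviation of reward from a running reward expectation; all parameter values are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(1)

    p_reward = np.array([0.8, 0.3])   # hypothetical odor reward probabilities
    eta = 0.05                        # learning rate
    w = np.zeros(2)                   # cue-to-action weights
    r_bar = 0.0                       # running reward expectation
    choices = np.zeros(2)
    incomes = np.zeros(2)

    for _ in range(5000):
        p_choose = np.exp(w) / np.exp(w).sum()   # softmax readout
        c = rng.choice(2, p=p_choose)
        r = float(rng.random() < p_reward[c])
        # Reward-expectation rule: plasticity is driven by the deviation
        # of reward from its running expectation.
        w[c] += eta * (r - r_bar)
        r_bar += 0.1 * (r - r_bar)
        choices[c] += 1
        incomes[c] += r

    print("choice fractions:", choices / choices.sum())
    print("income fractions:", incomes / max(incomes.sum(), 1.0))

    Comparing the two printed fractions gives a rough check of Herrnstein's law, under which choice fractions track the fractions of reward income obtained from each option.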

    Darshan Lab
    06/27/23 | A scalable implementation of the recursive least-squares algorithm for training spiking neural networks
    Benjamin J. Arthur, Christopher M. Kim, Susu Chen, Stephan Preibisch, Ran Darshan
    Frontiers in Neuroinformatics. 2023 Jun 27. doi: 10.3389/fninf.2023.1099510

    Training spiking recurrent neural networks on neuronal recordings or behavioral tasks has become a prominent tool to study computations in the brain. With the increasing size and complexity of neural recordings, there is a need for fast algorithms that can scale to large datasets. We present optimized CPU and GPU implementations of the recursive least-squares algorithm in spiking neural networks. The GPU implementation allows training networks that reproduce the neural activity of on the order of a million neurons, an order of magnitude faster than the CPU implementation. We demonstrate this by applying our algorithm to reproduce the activity of > 66,000 recorded neurons of a mouse performing a decision-making task. The fast implementation enables efficient training of large-scale spiking models, thus allowing for in silico study of the dynamics and connectivity underlying multi-area computations.
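
    The recursive least-squares step at the heart of such training is compact enough to sketch. The function below is a generic per-neuron RLS update written from the standard FORCE-style formulation in Python/NumPy, not code taken from the paper; variable names are ours.

    import numpy as np

    def rls_update(P, w, r, z, target):
        """One recursive least-squares step for a single trained unit.

        P      running estimate of the inverse correlation matrix of r
        w      plastic weight vector onto this unit
        r      presynaptic rates (filtered spikes) at this time step
        z      synaptic drive currently produced, z = w @ r
        target desired synaptic drive at this time step
        """
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)     # gain vector
        P -= np.outer(k, Pr)        # rank-1 update of the inverse
        w -= (z - target) * k       # error-proportional weight change
        return P, w

    Because each trained neuron maintains its own P over its own presynaptic partners, the per-neuron updates are mutually independent, which is the kind of structure a GPU implementation can parallelize over.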

    Svoboda Lab, Darshan Lab
    05/18/23 | Distributing task-related neural activity across a cortical network through task-independent connections.
    Kim CM, Finkelstein A, Chow CC, Svoboda K, Darshan R
    Nature Communications. 2023 May 18;14(1):2851. doi: 10.1038/s41467-023-38529-y

    Task-related neural activity is widespread across populations of neurons during goal-directed behaviors. However, little is known about the synaptic reorganization and circuit mechanisms that lead to broad activity changes. Here we trained a subset of neurons in a spiking network with strong synaptic interactions to reproduce the activity of neurons in the motor cortex during a decision-making task. Task-related activity, resembling the neural data, emerged across the network, even in the untrained neurons. Analysis of trained networks showed that strong untrained synapses, which were independent of the task and determined the dynamical state of the network, mediated the spread of task-related activity. Optogenetic perturbations suggest that the motor cortex is strongly coupled, supporting the applicability of the mechanism to cortical networks. Our results reveal a cortical mechanism that facilitates distributed representations of task variables by spreading the activity from a subset of plastic neurons to the entire network through task-independent strong synapses.
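
    A caricature of the proposed mechanism fits in a few lines: drive only a subset of units in a strongly coupled random rate network with a task-locked signal and ask whether the remaining, untrained units inherit task-related activity through the static couplings. The sketch below uses invented sizes and gains and substitutes an external drive for actual training; it illustrates the spread only, not the paper's spiking model.

    import numpy as np

    rng = np.random.default_rng(2)

    n, n_trained, g = 400, 80, 2.0   # g > 1: strong coupling (hypothetical)
    J = g * rng.normal(0, 1 / np.sqrt(n), (n, n))   # static, task-independent
    dt, tau = 0.001, 0.01
    t = np.arange(0, 1, dt)
    target = np.sin(2 * np.pi * 2 * t)   # toy task-related signal

    x = rng.normal(0, 0.1, n)
    rates = np.zeros((len(t), n))
    for i in range(len(t)):
        inp = np.zeros(n)
        inp[:n_trained] = target[i]      # stand-in for learned task input
        x += dt / tau * (-x + J @ np.tanh(x) + inp)
        rates[i] = np.tanh(x)

    # Task-locked activity leaks into the untrained units through the
    # strong task-independent couplings J.
    corr = [np.corrcoef(rates[:, j], target)[0, 1] for j in range(n_trained, n)]
    print("mean |corr| of untrained units with target:", np.mean(np.abs(corr)))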

    Svoboda Lab, Darshan Lab
    06/18/22 | Distributing task-related neural activity across a cortical network through task-independent connections
    Christopher M. Kim, Arseny Finkelstein, Carson C. Chow, Karel Svoboda, Ran Darshan
    bioRxiv. 2022 Jun 18. doi: 10.1101/2022.06.17.496618

    Task-related neural activity is widespread across populations of neurons during goal-directed behaviors. However, little is known about the synaptic reorganization and circuit mechanisms that lead to broad activity changes. Here we trained a limited subset of neurons in a spiking network with strong synaptic interactions to reproduce the activity of neurons in the motor cortex during a decision-making task. We found that task-related activity, resembling the neural data, emerged across the network, even in the untrained neurons. Analysis of trained networks showed that strong untrained synapses, which were independent of the task and determined the dynamical state of the network, mediated the spread of task-related activity. Optogenetic perturbations suggest that the motor cortex is strongly coupled, supporting the applicability of the mechanism to cortical networks. Our results reveal a cortical mechanism that facilitates distributed representations of task variables by spreading the activity from a subset of plastic neurons to the entire network through task-independent strong synapses.

    Darshan Lab
    04/05/22 | Learning to represent continuous variables in heterogeneous neural networks
    Ran Darshan, Alexander Rivkind
    Cell Reports. 2022 Apr 05;39(1):110612. doi: 10.1016/j.celrep.2022.110612

    Manifold attractors are a key framework for understanding how continuous variables, such as position or head direction, are encoded in the brain. In this framework, the variable is represented along a continuum of persistent neuronal states which forms a manifold attractor. Neural networks with symmetric synaptic connectivity that can implement manifold attractors have become the dominant model in this framework. In addition to a symmetric connectome, these networks imply homogeneity of individual-neuron tuning curves and symmetry of the representational space; these features are largely inconsistent with neurobiological data. Here, we developed a theory for computations based on manifold attractors in trained neural networks and show how these manifolds can cope with diverse neuronal responses, imperfections in the geometry of the manifold and a high level of synaptic heterogeneity. In such heterogeneous trained networks, a continuous representational space emerges from a small set of stimuli used for training. Furthermore, we find that the network response to external inputs depends on the geometry of the representation and on the level of synaptic heterogeneity in an analytically tractable and interpretable way. Finally, we show that an overly complex geometry of the neuronal representation impairs the attractiveness of the manifold and may lead to its destabilization. Our framework reveals that continuous features can be represented in the recurrent dynamics of heterogeneous networks without assuming unrealistic symmetry. It suggests that the representational space of putative manifold attractors in the brain dictates the dynamics in their vicinity.
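
    For reference, the symmetric ring network, the idealization whose assumptions this paper relaxes, can be simulated in a few lines. In the sketch below (arbitrary parameters), a transient cue places an activity bump that then persists at the cued angle, one point on a continuum of stable states.

    import numpy as np

    rng = np.random.default_rng(3)

    # Symmetric cosine connectivity over a ring of preferred angles: the
    # homogeneous idealization contrasted with trained heterogeneous nets.
    n = 256
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    J = (4.0 / n) * np.cos(theta[:, None] - theta[None, :])

    dt, tau = 0.001, 0.01
    x = rng.normal(0, 0.01, n)

    cue = 2.0 * np.cos(theta - np.pi / 3)   # transient cue at angle pi/3
    for step in range(3000):
        inp = cue if step < 500 else 0.0
        x += dt / tau * (-x + J @ np.tanh(x) + inp)

    # Decode the persistent bump position from the population vector.
    decoded = np.angle(np.sum(np.tanh(x) * np.exp(1j * theta))) % (2 * np.pi)
    print("cued angle:", np.pi / 3, " decoded after delay:", decoded)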

    Darshan Lab
    09/02/19 | Idiosyncratic choice bias in decision tasks naturally emerges from neuronal network dynamics.
    Lebovich L, Darshan R, Lavi Y, Hansel D, Loewenstein Y
    Nature Human Behaviour. 2019 Sep 02;3(11):1190-1202. doi: 10.1101/284877

    An idiosyncratic tendency to choose one alternative over others in the absence of an identified reason is a common observation in two-alternative forced-choice experiments. It is tempting to account for it as resulting from the (unknown) participant-specific history and thus to treat it as measurement noise. Indeed, idiosyncratic choice biases are typically considered a nuisance. Care is taken to account for them by adding an ad hoc bias parameter or by counterbalancing the choices to average them out. Here we quantify idiosyncratic choice biases in a perceptual discrimination task and a motor task. We report substantial and significant biases in both cases. Then, we present theoretical evidence that even in idealized experiments, in which the settings are symmetric, idiosyncratic choice bias is expected to emerge from the dynamics of competing neuronal networks. We thus argue that idiosyncratic choice bias reflects the microscopic dynamics of choice and is therefore virtually inevitable in any comparison or decision task.
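
    The network-dynamics argument can be illustrated with a toy competition model: two populations race under mutual inhibition, and a tiny frozen asymmetry in the effective drive, a single invented parameter standing in for idiosyncratic wiring, tilts the outcomes into a consistent bias even though the task is symmetric. All numbers below are arbitrary.

    import numpy as np

    rng = np.random.default_rng(4)

    def p_choose_first(asym, n_trials=1000, noise=0.3, dt=0.05):
        """Fraction of trials won by option 1 in a winner-take-all race
        between two mutually inhibiting populations."""
        wins = 0
        for _ in range(n_trials):
            x = np.zeros(2)
            for _ in range(200):
                drive = np.array([1.0 + asym, 1.0]) - 1.5 * x[::-1]
                x += dt * (-x + np.maximum(drive, 0.0))
                x += noise * np.sqrt(dt) * rng.normal(size=2)
                x = np.maximum(x, 0.0)
            wins += x[0] > x[1]
        return wins / n_trials

    # A frozen asymmetry far too small to observe anatomically still
    # produces a reproducible bias in an otherwise symmetric task.
    for asym in (0.0, 0.02, 0.05):
        print(f"asymmetry {asym:.2f}: P(option 1) = {p_choose_first(asym):.2f}")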

    Darshan Lab
    09/17/18 | Strength of correlations in strongly recurrent neuronal networks.
    Darshan R, van Vreeswijk C, Hansel D
    Physical Review X. 2018 Sep 17;8:031072. doi: 10.1103/PhysRevX.8.031072

    Spatiotemporal correlations in brain activity are functionally important and have been implicated in perception, learning and plasticity, exploratory behavior, and various aspects of cognition. Neurons in the cerebral cortex interact strongly. Their activity is temporally irregular and can exhibit substantial correlations. However, how the collective dynamics of highly recurrent and strongly interacting neurons can evolve into a state in which the activity of individual cells is highly irregular yet macroscopically correlated is an open question. Here, we develop a general theory that relates the strength of pairwise correlations to the anatomical features of networks of strongly coupled neurons. To this end, we investigate networks of binary units. When interactions are strong, the activity is irregular in a large region of parameter space. We find that despite the strong interactions, the correlations are generally very weak. Nevertheless, we identify architectural features which, if present, give rise to strong correlations without destroying the irregularity of the activity. For networks with such features, we determine how correlations scale with the network size and the number of connections. Our work shows the mechanism by which strong correlations can be consistent with highly irregular activity, two hallmarks of neuronal dynamics in the central nervous system.
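
    The scaling question can be probed numerically with a generic network of binary units, in the spirit of, though much cruder than, the models analyzed in the paper: random couplings of order 1/sqrt(K), asynchronous threshold updates, and an estimate of how the typical pairwise correlation changes with network size. All parameters are invented.

    import numpy as np

    rng = np.random.default_rng(5)

    def mean_abs_correlation(n, k):
        """Mean absolute pairwise correlation in a binary network with
        strong (order 1/sqrt(k)) random couplings."""
        J = ((rng.random((n, n)) < k / n)
             * rng.choice([-1.0, 1.0], size=(n, n)) / np.sqrt(k))
        np.fill_diagonal(J, 0.0)
        s = (rng.random(n) < 0.5).astype(float)
        samples = []
        for it in range(300 * n):
            i = rng.integers(n)             # asynchronous single-unit update
            s[i] = float(J[i] @ s > 0.0)    # hard-threshold update rule
            if it >= 100 * n and it % n == 0:
                samples.append(s.copy())
        C = np.corrcoef(np.array(samples).T)
        off_diag = C[~np.eye(n, dtype=bool)]
        return np.nanmean(np.abs(off_diag))  # nan guards frozen units

    for n in (200, 400, 800):
        print(n, round(mean_abs_correlation(n, k=50), 3))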
