3 Janelia Publications

Showing 1-3 of 3 results
    06/02/21 | Learning to represent continuous variables in heterogeneous neural networks
    Ran Darshan, Alexander Rivkind
    bioRxiv. 2021 Jun 02. doi: 10.1101/2021.06.01.446635

    Manifold attractors are a key framework for understanding how continuous variables, such as position or head direction, are encoded in the brain. In this framework, the variable is represented along a continuum of persistent neuronal states which forms a manifold attractor. Neural networks with symmetric synaptic connectivity that can implement manifold attractors have become the dominant model in this framework. In addition to a symmetric connectome, these networks imply homogeneity of individual-neuron tuning curves and symmetry of the representational space; these features are largely inconsistent with neurobiological data. Here, we develop a theory for computations based on manifold attractors in trained neural networks and show how these manifolds can cope with diverse neuronal responses, imperfections in the geometry of the manifold, and a high level of synaptic heterogeneity. In such heterogeneous trained networks, a continuous representational space emerges from a small set of stimuli used for training. Furthermore, we find that the network response to external inputs depends on the geometry of the representation and on the level of synaptic heterogeneity in an analytically tractable and interpretable way. Finally, we show that an overly complex geometry of the neuronal representation impairs the attractiveness of the manifold and may lead to its destabilization. Our framework reveals that continuous features can be represented in the recurrent dynamics of heterogeneous networks without assuming unrealistic symmetry. It suggests that the representational space of putative manifold attractors in the brain dictates the dynamics in their vicinity.
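    The idealized baseline the abstract refers to is easy to state concretely. Below is a minimal sketch of a classical symmetric ring attractor, the symmetric case that the paper's trained heterogeneous networks relax. All parameters are illustrative and not taken from the paper: translation-invariant cosine connectivity sustains a persistent bump of activity whose position encodes an angle such as head direction.

```python
# Minimal sketch of a classical symmetric ring attractor -- the idealized
# baseline the abstract contrasts with trained heterogeneous networks.
# All parameters are illustrative, not taken from the paper.
import numpy as np

N = 256                                   # neurons on the ring
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
J0, J1 = -2.0, 6.0                        # uniform inhibition + cosine tuning
# Symmetric, translation-invariant connectivity: J_ij ~ J0 + J1*cos(ti - tj)
J = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N

def phi(h):
    """Saturating threshold-linear transfer function."""
    return np.tanh(np.maximum(h, 0.0))

def simulate(r0, steps=2000, dt=0.05):
    """Euler-integrate the rate dynamics dr/dt = -r + phi(J r)."""
    r = r0.copy()
    for _ in range(steps):
        r += dt * (-r + phi(J @ r))
    return r

# Cue a noisy bump at angle pi/3, then let the network run with no input.
rng = np.random.default_rng(0)
cue = np.maximum(np.cos(theta - np.pi / 3), 0.0) + 0.1 * rng.standard_normal(N)
r_final = simulate(cue)

# The persistent bump's position (population vector) recovers the cue angle.
decoded = np.angle(np.sum(r_final * np.exp(1j * theta)))
print(f"decoded angle: {decoded:.2f} rad (cue at {np.pi / 3:.2f} rad)")
```

    In the heterogeneous trained networks the paper analyzes, no such symmetric connectivity exists; the point of the theory is that an approximate manifold of persistent states can still emerge and be read out in the same way.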

    09/02/19 | Idiosyncratic choice bias in decision tasks naturally emerges from neuronal network dynamics.
    Lebovich L, Darshan R, Lavi Y, Hansel D, Loewenstein Y
    Nature Human Behaviour. 2019 Sep 02;3(11):1190-1202. doi: 10.1101/284877

    An idiosyncratic tendency to choose one alternative over others in the absence of an identified reason is a common observation in two-alternative forced-choice experiments. It is tempting to account for it as resulting from the (unknown) participant-specific history and thus treat it as measurement noise. Indeed, idiosyncratic choice biases are typically treated as a nuisance. Care is taken to account for them by adding an ad hoc bias parameter or by counterbalancing the choices to average them out. Here we quantify idiosyncratic choice biases in a perceptual discrimination task and a motor task. We report substantial and significant biases in both cases. Then, we present theoretical evidence that even in idealized experiments, in which the settings are symmetric, idiosyncratic choice bias is expected to emerge from the dynamics of competing neuronal networks. We thus argue that idiosyncratic choice bias reflects the microscopic dynamics of choice and is therefore virtually inevitable in any comparison or decision task.
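    One way to see how a frozen network property turns into a stable per-participant bias, even when the task itself is symmetric, is a toy winner-take-all race. The sketch below is not the paper's model: the parameter delta stands in for quenched connectivity disorder, and all numbers are illustrative.

```python
# Toy illustration (not the paper's model): idiosyncratic bias from a
# symmetric winner-take-all race between two populations. Each simulated
# "participant" carries a small frozen asymmetry delta (standing in for
# quenched connectivity disorder); noise is drawn fresh on every trial.
import numpy as np

rng = np.random.default_rng(1)

def run_participant(n_trials=200, dt=0.01, steps=300):
    """Fraction of trials on which this participant chooses alternative A."""
    delta = 0.02 * rng.standard_normal()       # frozen, participant-specific
    n_choose_a = 0
    for _ in range(n_trials):
        r = np.zeros(2)                        # two competing populations
        for _ in range(steps):
            evidence = 1.0 + np.array([delta, -delta])   # symmetric task
            noise = 0.5 * rng.standard_normal(2)         # fresh each step
            drive = evidence + noise - 3.0 * r[::-1]     # mutual inhibition
            r += dt * (-r + np.maximum(drive, 0.0))
        n_choose_a += int(r[0] > r[1])         # the winning population decides
    return n_choose_a / n_trials

# Ten simulated participants doing the same perfectly symmetric task:
print([round(run_participant(), 2) for _ in range(10)])
```

    Each participant's choice fraction settles away from 0.5 in its own direction and stays there across trials, even though fresh noise is drawn on every trial and nothing in the task favors either alternative.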

    09/17/18 | Strength of correlations in strongly recurrent neuronal networks.
    Darshan R, van Vreeswijk C, Hansel D
    Physical Review X. 2018 Sep 17;8:031072. doi: 10.1103/PhysRevX.8.031072

    Spatiotemporal correlations in brain activity are functionally important and have been implicated in perception, learning and plasticity, exploratory behavior, and various aspects of cognition. Neurons in the cerebral cortex interact strongly. Their activity is temporally irregular and can exhibit substantial correlations. However, how the collective dynamics of highly recurrent and strongly interacting neurons can evolve into a state in which the activity of individual cells is highly irregular yet macroscopically correlated is an open question. Here, we develop a general theory that relates the strength of pairwise correlations to the anatomical features of networks of strongly coupled neurons. To this end, we investigate networks of binary units. When interactions are strong, the activity is irregular in a large region of parameter space. We find that despite the strong interactions, the correlations are generally very weak. Nevertheless, we identify architectural features which, if present, give rise to strong correlations without destroying the irregularity of the activity. For networks with such features, we determine how correlations scale with the network size and the number of connections. Our work shows the mechanism by which strong correlations can be consistent with highly irregular activity, two hallmarks of neuronal dynamics in the central nervous system.
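    The abstract's setting (binary units, strong interactions) can be sketched directly. The sketch below uses one illustrative architecture, recurrent inhibition balancing a strong external drive, with synapses scaling as 1/sqrt(K); these specific choices and all parameters are assumptions, not the paper's. Asynchronous threshold updates yield irregular activity, and the estimated average pairwise correlation comes out small, matching the generic case; the architectural features the paper shows can amplify correlations are absent here.

```python
# Sketch in the spirit of the paper's setting: N binary units with sparse
# random coupling of strength ~1/sqrt(K) and a strong external drive,
# updated asynchronously. The architecture (recurrent inhibition balancing
# external excitation) and all parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
N, K = 500, 50                           # network size, mean in-degree
J0, E, theta = 1.0, 0.3, 1.0             # coupling scale, drive, threshold
C = rng.random((N, N)) < K / N           # sparse random connectivity
np.fill_diagonal(C, False)               # no self-coupling
J = -(J0 / np.sqrt(K)) * C               # strong inhibition, O(1/sqrt(K))
h_ext = E * np.sqrt(K)                   # strong external drive, O(sqrt(K))

s = (rng.random(N) < 0.3).astype(float)  # random initial state
sweeps, burn = 400, 100
samples = np.empty((sweeps - burn, N))
for t in range(sweeps):
    for i in rng.permutation(N):         # asynchronous threshold updates
        s[i] = float(h_ext + J[i] @ s > theta)
    if t >= burn:
        samples[t - burn] = s

# Average pairwise correlation over a random subsample of units
# (units that never flip have zero variance and give NaN; nanmean skips them).
idx = rng.choice(N, 100, replace=False)
corr = np.corrcoef(samples[:, idx].T)
off_diag = corr[np.triu_indices_from(corr, k=1)]
print(f"mean rate: {samples.mean():.2f}")
print(f"mean pairwise correlation: {np.nanmean(off_diag):.4f}")
```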
