Dudman Lab / Publications

2 Publications

    10/05/23 | Conjoint specification of action by neocortex and striatum.
    Junchol Park, Peter Polidoro, Catia Fortunato, Jon Arnold, Brett Mensh, Juan A. Gallego, Joshua T. Dudman
    bioRxiv. 2023 Oct 05. doi: 10.1101/2023.10.04.560957

    The interplay between two major forebrain structures - cortex and subcortical striatum - is critical for flexible, goal-directed action. Traditionally, it has been proposed that striatum is critical for selecting what type of action is initiated while the primary motor cortex is involved in the online control of movement execution. Recent data indicates that striatum may also be critical for specifying movement execution. These alternatives have been difficult to reconcile because when comparing very distinct actions, as in the vast majority of work to date, they make essentially indistinguishable predictions. Here, we develop quantitative models to reveal a somewhat paradoxical insight: only comparing neural activity during similar actions makes strongly distinguishing predictions. We thus developed a novel reach-to-pull task in which mice reliably selected between two similar, but distinct reach targets and pull forces. Simultaneous cortical and subcortical recordings were uniquely consistent with a model in which cortex and striatum jointly specify flexible parameters of action during movement execution.

    01/18/23 | Mesolimbic dopamine adapts the rate of learning from action.
    Coddington LT, Lindo SE, Dudman JT
    Nature. 2023 Jan 18. doi: 10.1038/s41586-022-05614-z

    Recent success in training artificial agents and robots derives from a combination of direct learning of behavioural policies and indirect learning through value functions. Policy learning and value learning use distinct algorithms that optimize behavioural performance and reward prediction, respectively. In animals, behavioural learning and the role of mesolimbic dopamine signalling have been extensively evaluated with respect to reward prediction; however, so far there has been little consideration of how direct policy learning might inform our understanding. Here we used a comprehensive dataset of orofacial and body movements to understand how behavioural policies evolved as naive, head-restrained mice learned a trace conditioning paradigm. Individual differences in initial dopaminergic reward responses correlated with the emergence of learned behavioural policy, but not the emergence of putative value encoding for a predictive cue. Likewise, physiologically calibrated manipulations of mesolimbic dopamine produced several effects inconsistent with value learning but predicted by a neural-network-based model that used dopamine signals to set an adaptive rate, not an error signal, for behavioural policy learning. This work provides strong evidence that phasic dopamine activity can regulate direct learning of behavioural policies, expanding the explanatory power of reinforcement learning models for animal learning.
