2 Janelia Publications

Showing 1-2 of 2 results
    02/01/23 | TEMPO enables sequential genetic labeling and manipulation of vertebrate cell lineages.
    Espinosa-Medina I, Feliciano D, Belmonte-Mateos C, Linda Miyares R, Garcia-Marques J, Foster B, Lindo S, Pujades C, Koyama M, Lee T
    Neuron. 2023 Feb 01;111(3):345-361.e10. doi: 10.1016/j.neuron.2022.10.035

    During development, regulatory factors appear in a precise order to determine cell fates over time. Consequently, to investigate complex tissue development, it is necessary to visualize and manipulate cell lineages with temporal control. Current strategies for tracing vertebrate cell lineages lack genetic access to sequentially produced cells. Here, we present TEMPO (Temporal Encoding and Manipulation in a Predefined Order), an imaging-readable genetic tool allowing differential labeling and manipulation of consecutive cell generations in vertebrates. TEMPO is based on CRISPR and powered by a cascade of gRNAs that drive orderly activation and inactivation of reporters and/or effectors. Using TEMPO to visualize zebrafish and mouse neurogenesis, we recapitulated birth-order-dependent neuronal fates. Temporally manipulating cell-cycle regulators in mouse cortex progenitors altered the proportion and distribution of neurons and glia, revealing the effects of temporal gene perturbation on serial cell fates. Thus, TEMPO enables sequential manipulation of molecular factors, crucial to study cell-type specification.

    01/18/23 | Mesolimbic dopamine adapts the rate of learning from action.
    Coddington LT, Lindo SE, Dudman JT
    Nature. 2023 Jan 18. doi: 10.1038/s41586-022-05614-z

    Recent success in training artificial agents and robots derives from a combination of direct learning of behavioural policies and indirect learning through value functions. Policy learning and value learning use distinct algorithms that optimize behavioural performance and reward prediction, respectively. In animals, behavioural learning and the role of mesolimbic dopamine signalling have been extensively evaluated with respect to reward prediction; however, so far there has been little consideration of how direct policy learning might inform our understanding. Here we used a comprehensive dataset of orofacial and body movements to understand how behavioural policies evolved as naive, head-restrained mice learned a trace conditioning paradigm. Individual differences in initial dopaminergic reward responses correlated with the emergence of learned behavioural policy, but not the emergence of putative value encoding for a predictive cue. Likewise, physiologically calibrated manipulations of mesolimbic dopamine produced several effects inconsistent with value learning but predicted by a neural-network-based model that used dopamine signals to set an adaptive rate, not an error signal, for behavioural policy learning. This work provides strong evidence that phasic dopamine activity can regulate direct learning of behavioural policies, expanding the explanatory power of reinforcement learning models for animal learning.
