
5 Janelia Publications

    12/12/23 | Model-Based Inference of Synaptic Plasticity Rules
    Yash Mehta, Danil Tyulmankov, Adithya E. Rajagopalan, Glenn C. Turner, James E. Fitzgerald, Jan Funke
    bioRxiv. 2023 Dec 12. doi: 10.1101/2023.12.11.571103

    Understanding learning through synaptic plasticity rules in the brain is a grand challenge for neuroscience. Here we introduce a novel computational framework for inferring plasticity rules from experimental data on neural activity trajectories and behavioral learning dynamics. Our methodology parameterizes the plasticity function to provide theoretical interpretability and facilitate gradient-based optimization. For instance, we use Taylor series expansions or multilayer perceptrons to approximate plasticity rules, and we adjust their parameters via gradient descent over entire trajectories to closely match observed neural activity and behavioral data. Notably, our approach can learn intricate rules that induce long nonlinear time-dependencies, such as those incorporating postsynaptic activity and current synaptic weights. We validate our method through simulations, accurately recovering established rules, like Oja’s, as well as more complex hypothetical rules incorporating reward-modulated terms. We assess the resilience of our technique to noise and, as a tangible application, apply it to behavioral data from Drosophila during a probabilistic reward-learning experiment. Remarkably, we identify an active forgetting component of reward learning in flies that enhances the predictive accuracy of previous models. Overall, our modeling framework provides an exciting new avenue to elucidate the computational principles governing synaptic plasticity and learning in the brain.
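
    The fitting loop this abstract describes is easy to picture in code. Below is a minimal, illustrative Python sketch, not the authors' implementation: the plasticity rule is truncated to three Taylor-series terms in the presynaptic activity and the current weight, a synthetic weight trajectory is generated from known coefficients, and those coefficients are recovered by gradient descent on the trajectory mismatch. Finite differences stand in for the automatic differentiation a real implementation would use, and every name and constant here is invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, x_seq, w0):
    """Roll out a weight trajectory under a truncated Taylor-series rule:
    dw = theta[0]*x + theta[1]*x*w + theta[2]*w."""
    w, traj = w0, []
    for x in x_seq:
        w = w + theta[0] * x + theta[1] * x * w + theta[2] * w
        traj.append(w)
    return np.array(traj)

theta_true = np.array([0.05, -0.02, -0.01])   # hypothetical "ground truth" rule
x_seq = rng.normal(size=200)                  # synthetic presynaptic activity
target = simulate(theta_true, x_seq, w0=0.5)  # stands in for observed data

def loss(theta):
    return np.mean((simulate(theta, x_seq, 0.5) - target) ** 2)

# Gradient descent over the entire trajectory; finite differences replace
# the autodiff a real implementation would use.
theta, lr, eps = np.zeros(3), 0.02, 1e-5
for _ in range(5000):
    base = loss(theta)
    grad = np.array([(loss(theta + eps * np.eye(3)[i]) - base) / eps
                     for i in range(3)])
    theta -= lr * grad

print("recovered:", theta.round(3), "vs true:", theta_true)
```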

    10/31/23 | Tensor formalism for predicting synaptic connections with ensemble modeling or optimization.
    Tirthabir Biswas, Tianzhi Lambus Li, James E. Fitzgerald
    arXiv. 2023 Oct 31. doi: 10.48550/arXiv.2310.20309

    Theoretical neuroscientists often try to understand how the structure of a neural network relates to its function by focusing on structural features that would either follow from optimization or occur consistently across possible implementations. Both optimization theories and ensemble modeling approaches have repeatedly proven their worth, and it would simplify theory building considerably if predictions from both theory types could be derived and tested simultaneously. Here we show how tensor formalism from theoretical physics can be used to unify and solve many optimization and ensemble modeling approaches to predicting synaptic connectivity from neuronal responses. We specifically focus on analyzing the solution space of synaptic weights that allow a threshold-linear neural network to respond in a prescribed way to a limited number of input conditions. For optimization purposes, we compute the synaptic weight vector that minimizes an arbitrary quadratic loss function. For ensemble modeling, we identify synaptic weight features that occur consistently across all solutions bounded by an arbitrary quadratic function. We derive a common solution to this suite of nonlinear problems by showing how each of them reduces to an equivalent linear problem that can be solved analytically. Although identifying the equivalent linear problem is nontrivial, our tensor formalism provides an elegant geometrical perspective that allows us to solve the problem numerically. The final algorithm is applicable to a wide range of interesting neuroscience problems, and the associated geometric insights may carry over to other scientific problems that require constrained optimization.
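
    As a highly simplified illustration of the optimization half of this program (Python invented for this summary, not the paper's algorithm): if the network is kept in its linear regime, the minimum-norm weight vector producing prescribed responses to a limited set of input conditions is the pseudoinverse solution, one special case of minimizing a quadratic loss under the response constraints. The paper's tensor formalism is what handles the general case of arbitrary quadratic losses and the threshold-linear nonlinearity.

```python
import numpy as np

rng = np.random.default_rng(1)

K, N = 5, 20                    # 5 input conditions, 20 presynaptic inputs
X = rng.normal(size=(K, N))     # rows are the input conditions
y = rng.normal(size=K)          # prescribed responses to each condition

# Among all weight vectors satisfying X @ w = y (underdetermined, since
# K < N), the pseudoinverse picks the one of minimum norm -- a special case
# of minimizing a quadratic loss subject to the response constraints.
w_min = np.linalg.pinv(X) @ y
assert np.allclose(X @ w_min, y)

# Every other solution is w_min plus a null-space component, so only the
# part of w lying in the row space of X is shared across the whole ensemble.
print("minimum-norm solution, ||w|| =", np.linalg.norm(w_min).round(3))
```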

    09/26/23 | Reward expectations direct learning and drive operant matching in Drosophila
    Adithya E. Rajagopalan, Ran Darshan, Karen L. Hibbard, James E. Fitzgerald, Glenn C. Turner
    Proceedings of the National Academy of Sciences of the U.S.A. 2023 Sep 26;120(39):e2221415120. doi: 10.1073/pnas.2221415120

    Foraging animals must use decision-making strategies that dynamically adapt to the changing availability of rewards in the environment. A wide diversity of animals do this by distributing their choices in proportion to the rewards received from each option, a behavior known as Herrnstein’s operant matching law. Theoretical work suggests an elegant mechanistic explanation for this ubiquitous behavior, as operant matching follows automatically from simple synaptic plasticity rules acting within behaviorally relevant neural circuits. However, no past work has mapped operant matching onto plasticity mechanisms in the brain, leaving the biological relevance of the theory unclear. Here we discovered operant matching in Drosophila and showed that it requires synaptic plasticity that acts in the mushroom body and incorporates the expectation of reward. We began by developing a novel behavioral paradigm to measure choices from individual flies as they learn to associate odor cues with probabilistic rewards. We then built a model of the fly mushroom body to explain each fly’s sequential choice behavior using a family of biologically realistic synaptic plasticity rules. As predicted by past theoretical work, we found that synaptic plasticity rules could explain fly matching behavior by incorporating stimulus expectations, reward expectations, or both. However, by optogenetically bypassing the representation of reward expectation, we abolished matching behavior and showed that the plasticity rule must specifically incorporate reward expectations. Altogether, these results reveal the first synaptic-level mechanisms of operant matching and provide compelling evidence for the role of reward expectation signals in the fly brain.
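
    A minimal simulation conveys the flavor of the modeling (illustrative Python only, not the authors' mushroom body model): choices are made by a stochastic policy over two options with baited probabilistic rewards, and a single synaptic variable per option is updated by a reward-expectation-style rule, the obtained reward minus a running estimate of average reward. Covariance-form rules of this kind are known theoretically to drive choice fractions toward reward-income fractions, i.e., matching; all constants here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

p_arm = np.array([0.4, 0.1])   # baited schedule: an armed reward waits
armed = np.zeros(2, dtype=bool)
w = np.zeros(2)                # one decision variable per option (toy "weights")
rbar = 0.0                     # running estimate of average reward
eta, beta, tau = 0.05, 3.0, 0.01
n_choices, income = np.zeros(2), np.zeros(2)

for _ in range(20000):
    armed |= rng.random(2) < p_arm              # rewards persist until collected
    p0 = 1.0 / (1.0 + np.exp(-beta * (w[0] - w[1])))
    c = 0 if rng.random() < p0 else 1
    r = float(armed[c]); armed[c] = False
    w[c] += eta * (r - rbar)                    # reward-expectation-modulated update
    rbar += tau * (r - rbar)
    n_choices[c] += 1; income[c] += r

# Under matching, the fraction of choices allocated to an option should
# approximate the fraction of total reward income earned from it.
print("choice fractions:", (n_choices / n_choices.sum()).round(2))
print("income fractions:", (income / income.sum()).round(2))
```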

    08/07/23 | Learning produces a hippocampal cognitive map in the form of an orthogonalized state machine.
    Weinan Sun, Johan Winnubst, Maanasa Natrajan, Chongxi Lai, Koichiro Kajikawa, Michalis Michaelos, Rachel Gattoni, Carsen Stringer, Daniel Flickinger, James E. Fitzgerald, Nelson Spruston
    bioRxiv. 2023 Aug 07. doi: 10.1101/2023.08.03.551900

    Cognitive maps confer animals with flexible intelligence by representing spatial, temporal, and abstract relationships that can be used to shape thought, planning, and behavior. Cognitive maps have been observed in the hippocampus, but their algorithmic form and the processes by which they are learned remain obscure. Here, we employed large-scale, longitudinal two-photon calcium imaging to record activity from thousands of neurons in the CA1 region of the hippocampus while mice learned to efficiently collect rewards from two subtly different versions of linear tracks in virtual reality. The results provide a detailed view of the formation of a cognitive map in the hippocampus. Throughout learning, both the animal behavior and hippocampal neural activity progressed through multiple intermediate stages, gradually revealing improved task understanding and behavioral efficiency. The learning process led to progressive decorrelations in initially similar hippocampal neural activity within and across tracks, ultimately resulting in orthogonalized representations resembling a state machine capturing the inherent structure of the task. We show that a Hidden Markov Model (HMM) and a biologically plausible recurrent neural network trained using Hebbian learning can both capture core aspects of the learning dynamics and the orthogonalized representational structure in neural activity. In contrast, we show that gradient-based learning of sequence models such as Long Short-Term Memory networks (LSTMs) and Transformers does not naturally produce such representations. We further demonstrate that mice exhibited adaptive behavior in novel task settings, with neural activity reflecting flexible deployment of the state machine. These findings shed light on the mathematical form of cognitive maps, the learning rules that sculpt them, and the algorithms that promote adaptive behavior in animals. The work thus charts a course toward a deeper understanding of biological intelligence and offers insights toward developing more robust learning algorithms in artificial intelligence.
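
    The progressive decorrelation can be quantified with a simple population-vector analysis. The sketch below (Python, with synthetic data standing in for the CA1 recordings; the mixing parameter and all sizes are invented) blends a shared map with track-specific maps under a "learning stage" parameter and reports the mean correlation of position-matched population vectors across the two tracks, which falls toward zero as the representations orthogonalize.

```python
import numpy as np

rng = np.random.default_rng(4)

n_cells, n_pos = 200, 30
shared = rng.random((n_cells, n_pos))        # map shared by both tracks
specific = rng.random((2, n_cells, n_pos))   # track-specific maps

def population_maps(stage):
    """Synthetic CA1 activity: early in learning (stage ~ 0) both tracks evoke
    one shared map; late in learning (stage ~ 1) specific maps dominate."""
    return [(1 - stage) * shared + stage * specific[i]
            + 0.05 * rng.random((n_cells, n_pos)) for i in range(2)]

def across_track_corr(maps):
    """Mean correlation of position-matched population vectors across tracks."""
    return np.mean([np.corrcoef(maps[0][:, p], maps[1][:, p])[0, 1]
                    for p in range(n_pos)])

for stage in (0.0, 0.5, 1.0):
    r = across_track_corr(population_maps(stage))
    print(f"learning stage {stage:.1f}: across-track correlation = {r:.2f}")
```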

    08/01/23 | Organizing memories for generalization in complementary learning systems.
    Weinan Sun, Madhu Advani, Nelson Spruston, Andrew Saxe, James E. Fitzgerald
    Nature Neuroscience. 2023 Aug 01;26(8):1438-1448. doi: 10.1038/s41593-023-01382-9

    Our ability to remember the past is essential for guiding our future behavior. Psychological and neurobiological features of declarative memories are known to transform over time in a process known as systems consolidation. While many theories have sought to explain the time-varying role of hippocampal and neocortical brain areas, the computational principles that govern these transformations remain unclear. Here we propose a theory of systems consolidation in which hippocampal-cortical interactions serve to optimize generalizations that guide future adaptive behavior. We use mathematical analysis of neural network models to characterize fundamental performance tradeoffs in systems consolidation, revealing that memory components should be organized according to their predictability. The theory shows that multiple interacting memory systems can outperform just one, normatively unifying diverse experimental observations and making novel experimental predictions. Our results suggest that the psychological taxonomy and neurobiological organization of declarative memories reflect a system optimized for behaving well in an uncertain future.
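
    A toy teacher-student example captures the core tradeoff the theory analyzes (illustrative numpy, not the paper's model; all sizes and the regularizer are invented): training items contain predictable linear structure plus unpredictable episodic noise, and a "neocortical" ridge regression generalizes best when regularization keeps it from consolidating the unpredictable component along with the predictable one.

```python
import numpy as np

rng = np.random.default_rng(5)

d, n_train, n_test = 20, 30, 1000
teacher = rng.normal(size=d)                       # predictable structure
X = rng.normal(size=(n_train, d))
y = X @ teacher + 2.0 * rng.normal(size=n_train)   # plus unpredictable detail

X_test = rng.normal(size=(n_test, d))
y_test = X_test @ teacher                # generalization is judged on structure

def consolidate(lam):
    """Ridge-regression 'cortex': lam controls whether each stored memory is
    reproduced faithfully (small lam) or only shared structure is extracted."""
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    return np.mean((X_test @ w - y_test) ** 2)

# Moderate regularization typically generalizes best when memories contain
# an unpredictable component; too much over-shrinks the predictable part.
for lam in (1e-6, 5.0, 50.0):
    print(f"lambda = {lam:<6}: test MSE = {consolidate(lam):.2f}")
```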
