Koyama Lab / Publications

28 Publications

Showing 11-20 of 28 results
    06/15/15 | Impermanence of dendritic spines in live adult CA1 hippocampus.
    Attardo A, Fitzgerald JE, Schnitzer MJ
    Nature. 2015 Jul 30;523(7562):592-6. doi: 10.1038/nature14467

    The mammalian hippocampus is crucial for episodic memory formation and transiently retains information for about 3-4 weeks in adult mice and longer in humans. Although neuroscientists widely believe that neural synapses are elemental sites of information storage, there has been no direct evidence that hippocampal synapses persist for time intervals commensurate with the duration of hippocampal-dependent memory. Here we tested the prediction that the lifetimes of hippocampal synapses match the longevity of hippocampal memory. By using time-lapse two-photon microendoscopy in the CA1 hippocampal area of live mice, we monitored the turnover dynamics of the pyramidal neurons' basal dendritic spines, postsynaptic structures whose turnover dynamics are thought to reflect those of excitatory synaptic connections. Strikingly, CA1 spine turnover dynamics differed sharply from those seen previously in the neocortex. Mathematical modelling revealed that the data best matched kinetic models with a single population of spines with a mean lifetime of approximately 1-2 weeks. This implies ∼100% turnover in ∼2-3 times this interval, a near full erasure of the synaptic connectivity pattern. Although N-methyl-d-aspartate (NMDA) receptor blockade stabilizes spines in the neocortex, in CA1 it transiently increased the rate of spine loss and thus lowered spine density. These results reveal that adult neocortical and hippocampal pyramidal neurons have divergent patterns of spine regulation and quantitatively support the idea that the transience of hippocampal-dependent memory directly reflects the turnover dynamics of hippocampal synapses.
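The single-population kinetic model described above can be sketched in a few lines, assuming simple first-order (exponential) spine loss; the 10-day mean lifetime is an illustrative value within the reported 1-2 week range:

```python
import math

def spine_survival(t_days, mean_lifetime_days):
    """Fraction of spines surviving to time t under single-population
    first-order kinetics (exponential decay)."""
    return math.exp(-t_days / mean_lifetime_days)

tau = 10.0  # illustrative mean lifetime within the ~1-2 week range
for t in (tau, 2 * tau, 3 * tau):
    turnover = 100 * (1 - spine_survival(t, tau))
    print(f"t = {t:4.0f} d: {turnover:.0f}% of spines turned over")
```

By 2-3 mean lifetimes the surviving fraction falls below ~15%, matching the abstract's "near full erasure" of the connectivity pattern.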

    08/07/23 | Learning produces a hippocampal cognitive map in the form of an orthogonalized state machine.
    Weinan Sun, Johan Winnubst, Maanasa Natrajan, Chongxi Lai, Koichiro Kajikawa, Michalis Michaelos, Rachel Gattoni, Carsen Stringer, Daniel Flickinger, James E. Fitzgerald, Nelson Spruston
    bioRxiv. 2023 Aug 07. doi: 10.1101/2023.08.03.551900

    Cognitive maps confer animals with flexible intelligence by representing spatial, temporal, and abstract relationships that can be used to shape thought, planning, and behavior. Cognitive maps have been observed in the hippocampus, but their algorithmic form and the processes by which they are learned remain obscure. Here, we employed large-scale, longitudinal two-photon calcium imaging to record activity from thousands of neurons in the CA1 region of the hippocampus while mice learned to efficiently collect rewards from two subtly different versions of linear tracks in virtual reality. The results provide a detailed view of the formation of a cognitive map in the hippocampus. Throughout learning, both the animal behavior and hippocampal neural activity progressed through multiple intermediate stages, gradually revealing improved task understanding and behavioral efficiency. The learning process led to progressive decorrelations in initially similar hippocampal neural activity within and across tracks, ultimately resulting in orthogonalized representations resembling a state machine capturing the inherent structure of the task. We show that a Hidden Markov Model (HMM) and a biologically plausible recurrent neural network trained using Hebbian learning can both capture core aspects of the learning dynamics and the orthogonalized representational structure in neural activity. In contrast, we show that gradient-based learning of sequence models such as Long Short-Term Memory networks (LSTMs) and Transformers does not naturally produce such representations. We further demonstrate that mice exhibited adaptive behavior in novel task settings, with neural activity reflecting flexible deployment of the state machine. These findings shed light on the mathematical form of cognitive maps, the learning rules that sculpt them, and the algorithms that promote adaptive behavior in animals. The work thus charts a course toward a deeper understanding of biological intelligence and offers insights toward developing more robust learning algorithms in artificial intelligence.
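The progressive decorrelation described above can be illustrated with a toy calculation (not the paper's analysis): cosine similarity between population-activity vectors evoked by the two tracks, before and after orthogonalization. All vectors and numbers here are illustrative:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two population-activity vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)
shared = rng.normal(size=200)  # activity pattern common to both tracks

# Early in learning: the two tracks evoke nearly identical activity.
early_a = shared + 0.1 * rng.normal(size=200)
early_b = shared + 0.1 * rng.normal(size=200)

# Late in learning: representations decorrelate toward orthogonality.
late_a = rng.normal(size=200)
late_b = rng.normal(size=200)

print(cosine(early_a, early_b))  # close to 1 (overlapping representations)
print(cosine(late_a, late_b))    # close to 0 (orthogonalized)
```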

    10/25/18 | Long-Term Consolidation of Ensemble Neural Plasticity Patterns in Hippocampal Area CA1.
    Attardo A, Lu J, Kawashima T, Okuno H, Fitzgerald JE, Bito H, Schnitzer MJ
    Cell Reports. 2018 Oct 16;25(3):640-650.e2. doi: 10.1016/j.celrep.2018.09.064

    Neural network remodeling underpins the ability to remember life experiences, but little is known about the long-term plasticity of neural populations. To study how the brain encodes episodic events, we used time-lapse two-photon microscopy and a fluorescent reporter of neural plasticity based on an enhanced form of the synaptic activity-responsive element (E-SARE) within the Arc promoter to track thousands of CA1 hippocampal pyramidal cells over weeks in mice that repeatedly encountered different environments. Each environment evokes characteristic patterns of ensemble neural plasticity, but with each encounter, the set of activated cells gradually evolves. After repeated exposures, the plasticity patterns evoked by an individual environment progressively stabilize. Compared with young adults, plasticity patterns in aged mice are less specific to individual environments and less stable across repeat experiences. Long-term consolidation of hippocampal plasticity patterns may support long-term memory formation, whereas weaker consolidation in aged subjects might reflect declining memory function.

    03/10/09 | Mimicking the folding pathway to improve homology-free protein structure prediction.
    DeBartolo J, Colubri A, Jha AK, Fitzgerald JE, Freed KF, Sosnick TR
    Proceedings of the National Academy of Sciences of the United States of America. 2009 Mar 10;106(10):3734-9. doi: 10.1073/pnas.0811363106

    Since the demonstration that the sequence of a protein encodes its structure, the prediction of structure from sequence remains an outstanding problem that impacts numerous scientific disciplines, including many genome projects. By iteratively fixing secondary structure assignments of residues during Monte Carlo simulations of folding, our coarse-grained model without information concerning homology or explicit side chains can outperform current homology-based secondary structure prediction methods for many proteins. The computationally rapid algorithm using only single (phi,psi) dihedral angle moves also generates tertiary structures of accuracy comparable with existing all-atom methods for many small proteins, particularly those with low homology. Hence, given appropriate search strategies and scoring functions, reduced representations can be used for accurately predicting secondary structure and providing 3D structures, thereby increasing the size of proteins approachable by homology-free methods and the accuracy of template methods that depend on a high-quality input secondary structure.

    12/12/23 | Model-Based Inference of Synaptic Plasticity Rules.
    Yash Mehta, Danil Tyulmankov, Adithya E. Rajagopalan, Glenn C. Turner, James E. Fitzgerald, Jan Funke
    bioRxiv. 2023 Dec 12. doi: 10.1101/2023.12.11.571103

    Understanding learning through synaptic plasticity rules in the brain is a grand challenge for neuroscience. Here we introduce a novel computational framework for inferring plasticity rules from experimental data on neural activity trajectories and behavioral learning dynamics. Our methodology parameterizes the plasticity function to provide theoretical interpretability and facilitate gradient-based optimization. For instance, we use Taylor series expansions or multilayer perceptrons to approximate plasticity rules, and we adjust their parameters via gradient descent over entire trajectories to closely match observed neural activity and behavioral data. Notably, our approach can learn intricate rules that induce long nonlinear time-dependencies, such as those incorporating postsynaptic activity and current synaptic weights. We validate our method through simulations, accurately recovering established rules, like Oja’s, as well as more complex hypothetical rules incorporating reward-modulated terms. We assess the resilience of our technique to noise and, as a tangible application, apply it to behavioral data from Drosophila during a probabilistic reward-learning experiment. Remarkably, we identify an active forgetting component of reward learning in flies that enhances the predictive accuracy of previous models. Overall, our modeling framework provides an exciting new avenue to elucidate the computational principles governing synaptic plasticity and learning in the brain.
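As a rough illustration of the inference idea (not the authors' implementation, which fits entire trajectories by gradient descent), one can simulate a synapse under Oja's rule and recover the rule's coefficients from observed weight updates; a least-squares fit over a Taylor-style feature basis stands in for gradient-based optimization here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate one synapse under Oja's rule: dw = eta * (x*y - y^2 * w).
eta, w = 0.05, 0.2
xs, ys, ws, dws = [], [], [], []
for _ in range(500):
    x = rng.normal()
    y = x * w                      # linear neuron
    dw = eta * (x * y - y**2 * w)  # Oja's rule
    xs.append(x); ys.append(y); ws.append(w); dws.append(dw)
    w += dw

x, y, wv, dw = map(np.array, (xs, ys, ws, dws))

# Parameterize the candidate rule as a truncated expansion in plausible
# terms (pre*post, post^2*weight, weight) and fit the coefficients to the
# observed weight updates.
features = np.stack([x * y, y**2 * wv, wv], axis=1)
theta, *_ = np.linalg.lstsq(features, dw, rcond=None)
print(theta)  # recovers approximately [eta, -eta, 0]
```

Because the simulated updates lie exactly in the span of the feature basis, the fit recovers Oja's rule; the paper's framework tackles the harder case where only activity or behavior, not weight updates, is observed.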

    10/24/15 | Nonlinear circuits for naturalistic visual motion estimation.
    Fitzgerald JE, Clark DA
    eLife. 2015 Oct 24;4:e09123. doi: 10.7554/eLife.09123

    Many animals use visual signals to estimate motion. Canonical models suppose that animals estimate motion by cross-correlating pairs of spatiotemporally separated visual signals, but recent experiments indicate that humans and flies perceive motion from higher-order correlations that signify motion in natural environments. Here we show how biologically plausible processing motifs in neural circuits could be tuned to extract this information. We emphasize how known aspects of Drosophila's visual circuitry could embody this tuning and predict fly behavior. We find that segregating motion signals into ON/OFF channels can enhance estimation accuracy by accounting for natural light/dark asymmetries. Furthermore, a diversity of inputs to motion detecting neurons can provide access to more complex higher-order correlations. Collectively, these results illustrate how non-canonical computations improve motion estimation with naturalistic inputs. This argues that the complexity of the fly's motion computations, implemented in its elaborate circuits, represents a valuable feature of its visual motion estimator.
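The canonical cross-correlation scheme the abstract contrasts against, a Hassenstein-Reichardt-style correlator, can be sketched as follows; the circular delay and sinusoidal stimulus are illustrative simplifications:

```python
import numpy as np

def hrc_response(left, right, delay):
    """Canonical Hassenstein-Reichardt correlator: each input is multiplied
    by a delayed copy of its neighbor, and the two mirror-symmetric arms
    are subtracted to give a signed direction estimate."""
    d_left = np.roll(left, delay)    # delayed left input (circular shift)
    d_right = np.roll(right, delay)  # delayed right input
    return np.mean(d_left * right - left * d_right)

# A rightward-moving sinusoid: the right detector sees the signal later.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
left = np.sin(5 * t)
right = np.sin(5 * t - 0.5)  # phase-lagged copy = rightward motion

print(hrc_response(left, right, delay=10))  # positive -> rightward
print(hrc_response(right, left, delay=10))  # negative -> leftward
```

This pairwise correlator is blind to the higher-order correlations discussed in the abstract, which is what the non-canonical circuit motifs are proposed to exploit.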

    04/25/24 | Optimization in Visual Motion Estimation.
    Clark DA, Fitzgerald JE
    Annual Review of Vision Science. 2024 Apr 25. doi: 10.1146/annurev-vision-101623-025432

    Sighted animals use visual signals to discern directional motion in their environment. Motion is not directly detected by visual neurons, and it must instead be computed from light signals that vary over space and time. This makes visual motion estimation a near universal neural computation, and decades of research have revealed much about the algorithms and mechanisms that generate directional signals. The idea that sensory systems are optimized for performance in natural environments has deeply impacted this research. In this article, we review the many ways that optimization has been used to quantitatively model visual motion estimation and reveal its underlying principles. We emphasize that no single optimization theory has dominated the literature. Instead, researchers have adeptly incorporated different computational demands and biological constraints that are pertinent to the specific brain system and animal model under study. The successes and failures of the resulting optimization models have thereby provided insights into how computational demands and biological constraints together shape neural computation.

    08/01/23 | Organizing memories for generalization in complementary learning systems.
    Weinan Sun, Madhu Advani, Nelson Spruston, Andrew Saxe, James E. Fitzgerald
    Nature Neuroscience. 2023 Aug 01;26(8):1438-1448. doi: 10.1038/s41593-023-01382-9

    Our ability to remember the past is essential for guiding our future behavior. Psychological and neurobiological features of declarative memories are known to transform over time in a process known as systems consolidation. While many theories have sought to explain the time-varying role of hippocampal and neocortical brain areas, the computational principles that govern these transformations remain unclear. Here we propose a theory of systems consolidation in which hippocampal-cortical interactions serve to optimize generalizations that guide future adaptive behavior. We use mathematical analysis of neural network models to characterize fundamental performance tradeoffs in systems consolidation, revealing that memory components should be organized according to their predictability. The theory shows that multiple interacting memory systems can outperform just one, normatively unifying diverse experimental observations and making novel experimental predictions. Our results suggest that the psychological taxonomy and neurobiological organization of declarative memories reflect a system optimized for behaving well in an uncertain future.

    01/08/13 | Photon shot noise limits on optical detection of neuronal spikes and estimation of spike timing.
    Wilt BA, Fitzgerald JE, Schnitzer MJ
    Biophysical Journal. 2013 Jan 08;104(1):51-62. doi: 10.1016/j.bpj.2012.07.058

    Optical approaches for tracking neural dynamics are of widespread interest, but a theoretical framework quantifying the physical limits of these techniques has been lacking. We formulate such a framework by using signal detection and estimation theory to obtain physical bounds on the detection of neural spikes and the estimation of their occurrence times as set by photon counting statistics (shot noise). These bounds are succinctly expressed via a discriminability index that depends on the kinetics of the optical indicator and the relative fluxes of signal and background photons. This approach facilitates quantitative evaluations of different indicators, detector technologies, and data analyses. Our treatment also provides optimal filtering techniques for optical detection of spikes. We compare various types of Ca(2+) indicators and show that background photons are a chief impediment to voltage sensing. Thus, voltage indicators that change color in response to membrane depolarization may offer a key advantage over those that change intensity. We also examine fluorescence resonance energy transfer indicators and identify the regimes in which the widely used ratiometric analysis of signals is substantially suboptimal. Overall, by showing how different optical factors interact to affect signal quality, our treatment offers a valuable guide to experimental design and provides measures of confidence to assess optically extracted traces of neural activity.
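The impact of background photons can be illustrated with a simplified Poisson signal-to-noise calculation; this is a stand-in for the paper's discriminability index, which additionally accounts for indicator kinetics and optimal filtering:

```python
import math

def discriminability(signal_rate, background_rate, integration_time):
    """Shot-noise-limited discriminability of a fluorescence transient:
    extra signal photons divided by the Poisson standard deviation of the
    total photon count over the integration window. Rates in photons/s."""
    n_extra = signal_rate * integration_time
    n_total = (signal_rate + background_rate) * integration_time
    return n_extra / math.sqrt(n_total)

# At fixed signal flux, background photons erode detectability:
for bg in (0.0, 1e4, 1e5):
    print(f"background {bg:8.0f} ph/s -> d = "
          f"{discriminability(1e3, bg, 0.1):.2f}")
```

This captures the abstract's point that background flux, not just signal flux, limits detection, and hence why indicators that change color rather than intensity can be advantageous for voltage sensing.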

    01/23/07 | Polypeptide motions are dominated by peptide group oscillations resulting from dihedral angle correlations between nearest neighbors.
    Fitzgerald JE, Jha AK, Sosnick TR, Freed KF
    Biochemistry. 2007 Jan 23;46(3):669-82. doi: 10.1021/bi061575x

    To identify basic local backbone motions in unfolded chains, simulations are performed for a variety of peptide systems using three popular force fields and for implicit and explicit solvent models. A dominant "crankshaft-like" motion is found that involves only a localized oscillation of the plane of the peptide group. This motion results in a strong anticorrelated motion of the phi angle of the ith residue (phi(i)) and the psi angle of the residue i - 1 (psi(i-1)) on the 0.1 ps time scale. Only a slight correlation is found between the motions of the two backbone dihedral angles of the same residue. Aside from the special cases of glycine and proline, no correlations are found between backbone dihedral angles that are separated by more than one torsion angle. These short time, correlated motions are found both in equilibrium fluctuations and during the transit process between Ramachandran basins, e.g., from the beta to the alpha region. A residue's complete transit from one Ramachandran basin to another, however, occurs in a manner independent of its neighbors' conformational transitions. These properties appear to be intrinsic because they are robust across different force fields, solvent models, nonbonded interaction routines, and most amino acids.
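The phi(i)/psi(i-1) anticorrelation can be illustrated with a toy model (not the paper's simulations) in which a shared peptide-plane oscillation enters the two flanking dihedral angles with opposite signs plus independent jitter; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Shared "crankshaft" oscillation of the peptide plane between residues
# i-1 and i, entering psi(i-1) and phi(i) with opposite sign.
plane = rng.normal(size=n)
psi_prev = +plane + 0.3 * rng.normal(size=n)  # psi of residue i-1
phi_i    = -plane + 0.3 * rng.normal(size=n)  # phi of residue i

r = np.corrcoef(psi_prev, phi_i)[0, 1]
print(f"corr(psi(i-1), phi(i)) = {r:.2f}")  # strongly negative
```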
