Koyama Lab / Publications
3 Publications

    12/22/22 | A brainstem integrator for self-localization and positional homeostasis
    Yang E, Zwart MF, Rubinov M, James B, Wei Z, Narayan S, Vladimirov N, Mensh BD, Fitzgerald JE, Ahrens MB
    Cell. 2022 Dec 22;185(26):5011-5027.e20. doi: 10.1101/2021.11.26.468907

    To accurately track self-location, animals need to integrate their movements through space. In amniotes, representations of self-location have been found in regions such as the hippocampus. It is unknown whether more ancient brain regions contain such representations and by which pathways they may drive locomotion. Fish displaced by water currents must prevent uncontrolled drift to potentially dangerous areas. We found that larval zebrafish track such movements and can later swim back to their earlier location. Whole-brain functional imaging revealed the circuit enabling this process of positional homeostasis. Position-encoding brainstem neurons integrate optic flow, then bias future swimming to correct for past displacements by modulating inferior olive and cerebellar activity. Manipulation of position-encoding or olivary neurons abolished positional homeostasis or evoked behavior as if animals had experienced positional shifts. These results reveal a multiregional hindbrain circuit in vertebrates for optic flow integration, memory of self-location, and its neural pathway to behavior.

    Competing Interest Statement: The authors have declared no competing interest.
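The abstract describes position-encoding neurons that integrate optic flow into a memory of displacement that persists after the flow stops. A toy leaky-integrator sketch of that idea (all parameters, time constants, and dynamics here are illustrative assumptions, not values from the study):

```python
import numpy as np

def integrate_position(optic_flow, dt=0.1, leak=0.05):
    """Accumulate a position estimate from an optic-flow signal.

    Illustrative sketch only: a leaky integrator loosely inspired by
    the paper's description; `dt` and `leak` are assumed values.
    """
    position = 0.0
    trajectory = []
    for v in optic_flow:
        # Leaky integration: slow decay toward zero plus new flow input.
        position += dt * (-leak * position + v)
        trajectory.append(position)
    return np.array(trajectory)

# A fish displaced by current (positive flow) that then holds still:
flow = np.concatenate([np.ones(50), np.zeros(100)])
traj = integrate_position(flow)

# The integrator retains a (slowly decaying) memory of the displacement
# after the flow ends, which could bias corrective swimming.
print(traj[49], traj[-1])
```

The leak makes the memory imperfect but stable; a pure integrator (`leak=0`) would hold the displacement estimate indefinitely.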

    12/09/22 | Exact learning dynamics of deep linear networks with prior knowledge
    Lukas Braun, Clémentine Dominé, James Fitzgerald, Andrew Saxe
    Neural Information Processing Systems (NeurIPS) 2022

    Learning in deep neural networks is known to depend critically on the knowledge embedded in the initial network weights. However, few theoretical results have precisely linked prior knowledge to learning dynamics. Here we derive exact solutions to the dynamics of learning with rich prior knowledge in deep linear networks by generalising Fukumizu's matrix Riccati solution (Fukumizu, 1998). We obtain explicit expressions for the evolving network function, hidden representational similarity, and neural tangent kernel over training for a broad class of initialisations and tasks. The expressions reveal a class of task-independent initialisations that radically alter learning dynamics from slow non-linear dynamics to fast exponential trajectories while converging to a global optimum with identical representational similarity, dissociating learning trajectories from the structure of initial internal representations. We characterise how network weights dynamically align with task structure, rigorously justifying why previous solutions successfully described learning from small initial weights without incorporating their fine-scale structure. Finally, we discuss the implications of these findings for continual learning, reversal learning and learning of structured knowledge. Taken together, our results provide a mathematical toolkit for understanding the impact of prior knowledge on deep learning.
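The model class analysed here, a deep *linear* network, can be simulated in a few lines. A minimal sketch (dimensions, learning rate, and step count are assumptions for illustration, not the paper's setup) of gradient descent from the small-random-weight regime the authors contrast with structured initialisations:

```python
import numpy as np

# Two-layer deep linear network f(x) = W2 @ W1 @ x, fit to a target
# linear map by gradient descent on squared error. Small initial
# weights give the slow, plateau-then-drop learning curves that exact
# solutions of this kind characterise.
rng = np.random.default_rng(0)
d = 5
target = rng.standard_normal((d, d))     # target linear map (assumed toy task)
W1 = 1e-2 * rng.standard_normal((d, d))  # small random initial weights
W2 = 1e-2 * rng.standard_normal((d, d))

lr = 0.02
losses = []
for _ in range(8000):
    err = W2 @ W1 - target               # network function minus target
    losses.append(0.5 * np.sum(err ** 2))
    # Chain rule through both layers (gradient flow, discretised).
    gW2 = err @ W1.T
    gW1 = W2.T @ err
    W2 -= lr * gW2
    W1 -= lr * gW1

print(losses[0], losses[-1])
```

Because the network is linear in its input but the loss is non-convex in (W1, W2), the loss curve shows the characteristic stage-like drops as successive singular modes of the target are learned.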

    06/29/22 | A geometric framework to predict structure from function in neural networks
    Biswas T, Fitzgerald JE
    Physical Review Research. 2022 Jun 29;4(2):023255. doi: 10.1103/PhysRevResearch.4.023255

    Neural computation in biological and artificial networks relies on nonlinear synaptic integration. The structural connectivity matrix of synaptic weights between neurons is a critical determinant of overall network function. However, quantitative links between neural network structure and function are complex and subtle. For example, many networks can give rise to similar functional responses, and the same network can function differently depending on context. Whether certain patterns of synaptic connectivity are required to generate specific network-level computations is largely unknown. Here we introduce a geometric framework for identifying synaptic connections required by steady-state responses in recurrent networks of rectified-linear neurons. Assuming that the number of specified response patterns does not exceed the number of input synapses, we analytically calculate all feedforward and recurrent connectivity matrices that can generate the specified responses from the network inputs. We then use this analytical characterization to rigorously analyze the solution space geometry and derive certainty conditions guaranteeing a non-zero synapse between neurons. Numerical simulations of feedforward and recurrent networks verify our analytical results. Our theoretical framework could be applied to neural activity data to make anatomical predictions that follow generally from the model architecture. It thus provides novel opportunities for discerning what model features are required to accurately relate neural network structure and function.
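The core idea, that with fewer specified response patterns than input synapses there is a whole solution space of connectivity matrices, can be illustrated in a heavily simplified setting. A sketch under stated assumptions (feedforward only, all specified responses positive so every rectified-linear unit is active, making the steady-state equation linear; none of this code comes from the paper):

```python
import numpy as np

# With all units active, the ReLU steady state r = max(W x, 0) reduces
# to the linear system W X = R. Fewer patterns than input synapses
# makes it underdetermined: many W satisfy it, and the pseudoinverse
# picks the minimum-norm member of that solution space.
rng = np.random.default_rng(1)
n_inputs, n_neurons, n_patterns = 6, 4, 3   # patterns < input synapses

X = rng.standard_normal((n_inputs, n_patterns))  # specified inputs
R = rng.random((n_neurons, n_patterns)) + 0.1    # specified positive responses

W = R @ np.linalg.pinv(X)                        # one solution of W X = R

# Verify: this connectivity reproduces the specified steady states.
reconstructed = np.maximum(W @ X, 0.0)           # ReLU responses
print(np.allclose(reconstructed, R))
```

Adding any matrix whose rows lie in the null space of `X.T` to `W` yields another valid solution, which is the solution-space geometry the paper analyzes to derive conditions guaranteeing a non-zero synapse.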
