Abstract
Learning in deep neural networks is known to depend critically on the knowledge embedded in the initial network weights. However, few theoretical results have precisely linked prior knowledge to learning dynamics. Here we derive exact solutions to the dynamics of learning with rich prior knowledge in deep linear networks by generalising Fukumizu's matrix Riccati solution \citep{fukumizu1998effect}. We obtain explicit expressions for the evolving network function, hidden representational similarity, and neural tangent kernel over training for a broad class of initialisations and tasks. These expressions reveal a class of task-independent initialisations that radically alter learning, replacing slow non-linear dynamics with fast exponential trajectories, while still converging to a global optimum with identical representational similarity, thereby dissociating learning trajectories from the structure of initial internal representations. We characterise how network weights dynamically align with task structure, rigorously justifying why previous solutions successfully described learning from small initial weights without incorporating their fine-scale structure. Finally, we discuss the implications of these findings for continual learning, reversal learning, and learning of structured knowledge. Taken together, our results provide a mathematical toolkit for understanding the impact of prior knowledge on deep learning.
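For orientation, a minimal sketch of the kind of learning dynamics the abstract refers to: the gradient flow of a two-layer linear network trained on mean-squared error. The notation below ($W_1$, $W_2$ for the layer weights, $\Sigma^{xx}$ and $\Sigma^{yx}$ for the input and input-output correlation matrices, $\tau$ for a time constant) is assumed here for illustration and is not taken from the paper:
\[
\tau \,\frac{dW_1}{dt} = W_2^{\top}\!\left(\Sigma^{yx} - W_2 W_1 \Sigma^{xx}\right),
\qquad
\tau \,\frac{dW_2}{dt} = \left(\Sigma^{yx} - W_2 W_1 \Sigma^{xx}\right) W_1^{\top},
\]
where the network function is the product $W_2 W_1$, and exact solutions of this coupled matrix flow describe its trajectory from a given initialisation $(W_1(0), W_2(0))$. This is the standard setting to which Riccati-style solutions of the kind cited above apply; the present work concerns how such trajectories depend on structured, non-small initial weights.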