Abstract
Memories are believed to be stored in synapses and retrieved by reactivating neural ensembles. Learning alters synaptic weights, which can interfere with previously stored memories that share the same synapses, creating a trade-off between plasticity and stability. Interestingly, neural representations change even in stable environments, without apparent learning or forgetting, a phenomenon known as representational drift. Theoretical studies have suggested that multiple neural representations can correspond to a memory, with post-learning exploration of these representation solutions driving drift. However, it remains unclear whether representations explored through drift differ from those learned or offer unique advantages. Here, we show that representational drift uncovers noise-robust representations that are otherwise difficult to learn. We first define the nonlinear solution space manifold of synaptic weights for fixed input-output mappings, which allows us to disentangle drift from learning and forgetting and simulate drift as diffusion within this manifold. Solutions explored by drift have many inactive and saturated neurons, making them robust to weight perturbations due to noise or continual learning. Such solutions are prevalent and entropically favored by drift, but their lack of gradients makes them difficult to learn and nonconducive to future learning. To overcome this, we introduce an allocation procedure that selectively shifts representations for new stimuli into a learning-conducive regime. By combining allocation with drift, we resolve the trade-off between learnability and robustness.
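The central simulation idea, drift as diffusion constrained to the solution manifold of a fixed input-output mapping, can be illustrated with a minimal numerical sketch. The Python example below is an assumption-laden illustration, not the paper's implementation: network size, noise scale, and the noise-then-project scheme (isotropic weight noise followed by gradient steps that restore the stored mapping) are all hypothetical choices, and names such as drift_step, sigma, and n_correct are invented for this sketch.

```python
import numpy as np

# Sketch: representational drift as diffusion on the solution manifold of a
# small ReLU network. The input-output mapping (X -> Y) is held fixed while
# the weights wander. All sizes and hyperparameters are illustrative.

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))           # fixed input patterns
W1 = rng.standard_normal((10, 30)) * 0.3    # input -> hidden weights
w2 = rng.standard_normal(30) * 0.3          # hidden -> output weights
Y = np.maximum(X @ W1, 0) @ w2              # stored mapping to preserve

def output(W1, w2):
    return np.maximum(X @ W1, 0) @ w2

def drift_step(W1, w2, sigma=1e-2, lr=1e-2, n_correct=50):
    # 1) Diffuse: add isotropic noise to all weights.
    W1 = W1 + sigma * rng.standard_normal(W1.shape)
    w2 = w2 + sigma * rng.standard_normal(w2.shape)
    # 2) Project back toward the solution manifold: gradient descent on the
    #    squared deviation from the stored mapping (a numerical projection).
    for _ in range(n_correct):
        H = np.maximum(X @ W1, 0)
        err = H @ w2 - Y                    # deviation from fixed mapping
        grad_w2 = H.T @ err / len(X)
        grad_W1 = X.T @ (np.outer(err, w2) * (H > 0)) / len(X)
        W1 -= lr * grad_W1
        w2 -= lr * grad_w2
    return W1, w2

W1_0 = W1.copy()
for t in range(1000):
    W1, w2 = drift_step(W1, w2)

# The output mapping stays (approximately) fixed while weights, and hence
# hidden-unit tuning, drift: no learning, no forgetting.
print("max output error:", np.abs(output(W1, w2) - Y).max())
print("weight displacement:", np.linalg.norm(W1 - W1_0))
```

Under this kind of dynamics, one would expect the diffusion to accumulate in high-entropy regions of the manifold; in the regime the abstract describes, these are solutions with many inactive or saturated units, which are robust to perturbation but, lacking gradients, are hard to reach or modify by learning.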



