Publications
To accurately track self-location, animals need to integrate their movements through space. In amniotes, representations of self-location have been found in regions such as the hippocampus. It is unknown whether more ancient brain regions contain such representations and by which pathways they may drive locomotion. Fish displaced by water currents must prevent uncontrolled drift to potentially dangerous areas. We found that larval zebrafish track such movements and can later swim back to their earlier location. Whole-brain functional imaging revealed the circuit enabling this process of positional homeostasis. Position-encoding brainstem neurons integrate optic flow, then bias future swimming to correct for past displacements by modulating inferior olive and cerebellar activity. Manipulation of position-encoding or olivary neurons abolished positional homeostasis or evoked behavior as if animals had experienced positional shifts. These results reveal a multiregional hindbrain circuit in vertebrates for optic flow integration, memory of self-location, and its neural pathway to behavior.
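As a rough illustration of the integration-to-action loop described in this abstract (assuming a simple leaky-integrator form with arbitrary parameters, not the paper's fitted circuit model), the sketch below accumulates whole-field optic flow into a displacement estimate and converts it into a corrective swim bias that persists after the flow ends.

```python
import numpy as np

# Hypothetical parameters, purely illustrative (not taken from the paper).
dt = 0.01          # s, simulation time step
tau = 20.0         # s, integrator leak time constant (assumed)
gain = 1.0         # swim-bias gain on the stored displacement (assumed)

def simulate(optic_flow):
    """Accumulate optic flow into a displacement estimate and emit a
    corrective swim bias that pushes the estimate back toward zero."""
    position_estimate = 0.0
    bias_trace = []
    for flow in optic_flow:
        # Leaky integration of self-motion signalled by whole-field optic flow.
        position_estimate += dt * (flow - position_estimate / tau)
        # Corrective drive: swim against the remembered displacement.
        bias_trace.append(-gain * position_estimate)
    return np.array(bias_trace)

# Example: a transient displacement signalled by positive optic flow, then
# stillness; the bias persists after the flow stops, i.e. a memory of
# self-location expressed as behavior.
flow = np.zeros(2000)
flow[200:400] = 1.0
bias = simulate(flow)
print(bias[500], bias[1500])  # nonzero, slowly decaying corrective bias
```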
Neural computation in biological and artificial networks relies on nonlinear synaptic integration. The structural connectivity matrix of synaptic weights between neurons is a critical determinant of overall network function. However, quantitative links between neural network structure and function are complex and subtle. For example, many networks can give rise to similar functional responses, and the same network can function differently depending on context. Whether certain patterns of synaptic connectivity are required to generate specific network-level computations is largely unknown. Here we introduce a geometric framework for identifying synaptic connections required by steady-state responses in recurrent networks of rectified-linear neurons. Assuming that the number of specified response patterns does not exceed the number of input synapses, we analytically calculate all feedforward and recurrent connectivity matrices that can generate the specified responses from the network inputs. We then use this analytical characterization to rigorously analyze the solution space geometry and derive certainty conditions guaranteeing a non-zero synapse between neurons. Numerical simulations of feedforward and recurrent networks verify our analytical results. Our theoretical framework could be applied to neural activity data to make anatomical predictions that follow generally from the model architecture. It thus provides novel opportunities for discerning what model features are required to accurately relate neural network structure and function.
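As a concrete, deliberately simplified illustration of the inverse problem posed here, the sketch below solves for one feedforward matrix W and one recurrent matrix J that reproduce specified steady-state responses R to inputs X in a rectified-linear network, using a per-neuron least-squares construction. The dimensions, the minimum-norm choice, and the omission of the inequality constraints at silent neurons are assumptions of this sketch, not the paper's analytical characterization of the full solution space.

```python
import numpy as np

# Given specified steady-state responses R (neurons x patterns) to inputs
# X (inputs x patterns), find one feedforward matrix W and recurrent matrix
# J consistent with  R = relu(W @ X + J @ R)  at the active entries.
rng = np.random.default_rng(0)
n_in, n_neurons, n_patterns = 8, 5, 4      # assumes n_patterns <= n_in
X = rng.normal(size=(n_in, n_patterns))
R = np.maximum(rng.normal(size=(n_neurons, n_patterns)), 0.0)

A = np.vstack([X, R])                       # combined regressors (inputs + rates)
WJ = np.zeros((n_neurons, n_in + n_neurons))
for i in range(n_neurons):
    active = R[i] > 0
    if active.any():
        # Solve W_i x + J_i r = r_i exactly on active patterns
        # (minimum-norm solution of an underdetermined system).
        WJ[i], *_ = np.linalg.lstsq(A[:, active].T, R[i, active], rcond=None)
    # Note: the inequality constraints at silent entries (pre-activation <= 0
    # where R = 0) are part of the full problem but not enforced here.

W, J = WJ[:, :n_in], WJ[:, n_in:]
R_check = np.maximum(W @ X + J @ R, 0.0)
print(np.allclose(R_check[R > 0], R[R > 0]))   # active responses reproduced
```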
All animals must transform ambiguous sensory data into successful behavior. This requires sensory representations that accurately reflect the statistics of natural stimuli and behavior. Multiple studies show that visual motion processing is tuned for accuracy under naturalistic conditions, but the sensorimotor circuits extracting these cues and implementing motion-guided behavior remain unclear. Here we show that the larval zebrafish retina extracts a diversity of naturalistic motion cues, and the retinorecipient pretectum organizes these cues around the elements of behavior. We find that higher-order motion stimuli, gliders, induce optomotor behavior matching expectations from natural scene analyses. We then image activity of retinal ganglion cell terminals and pretectal neurons. The retina exhibits direction-selective responses across glider stimuli, and anatomically clustered pretectal neurons respond with magnitudes matching behavior. Peripheral computations thus reflect natural input statistics, whereas central brain activity precisely codes information needed for behavior. This general principle could organize sensorimotor transformations across animal species.
Animals detect motion using a variety of visual cues that reflect regularities in the natural world. Experiments in animals across phyla have shown that motion percepts incorporate both pairwise and triplet spatiotemporal correlations that could theoretically benefit motion computation. However, it remains unclear how visual systems assemble these cues to build accurate motion estimates. Here we used systematic behavioral measurements of fruit fly motion perception to show how flies combine local pairwise and triplet correlations to reduce variability in motion estimates across natural scenes. By generating synthetic images with statistics controlled by maximum entropy distributions, we show that the triplet correlations are useful only when images have light-dark asymmetries that mimic natural ones. This suggests that asymmetric ON-OFF processing is tuned to the particular statistics of natural scenes. Since all animals encounter the world's light-dark asymmetries, many visual systems are likely to use asymmetric ON-OFF processing to improve motion estimation.
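A small numerical sketch of the central statistical point, under an assumed stimulus construction (a rigidly drifting, spatially smoothed 1D pattern rather than the paper's maximum-entropy images): a three-point spatiotemporal correlator carries signal only when the contrast distribution is light-dark asymmetric, whereas the pairwise correlator is informative either way.

```python
import numpy as np

rng = np.random.default_rng(1)

def drifting_movie(skewed=True, n_x=256, n_t=64):
    """One movie of a rigidly drifting 1D pattern (axes: time, space).
    Spatially smoothed exponential noise gives a light-dark-asymmetric
    (positively skewed) contrast distribution; Gaussian noise gives the
    symmetric control. Illustrative construction only."""
    noise = rng.exponential(1.0, n_x) if skewed else rng.normal(size=n_x)
    kernel = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)
    pattern = np.convolve(noise, kernel / kernel.sum(), mode="same")
    pattern -= pattern.mean()
    return np.array([np.roll(pattern, t) for t in range(n_t)])

def correlators(m, dx=1, dt=1):
    a = m[:-dt, :-dx]          # c(x, t)
    b = m[dt:, dx:]            # c(x+dx, t+dt): matched to the drift direction
    d = m[:-dt, dx:]           # c(x+dx, t)
    return np.mean(a * b), np.mean(a * b * d)   # pairwise, one triplet correlator

for skew in (True, False):
    pair, triple = np.mean([correlators(drifting_movie(skew)) for _ in range(200)], axis=0)
    print(f"skewed={skew}:  pair={pair:+.4f}  triple={triple:+.4f}")
```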
A pathogenetic feature of Alzheimer's disease is the aggregation of monomeric beta-amyloid proteins (Abeta) to form oligomers. Usually these oligomers of long peptides aggregate on time scales of microseconds or longer, making computational studies using atomistic molecular dynamics models prohibitively expensive and making it essential to develop computational models that are cheaper and at the same time faithful to physical features of the process. We benchmark the ability of our implicit solvent model to describe equilibrium and dynamic properties of monomeric Abeta(10-35) using all-atom Langevin dynamics (LD) simulations, since Abeta(10-35) is the only fragment whose monomeric properties have been measured. The accuracy of the implicit solvent model is tested by comparing its predictions with experiment and with those from a new explicit water MD simulation (performed using CHARMM and the TIP3P water model), which is approximately 200 times slower than the implicit water simulations. The dependence on force field is investigated by running multiple trajectories for Abeta(10-35) using the CHARMM, OPLS-AA/L, and GS-AMBER94 force fields, whereas the convergence to equilibrium is tested for each force field by beginning separate trajectories from the native NMR structure, a completely stretched structure, and from unfolded initial structures. The NMR order parameter, S2, is computed for each trajectory and compared with experimental data to assess the best choice for treating aggregates of Abeta. The computed order parameters vary significantly with force field. Explicit and implicit solvent simulations using the CHARMM force fields display excellent agreement with each other and once again support the accuracy of the implicit solvent model. Abeta(10-35) exhibits great flexibility, consistent with experimental data for the monomer in solution, while maintaining a general strand-loop-strand motif with a solvent-exposed hydrophobic patch that is believed to be important for aggregation. Finally, equilibration of the peptide structure requires an implicit solvent LD simulation as long as 30 ns.
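For the order-parameter comparison mentioned above, S2 can be computed from a trajectory of bond vectors with the standard second-moment expression; the sketch below shows that calculation on placeholder data (the trajectory handling and removal of overall tumbling are assumed, and no values from the paper are used).

```python
import numpy as np

def order_parameter_s2(bond_vectors):
    """Generalized NMR order parameter S2 from a trajectory of bond vectors
    (shape: frames x 3), e.g. backbone N-H vectors after superimposing
    frames on a reference structure to remove overall rotation.
    Uses the standard expression
        S2 = 3/2 * (<x^2>^2 + <y^2>^2 + <z^2>^2
                    + 2<xy>^2 + 2<xz>^2 + 2<yz>^2) - 1/2,
    where <.> averages the normalized vector components over frames."""
    u = bond_vectors / np.linalg.norm(bond_vectors, axis=1, keepdims=True)
    # Second-moment matrix <u_i u_j> averaged over the trajectory.
    m = np.einsum('ti,tj->ij', u, u) / u.shape[0]
    return 1.5 * np.sum(m ** 2) - 0.5

# Sanity checks (not data from the paper): a rigid vector gives S2 = 1,
# an isotropically disordered vector gives S2 -> 0.
rng = np.random.default_rng(0)
rigid = np.tile([0.0, 0.0, 1.0], (1000, 1))
iso = rng.normal(size=(20000, 3))
print(round(order_parameter_s2(rigid), 3), round(order_parameter_s2(iso), 3))
```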
Breakthrough technologies for monitoring and manipulating single-neuron activity provide unprecedented opportunities for whole-brain neuroscience in larval zebrafish [1–9]. Understanding the neural mechanisms of visually guided behavior also requires precise stimulus control, but little prior research has accounted for physical distortions that result from refraction and reflection at an air-water interface that usually separates the projected stimulus from the fish [10–12]. Here we provide a computational tool that transforms between projected and received stimuli in order to detect and control these distortions. The tool considers the most commonly encountered interface geometry, and we show that this and other common configurations produce stereotyped distortions. By correcting these distortions, we reduced discrepancies in the literature concerning stimuli that evoke escape behavior [13,14], and we expect this tool will help reconcile other confusing aspects of the literature. This tool also aids experimental design, and we illustrate the dangers that uncorrected stimuli pose to receptive field mapping experiments.
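The core geometric correction is Snell's law at the flat air-water interface. Below is a minimal sketch of the forward mapping from a viewing angle in water to a position on a flat screen in air, for one assumed simple geometry (the fish height and screen distance are hypothetical); the published tool handles this and other configurations, including inversion of the mapping.

```python
import numpy as np

N_WATER, N_AIR = 1.333, 1.000

def screen_position(theta_water, h_water, h_air):
    """Where a viewing ray received at the fish's eye at angle theta_water
    (radians from the vertical) intersects a flat screen, for a fish a
    height h_water above a flat air-water interface and a screen h_air
    below it. An assumed simple configuration, not the full set handled
    by the published tool."""
    sin_air = (N_WATER / N_AIR) * np.sin(theta_water)   # Snell's law, water -> air
    if np.any(np.abs(sin_air) >= 1.0):
        raise ValueError("total internal reflection: no screen position")
    theta_air = np.arcsin(sin_air)
    # Horizontal travel in water down to the interface, then in air to the screen.
    return h_water * np.tan(theta_water) + h_air * np.tan(theta_air)

# Example: the received angular positions differ from the naive
# (no-refraction) mapping, so a projected pattern must be pre-distorted.
theta = np.deg2rad(np.array([5.0, 20.0, 40.0]))
print(screen_position(theta, h_water=0.004, h_air=0.01))   # with refraction
print((0.004 + 0.01) * np.tan(theta))                      # naive mapping
```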
One approach to super-resolution fluorescence microscopy, termed stochastic localization microscopy, relies on the nanometer scale spatial localization of individual fluorescent emitters that stochastically label specific features of the specimen. The precision of emitter localization is an important determinant of the resulting image resolution but is insufficient to specify how well the derived images capture the structure of the specimen. We address this deficiency by considering the inference of specimen structure based on the estimated emitter locations. By using estimation theory, we develop a measure of spatial resolution that jointly depends on the density of the emitter labels, the precision of emitter localization, and prior information regarding the spatial frequency content of the labeled object. The Nyquist criterion does not set the scaling of this measure with emitter number. Given prior information and a fixed emitter labeling density, our resolution measure asymptotes to a finite value as the precision of emitter localization improves. By considering the present experimental capabilities, this asymptotic behavior implies that further resolution improvements require increases in labeling density above typical current values. Our treatment also yields algorithms to enhance reliable image features. Overall, our formalism facilitates the rigorous statistical interpretation of the data produced by stochastic localization imaging techniques.
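To illustrate the qualitative asymptotic behavior described here (without reproducing the paper's estimation-theoretic measure), the Monte Carlo sketch below reconstructs a one-dimensional labeling density from a finite number of localized emitters: once localization precision is good enough, reconstruction error is limited by labeling density and the structure itself, so further precision improvements stop helping. The density, structure, and error metric are all assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def reconstruction_error(n_emitters, sigma_loc, n_trials=200, n_bins=64):
    """Mean squared error of a histogram reconstruction of a 1D labeling
    density (a sinusoid on [0, 1)) from localized emitter positions.
    Purely illustrative; not the paper's resolution measure."""
    x = (np.arange(n_bins) + 0.5) / n_bins
    true_density = 1.0 + 0.9 * np.sin(2 * np.pi * 4 * x)
    true_density /= true_density.mean()
    errs = []
    for _ in range(n_trials):
        # Sample emitter positions from the labeling density, then add
        # localization noise of width sigma_loc (with wrap-around).
        pos = rng.choice(x, size=n_emitters, p=true_density / true_density.sum())
        noisy = (pos + sigma_loc * rng.normal(size=n_emitters)) % 1.0
        est, _ = np.histogram(noisy, bins=n_bins, range=(0, 1), density=True)
        errs.append(np.mean((est - true_density) ** 2))
    return np.mean(errs)

# Shrinking localization error stops helping once finite labeling density
# (and the structure's frequency content) dominates the error.
for sigma in (0.05, 0.02, 0.005, 0.001, 0.0):
    print(sigma, round(reconstruction_error(300, sigma), 3))
```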
Learning in deep neural networks is known to depend critically on the knowledge embedded in the initial network weights. However, few theoretical results have precisely linked prior knowledge to learning dynamics. Here we derive exact solutions to the dynamics of learning with rich prior knowledge in deep linear networks by generalising Fukumizu's matrix Riccati solution (Fukumizu, 1998). We obtain explicit expressions for the evolving network function, hidden representational similarity, and neural tangent kernel over training for a broad class of initialisations and tasks. The expressions reveal a class of task-independent initialisations that radically alter learning dynamics from slow non-linear dynamics to fast exponential trajectories while converging to a global optimum with identical representational similarity, dissociating learning trajectories from the structure of initial internal representations. We characterise how network weights dynamically align with task structure, rigorously justifying why previous solutions successfully described learning from small initial weights without incorporating their fine-scale structure. Finally, we discuss the implications of these findings for continual learning, reversal learning and learning of structured knowledge. Taken together, our results provide a mathematical toolkit for understanding the impact of prior knowledge on deep learning.
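A toy numerical illustration of the initialization effect described here (not the paper's exact Riccati solutions): gradient descent on a two-layer deep linear network started from small random weights versus a larger task-independent initialization. The task, learning rate, and initialization scales are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer deep linear network  y = W2 @ W1 @ x  trained on a random
# linear teacher. Small initial weights give slow, plateau-ridden early
# dynamics; a larger initialization decays much faster from the start.
d_in, d_hidden, d_out, n = 10, 10, 5, 200
X = rng.normal(size=(d_in, n))
W_true = rng.normal(size=(d_out, d_in))
Y = W_true @ X

def train(scale, steps=4000, lr=5e-3):
    W1 = scale * rng.normal(size=(d_hidden, d_in)) / np.sqrt(d_in)
    W2 = scale * rng.normal(size=(d_out, d_hidden)) / np.sqrt(d_hidden)
    losses = []
    for _ in range(steps):
        err = W2 @ W1 @ X - Y
        losses.append(0.5 * np.sum(err ** 2) / n)   # per-sample squared error
        gW2 = err @ (W1 @ X).T / n                  # gradient w.r.t. W2
        gW1 = W2.T @ err @ X.T / n                  # gradient w.r.t. W1
        W1 -= lr * gW1
        W2 -= lr * gW2
    return losses

for scale in (0.01, 1.0):
    losses = train(scale)
    print(f"init scale {scale}: loss at steps 0/500/end = "
          f"{losses[0]:.2f} / {losses[500]:.2f} / {losses[-1]:.4f}")
```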
Sighted animals extract motion information from visual scenes by processing spatiotemporal patterns of light falling on the retina. The dominant models for motion estimation exploit intensity correlations only between pairs of points in space and time. Moving natural scenes, however, contain more complex correlations. We found that fly and human visual systems encode the combined direction and contrast polarity of moving edges using triple correlations that enhance motion estimation in natural environments. Both species extracted triple correlations with neural substrates tuned for light or dark edges, and sensitivity to specific triple correlations was retained even as light and dark edge motion signals were combined. Thus, both species separately process light and dark image contrasts to capture motion signatures that can improve estimation accuracy. This convergence argues that statistical structures in natural scenes have greatly affected visual processing, driving a common computational strategy over 500 million years of evolution.
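A minimal sketch of the kind of computation described, assuming a specific toy estimator rather than the measured fly or human one: contrasts are split into half-wave rectified ON and OFF channels, and each channel feeds an opponent correlator that combines a two-point term with one three-point term.

```python
import numpy as np

def motion_estimate(movie, dx=1, dt=1):
    """Toy opponent motion estimator with separate ON and OFF channels
    (half-wave rectified light and dark contrasts). The specific
    correlators and their equal weighting are illustrative assumptions."""
    c = movie - movie.mean()
    on, off = np.maximum(c, 0.0), np.maximum(-c, 0.0)
    def corr(ch):
        a = ch[:-dt, :-dx]       # contrast at (x, t)
        b = ch[dt:, dx:]         # contrast at (x + dx, t + dt)
        d = ch[:-dt, dx:]        # contrast at (x + dx, t)
        two = np.mean(a * b)     # pairwise, rightward-selective term
        three = np.mean(a * b * d)  # one three-point ("glider") term
        return two + three
    def opponent(ch):
        # Rightward minus leftward (mirror-reversed) correlation.
        return corr(ch) - corr(ch[:, ::-1])
    return opponent(on) + opponent(off)

# A rightward-drifting pattern yields a positive estimate, leftward negative.
x = np.linspace(0, 4 * np.pi, 128)
rightward = np.array([np.sin(x - 0.3 * t) for t in range(64)])
print(motion_estimate(rightward), motion_estimate(rightward[:, ::-1]))
```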
Detailed descriptions of brain-scale sensorimotor circuits underlying vertebrate behavior remain elusive. Recent advances in zebrafish neuroscience offer new opportunities to dissect such circuits via whole-brain imaging, behavioral analysis, functional perturbations, and network modeling. Here, we harness these tools to generate a brain-scale circuit model of the optomotor response, an orienting behavior evoked by visual motion. We show that such motion is processed by diverse neural response types distributed across multiple brain regions. To transform sensory input into action, these regions sequentially integrate eye- and direction-specific sensory streams, refine representations via interhemispheric inhibition, and demix locomotor instructions to independently drive turning and forward swimming. While experiments revealed many neural response types throughout the brain, modeling identified the dimensions of functional connectivity most critical for the behavior. We thus reveal how distributed neurons collaborate to generate behavior and illustrate a paradigm for distilling functional circuit models from whole-brain data.
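The sensorimotor transformation summarized here can be caricatured in a few lines, with the caveat that the units, weights, and pooling below are illustrative assumptions rather than the paper's fitted brain-scale model: direction-specific motion evidence competes through mutual inhibition and is then demixed into turning and forward-swim commands.

```python
import numpy as np

def omr_commands(leftward_evidence, rightward_evidence):
    """Toy rate-model sketch: pooled direction-specific motion evidence
    from the two eyes competes via mutual (interhemispheric-style)
    inhibition and is demixed into a signed turning command and an
    unsigned forward-swim command. The two-unit reduction, weights, and
    pooling of eye-specific streams are assumptions of this sketch."""
    relu = lambda v: np.maximum(v, 0.0)
    w_inhib = 0.6                                   # mutual inhibition strength (assumed)
    left_unit = relu(leftward_evidence - w_inhib * rightward_evidence)
    right_unit = relu(rightward_evidence - w_inhib * leftward_evidence)
    turn = right_unit - left_unit                   # turn with the net motion direction
    forward = 0.5 * (left_unit + right_unit)        # forward drive from overall motion
    return turn, forward

# Lateralized motion drives turning plus forward swimming; balanced opposing
# evidence suppresses turning while still driving some forward swimming.
print(omr_commands(0.2, 1.0))
print(omr_commands(1.0, 1.0))
```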