29 Publications
Showing 21-29 of 29 results

To identify basic local backbone motions in unfolded chains, simulations are performed for a variety of peptide systems using three popular force fields and for implicit and explicit solvent models. A dominant "crankshaft-like" motion is found that involves only a localized oscillation of the plane of the peptide group. This motion results in a strong anticorrelated motion of the phi angle of the ith residue (phi(i)) and the psi angle of the residue i - 1 (psi(i-1)) on the 0.1 ps time scale. Only a slight correlation is found between the motions of the two backbone dihedral angles of the same residue. Aside from the special cases of glycine and proline, no correlations are found between backbone dihedral angles that are separated by more than one torsion angle. These short time, correlated motions are found both in equilibrium fluctuations and during the transit process between Ramachandran basins, e.g., from the beta to the alpha region. A residue's complete transit from one Ramachandran basin to another, however, occurs in a manner independent of its neighbors' conformational transitions. These properties appear to be intrinsic because they are robust across different force fields, solvent models, nonbonded interaction routines, and most amino acids.
We developed a series of statistical potentials to recognize the native protein from decoys, particularly when using only a reduced representation in which each side chain is treated as a single C(beta) atom. Beginning with a highly successful all-atom statistical potential, the Discrete Optimized Protein Energy function (DOPE), we considered the implications of including additional information in the all-atom statistical potential and subsequently reducing to the C(beta) representation. One of the potentials includes interaction energies conditional on backbone geometries. A second potential separates sequence local from sequence nonlocal interactions and introduces a novel reference state for the sequence local interactions. The resultant potentials perform better than the original DOPE statistical potential in decoy identification. Moreover, even upon passing to a reduced C(beta) representation, these statistical potentials outscore the original (all-atom) DOPE potential in identifying native states for sets of decoys. Interestingly, the backbone-dependent statistical potential is shown to retain nearly all of the information content of the all-atom representation in the C(beta) representation. In addition, these new statistical potentials are combined with existing potentials to model hydrogen bonding, torsion energies, and solvation energies to produce even better performing potentials. The ability of the C(beta) statistical potentials to accurately represent protein interactions bodes well for computational efficiency in protein folding calculations using reduced backbone representations, while the extensions to DOPE illustrate general principles for improving knowledge-based potentials.
Foraging animals must use decision-making strategies that dynamically adapt to the changing availability of rewards in the environment. A wide diversity of animals do this by distributing their choices in proportion to the rewards received from each option, a behavior known as Herrnstein’s operant matching law. Theoretical work suggests an elegant mechanistic explanation for this ubiquitous behavior, as operant matching follows automatically from simple synaptic plasticity rules acting within behaviorally relevant neural circuits. However, no past work has mapped operant matching onto plasticity mechanisms in the brain, leaving the biological relevance of the theory unclear. Here we discovered operant matching in Drosophila and showed that it requires synaptic plasticity that acts in the mushroom body and incorporates the expectation of reward. We began by developing a novel behavioral paradigm to measure choices from individual flies as they learn to associate odor cues with probabilistic rewards. We then built a model of the fly mushroom body to explain each fly’s sequential choice behavior using a family of biologically realistic synaptic plasticity rules. As predicted by past theoretical work, we found that synaptic plasticity rules could explain fly matching behavior by incorporating stimulus expectations, reward expectations, or both. However, by optogenetically bypassing the representation of reward expectation, we abolished matching behavior and showed that the plasticity rule must specifically incorporate reward expectations. Altogether, these results reveal the first synaptic-level mechanisms of operant matching and provide compelling evidence for the role of reward expectation signals in the fly brain.
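Operant matching, as described above, predicts that the fraction of choices an animal allocates to an option tracks the fraction of rewards it earns there. A minimal sketch of how a local income-based rule produces this behavior is below: two options are "baited" with probabilistic rewards that persist until collected, and the agent chooses in proportion to a leaky average of income from each option. This is an illustrative toy, not the paper's mushroom-body model; all parameter values (arming probabilities, learning rate, baseline) are made up.

```python
import random

def run_matching_agent(p_arm=(0.4, 0.1), trials=20000, tau=0.05, eps=0.02, seed=1):
    """Two-option foraging task with baited rewards: each option arms a
    reward with probability p_arm[i] per trial and holds it until collected.
    The agent keeps a leaky average of income earned from each option and
    chooses in proportion to those incomes (a local matching rule)."""
    rng = random.Random(seed)
    armed = [False, False]
    income = [0.1, 0.1]          # small optimistic initial incomes
    choices = [0, 0]
    rewards = [0, 0]
    for _ in range(trials):
        for i in (0, 1):                      # bait both options independently
            if rng.random() < p_arm[i]:
                armed[i] = True
        # choose option 0 with probability proportional to its income;
        # eps keeps a floor on sampling so neither option is abandoned
        p0 = (income[0] + eps) / (income[0] + income[1] + 2 * eps)
        c = 0 if rng.random() < p0 else 1
        r = 1 if armed[c] else 0              # collect the bait, if any
        armed[c] = False
        choices[c] += 1
        rewards[c] += r
        for i in (0, 1):                      # leaky per-option income update
            income[i] += tau * ((r if i == c else 0) - income[i])
    return choices, rewards
```

In steady state each income estimate approximates that option's per-trial reward rate, so choice fractions approximately equal reward fractions, which is the matching relation.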
The estimation of visual motion has long been studied as a paradigmatic neural computation, and multiple models have been advanced to explain behavioral and neural responses to motion signals. A broad class of models, originating with the Reichardt correlator model, proposes that animals estimate motion by computing a temporal cross-correlation of light intensities from two neighboring points in visual space. These models provide a good description of experimental data in specific contexts but cannot explain motion percepts in stimuli lacking pairwise correlations. Here, we develop a theoretical formalism that can accommodate diverse stimuli and behavioral goals. To achieve this, we treat motion estimation as a problem of Bayesian inference. Pairwise models emerge as one component of the generalized strategy for motion estimation. However, correlation functions beyond second order enable more accurate motion estimation. Prior expectations that are asymmetric with respect to bright and dark contrast use correlations of both even and odd orders, and we show that psychophysical experiments using visual stimuli with symmetric probability distributions for contrast cannot reveal whether the subject uses odd-order correlators for motion estimation. This result highlights a gap in previous experiments, which have largely relied on symmetric contrast distributions. Our theoretical treatment provides a natural interpretation of many visual motion percepts, indicates that motion estimation should be revisited using a broader class of stimuli, demonstrates how correlation-based motion estimation is related to stimulus statistics, and provides multiple experimentally testable predictions.
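The pairwise correlator class mentioned above can be sketched concretely. A minimal Hassenstein-Reichardt detector delays the signal from one spatial sampling point, multiplies it with the undelayed signal from the neighboring point, and subtracts the mirror-image term; its time-averaged output is signed by motion direction. The stimulus here is a drifting sinusoid, and all numeric parameters (spacing, delay, frequency) are illustrative choices, not values from the paper.

```python
import math

def reichardt_output(direction, n_steps=2000, dt=0.01, delay_steps=10):
    """Mean output of a two-point Reichardt correlator viewing a drifting
    sinusoidal luminance pattern. The two sampling points A and B sit a
    quarter wavelength apart; direction=+1 drifts the pattern from A to B."""
    omega = 2 * math.pi            # temporal frequency: one cycle per unit time
    phase = math.pi / 2            # quarter-wavelength spatial separation
    a, b = [], []
    total = 0.0
    for t in range(n_steps):
        time = t * dt
        a.append(math.sin(omega * time))                       # luminance at A
        b.append(math.sin(omega * time - direction * phase))   # luminance at B
        if t >= delay_steps:
            # mirror-antisymmetric cross-correlation: delayed A times B,
            # minus A times delayed B
            total += a[t - delay_steps] * b[t] - a[t] * b[t - delay_steps]
    return total / (n_steps - delay_steps)
```

For this stimulus the time-averaged output is sin(omega * delay) * sin(phase) up to sign, so reversing the drift direction flips the sign of the response, which is the defining property of this detector class.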
Theoretical neuroscientists often try to understand how the structure of a neural network relates to its function by focusing on structural features that would either follow from optimization or occur consistently across possible implementations. Both optimization theories and ensemble modeling approaches have repeatedly proven their worth, and it would simplify theory building considerably if predictions from both theory types could be derived and tested simultaneously. Here we show how tensor formalism from theoretical physics can be used to unify and solve many optimization and ensemble modeling approaches to predicting synaptic connectivity from neuronal responses. We specifically focus on analyzing the solution space of synaptic weights that allow a threshold-linear neural network to respond in a prescribed way to a limited number of input conditions. For optimization purposes, we compute the synaptic weight vector that minimizes an arbitrary quadratic loss function. For ensemble modeling, we identify synaptic weight features that occur consistently across all solutions bounded by an arbitrary quadratic function. We derive a common solution to this suite of nonlinear problems by showing how each of them reduces to an equivalent linear problem that can be solved analytically. Although identifying the equivalent linear problem is nontrivial, our tensor formalism provides an elegant geometrical perspective that allows us to solve the problem numerically. The final algorithm is applicable to a wide range of interesting neuroscience problems, and the associated geometric insights may carry over to other scientific problems that require constrained optimization.
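A toy linear analogue of the quadratic-loss problem above: with fewer input conditions than synapses, the constraint system A w = b is underdetermined, and the weight vector minimizing the norm ||w||^2 subject to those constraints is w = A^T (A A^T)^{-1} b. This is a simplified linear stand-in for the paper's threshold-linear analysis, not its tensor machinery; the two-constraint, three-synapse example values are made up.

```python
def min_norm_weights(A, b):
    """Minimum-norm weights w satisfying two linear response constraints
    A @ w = b, via the closed form w = A^T (A A^T)^{-1} b. The 2x2 Gram
    matrix inverse is written out by hand to keep the sketch stdlib-only."""
    # Gram matrix G = A A^T (2x2 because there are two input conditions)
    g00 = sum(x * x for x in A[0])
    g01 = sum(x * y for x, y in zip(A[0], A[1]))
    g11 = sum(y * y for y in A[1])
    det = g00 * g11 - g01 * g01
    # y = G^{-1} b via the closed-form 2x2 inverse
    y0 = (g11 * b[0] - g01 * b[1]) / det
    y1 = (-g01 * b[0] + g00 * b[1]) / det
    # w = A^T y lies in the row space of A, hence has minimum norm
    return [A[0][j] * y0 + A[1][j] * y1 for j in range(len(A[0]))]
```

Any other weight vector satisfying the same constraints differs from this one by a null-space component and therefore has strictly larger norm.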
Both vertebrates and invertebrates perceive illusory motion, known as "reverse-phi," in visual stimuli that contain sequential luminance increments and decrements. However, increment (ON) and decrement (OFF) signals are initially processed by separate visual neurons, and parallel elementary motion detectors downstream respond selectively to the motion of light or dark edges, often termed ON- and OFF-edges. It remains unknown how and where ON and OFF signals combine to generate reverse-phi motion signals. Here, we show that each of Drosophila's elementary motion detectors encodes motion by combining both ON and OFF signals. Their pattern of responses reflects combinations of increments and decrements that co-occur in natural motion, serving to decorrelate their outputs. These results suggest that the general principle of signal decorrelation drives the functional specialization of parallel motion detection channels, including their selectivity for moving light or dark edges.
Modern recording techniques now permit brain-wide sensorimotor circuits to be observed at single neuron resolution in small animals. Extracting theoretical understanding from these recordings requires principles that organize findings and guide future experiments. Here we review theoretical principles that shed light onto brain-wide sensorimotor processing. We begin with an analogy that conceptualizes principles as streetlamps that illuminate the empirical terrain, and we illustrate the analogy by showing how two familiar principles apply in new ways to brain-wide phenomena. We then focus the bulk of the review on describing three more principles that have wide utility for mapping brain-wide neural activity, making testable predictions from highly parameterized mechanistic models, and investigating the computational determinants of neuronal response patterns across the brain.
Goal-directed animal behaviors are typically composed of sequences of motor actions whose order and timing are critical for a successful outcome. Although numerous theoretical models for sequential action generation have been proposed, few have been supported by the identification of control neurons sufficient to elicit a sequence. Here, we identify a pair of descending neurons that coordinate a stereotyped sequence of engagement actions during Drosophila melanogaster male courtship behavior. These actions are initiated sequentially but persist cumulatively, a feature not explained by existing models of sequential behaviors. We find evidence consistent with a ramp-to-threshold mechanism, in which increasing neuronal activity elicits each action independently at successively higher activity thresholds.
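The ramp-to-threshold idea above can be sketched in a few lines: a rising activity signal triggers each action when it first crosses that action's threshold, and actions persist cumulatively once triggered. The trace and threshold values below are illustrative, not measurements from the paper.

```python
def ramp_to_threshold(activity_trace, thresholds):
    """Return, for each time step, a tuple marking which actions are active
    under a ramp-to-threshold scheme: action i starts when the activity
    first reaches thresholds[i] and then persists for the rest of the trace."""
    started = [False] * len(thresholds)
    timeline = []
    for level in activity_trace:
        for i, th in enumerate(thresholds):
            if level >= th:
                started[i] = True      # once triggered, never turned off
        timeline.append(tuple(started))
    return timeline
```

With a monotonically rising trace, actions switch on one at a time in threshold order and accumulate, reproducing the sequential-but-persistent pattern the abstract describes.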
In order to localize the neural circuits involved in generating behaviors, it is necessary to assign activity onto anatomical maps of the nervous system. Using brain registration across hundreds of larval zebrafish, we have built an expandable open-source atlas containing molecular labels and definitions of anatomical regions, the Z-Brain. Using this platform and immunohistochemical detection of phosphorylated extracellular signal–regulated kinase (ERK) as a readout of neural activity, we have developed a system to create and contextualize whole-brain maps of stimulus- and behavior-dependent neural activity. This mitogen-activated protein kinase (MAP)-mapping assay is technically simple, and data analysis is completely automated. Because MAP-mapping is performed on freely swimming fish, it is applicable to studies of nearly any stimulus or behavior. Here we demonstrate our high-throughput approach using pharmacological, visual and noxious stimuli, as well as hunting and feeding. The resultant maps outline hundreds of areas associated with behaviors.