25 Publications
Nature provides many examples of self- and co-assembling protein-based molecular machines, including icosahedral protein cages that serve as scaffolds, enzymes, and compartments for essential biochemical reactions and icosahedral virus capsids, which encapsidate and protect viral genomes and mediate entry into host cells. Inspired by these natural materials, we report the computational design and experimental characterization of co-assembling, two-component, 120-subunit icosahedral protein nanostructures with molecular weights (1.8 to 2.8 megadaltons) and dimensions (24 to 40 nanometers in diameter) comparable to those of small viral capsids. Electron microscopy, small-angle x-ray scattering, and x-ray crystallography show that 10 designs spanning three distinct icosahedral architectures form materials closely matching the design models. In vitro assembly of icosahedral complexes from independently purified components occurs rapidly, at rates comparable to those of viral capsids, and enables controlled packaging of molecular cargo through charge complementarity. The ability to design megadalton-scale materials with atomic-level accuracy and controllable assembly opens the door to a new generation of genetically programmable protein-based molecular machines.
The icosahedron is the largest of the Platonic solids, and icosahedral protein structures are widely used in biological systems for packaging and transport. There has been considerable interest in repurposing such structures for applications ranging from targeted delivery to multivalent immunogen presentation. The ability to design proteins that self-assemble into precisely specified, highly ordered icosahedral structures would open the door to a new generation of protein containers with properties custom-tailored to specific applications. Here we describe the computational design of a 25-nanometre icosahedral nanocage that self-assembles from trimeric protein building blocks. The designed protein was produced in Escherichia coli, and found by electron microscopy to assemble into a homogeneous population of icosahedral particles nearly identical to the design model. The particles are stable in 6.7 molar guanidine hydrochloride at up to 80 degrees Celsius, and undergo extremely abrupt, but reversible, disassembly between 2 molar and 2.25 molar guanidinium thiocyanate. The icosahedron is robust to genetic fusions: one or two copies of green fluorescent protein (GFP) can be fused to each of the 60 subunits to create highly fluorescent 'standard candles' for use in light microscopy, and a designed protein pentamer can be placed in the centre of each of the 20 pentameric faces to modulate the size of the entrance/exit channels of the cage. Such robust and customizable nanocages should have considerable utility in targeted drug delivery, vaccine design and synthetic biology.
Neural activity maintains representations that bridge past and future events, often over many seconds. Network models can produce persistent and ramping activity, but the positive feedback that is critical for these slow dynamics can cause sensitivity to perturbations. Here we use electrophysiology and optogenetic perturbations in the mouse premotor cortex to probe the robustness of persistent neural representations during motor planning. We show that preparatory activity is remarkably robust to large-scale unilateral silencing: detailed neural dynamics that drive specific future movements were quickly and selectively restored by the network. Selectivity did not recover after bilateral silencing of the premotor cortex. Perturbations to one hemisphere are thus corrected by information from the other hemisphere. Corpus callosum bisections demonstrated that premotor cortex hemispheres can maintain preparatory activity independently. Redundancy across selectively coupled modules, as we observed in the premotor cortex, is a hallmark of robust control systems. Network models incorporating these principles show robustness that is consistent with data.
Evaluation of confidence about one's knowledge is key to the brain's ability to monitor cognition. To investigate the neural mechanism of confidence assessment, we examined a biologically realistic spiking network model and found that it reproduced salient behavioral observations and single-neuron activity data from a monkey experiment designed to study confidence about a decision under uncertainty. Interestingly, the model predicts that changes of mind can occur in a mnemonic delay when confidence is low; the probability of changes of mind increases (decreases) with task difficulty in correct (error) trials. Furthermore, a so-called "hard-easy effect" observed in humans naturally emerges, i.e., behavior shows underconfidence (underestimation of correct rate) for easy or moderately difficult tasks and overconfidence (overestimation of correct rate) for very difficult tasks. Importantly, in the model, confidence is computed using a simple neural signal in individual trials, without explicit representation of probability functions. Therefore, even a concept of metacognition can be explained by sampling a stochastic neural activity pattern.
Behavioral strategies employed for chemotaxis have been described across phyla, but the sensorimotor basis of this phenomenon has seldom been studied in naturalistic contexts. Here, we examine how signals experienced during free olfactory behaviors are processed by first-order olfactory sensory neurons (OSNs) of the Drosophila larva. We find that OSNs can act as differentiators that transiently normalize stimulus intensity-a property potentially derived from a combination of integral feedback and feed-forward regulation of olfactory transduction. In olfactory virtual reality experiments, we report that high activity levels of the OSN suppress turning, whereas low activity levels facilitate turning. Using a generalized linear model, we explain how peripheral encoding of olfactory stimuli modulates the probability of switching from a run to a turn. Our work clarifies the link between computations carried out at the sensory periphery and action selection underlying navigation in odor gradients.
Mapping mammalian synaptic connectivity has long been an important goal of neuroscience because knowing how neurons and brain areas are connected underpins an understanding of brain function. Meeting this goal requires advanced techniques with single-synapse resolution and large-scale capacity, especially at multiple scales tethering the meso- and micro-scale connectome. Among several advanced LM-based connectome technologies, Array Tomography (AT) and mammalian GFP-Reconstitution Across Synaptic Partners (mGRASP) can provide relatively high-throughput mapping of synaptic connectivity at multiple scales. AT- and mGRASP-assisted circuit mapping (ATing and mGRASPing), combined with techniques such as retrograde viral tracing, brain clearing, and activity indicators, will help unlock the secrets of complex neural circuits. Here, we discuss these useful new tools for mapping brain circuits at multiple scales, some functional implications of spatial synaptic distribution, and future challenges and directions of these endeavors.
The organization of synaptic connectivity within a neuronal circuit is a prime determinant of circuit function. We performed a comprehensive fine-scale circuit mapping of hippocampal regions (CA3-CA1) using the newly developed synapse labeling method, mGRASP. This mapping revealed spatially nonuniform and clustered synaptic connectivity patterns. Furthermore, synaptic clustering was enhanced between groups of neurons that shared a similar developmental/migration time window, suggesting a mechanism for establishing the spatial structure of synaptic connectivity. Such connectivity patterns are thought to effectively engage active dendritic processing and storage mechanisms, thereby potentially enhancing neuronal feature selectivity.
Mapping mammalian synaptic connectivity has long been an important goal of neuroscientists since it is considered crucial for explaining human perception and behavior. Yet, despite enormous efforts, the overwhelming complexity of the neural circuitry and the lack of appropriate techniques to unravel it have limited the success of efforts to map connectivity. However, recent technological advances designed to overcome the limitations of conventional methods for connectivity mapping may bring about a turning point. Here, we address the promises and pitfalls of these new mapping technologies.
Our brains are capable of remarkably stable stimulus representations despite time-varying neural activity. For instance, during delay periods in working memory tasks, while stimuli are held in working memory, neurons in the prefrontal cortex, thought to support the memory representation, exhibit time-varying activity. Since neuronal activity encodes the stimulus, these time-varying dynamics appear paradoxical and incompatible with stable network stimulus representations. Indeed, this finding raises a fundamental question: can stable representations only be encoded with stable neural activity, or, as its corollary, is every change in activity a sign of a change in stimulus representation?
Although the diversity of cortical interneuron electrical properties is well recognized, the number of distinct electrical types (e-types) is still a matter of debate. Recently, descriptions of interneuron variability were standardized by multiple laboratories on the basis of a subjective classification scheme as set out by the Petilla convention (Petilla Interneuron Nomenclature Group, PING). Here, we present a quantitative, statistical analysis of a database of nearly five hundred neurons manually annotated according to the PING nomenclature. For each cell, 38 features were extracted from responses to suprathreshold current stimuli and statistically analyzed to examine whether cortical interneurons subdivide into e-types. We showed that the partitioning into different e-types is indeed the major component of data variability. The analysis suggests refining the PING e-type classification to be hierarchical, whereby most variability is first captured within a coarse subpartition, and then subsequently divided into finer subpartitions. The coarse partition matches the well-known partitioning of interneurons into fast-spiking and adapting cells. Finer subpartitions match the burst, continuous, and delayed subtypes. Additionally, our analysis enabled the ranking of features according to their ability to differentiate among e-types. We showed that our quantitative e-type assignment is more than 90% accurate and even catches several human errors.