40 Publications
Mean-field theory is extended to recurrent networks of spiking neurons endowed with short-term depression (STD) of synaptic transmission. The extension involves the use of the distribution of interspike intervals of an integrate-and-fire neuron receiving a Gaussian input current with a given mean and variance. This, in turn, is used to obtain an accurate estimate of the resulting postsynaptic current in the presence of STD. The stationary states of the network are obtained by requiring self-consistency between the currents: those driving the emission processes and those generated by the emitted spikes. The model network stores a randomly composed set of external stimuli in the distribution of two-state efficacies of excitatory-to-excitatory synapses. The resulting synaptic structure allows the network to exhibit selective persistent activity for each stimulus in the set. The theory predicts the onset of selective persistent, or working memory (WM), activity upon varying the constitutive parameters (e.g. the ratio of potentiated to depressed long-term efficacy, parameters associated with STD), and provides the average emission rates in the various steady states. Theoretical estimates are in remarkably good agreement with data "recorded" in computer simulations of the microscopic model.
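The self-consistency idea above can be illustrated with a minimal rate-based sketch (not the paper's integrate-and-fire formulation): the recurrent drive is scaled by the steady-state depression factor of a Tsodyks-Markram synapse, and the network rate is the fixed point of a damped iteration. The transfer function and all parameter values are illustrative placeholders.

```python
import numpy as np

def std_factor(rate, U=0.5, tau_rec=0.8):
    # Steady-state fraction of available resources of a Tsodyks-Markram
    # depressing synapse driven at `rate` (Hz); U, tau_rec are illustrative.
    return 1.0 / (1.0 + U * rate * tau_rec)

def transfer(current):
    # Sigmoidal f-I curve standing in for the LIF transfer function
    # used in the paper (placeholder gain/threshold, 50 Hz ceiling).
    return 50.0 / (1.0 + np.exp(-0.4 * (current - 5.0)))

def find_fixed_point(J=2.0, I_ext=0.0, r=10.0, n_iter=500):
    # Self-consistency: the rate that drives the (depressed) synapses
    # must equal the rate the resulting postsynaptic current produces.
    for _ in range(n_iter):
        r = 0.9 * r + 0.1 * transfer(J * std_factor(r) * r + I_ext)
    return r
```

Because STD caps the total recurrent drive (here at J/(U*tau_rec)), the iteration converges to a finite persistent rate rather than running away.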
Hippocampal area CA3 is thought to play a central role in memory formation and retrieval. Although various network mechanisms have been hypothesized to mediate these computations, direct evidence is lacking. Using intracellular membrane potential recordings from CA3 neurons and optogenetic manipulations in behaving mice, we found that place field activity is produced by a symmetric form of Behavioral Timescale Synaptic Plasticity (BTSP) at recurrent synaptic connections among CA3 principal neurons but not at synapses from the dentate gyrus (DG). Additional manipulations revealed that excitatory input from the entorhinal cortex (EC), but not the DG, was required to update place cell activity based on the animal's movement. These data were captured by a computational model that used BTSP and an external updating input to produce attractor dynamics under online learning conditions. Additional theoretical results demonstrate the enhanced memory storage capacity of such networks, particularly in the face of correlated input patterns. This evidence sheds light on the cellular and circuit mechanisms of learning and memory formation in the hippocampus.
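A symmetric plasticity rule of this kind can be caricatured in a few lines. The sketch below assumes a symmetric exponential eligibility kernel centered on a dendritic plateau (the kernel shape, time constant, and all parameters are illustrative stand-ins, not the paper's fitted rule): inputs whose place fields the animal traversed shortly before or after the plateau are potentiated alike, so the weight change peaks at the plateau location.

```python
import numpy as np

# One lap at constant speed on a linear track.
dt, T = 0.01, 10.0                       # time step and lap duration (s)
times = np.arange(0.0, T, dt)
pos = times / T                          # normalized position in [0, 1)

# Presynaptic place cells with Gaussian spatial tuning tiling the track.
n_inputs, width = 50, 0.05
centers = np.linspace(0.0, 1.0, n_inputs, endpoint=False)
rates = np.exp(-((pos[None, :] - centers[:, None]) ** 2) / (2 * width**2))

def btsp_dw(plateau_time, tau=1.5, lr=0.1):
    # Symmetric eligibility kernel: inputs active within ~tau seconds
    # before OR after the plateau are potentiated alike (assumed form).
    kernel = np.exp(-np.abs(times - plateau_time) / tau)
    return lr * (rates * kernel[None, :]).sum(axis=1) * dt

# A plateau halfway through the lap potentiates inputs whose fields
# were visited around that moment, creating a new field there.
dw = btsp_dw(plateau_time=5.0)
```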
The dilemma that neurotheorists face is that (1) detailed biophysical models that can be constrained by direct measurements, while being of great importance, offer no immediate insights into cognitive processes in the brain, and (2) high-level abstract cognitive models, on the other hand, while relevant for understanding behavior, are largely detached from neuronal processes and typically have many free, experimentally unconstrained parameters that have to be tuned to a particular data set and, hence, cannot be readily generalized to other experimental paradigms. In this contribution, we propose a set of "first principles" for neurally inspired cognitive modeling of memory retrieval that has no biologically unconstrained parameters and can be analyzed mathematically both at neuronal and cognitive levels. We apply this framework to the classical cognitive paradigm of free recall. We show that the resulting model accounts well for puzzling behavioral data on human participants and makes predictions that could potentially be tested with neurophysiological recording techniques.
The brain plans and executes volitional movements. The underlying patterns of neural population activity have been explored in the context of movements of the eyes, limbs, tongue, and head in nonhuman primates and rodents. How do networks of neurons produce the slow neural dynamics that prepare specific movements and the fast dynamics that ultimately initiate these movements? Recent work exploits rapid and calibrated perturbations of neural activity to test specific dynamical systems models that are capable of producing the observed neural activity. These joint experimental and computational studies show that cortical dynamics during motor planning reflect fixed points of neural activity (attractors). Subcortical control signals reshape and move attractors over multiple timescales, causing commitment to specific actions and rapid transitions to movement execution. Experiments in rodents are beginning to reveal how these algorithms are implemented at the level of brain-wide neural circuits. Expected final online publication date for Volume 45 is July 2022; see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
Human memory can store a large amount of information. Nevertheless, recall is often a challenging task. In a classical free recall paradigm, where participants are asked to repeat a briefly presented list of words, people make mistakes for lists as short as 5 words. We present a model for memory retrieval based on a Hopfield neural network, where transitions between items are determined by similarities in their long-term memory representations. Mean-field analysis of the model reveals stable states of the network corresponding to (1) single memory representations and (2) intersections between memory representations. We show that oscillating feedback inhibition in the presence of noise induces transitions between these states, triggering the retrieval of different memories. The network dynamics qualitatively predicts the distribution of time intervals required to recall new memory items observed in experiments. It shows that items represented by larger numbers of neurons are statistically easier to recall, and it reveals possible bottlenecks in our ability to retrieve memories. Overall, we propose a neural network model of information retrieval that is broadly compatible with experimental observations and consistent with our recent graphical model (Romani et al., 2013).
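A toy version of this setup can be written down directly: sparse binary patterns stored in clipped Hebbian synapses, synchronous dynamics with global feedback inhibition, and an inhibition level that oscillates in time. All sizes and strengths below are illustrative choices, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, f = 400, 5, 0.1            # neurons, stored items, coding sparsity
# Sparse binary memory representations; overlaps occur by chance.
patterns = (rng.random((P, N)) < f).astype(float)
# Clipped Hebbian matrix: a synapse is potentiated (1) iff its two
# neurons are co-active in at least one stored item (two-state synapses).
W = np.clip(patterns.T @ patterns, 0.0, 1.0)
np.fill_diagonal(W, 0.0)

def step(state, inhibition, noise_std=0.0):
    # Synchronous update with global feedback inhibition proportional
    # to the total activity, plus optional Gaussian noise.
    drive = W @ state - inhibition * state.sum()
    if noise_std > 0:
        drive = drive + rng.normal(0.0, noise_std, N)
    return (drive > 0).astype(float)

def overlaps(state):
    return patterns @ state / patterns.sum(axis=1)

# With inhibition fixed at an intermediate level, retrieval of item 0
# is a stable state of the network.
state = patterns[0].copy()
for _ in range(10):
    state = step(state, inhibition=0.3)
stable = overlaps(state)

# Oscillating inhibition plus noise lets activity leak through the
# intersections between representations and can trigger a jump to a
# different item (illustrative parameters; jumps may be infrequent).
for k in range(200):
    state = step(state, 0.2 + 0.18 * np.sin(2 * np.pi * k / 50), noise_std=1.0)
```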
Diverse sensory systems, from audition to thermosensation, feature a separation of inputs into ON (increments) and OFF (decrements) signals. In the Drosophila visual system, separate ON and OFF pathways compute the direction of motion, yet anatomical and functional studies have identified some crosstalk between these channels. We used this well-studied circuit to ask whether the motion computation depends on ON-OFF pathway crosstalk. Using whole-cell electrophysiology, we recorded visual responses of T4 (ON) and T5 (OFF) cells, mapped their composite ON-OFF receptive fields, and found that they share a similar spatiotemporal structure. We fit a biophysical model to these receptive fields that accurately predicts directionally selective T4 and T5 responses to both ON and OFF moving stimuli. This model also provides a detailed mechanistic explanation for the directional preference inversion in response to the prominent reverse-phi illusion. Finally, we used the steering responses of tethered flying flies to validate the model's predicted effects of varying stimulus parameters on the behavioral turning inversion.
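The reverse-phi inversion mentioned above is a signature behavior of correlation-type motion detectors in general, which a classic Hassenstein-Reichardt-style correlator (used here as a simple stand-in, not the paper's fitted biophysical model) reproduces in a few lines: each arm multiplies a delayed copy of one input with its neighbor, and opponent subtraction yields a signed direction estimate that inverts when the moving pattern's contrast flips.

```python
import numpy as np

def correlator_response(stimulus, delay=5):
    # Hassenstein-Reichardt-style detector: delay-and-multiply in each
    # arm, then opponent subtraction; positive = preferred direction.
    a, b = stimulus[:, 0], stimulus[:, 1]
    a_d, b_d = np.roll(a, delay), np.roll(b, delay)
    a_d[:delay] = 0.0
    b_d[:delay] = 0.0
    return float(np.mean(a_d * b - a * b_d))

T = 1000
t = np.arange(T)
phase = 2 * np.pi * t / 60.0
shift = np.pi / 3                 # spatial phase offset between the inputs
right = np.stack([np.sin(phase), np.sin(phase - shift)], axis=1)
left = np.stack([np.sin(phase), np.sin(phase + shift)], axis=1)
# Reverse-phi: the moving pattern's contrast flips every time step; with
# an odd delay, the delayed-line correlation (and response) changes sign.
rphi = right * np.where(t[:, None] % 2 == 0, 1.0, -1.0)

r_right, r_left, r_rphi = map(correlator_response, (right, left, rphi))
```

With these stimuli the rightward grating gives a positive response, the leftward grating a negative one, and the reverse-phi stimulus exactly inverts the rightward response.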
A network of excitatory synapses trained with a conservative version of Hebbian learning is used as a model for recognizing the familiarity of thousands of once-seen stimuli and distinguishing them from stimuli never seen before. Such networks were initially proposed for modeling memory retrieval (selective delay activity). We show that the same framework allows the incorporation of both familiarity recognition and memory retrieval, and estimate the network's capacity. In the case of binary neurons, we extend the analysis of Amit and Fusi (1994) to obtain capacity limits based on computations of the signal-to-noise ratio of the field difference between selective and non-selective neurons for learned signals. We show that with fast learning (potentiation probability approximately 1), the most recently learned patterns can be retrieved in working memory (selective delay activity). A much larger number of once-seen learned patterns elicits a realistic familiarity signal in the presence of an external field. With potentiation probability much less than 1 (slow learning), memory retrieval disappears, whereas familiarity recognition capacity is maintained at a similarly high level. This analysis is corroborated by simulations. For analog neurons, where such analysis is more difficult, we simplify the capacity analysis by studying the excess number of potentiated synapses above the steady-state distribution. In this framework, we derive the optimal constraint between potentiation and depression probabilities that maximizes the capacity.
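The palimpsest behavior of such binary synapses can be sketched with a toy simulation (a loose caricature of the Amit-Fusi setting, with made-up sizes and probabilities): stimuli are seen once in sequence, co-active pairs are potentiated with probability q_pot, active/inactive pairs are depressed with a small probability q_dep, and the mean recurrent input among a stimulus's active neurons serves as a crude familiarity signal that decays with the stimulus's age.

```python
import numpy as np

rng = np.random.default_rng(1)
N, f, P = 500, 0.05, 100
q_pot, q_dep = 1.0, 0.05          # fast potentiation, slow depression
patterns = rng.random((P, N)) < f

# Binary (two-state) synapses, randomly initialized.
W = (rng.random((N, N)) < 0.1).astype(np.int8)

# One-shot sequential learning: each stimulus is seen once, and newer
# stimuli gradually overwrite the traces of older ones (palimpsest).
for xi in patterns:
    pot = np.outer(xi, xi) & (rng.random((N, N)) < q_pot)
    dep = np.outer(xi, ~xi) & (rng.random((N, N)) < q_dep)
    W[pot] = 1
    W[dep & ~pot] = 0

def familiarity(xi):
    # Mean recurrent input among the neurons the stimulus activates:
    # a crude stand-in for the field-based familiarity signal.
    return float(W[np.ix_(xi, xi)].mean())

novel = rng.random((5, N)) < f
baseline = float(np.mean([familiarity(x) for x in novel]))
signal_recent = familiarity(patterns[-1]) - baseline
signal_old = familiarity(patterns[0]) - baseline
```

Even the oldest once-seen stimulus retains an above-baseline signal here, while the most recent one stands out most, illustrating the recency gradient that the signal-to-noise analysis quantifies.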
A large variability in performance is observed when participants recall briefly presented lists of words. The sources of this variability are not known. Our analysis of a large free recall data set revealed a small fraction of participants who reached extremely high performance, including many trials in which complete lists were recalled. Moreover, some of them developed consistent input-position-dependent recall strategies, in particular recalling words consecutively ("chaining") or in groups of consecutively presented words ("chunking"). The time course of acquisition and the particular choice of positional grouping varied among participants. Our results show that acquiring positional strategies plays a crucial role in improving recall performance.
Ring attractors are a class of recurrent networks hypothesized to underlie the representation of heading direction. Such network structures, schematized as a ring of neurons whose connectivity depends on their heading preferences, can sustain a bump-like activity pattern whose location can be updated by continuous shifts along either turn direction. We recently reported that a population of fly neurons represents the animal's heading via bump-like activity dynamics. We combined two-photon calcium imaging in head-fixed flying flies with optogenetics to overwrite the existing population representation with an artificial one, which was then maintained by the circuit with naturalistic dynamics. A network with local excitation and global inhibition enforces this unique and persistent heading representation. Ring attractor networks have long been invoked in theoretical work; our study provides physiological evidence of their existence and functional architecture.
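The "local excitation plus global inhibition" architecture can be sketched as a standard rate-model ring network (a generic textbook construction with illustrative parameters, not the circuit model fit to the fly data): cosine-shaped excitation rides on uniform inhibition, and a bump of activity seeded at one heading persists there.

```python
import numpy as np

N = 64
theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
# Connectivity: local (cosine-profile) excitation on top of uniform
# global inhibition; all strengths are illustrative.
J0, J1, I_ext = -6.0, 12.0, 2.0
J = J0 + J1 * np.cos(theta[:, None] - theta[None, :])

def simulate(r, steps=500, dt=0.05, tau=1.0):
    # Standard rate dynamics: tau * dr/dt = -r + sigmoid(input).
    for _ in range(steps):
        h = J @ r / N + I_ext
        r = r + (dt / tau) * (-r + 1.0 / (1.0 + np.exp(-h)))
    return r

# Seed activity around heading ~1 rad; the network settles into a
# persistent bump near that heading.
bump = simulate(0.5 + 0.4 * np.cos(theta - 1.0))
peak = float(theta[np.argmax(bump)])
```

Because the connectivity depends only on heading differences, the bump is (approximately) neutrally stable in position, which is what lets an optogenetically written-in representation persist wherever it is placed.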