Romani Lab / Publications

43 Publications

Showing 31-40 of 43 results
02/01/15 | Theta sequences are essential for internally generated hippocampal firing fields.
Wang Y, Romani S, Lustig B, Leonardo A, Pastalkova E
Nature Neuroscience. 2015 Feb;18(2):282-8. doi: 10.1038/nn.3904

Sensory cue inputs and memory-related internal brain activities govern the firing of hippocampal neurons, but which specific firing patterns are induced by either of the two processes remains unclear. We found that sensory cues guided the firing of neurons in rats on a timescale of seconds and supported the formation of spatial firing fields. Independently of the sensory inputs, the memory-related network activity coordinated the firing of neurons not only on a second-long timescale, but also on a millisecond-long timescale, and was dependent on medial septum inputs. We propose a network mechanism that might coordinate this internally generated firing. Overall, we suggest that two independent mechanisms support the formation of spatial firing fields in hippocampus, but only the internally organized system supports short-timescale sequential firing and episodic memory.

01/15/15 | Effects of long-term representations on free recall of unrelated words.
Katkov M, Romani S, Tsodyks M
Learning & Memory. 2015 Jan 15;22(2):101-8. doi: 10.1101/lm.035238.114

Human memory stores vast amounts of information. Yet recalling this information is often challenging when specific cues are lacking. Here we consider an associative model of retrieval where each recalled item triggers the recall of the next item based on the similarity between their long-term neuronal representations. The model predicts that different items stored in memory have different probabilities of being recalled, depending on the size of their representation. Moreover, items with high recall probability tend to be recalled earlier and suppress other items. We performed an analysis of a large data set on free recall and found a highly specific pattern of statistical dependencies predicted by the model, in particular negative correlations between the number of words recalled and their average recall probability. Taken together, experimental and modeling results presented here reveal complex interactions between memory items during recall that severely constrain recall capacity.
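
The associative retrieval idea described in this abstract lends itself to a small simulation. Below is a minimal sketch, not the authors' code: items are random sparse binary "representations", recall hops from the current item to its most similar other item, the walk ends once it starts cycling, and the resulting recall counts are compared with representation size. All parameter values and the exact stopping rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_neurons, sparsity = 16, 2000, 0.05

# Random binary representations; an item's "size" is its number of active neurons.
reps = (rng.random((n_items, n_neurons)) < sparsity).astype(float)
similarity = reps @ reps.T              # pairwise overlaps of the representations
np.fill_diagonal(similarity, -np.inf)   # an item never transitions to itself

def free_recall(start):
    """Associative walk: jump to the most similar item other than the one we just
    came from; stop as soon as a transition repeats (the walk has entered a cycle)."""
    recalled, transitions = [start], set()
    prev, cur = None, start
    while True:
        ranked = np.argsort(similarity[cur])[::-1]
        nxt = ranked[0] if ranked[0] != prev else ranked[1]
        if (cur, nxt) in transitions:
            return recalled
        transitions.add((cur, nxt))
        if nxt not in recalled:
            recalled.append(nxt)
        prev, cur = cur, nxt

counts = np.zeros(n_items)
for start in range(n_items):
    for item in free_recall(start):
        counts[item] += 1

sizes = reps.sum(axis=1)
print("correlation(representation size, recall frequency):",
      round(np.corrcoef(sizes, counts)[0, 1], 2))
```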

01/01/15 | Short-term plasticity based network model of place cells dynamics.
Romani S, Tsodyks M
Hippocampus. 2015 Jan;25(1):94-105. doi: 10.1002/hipo.22355

Rodent hippocampus exhibits strikingly different regimes of population activity in different behavioral states. During locomotion, hippocampal activity oscillates at theta frequency (5-12 Hz) and cells fire at specific locations in the environment, the place fields. As the animal runs through a place field, spikes are emitted at progressively earlier phases of the theta cycles. During immobility, hippocampus exhibits sharp irregular bursts of activity, with occasional rapid orderly activation of place cells expressing a possible trajectory of the animal. The mechanisms underlying this rich repertoire of dynamics are still unclear. We developed a novel recurrent network model that accounts for the observed phenomena. We assume that the network stores a map of the environment in its recurrent connections, which are endowed with short-term synaptic depression. We show that the network dynamics exhibits two different regimes that are similar to the experimentally observed population activity states in the hippocampus. The operating regime can be solely controlled by external inputs. Our results suggest that short-term synaptic plasticity is a potential mechanism contributing to shape the population activity in hippocampus.
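
As a rough illustration of the ingredients named here, the sketch below wires a ring of threshold-linear rate units with a translation-invariant "map" of recurrent weights and a Tsodyks-Markram-style depression variable, and drives it either with a spatially tuned, moving input or with a weak uniform one. It is only the skeleton of such a model with assumed, untuned parameters, not the published network.

```python
import numpy as np

N, dt, T = 200, 1e-3, 4.0                  # neurons, time step (s), duration (s)
tau_r, tau_d, U = 0.01, 0.8, 0.3           # rate time constant, depression recovery, usage
pos = np.linspace(0, 2 * np.pi, N, endpoint=False)
diff = pos[:, None] - pos[None, :]
J = (1.8 * np.cos(diff) - 0.9) / N         # stored map: local excitation, global inhibition

def simulate(external_input):
    """Euler-integrate rates r and synaptic resources s under a given external drive."""
    r, s = np.zeros(N), np.ones(N)
    mean_rate = []
    for step in range(int(T / dt)):
        drive = J @ (s * r) + external_input(step * dt)
        r += dt * (-r + np.maximum(drive, 0.0)) / tau_r
        # resources recover toward 1 and are consumed in proportion to presynaptic rate
        s += dt * ((1.0 - s) / tau_d - U * s * r)
        mean_rate.append(r.mean())
    return np.array(mean_rate)

# "Locomotion": a spatially tuned bump of input that sweeps once around the ring.
tuned = simulate(lambda t: 2.0 * np.exp((np.cos(pos - 2 * np.pi * t / T) - 1.0) / 0.04))
# "Immobility": weak, spatially uniform drive.
uniform = simulate(lambda t: 0.4 * np.ones(N))

for label, trace in (("tuned input", tuned), ("uniform input", uniform)):
    print(f"{label}: mean population rate {trace.mean():.3f}, "
          f"fluctuation (std/mean) {trace.std() / trace.mean():.2f}")
```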

10/14/14 | Word length effect in free recall of randomly assembled word lists.
Katkov M, Romani S, Tsodyks M
Frontiers in Computational Neuroscience. 2014 Oct 14;8:129. doi: 10.3389/fncom.2014.00129

In serial recall experiments, human subjects are requested to retrieve a list of words in the same order as they were presented. In a classical study, participants were reported to recall more words from study lists composed of short words than from lists of long words (the word length effect). The word length effect was also observed in free recall experiments, where subjects can retrieve the words in any order. Here we analyzed a large dataset from free recall experiments of unrelated words, where short and long words were randomly mixed, and found a seemingly opposite effect: long words are recalled better than short ones. We show that our recently proposed mechanism of associative retrieval can explain both of these observations. Moreover, the direction of the effect depends solely on the way study lists are composed.

04/17/14 | Continuous attractor network model for conjunctive position-by-velocity tuning of grid cells.
Si B, Romani S, Tsodyks M
PLoS Computational Biology. 2014 Apr 17;10(4):e1003558. doi: 10.1371/journal.pcbi.1003558

The spatial responses of many of the cells recorded in layer II of rodent medial entorhinal cortex (MEC) show a triangular grid pattern, which appears to provide an accurate population code for animal spatial position. In layers III, V, and VI of the rat MEC, grid cells are also selective to head direction and are modulated by the speed of the animal. Several putative mechanisms for grid-like maps have been proposed, including attractor network dynamics, interactions with theta oscillations, or single-unit mechanisms such as firing rate adaptation. In this paper, we present a new attractor network model that accounts for the conjunctive position-by-velocity selectivity of grid cells. Our network model is able to perform robust path integration even when the recurrent connections are subject to random perturbations.
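
Conjunctive position-by-velocity selectivity is the ingredient that lets a continuous attractor path-integrate. The sketch below shows the generic one-dimensional version of that construction with assumed, uncalibrated parameters (it is not the published model): two copies of a ring of place-selective units are coupled through recurrent kernels offset in opposite directions, and a velocity signal biases the drive to one copy, which drags the activity bump around the ring.

```python
import numpy as np

N, dt, tau = 128, 1e-3, 0.02
pos = np.linspace(0, 2 * np.pi, N, endpoint=False)
wrap = lambda d: np.angle(np.exp(1j * d))            # wrap angle differences to (-pi, pi]

def kernel(offset):
    """Translation-invariant excitation centered `offset` ahead, plus global inhibition."""
    d = wrap(pos[:, None] - pos[None, :] - offset)
    return (8.0 * np.exp(-d ** 2 / (2 * 0.5 ** 2)) - 3.0) / N

J_plus, J_minus = kernel(+0.15), kernel(-0.15)       # each population "looks" slightly ahead
bump = 5.0 * np.exp(-wrap(pos - np.pi) ** 2 / (2 * 0.3 ** 2))
r_plus, r_minus = bump.copy(), bump.copy()           # start with a bump at position pi

def decode(r_p, r_m):
    """Population-vector estimate of the bump position on the ring."""
    r = r_p + r_m
    return np.angle(np.sum(r * np.exp(1j * pos)))

start = decode(r_plus, r_minus)
v = 0.5                                              # constant velocity-like input
for _ in range(2000):                                # 2 s of Euler integration
    rec = J_plus @ r_plus + J_minus @ r_minus        # shared recurrent field
    r_plus  += dt * (-r_plus  + np.maximum(rec + 0.5 + v, 0.0)) / tau
    r_minus += dt * (-r_minus + np.maximum(rec + 0.5 - v, 0.0)) / tau

print("decoded bump displacement after 2 s of constant velocity input:",
      round(wrap(decode(r_plus, r_minus) - start), 2))
```

Mapping the drift onto real running speed would require calibrating the connection offset and gains, which this toy does not attempt.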

10/01/13 | Scaling laws of associative memory retrieval.
Romani S, Pinkoviezky I, Rubin A, Tsodyks M
Neural Computation. 2013 Oct;25(10):2523-44. doi: 10.1162/NECO_a_00499

Most people have great difficulty in recalling unrelated items. For example, in free recall experiments, lists of more than a few randomly selected words cannot be accurately repeated. Here we introduce a phenomenological model of memory retrieval inspired by theories of neuronal population coding of information. The model predicts nontrivial scaling behaviors for the mean and standard deviation of the number of recalled words for lists of increasing length. Our results suggest that associative information retrieval is a dominating factor that limits the number of recalled items.
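
The scaling question can be illustrated with a toy version of such a retrieval process (a sketch under assumed parameters, not the published phenomenological model): items are random sparse patterns, recall always jumps to the most similar other item, the walk stops when it begins to cycle, and we track how the mean number of distinct items recalled grows with list length.

```python
import numpy as np

rng = np.random.default_rng(2)

def n_recalled(list_length, n_neurons=1000, sparsity=0.05):
    """Run one associative-walk recall over a random list and count distinct items."""
    reps = (rng.random((list_length, n_neurons)) < sparsity).astype(float)
    sim = reps @ reps.T
    np.fill_diagonal(sim, -np.inf)
    prev, cur, visited, transitions = None, 0, {0}, set()
    while True:
        ranked = np.argsort(sim[cur])[::-1]
        nxt = ranked[0] if ranked[0] != prev else ranked[1]
        if (cur, nxt) in transitions:        # the walk has entered a cycle
            return len(visited)
        transitions.add((cur, nxt))
        visited.add(nxt)
        prev, cur = cur, nxt

for L in (8, 16, 32, 64, 128):
    trials = [n_recalled(L) for _ in range(50)]
    print(f"list length {L:3d}: mean number of items recalled {np.mean(trials):5.1f}")
```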

03/01/11 | Intracellular dynamics of virtual place cells.
Romani S, Sejnowski TJ, Tsodyks M
Neural Computation. 2011 Mar;23(3):651-5. doi: 10.1162/NECO_a_00087

The pattern of spikes recorded from place cells in the rodent hippocampus is strongly modulated by both the spatial location in the environment and the theta rhythm. The phases of the spikes in the theta cycle advance during movement through the place field. Recently, intracellular recordings from hippocampal neurons (Harvey, Collman, Dombeck, & Tank, 2009) showed an increase in the amplitude of membrane potential oscillations inside the place field, which was interpreted as evidence that an intracellular mechanism caused phase precession. Here we show that an existing network model of the hippocampus (Tsodyks, Skaggs, Sejnowski, & McNaughton, 1996) can equally reproduce this and other aspects of the intracellular recordings, which suggests that new experiments are needed to distinguish the contributions of intracellular and network mechanisms to phase precession.
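
As a purely illustrative toy (not the model of either paper, and agnostic about whether the underlying depolarization is generated intracellularly or by the network), the sketch below shows the subthreshold signature at issue: a drive that ramps up inside the place field, multiplied by a theta-frequency modulation, yields both larger membrane-potential oscillations within the field and threshold crossings at progressively earlier theta phases.

```python
import numpy as np

dt, f_theta = 1e-3, 8.0                        # time step (s), theta frequency (Hz)
t = np.arange(0.0, 3.0, dt)                    # a 3 s traversal of the place field
ramp = np.clip((t - 0.5) / 2.0, 0.0, 1.0)      # depolarizing drive grows inside the field
phase = 2 * np.pi * f_theta * t
v = ramp * (1.0 + 0.5 * np.cos(phase))         # membrane potential (arbitrary units)

threshold = 0.8
crossings = np.flatnonzero((v[1:] >= threshold) & (v[:-1] < threshold)) + 1
spike_phases = np.mod(phase[crossings], 2 * np.pi)
print("theta phase at successive threshold crossings (rad):", np.round(spike_phases, 2))
# The oscillation amplitude (0.5 * ramp) grows inside the field, and while the ramp is
# rising the threshold crossings occur at progressively earlier theta phases.
```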

08/05/10 | Continuous attractors with morphed/correlated maps.
Romani S, Tsodyks M
PLoS Computational Biology. 2010 Aug 5;6(8):e1000869. doi: 10.1371/journal.pcbi.1000869

Continuous attractor networks are used to model the storage and representation of analog quantities, such as position of a visual stimulus. The storage of multiple continuous attractors in the same network has previously been studied in the context of self-position coding. Several uncorrelated maps of environments are stored in the synaptic connections, and a position in a given environment is represented by a localized pattern of neural activity in the corresponding map, driven by a spatially tuned input. Here we analyze networks storing a pair of correlated maps, or a morph sequence between two uncorrelated maps. We find a novel state in which the network activity is simultaneously localized in both maps. In this state, a fixed cue presented to the network does not determine uniquely the location of the bump, i.e. the response is unreliable, with neurons not always responding when their preferred input is present. When the tuned input varies smoothly in time, the neuronal responses become reliable and selective for the environment: the subset of neurons responsive to a moving input in one map changes almost completely in the other map. This form of remapping is a non-trivial transformation between the tuned input to the network and the resulting tuning curves of the neurons. The new state of the network could be related to the formation of direction selectivity in one-dimensional environments and hippocampal remapping. The applicability of the model is not confined to self-position representations; we show an instance of the network solving a simple delayed discrimination task.
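
The basic objects in this study are easy to write down. The sketch below (illustrative parameters, not the published analysis) assigns the same neurons place-field locations in two maps whose correlation is controlled by a partial permutation, builds recurrent weights as a sum of map-specific kernels, and measures how localized a given activity pattern is in each map's coordinates.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 400
map1 = np.linspace(0, 2 * np.pi, N, endpoint=False)          # place-field centers, map 1

def second_map(correlation):
    """Map 2 keeps a fraction `correlation` of cells at their map-1 location and
    randomly permutes the locations of the rest (correlation=0 -> uncorrelated maps)."""
    map2 = map1.copy()
    moved = rng.random(N) >= correlation
    map2[moved] = rng.permutation(map2[moved])
    return map2

def localization(rates, field_centers):
    """Resultant length of the activity profile in a map's coordinates
    (close to 1 = a single localized bump, close to 0 = activity spread over the map)."""
    return np.abs(np.sum(rates * np.exp(1j * field_centers))) / rates.sum()

def store_maps(maps, sigma=0.3):
    """Recurrent weights as a sum of translation-invariant kernels, one per stored map."""
    J = np.zeros((N, N))
    for m in maps:
        d = np.angle(np.exp(1j * (m[:, None] - m[None, :])))
        J += np.exp(-d ** 2 / (2 * sigma ** 2)) / N
    return J

bump = np.exp(np.cos(map1 - np.pi) / 0.1)                     # activity localized in map 1
for rho in (0.0, 0.5, 1.0):
    map2 = second_map(rho)
    J = store_maps([map1, map2])   # connectivity storing both maps (construction only, not simulated)
    print(f"map correlation {rho:.1f}: localization in map 1 = {localization(bump, map1):.2f}, "
          f"in map 2 = {localization(bump, map2):.2f}")
```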

08/01/08 | Optimizing one-shot learning with binary synapses.
Romani S, Amit DJ, Amit Y
Neural Computation. 2008 Aug;20(8):1928-50. doi: 10.1162/neco.2008.10-07-618

A network of excitatory synapses trained with a conservative version of Hebbian learning is used as a model for recognizing the familiarity of thousands of once-seen stimuli from those never seen before. Such networks were initially proposed for modeling memory retrieval (selective delay activity). We show that the same framework allows the incorporation of both familiarity recognition and memory retrieval, and estimate the network's capacity. In the case of binary neurons, we extend the analysis of Amit and Fusi (1994) to obtain capacity limits based on computations of signal-to-noise ratio of the field difference between selective and non-selective neurons of learned signals. We show that with fast learning (potentiation probability approximately 1), the most recently learned patterns can be retrieved in working memory (selective delay activity). A much higher number of once-seen learned patterns elicit a realistic familiarity signal in the presence of an external field. With potentiation probability much less than 1 (slow learning), memory retrieval disappears, whereas familiarity recognition capacity is maintained at a similarly high level. This analysis is corroborated in simulations. For analog neurons, where such analysis is more difficult, we simplify the capacity analysis by studying the excess number of potentiated synapses above the steady-state distribution. In this framework, we derive the optimal constraint between potentiation and depression probabilities that maximizes the capacity.
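
A minimal sketch of the kind of learning rule and readout described here, with an assumed update scheme and parameters (this is not the paper's analysis): synapses are binary, a presented sparse pattern potentiates synapses between co-active neurons with probability q+ and depresses synapses with exactly one active partner with probability q-, and familiarity is read out from the summed recurrent input ("field") to a pattern's active neurons.

```python
import numpy as np

rng = np.random.default_rng(4)
N, f = 1000, 0.05                 # neurons, coding sparsity
q_pot, q_dep = 1.0, 0.05          # potentiation / depression probabilities (fast learning)

def pattern():
    return rng.random(N) < f      # sparse binary pattern of active neurons

J = (rng.random((N, N)) < 0.1).astype(int)         # binary synapses, random initial state
np.fill_diagonal(J, 0)

stored = [pattern() for _ in range(200)]
for xi in stored:                                  # each pattern is presented exactly once
    both = np.outer(xi, xi)                        # pre- and postsynaptic neuron both active
    one = np.outer(xi, ~xi) | np.outer(~xi, xi)    # exactly one of the pair active
    J[both & (rng.random((N, N)) < q_pot)] = 1     # stochastic potentiation
    J[one & (rng.random((N, N)) < q_dep)] = 0      # stochastic depression
    np.fill_diagonal(J, 0)

def mean_field(xi):
    """Average recurrent input to the neurons that a pattern activates."""
    return (J @ xi)[xi].mean()

print("mean field, most recently seen pattern:", round(mean_field(stored[-1]), 1))
print("mean field, never-seen pattern        :", round(mean_field(pattern()), 1))
# The once-seen pattern leaves a potentiation trace among its active neurons, so its
# field exceeds that of a novel pattern; how long that trace survives further learning
# is the capacity question analyzed in the paper.
```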

01/02/08 | Universal memory mechanism for familiarity recognition and identification.
Yakovlev V, Amit DJ, Romani S, Hochstein S
The Journal of Neuroscience : The Official Journal of the Society for Neuroscience. 2008 Jan 2;28(1):239-48. doi: 10.1523/JNEUROSCI.4799-07.2008

Macaque monkeys were tested on a delayed-match-to-multiple-sample task, with either a limited set of well-trained images (in randomized sequence) or never-before-seen images. They performed much better with novel images. False positives were mostly limited to catch-trial image repetitions from the preceding trial. This result implies extremely effective one-shot learning, resembling Standing's finding that people detect familiarity for 10,000 once-seen pictures (with 80% accuracy) (Standing, 1973). Familiarity memory may differ essentially from identification, which embeds and generates contextual information. When encountering another person, we can say immediately whether his or her face is familiar. However, it may be difficult for us to identify the same person. To accompany the psychophysical findings, we present a generic neural network model reproducing these behaviors, based on the same conservative Hebbian synaptic plasticity that generates delay activity identification memory. Familiarity becomes the first step toward establishing identification. Adding an inter-trial reset mechanism limits false positives for previous-trial images. The model, unlike previous proposals, relates repetition recognition to enhanced neural activity, as recently observed experimentally in 92% of differential cells in prefrontal cortex, an area directly involved in familiarity recognition. There may be an essential functional difference between enhanced responses to novel versus familiar images: The maximal signal from temporal cortex is for novel stimuli, facilitating additional sensory processing of newly acquired stimuli. The maximal signal for familiar stimuli, arising in prefrontal cortex, facilitates the formation of selective delay activity, as well as additional consolidation of the memory of the image in an upstream cortical module.
