4 Publications

    11/14/14 | Variance predicts salience in central sensory processing.
    Hermundstad AM, Briguglio JJ, Conte MM, Victor JD, Balasubramanian V, Tkačik G
    eLife. 2014;3. doi: 10.7554/eLife.03722

    Information processing in the sensory periphery is shaped by natural stimulus statistics. In the periphery, a transmission bottleneck constrains performance; thus efficient coding implies that natural signal components with a predictably wider range should be compressed. In a different regime, when sampling limitations constrain performance, efficient coding implies that more resources should be allocated to informative features that are more variable. We propose that this regime is relevant for sensory cortex when it extracts complex features from limited numbers of sensory samples. To test this prediction, we use central visual processing as a model: we show that visual sensitivity for local multi-point spatial correlations, described by dozens of independently measured parameters, can be quantitatively predicted from the structure of natural images. This suggests that efficient coding applies centrally, where it extends to higher-order sensory features and operates in a regime in which sensitivity increases with feature variability.
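
As a rough illustration of the sample-limited regime described above, the sketch below computes a handful of local multi-point correlation statistics over patches of an image and ranks them by their variability; under the paper's prediction, sensitivity should track that variability. Everything here is a synthetic stand-in (a random array in place of a natural image, and an arbitrary choice of statistics and patch size), not the authors' analysis pipeline.

```python
# A minimal sketch, assuming a grayscale image as a NumPy array binarized at
# its median; the statistics and patch size are illustrative choices only.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((256, 256))                             # stand-in for a natural image
binary = (image > np.median(image)).astype(int) * 2 - 1    # map pixels to {-1, +1}

def local_stats(patch):
    """A few local multi-point correlations within a 2x2 neighborhood."""
    a = patch[:-1, :-1]; b = patch[:-1, 1:]
    c = patch[1:, :-1];  d = patch[1:, 1:]
    return np.array([
        (a * b).mean(),          # 2-point, horizontal
        (a * c).mean(),          # 2-point, vertical
        (a * d).mean(),          # 2-point, diagonal
        (a * b * c).mean(),      # 3-point
        (a * b * c * d).mean(),  # 4-point
    ])

patch = 32
stats = np.array([
    local_stats(binary[i:i + patch, j:j + patch])
    for i in range(0, 256 - patch, patch)
    for j in range(0, 256 - patch, patch)
])

# Efficient coding in the sample-limited regime: sensitivity should increase
# with how variable each statistic is across natural scenes.
variability = stats.std(axis=0)
names = ["horizontal 2-point", "vertical 2-point", "diagonal 2-point",
         "3-point", "4-point"]
order = np.argsort(variability)[::-1]
print("predicted sensitivity ranking:", [names[i] for i in order])
```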

    05/15/14 | Structurally-constrained relationships between cognitive states in the human brain.
    Hermundstad AM, Brown KS, Bassett DS, Aminoff EM, Frithsen A, Johnson A, Tipper CM, Miller MB, Grafton ST, Carlson JM
    PLoS computational biology. 2014 May;10(5):e1003591. doi: 10.1371/journal.pcbi.1003591

    The anatomical connectivity of the human brain supports diverse patterns of correlated neural activity that are thought to underlie cognitive function. We examine the extent to which such patterns of correlated activity systematically vary across cognitive states, in a manner sensitive to the underlying structural brain architecture. Anatomical white matter connectivity is compared with functional correlations in neural activity measured via blood oxygen level dependent (BOLD) signals. Functional connectivity is separately measured at rest, during an attention task, and during a memory task. We assess these structural and functional measures within previously identified resting-state functional networks, denoted task-positive and task-negative networks, that have been independently shown to be strongly anticorrelated at rest but also involve regions of the brain that routinely increase and decrease in activity during task-driven processes. We find that the density of anatomical connections within and between task-positive and task-negative networks is differentially related to strong, task-dependent correlations in neural activity. The space mapped out by the observed structure-function relationships is used to define a quantitative measure of separation between resting, attention, and memory states. We find that the degree of separation between states is related to both general measures of behavioral performance and relative differences in task-specific measures of attention versus memory performance. These findings suggest that the observed separation between cognitive states reflects underlying organizational principles of human brain structure and function.
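
The sketch below is a schematic of this kind of analysis, not the study's pipeline: it builds synthetic BOLD time series for two labelled networks, summarizes functional connectivity within and between them for each of three states, and measures the separation between states as a distance in that summary space. Region counts, labels, and time-series lengths are arbitrary assumptions.

```python
# A minimal sketch; all data are synthetic stand-ins, and the summary
# statistics and distance measure are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_regions, n_timepoints = 20, 200
labels = np.array([0] * 10 + [1] * 10)   # 0 = task-positive, 1 = task-negative

def state_summary(bold):
    """Mean functional correlation within and between the two networks."""
    fc = np.corrcoef(bold)               # region-by-region correlation matrix
    within_pos = fc[np.ix_(labels == 0, labels == 0)]
    within_neg = fc[np.ix_(labels == 1, labels == 1)]
    between    = fc[np.ix_(labels == 0, labels == 1)]
    return np.array([within_pos.mean(), within_neg.mean(), between.mean()])

# One synthetic BOLD matrix (regions x time) per cognitive state.
states = {name: rng.standard_normal((n_regions, n_timepoints))
          for name in ("rest", "attention", "memory")}
summaries = {name: state_summary(bold) for name, bold in states.items()}

# "Separation" between cognitive states as distance in the summary space.
for a in summaries:
    for b in summaries:
        if a < b:
            d = np.linalg.norm(summaries[a] - summaries[b])
            print(f"{a} vs {b}: separation = {d:.3f}")
```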

    04/09/13 | Structural foundations of resting-state and task-based functional connectivity in the human brain.
    Hermundstad AM, Bassett DS, Brown KS, Aminoff EM, Clewett D, Freeman S, Frithsen A, Johnson A, Tipper CM, Miller MB, Grafton ST, Carlson JM
    Proceedings of the National Academy of Sciences of the United States of America. 2013 Apr 9;110(15):6169-74. doi: 10.1073/pnas.1219562110

    Magnetic resonance imaging enables the noninvasive mapping of both anatomical white matter connectivity and dynamic patterns of neural activity in the human brain. We examine the relationship between the structural properties of white matter streamlines (structural connectivity) and the functional properties of correlations in neural activity (functional connectivity) within 84 healthy human subjects both at rest and during the performance of attention- and memory-demanding tasks. We show that structural properties, including the length, number, and spatial location of white matter streamlines, are indicative of and can be inferred from the strength of resting-state and task-based functional correlations between brain regions. These results, which are both representative of the entire set of subjects and consistently observed within individual subjects, uncover robust links between structural and functional connectivity in the human brain.
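
As a loose illustration of inferring functional from structural connectivity, the sketch below fits a simple least-squares model that predicts a synthetic functional correlation from streamline count and length for each region pair. The features, the linear form of the model, and the data are illustrative assumptions, not the analysis used in the paper.

```python
# A minimal sketch with synthetic stand-ins for streamline counts, lengths,
# and functional correlations; the linear model is an assumption.
import numpy as np

rng = np.random.default_rng(2)
n_pairs = 500

streamline_count = rng.poisson(30, n_pairs).astype(float)
streamline_length = rng.uniform(10, 120, n_pairs)          # synthetic lengths (mm)
# Synthetic functional correlations that grow with count and shrink with length.
func_corr = np.tanh(0.02 * streamline_count - 0.005 * streamline_length
                    + 0.1 * rng.standard_normal(n_pairs))

# Infer functional connectivity from structural features with least squares.
X = np.column_stack([np.ones(n_pairs),
                     np.log1p(streamline_count),
                     streamline_length])
coef, *_ = np.linalg.lstsq(X, func_corr, rcond=None)
predicted = X @ coef
r = np.corrcoef(predicted, func_corr)[0, 1]
print(f"structure -> function prediction, r = {r:.2f}")
```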

    06/30/11 | Learning, memory, and the role of neural network architecture.
    Hermundstad AM, Brown KS, Bassett DS, Carlson JM
    PLoS computational biology. 2011 Jun;7(6):e1002063. doi: 10.1371/journal.pcbi.1002063

    The performance of information processing systems, from artificial neural networks to natural neuronal ensembles, depends heavily on the underlying system architecture. In this study, we compare the performance of parallel and layered network architectures during sequential tasks that require both acquisition and retention of information, thereby identifying tradeoffs between learning and memory processes. During the task of supervised, sequential function approximation, networks produce and adapt representations of external information. Performance is evaluated by statistically analyzing the error in these representations while varying the initial network state, the structure of the external information, and the time given to learn the information. We link performance to complexity in network architecture by characterizing local error landscape curvature. We find that variations in error landscape structure give rise to tradeoffs in performance; these include the ability of the network to maximize accuracy versus minimize inaccuracy and produce specific versus generalizable representations of information. Parallel networks generate smooth error landscapes with deep, narrow minima, enabling them to find highly specific representations given sufficient time. While accurate, however, these representations are difficult to generalize. In contrast, layered networks generate rough error landscapes with a variety of local minima, allowing them to quickly find coarse representations. Although less accurate, these representations are easily adaptable. The presence of measurable performance tradeoffs in both layered and parallel networks has implications for understanding the behavior of a wide variety of natural and artificial learning systems.
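
To make the error-landscape idea concrete, the sketch below estimates local loss-surface curvature (second differences of the loss along random parameter directions) for a small one-hidden-layer "parallel" network and a deeper, narrower "layered" network on a toy function-approximation task. The architectures, the curvature estimate, and the task are simplifications assumed for illustration, not the paper's analysis.

```python
# A minimal sketch: tiny tanh networks and finite-difference curvature as
# stand-ins for the error-landscape analysis described above.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(-1, 1, 64)[:, None]
y = np.sin(3 * x)                       # target function to approximate

def mlp_loss(params, sizes):
    """Mean-squared error of a small tanh network with the given layer sizes."""
    h, idx = x, 0
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        W = params[idx:idx + n_in * n_out].reshape(n_in, n_out)
        idx += n_in * n_out
        b = params[idx:idx + n_out]
        idx += n_out
        h = np.tanh(h @ W + b)
    return np.mean((h - y) ** 2)

def curvature(sizes, eps=1e-3, n_dirs=20):
    """Average second difference of the loss along random parameter directions."""
    n_params = sum(a * b + b for a, b in zip(sizes[:-1], sizes[1:]))
    theta = 0.5 * rng.standard_normal(n_params)   # a random point on the landscape
    curvs = []
    for _ in range(n_dirs):
        d = rng.standard_normal(n_params)
        d /= np.linalg.norm(d)
        curvs.append((mlp_loss(theta + eps * d, sizes)
                      - 2 * mlp_loss(theta, sizes)
                      + mlp_loss(theta - eps * d, sizes)) / eps ** 2)
    return np.mean(curvs)

# "Parallel" (one wide hidden layer) vs "layered" (several narrow layers).
print("parallel curvature:", curvature([1, 16, 1]))
print("layered  curvature:", curvature([1, 4, 4, 4, 1]))
```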
