2 Janelia Publications

Showing 1-2 of 2 results
    06/29/22 | A geometric framework to predict structure from function in neural networks
    Biswas T, Fitzgerald JE
    Physical Review Research. 2022 Jun 29;4(2):023255. doi: 10.1103/PhysRevResearch.4.023255

    Neural computation in biological and artificial networks relies on nonlinear synaptic integration. The structural connectivity matrix of synaptic weights between neurons is a critical determinant of overall network function. However, quantitative links between neural network structure and function are complex and subtle. For example, many networks can give rise to similar functional responses, and the same network can function differently depending on context. Whether certain patterns of synaptic connectivity are required to generate specific network-level computations is largely unknown. Here we introduce a geometric framework for identifying synaptic connections required by steady-state responses in recurrent networks of rectified-linear neurons. Assuming that the number of specified response patterns does not exceed the number of input synapses, we analytically calculate all feedforward and recurrent connectivity matrices that can generate the specified responses from the network inputs. We then use this analytical characterization to rigorously analyze the solution space geometry and derive certainty conditions guaranteeing a non-zero synapse between neurons. Numerical simulations of feedforward and recurrent networks verify our analytical results. Our theoretical framework could be applied to neural activity data to make anatomical predictions that follow generally from the model architecture. It thus provides novel opportunities for discerning what model features are required to accurately relate neural network structure and function.
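The abstract's core construction can be sketched numerically. This is a hedged illustration, not the paper's code: if a rectified-linear neuron is active in every specified steady-state pattern, the constraint r = relu(Wr + Fx) becomes linear in the weights, and with no more patterns than input synapses the resulting system is underdetermined, so a whole space of connectivity matrices reproduces the responses (here `lstsq` picks one minimum-norm member of that space; all dimensions and values are illustrative assumptions).

```python
import numpy as np

# Sketch (assumptions, not the paper's code): a steady state of a
# recurrent rectified-linear network satisfies r = relu(W r + F x).
# If neuron i is active (r_i > 0) in every specified pattern, the
# constraint is linear in its weights:  w_i . r + f_i . x = r_i.
rng = np.random.default_rng(0)
n_neurons, n_inputs, n_patterns = 4, 6, 3   # patterns <= input synapses

X = rng.uniform(0.5, 1.5, size=(n_inputs, n_patterns))   # network inputs
R = rng.uniform(0.5, 1.5, size=(n_neurons, n_patterns))  # specified (active) responses

# Stack recurrent and feedforward predictors so that [W | F] @ [R; X] = R.
A = np.vstack([R, X])                        # (n_neurons + n_inputs, n_patterns)
WF = np.linalg.lstsq(A.T, R.T, rcond=None)[0].T  # one solution from the solution space
W, F = WF[:, :n_neurons], WF[:, n_neurons:]

# The specified responses are a fixed point of the ReLU dynamics.
assert np.allclose(np.maximum(W @ R + F @ X, 0.0), R, atol=1e-8)
```

Because the linear system has more unknowns than constraints, other `(W, F)` pairs also work; the paper's "certainty conditions" concern synapses that are non-zero across that entire solution space.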

    05/25/22 | Expectation-based learning rules underlie dynamic foraging in Drosophila
    Adithya E. Rajagopalan, Ran Darshan, James E. Fitzgerald, Glenn C. Turner
    bioRxiv. 2022 May 25. doi: 10.1101/2022.05.24.493252

    Foraging animals must use decision-making strategies that dynamically account for uncertainty in the world. To cope with this uncertainty, animals have developed strikingly convergent strategies that use information about multiple past choices and reward to learn representations of the current state of the world. However, the underlying learning rules that drive the required learning have remained unclear. Here, working in the relatively simple nervous system of Drosophila, we combine behavioral measurements, mathematical modeling, and neural circuit perturbations to show that dynamic foraging depends on a learning rule incorporating reward expectation. Using a novel olfactory dynamic foraging task, we characterize the behavioral strategies used by individual flies when faced with unpredictable rewards and show, for the first time, that they perform operant matching. We build on past theoretical work and demonstrate that this strategy requires the existence of a covariance-based learning rule in the mushroom body - a hub for learning in the fly. In particular, the behavioral consequences of optogenetic perturbation experiments suggest that this learning rule incorporates reward expectation. Our results identify a key element of the algorithm underlying dynamic foraging in flies and suggest a comprehensive mechanism that could be fundamental to these behaviors across species.
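The flavor of an expectation-based (covariance) learning rule can be illustrated with a minimal two-option foraging simulation. This is a hedged sketch under stated assumptions, not the paper's model: the chosen option's weight is updated by the reward minus a running reward expectation, so updates covary choice with relative reward (all parameter names and values here are illustrative).

```python
import numpy as np

# Sketch (assumptions, not the paper's model): two-option foraging where
# the chosen option's weight is updated by (r - rbar), the reward relative
# to a running expectation, i.e. an expectation-based covariance rule.
rng = np.random.default_rng(1)
p_reward = np.array([0.8, 0.2])   # per-option reward probabilities (static, for illustration)
w = np.zeros(2)                   # option weights (e.g. synaptic strengths)
rbar = 0.0                        # running estimate of expected reward
eta, beta, tau = 0.2, 3.0, 0.05   # learning rate, choice inverse temperature, rbar rate

choices = np.zeros(2)
for t in range(20000):
    p = np.exp(beta * w) / np.exp(beta * w).sum()  # softmax choice policy
    a = rng.choice(2, p=p)
    r = float(rng.random() < p_reward[a])
    w[a] += eta * (r - rbar)      # expectation-based update of the chosen option
    rbar += tau * (r - rbar)      # update the reward expectation
    choices[a] += 1

print(choices / choices.sum())    # preference shifts toward the richer option
```

With static reward probabilities this toy converges toward the richer option; demonstrating operant matching proper requires a baited (depleting-reward) task like the one the paper designs.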
