Stringer Lab

Current Research
We are constantly bombarded with sensory information, and our brains must quickly parse it to determine which sensory features are relevant for guiding our motor actions. Picking up a coffee mug or catching a ball requires complex visual processing to guide the movement. To determine how neurons work together to perform such tasks, we developed techniques to record the activity of 20,000+ neurons simultaneously.

One popular hypothesis in neuroscience was that neural activity is “simple” and low-dimensional: even a recording of 20,000 neurons could be summarized with just a few numbers at any one time. Many analytical tools and theories have been built on this assumption. However, in our large-scale recordings we found that neural responses to visual stimuli were high-dimensional, exploring many diverse activity patterns that could not be reduced to a few numbers. Now that we have access to this rich high-dimensional neural data, how do we extract structure and understanding from it?
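To make the low- versus high-dimensional distinction concrete, here is a minimal, illustrative sketch (not the lab's actual analysis, and all numbers are made up): one simulated population whose activity is a mix of only a few latent signals, and one whose variance is spread across many dimensions with a slowly decaying eigenspectrum. Principal component analysis shows that a handful of components summarizes the first population almost perfectly but captures only a fraction of the second.

```python
# Illustrative sketch: compare how much variance a few principal components
# capture in a simulated low-dimensional vs. high-dimensional population.
# The sizes, scales, and decay exponent here are hypothetical choices.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_timepoints = 1000, 2000

# Low-dimensional population: every neuron's activity is a mix of 5 latents.
latents = rng.standard_normal((5, n_timepoints))
low_dim = rng.standard_normal((n_neurons, 5)) @ latents

# High-dimensional population: variance spread over many dimensions,
# with slowly decaying amplitudes (a power-law-like eigenspectrum).
n_dims = 500
scales = np.arange(1, n_dims + 1) ** -0.5  # slow decay across dimensions
high_dim = (rng.standard_normal((n_neurons, n_dims)) * scales) @ \
    rng.standard_normal((n_dims, n_timepoints))

def var_explained_by_top_pcs(activity, k=5):
    """Fraction of total variance captured by the top-k principal components."""
    centered = activity - activity.mean(axis=1, keepdims=True)
    eigvals = np.linalg.svd(centered, compute_uv=False) ** 2
    return eigvals[:k].sum() / eigvals.sum()

print(var_explained_by_top_pcs(low_dim))   # near 1: a few numbers suffice
print(var_explained_by_top_pcs(high_dim))  # well below 1: many dimensions needed
```

In the low-dimensional case the top five components account for essentially all the variance, so "a few numbers at any one time" really would suffice; in the high-dimensional case no small set of components does, which is the situation our recordings revealed.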

Using deep neural networks and other machine learning techniques, my lab creates tools to visualize large-scale data and to explore high-dimensional structure. I’m particularly interested in using these tools to determine the computations that visual areas perform, such as object segmentation or object recognition, and how these computations are implemented. I’m also interested in the integration of complex behavioral and sensory information for decision-making.