Stringer Lab

Current Research

We are constantly bombarded with sensory information, and our brains must quickly parse it to extract the features relevant for deciding our motor actions. Picking up a coffee mug or catching a ball requires complex visual processing to guide the motor action. To determine how neurons work together to perform such tasks, we analyze recordings of 20,000+ neurons.

We develop techniques to analyze large-scale neural data and, from these analyses, generate hypotheses about how neural circuits compute behaviorally relevant visual features.

Some examples of ongoing neuroscience projects in the lab include:

  • Creating a neural atlas of behavioral representations across mice and brain areas [see facemap]
  • Creating data-inspired methods for structure discovery in large-scale recordings [see rastermap; a brief sketch follows this list]
  • Determining the goals of different visual areas by comparing neural activity to deep neural networks trained on various visual tasks
  • Fitting biologically-plausible deep network models to visual cortical neural activity
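
As a rough illustration of the kind of structure-discovery analysis rastermap supports, the sketch below sorts a synthetic population recording so that neurons with correlated activity end up next to each other. This assumes the rastermap Python package is installed and follows its documented Rastermap.fit / isort interface; argument names vary between versions.

    import numpy as np
    from rastermap import Rastermap

    # Synthetic stand-in for a large-scale recording: 500 neurons x 3000 timepoints.
    spks = np.maximum(0.0, np.random.randn(500, 3000)).astype("float32")

    # Fit the one-dimensional embedding; spks is neurons x time.
    model = Rastermap().fit(spks)

    # isort orders neurons so that nearby rows have similar activity patterns;
    # plotting spks[model.isort] as a raster reveals structure in the population.
    spks_sorted = spks[model.isort]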

We also work on tools to process large-scale imaging data (a brief usage sketch follows this list):

  • Suite2p is a neuronal imaging processing pipeline primarily used for calcium imaging data
  • Cellpose is a general anatomical segmentation algorithm for cellular data
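
As a rough sketch of how these tools are typically invoked from Python: the example below runs the Suite2p pipeline on a folder of TIFFs and segments a single 2D image with Cellpose. It assumes both packages are installed; the data path is hypothetical, and the exact keyword arguments depend on the installed versions.

    # Suite2p: run the calcium-imaging pipeline on a folder of TIFFs.
    import suite2p
    ops = suite2p.default_ops()                       # default pipeline settings
    db = {"data_path": ["/path/to/tiff/folder"]}      # hypothetical data location
    output_ops = suite2p.run_s2p(ops=ops, db=db)      # registration, detection, extraction

    # Cellpose: segment cells in a 2D image with a pretrained model.
    import numpy as np
    from cellpose import models
    img = np.random.rand(512, 512)                    # stand-in for a real image
    model = models.Cellpose(model_type="cyto")
    masks, flows, styles, diams = model.eval(img, diameter=None, channels=[0, 0])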

We are hiring students and postdocs to work on these projects; please see the ad here.