We investigate complex, high-dimensional cortical computations using large-scale neural data.
We are constantly bombarded with sensory information, and our brains must quickly parse it to determine which sensory features are relevant for guiding our motor actions. Picking up a coffee mug or catching a ball requires complex visual processing to guide the movement. To determine how neurons work together to perform such tasks, we analyze recordings of 20,000+ neurons.
We develop techniques to analyze large-scale neural data, and from these analyses we generate hypotheses about how neural circuits compute behaviorally relevant visual features.
Some examples of ongoing neuroscience projects in the lab include:
- Creating a neural atlas of behavioral representations across mice and brain areas [see facemap]
- Creating data-inspired methods for structure discovery in large-scale recordings [see rastermap]
- Determining the goals of different visual areas by comparing neural activity to deep neural networks trained on various visual tasks
- Fitting biologically-plausible deep network models to visual cortical neural activity
We also work on tools to process large-scale imaging data:
- Suite2p is a processing pipeline for neuronal imaging data, used primarily with calcium imaging recordings
- Cellpose is a general anatomical segmentation algorithm for cellular data
We are hiring graduate students to work on these projects; please see the ad here.