Abstract
Deep neural networks trained to inpaint partially occluded images show a deep understanding of image composition and have even been shown to remove objects from images convincingly. In this work, we investigate how this implicit knowledge of image composition can be used to separate cells in densely populated microscopy images. We propose a measure for the independence of two image regions, given a fully self-supervised inpainting network, and separate objects by maximizing this independence. We evaluate our method on two cell segmentation datasets and show that cells can be separated in a completely unsupervised manner. Furthermore, combined with simple foreground detection, our method yields instance segmentations of similar quality to those of fully supervised methods.
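
To make the core idea concrete, the sketch below shows one way such a region-independence score could look. It assumes a pre-trained inpainting function `inpaint(image, mask)` that predicts the masked pixels from the visible context; this interface, the mean-squared-error scoring, and the symmetric prediction-gain formulation are hypothetical simplifications for illustration, not the paper's actual implementation.

```python
import numpy as np

def inpaint_error(inpaint, image, target_mask, context_mask):
    """Mean squared inpainting error on `target_mask` when only `context_mask` is visible."""
    visible = image * context_mask           # zero out everything outside the context
    pred = inpaint(visible, target_mask)     # network fills in the masked target region
    sq_err = (pred - image) ** 2
    return (sq_err * target_mask).sum() / target_mask.sum()

def independence(inpaint, image, region_a, region_b):
    """Score how little two disjoint binary regions tell the network about each other.

    If regions A and B cover the same cell, seeing one makes inpainting the
    other much easier, so the prediction gains below are large and the score
    is low. Separating objects then amounts to choosing regions that maximize
    this score.
    """
    background = 1 - region_a - region_b     # pixels outside both regions
    # gain in predicting A once B becomes visible
    gain_a = (inpaint_error(inpaint, image, region_a, background)
              - inpaint_error(inpaint, image, region_a, background + region_b))
    # symmetric gain in predicting B once A becomes visible
    gain_b = (inpaint_error(inpaint, image, region_b, background)
              - inpaint_error(inpaint, image, region_b, background + region_a))
    return -(gain_a + gain_b)                # small gains => more independent regions
```

In practice one would optimize soft region masks, for instance by gradient ascent on this score through a differentiable network, rather than enumerate binary partitions; the sketch only illustrates why inpainting quality can serve as an independence signal between putative cells.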