2 Janelia Publications
As observed in human language learning and song learning in birds, the fruit fly Drosophila melanogaster changes its auditory behaviors according to prior sound experience. Female flies that have heard male courtship songs of the same species are less responsive to courtship songs of different species. This phenomenon, known as song preference learning in flies, requires GABAergic input to pC1 neurons in the central brain; these neurons play a key role in mating behavior by integrating multimodal sensory and internal information. The neural circuit basis of this GABAergic input, however, has not yet been identified. Here, we find that pCd-2 neurons, totaling four cells per hemibrain and expressing the sex-determination gene doublesex, provide the GABAergic input to pC1 neurons for song preference learning. First, RNAi-mediated knockdown of GABA production in pCd-2 neurons abolished song preference learning. Second, pCd-2 neurons directly, and in many cases mutually, connect with pC1 neurons, suggesting the existence of reciprocal circuits between pC1 and pCd-2 neurons. Finally, GABAergic and dopaminergic inputs to pCd-2 neurons are necessary for song preference learning. Together, this study suggests that reciprocal circuits between pC1 and pCd-2 neurons serve as a hub integrating sensory and internal-state information, allowing flexible control over female copulation, and provides a neural circuit model for experience-dependent auditory plasticity.
Segmentation of objects in microscopy images is required for many biomedical applications. We introduce object-centric embeddings (OCEs), which embed image patches such that the spatial offsets between patches cropped from the same object are preserved. Those learnt embeddings can be used to delineate individual objects and thus obtain instance segmentations. Here, we show theoretically that, under assumptions commonly found in microscopy images, OCEs can be learnt through a self-supervised task that predicts the spatial offset between image patches. Together, this forms an unsupervised cell instance segmentation method, which we evaluate on nine diverse large-scale microscopy datasets. Segmentations obtained with our method substantially improve on state-of-the-art baselines on six of the nine datasets and perform on par on the remaining three. If ground-truth annotations are available, our method serves as an excellent starting point for supervised training, reducing the amount of ground-truth annotation needed by an order of magnitude and thus substantially increasing its practical applicability. Source code is available at github.com/funkelab/cellulus.
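The core OCE property described above — embedding differences should preserve known spatial offsets between patches — can be illustrated with a toy objective. This is a minimal sketch of that idea, not the authors' implementation (see github.com/funkelab/cellulus for that); the function name, toy embeddings, and offsets are illustrative assumptions.

```python
import numpy as np

def oce_offset_loss(emb_a, emb_b, true_offset):
    """Toy OCE-style objective: the difference between the embeddings of
    two patches should predict the known spatial offset (dx, dy) between
    the locations they were cropped from. Returns mean squared error."""
    predicted_offset = emb_a - emb_b
    return float(np.mean((predicted_offset - true_offset) ** 2))

# Hypothetical 2-D embeddings of two patches cropped 3 px apart along x.
emb_a = np.array([5.0, 2.0])
emb_b = np.array([2.0, 2.0])
true_offset = np.array([3.0, 0.0])  # (dx, dy) between the crop centres

print(oce_offset_loss(emb_a, emb_b, true_offset))  # 0.0: offset preserved
print(oce_offset_loss(emb_a, emb_b, np.array([1.0, 0.0])))  # > 0: mismatch
```

Because the target offsets come from the crop positions alone, no ground-truth labels are needed, which is what makes the task self-supervised; patches on the same object then receive embeddings that agree on a common object centre, so clustering the embeddings yields an instance segmentation.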