Jan Funke Lab / Publications

Showing 571-580 of 4067 results
06/15/10 | Automated tracking and analysis of centrosomes in early Caenorhabditis elegans embryos.
Jaensch S, Decker M, Hyman AA, Myers EW
Bioinformatics. 2010 Jun 15;26(12):i13-20. doi: 10.1093/bioinformatics/btq190

The centrosome is a dynamic structure in animal cells that serves as a microtubule organizing center during mitosis and also regulates cell-cycle progression and sets polarity cues. Automated and reliable tracking of centrosomes is essential for genetic screens that study the process of centrosome assembly and maturation in the nematode Caenorhabditis elegans.

Svoboda Lab
07/01/12 | Automated tracking of whiskers in videos of head fixed rodents.
Clack NG, O’Connor DH, Huber D, Petreanu L, Hires A, Peron S, Svoboda K, Myers EW
PLoS Computational Biology. 2012 Jul;8:e1002591. doi: 10.1371/journal.pcbi.1002591

We have developed software for fully automated tracking of vibrissae (whiskers) in high-speed videos (>500 Hz) of head-fixed, behaving rodents trimmed to a single row of whiskers. Performance was assessed against a manually curated dataset consisting of 1.32 million video frames comprising 4.5 million whisker traces. The current implementation detects whiskers with a recall of 99.998% and identifies individual whiskers with 99.997% accuracy. The average processing rate for these images was 8 Mpx/s/cpu (2.6 GHz Intel Core2, 2 GB RAM). This translates to 35 processed frames per second for a 640 px×352 px video of 4 whiskers. The speed and accuracy achieved enables quantitative behavioral studies where the analysis of millions of video frames is required. We used the software to analyze the evolving whisking strategies as mice learned a whisker-based detection task over the course of 6 days (8148 trials, 25 million frames) and measure the forces at the sensory follicle that most underlie haptic perception.
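The reported frame rate follows directly from the pixel throughput and the frame size; a quick sanity check using only the figures quoted in the abstract:

```python
# Throughput check for the reported whisker-tracking rates.
pixel_rate = 8_000_000       # 8 Mpx/s per CPU core (from the abstract)
frame_px = 640 * 352         # pixels per video frame

fps = pixel_rate / frame_px  # processed frames per second, ~35.5
```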

01/01/11 | Automatic 3D neuron tracing using all-paths pruning.
Long F, Peng H, Myers E
Conference on Intelligent Systems for Molecular Biology. 2011.

Motivation: Digital reconstruction, or tracing, of 3D neuron structures is critical toward reverse engineering the wiring and functions of a brain. However, despite a number of existing studies, this task is still challenging, especially when a 3D microscopic image has low signal-to-noise ratio (SNR) and fragmented neuron segments. Published work can handle these hard situations only by introducing global prior information, such as where a neurite segment starts and terminates. However, manual incorporation of such global information can be very time consuming. Thus, a completely automatic approach for these hard situations is highly desirable.

Results: We have developed an automatic graph algorithm, called all-path pruning (APP), to trace the 3D structure of a neuron. To avoid potential mis-tracing of some parts of a neuron, APP first produces an initial over-reconstruction by tracing the optimal geodesic shortest path from the seed location to every possible destination voxel/pixel location in the image. Since the initial reconstruction contains all possible paths and thus may contain redundant structural components, we simplify the entire reconstruction without compromising its connectedness by pruning the redundant structural elements, using a new maximal-covering minimal-redundant (MCMR) subgraph algorithm. We show that MCMR has linear computational complexity and will converge. We examined the performance of our method using challenging 3D neuronal image datasets of model organisms (e.g. the fruit fly).
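The two stages can be illustrated on a toy graph: a shortest-path tree from the seed gives the over-reconstruction, and pruning keeps only the paths needed to reach the chosen tips. This is a minimal sketch of the idea on a hand-made graph, not the published MCMR algorithm; the node names and the coverage criterion are purely illustrative.

```python
import heapq

def shortest_path_tree(graph, seed):
    """Stage 1 (over-reconstruction): geodesic shortest paths from the
    seed to every reachable node, returned as a parent map (Dijkstra)."""
    dist = {seed: 0.0}
    parent = {seed: None}
    heap = [(0.0, seed)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(heap, (nd, v))
    return parent

def prune(parent, keep_tips):
    """Stage 2 (pruning, toy version): keep only the union of
    seed-to-tip paths; every other node is redundant."""
    kept = set()
    for tip in keep_tips:
        node = tip
        while node is not None and node not in kept:
            kept.add(node)
            node = parent[node]
    return kept

# Toy "image" graph: nodes stand in for voxels, weights for intensity cost.
graph = {
    "seed": [("a", 1), ("b", 2)],
    "a": [("tip1", 1), ("c", 1)],
    "b": [("tip2", 1)],
    "c": [("tip1", 3)],
}
parent = shortest_path_tree(graph, "seed")
skeleton = prune(parent, ["tip1", "tip2"])  # node "c" is pruned away
```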

Chklovskii Lab, FlyEM
08/06/15 | Automatic adaptation to fast input changes in a time-invariant neural circuit.
Bharioke A, Chklovskii DB
PLoS Computational Biology. 2015 Aug 6;11(8):e1004315. doi: 10.1371/journal.pcbi.1004315
Kainmueller Lab
10/01/12 | Automatic detection and classification of teeth in CT data.
Duy NT, Lamecker H, Kainmueller D, Zachow S
Medical Image Computing and Computer-Assisted Intervention (MICCAI). 2012;15(Pt 1):609-16

We propose a fully automatic method for tooth detection and classification in CT or cone-beam CT image data. First we compute an accurate segmentation of the maxilla bone. Based on this segmentation, our method computes a complete and optimal separation of the row of teeth into 16 subregions and classifies the resulting regions as existing or missing teeth. This serves as a prerequisite for further individual tooth segmentation. We show the robustness of our approach by providing extensive validation on 43 clinical head CT scans.

07/01/21 | Automatic Detection of Synaptic Partners in a Whole-Brain Drosophila EM Dataset
Buhmann J, Sheridan A, Gerhard S, Krause R, Nguyen T, Heinrich L, Schlegel P, Lee WA, Wilson R, Saalfeld S, Jefferis G, Bock D, Turaga S, Cook M, Funke J
Nature Methods. 2021 Jul 1;18(7):771-4. doi: 10.1038/s41592-021-01183-7

The study of neural circuits requires the reconstruction of neurons and the identification of synaptic connections between them. To scale the reconstruction to the size of whole-brain datasets, semi-automatic methods are needed to solve these tasks. Here, we present an automatic method for synaptic partner identification in insect brains, which uses convolutional neural networks to identify post-synaptic sites and their pre-synaptic partners. The networks can be trained from human-generated point annotations alone and require only simple post-processing to obtain final predictions. We used our method to extract 244 million putative synaptic partners in the fifty-teravoxel full adult fly brain (FAFB) electron microscopy (EM) dataset and evaluated its accuracy on 146,643 synapses from 702 neurons with a total cable length of 312 mm in four different brain regions. The predicted synaptic connections can be used together with a neuron segmentation to infer a connectivity graph with high accuracy: 96% of edges between connected neurons are correctly classified as weakly connected (fewer than five synapses) or strongly connected (at least five synapses). Our synaptic partner predictions for the FAFB dataset are publicly available, together with a query library allowing automatic retrieval of up- and downstream neurons.
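The weak/strong edge classification described in the abstract amounts to counting predicted synapses per neuron pair and thresholding at five. A minimal sketch with hypothetical neuron IDs:

```python
from collections import Counter

# Hypothetical predicted synaptic partner pairs (pre-, post-synaptic neuron).
partners = [("n1", "n2")] * 7 + [("n1", "n3")] * 2 + [("n4", "n2")] * 5

# Aggregate per-pair synapse counts into a connectivity graph.
counts = Counter(partners)

# Evaluation criterion from the abstract: fewer than five synapses is a
# weak connection, five or more is a strong connection.
edges = {pair: ("strong" if n >= 5 else "weak") for pair, n in counts.items()}
```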

Grigorieff Lab
11/01/15 | Automatic estimation and correction of anisotropic magnification distortion in electron microscopes.
Grant T, Grigorieff N
Journal of Structural Biology. 2015 Nov;192(2):204-8. doi: 10.1016/j.jsb.2015.08.006

We demonstrate a significant anisotropic magnification distortion, found on an FEI Titan Krios microscope and affecting magnifications commonly used for data acquisition on a Gatan K2 Summit detector. We describe a program (mag_distortion_estimate) to automatically estimate anisotropic magnification distortion from a set of images of a standard gold shadowed diffraction grating. We also describe a program (mag_distortion_correct) to correct for the estimated distortion in collected images. We demonstrate that the distortion present on the Titan Krios microscope limits the resolution of a set of rotavirus VP6 images to ∼7 Å, which increases to ∼3 Å following estimation and correction of the distortion. We also use a 70S ribosome sample to demonstrate that in addition to affecting resolution, magnification distortion can also interfere with the classification of heterogeneous data.
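Anisotropic magnification distortion can be modeled as unequal scaling along two perpendicular axes at some orientation; correction then applies the inverse linear map to image coordinates. A simplified sketch of that model (the parameter names are illustrative and unrelated to the actual options of the mag_distortion_* programs):

```python
import math

def distortion_matrix(major_scale, minor_scale, angle_deg):
    """Model anisotropic magnification as scaling by major_scale and
    minor_scale along two perpendicular axes rotated by angle_deg.
    Returns the symmetric 2x2 matrix R(-t) @ diag(major, minor) @ R(t)."""
    t = math.radians(angle_deg)
    c, s = math.cos(t), math.sin(t)
    return [
        [major_scale * c * c + minor_scale * s * s,
         (major_scale - minor_scale) * c * s],
        [(major_scale - minor_scale) * c * s,
         major_scale * s * s + minor_scale * c * c],
    ]

def invert2x2(m):
    """Correction applies the inverse mapping to image coordinates."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[ m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det,  m[0][0] / det]]

M = distortion_matrix(1.02, 0.98, 30.0)  # ~2% anisotropy, for illustration
Minv = invert2x2(M)                      # the correcting transform
```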

Kainmueller Lab
08/07/09 | Automatic extraction of anatomical landmarks from medical image data: an evaluation of different methods.
Kainmueller D, Hege H, Seim H, Heller M, Zachow S

This work presents three different methods for automatic detection of anatomical landmarks in CT data, namely for the left and right anterior superior iliac spines and the pubic symphysis. The methods exhibit different degrees of generality in terms of portability to other anatomical landmarks and require different amounts of training data. The first method is problem-specific and is based on the convex hull of the pelvis. Method two is a more generic approach based on a statistical shape model including the landmarks of interest for every training shape. With our third method we present the most generic approach, where only a small set of training landmarks is required. Those landmarks are transferred to the patient-specific geometry based on Mean Value Coordinates (MVCs). The methods work on surfaces of the pelvis that need to be extracted beforehand. We perform this geometry reconstruction with our previously introduced fully automatic segmentation framework for the pelvic bones. With a focus on the accuracy of our novel MVC-based approach, we evaluate and compare our methods on 100 clinical CT datasets, for which gold standard landmarks were defined manually by multiple observers.

Kainmueller Lab
08/19/11 | Automatic extraction of mandibular nerve and bone from cone-beam CT data.
Kainmueller D, Lamecker H, Seim H, Zinser M, Zachow S
Medical Image Computing and Computer-Assisted Intervention (MICCAI). 2009;12(Pt 2):76-83

The exact localization of the mandibular nerve with respect to the bone is important for applications in dental implantology and maxillofacial surgery. Cone-beam computed tomography (CBCT), often also called digital volume tomography (DVT), is increasingly used in maxillofacial and dental imaging. Compared to conventional CT, however, soft-tissue discrimination is worse due to the reduced dose, so small structures like the alveolar nerves are even harder to recognize in the image data. We show that it is nonetheless possible to accurately reconstruct the 3D bone surface and the course of the nerve in a fully automatic fashion, with a method based on a combined statistical shape model of the nerve and the bone and a Dijkstra-based optimization procedure. Our method has been validated on 106 clinical datasets: the average reconstruction error for the bone is 0.5 +/- 0.1 mm, and the nerve can be detected with an average error of 1.0 +/- 0.6 mm.

07/10/07 | Automatic image analysis for gene expression patterns of fly embryos.
Peng H, Long F, Zhou J, Leung G, Eisen MB, Myers EW
BMC Cell Biology. 2007 Jul 10;8(Suppl 1):S7. doi: 10.1186/1471-2121-8-S1-S7

Staining the mRNA of a gene via in situ hybridization (ISH) during the development of a D. melanogaster embryo delivers the detailed spatio-temporal pattern of expression of the gene. Many biological problems such as the detection of co-expressed genes, co-regulated genes, and transcription factor binding motifs rely heavily on the analyses of these image patterns. The increasing availability of ISH image data motivates the development of automated computational approaches to the analysis of gene expression patterns.
