3 Janelia Publications

Showing 1-3 of 3 results
Your Criteria: Simpson Lab
    05/01/10 | Mutation of the Drosophila vesicular GABA transporter disrupts visual figure detection.
    Fei H, Chow DM, Chen A, Romero-Calderón R, Ong WS, Ackerson LC, Maidment NT, Simpson JH, Frye MA, Krantz DE
    The Journal of Experimental Biology. 2010 May;213(Pt 10):1717-30. doi: 10.1242/jeb.036053

    The role of gamma-aminobutyric acid (GABA) release and inhibitory neurotransmission in regulating most behaviors remains unclear. The vesicular GABA transporter (VGAT) is required for the storage of GABA in synaptic vesicles and provides a potentially useful probe for inhibitory circuits. However, specific pharmacologic agents for VGAT are not available, and VGAT knockout mice are embryonically lethal, thus precluding behavioral studies. We have identified the Drosophila ortholog of the vesicular GABA transporter gene (which we refer to as dVGAT), immunocytologically mapped dVGAT protein expression in the larva and adult, and characterized a dVGAT(minos) mutant allele. dVGAT is embryonically lethal and we do not detect residual dVGAT expression, suggesting that it is either a strong hypomorph or a null. To investigate the function of VGAT and GABA signaling in adult visual flight behavior, we have selectively rescued the dVGAT mutant during development. We show that reduced GABA release does not compromise the active optomotor control of wide-field pattern motion. Conversely, reduced dVGAT expression disrupts normal object tracking and figure-ground discrimination. These results demonstrate that visual behaviors are segregated by the level of GABA signaling in flies, and more generally establish dVGAT as a model to study the contribution of GABA release to other complex behaviors.

    Simpson Lab
    04/01/10 | VAA3D enables real-time 3D visualization and quantitative analysis of large-scale biological image data sets.
    Peng H, Ruan Z, Long F, Simpson JH, Myers EW
    Nature Biotechnology. 2010 Apr;28:348-53. doi: 10.1038/nbt.1612

    The V3D system provides three-dimensional (3D) visualization of gigabyte-sized microscopy image stacks in real time on current laptops and desktops. V3D streamlines the online analysis, measurement and proofreading of complicated image patterns by combining ergonomic functions for selecting a location in an image directly in 3D space and for displaying biological measurements, such as from fluorescent probes, using the overlaid surface objects. V3D runs on all major computer platforms and can be enhanced by software plug-ins to address specific biological problems. To demonstrate this extensibility, we built a V3D-based application, V3D-Neuron, to reconstruct complex 3D neuronal structures from high-resolution brain images. V3D-Neuron can precisely digitize the morphology of a single neuron in a fruitfly brain in minutes, with about a 17-fold improvement in reliability and tenfold savings in time compared with other neuron reconstruction tools. Using V3D-Neuron, we demonstrate the feasibility of building a 3D digital atlas of neurite tracts in the fruitfly brain.

    Simpson Lab
    02/01/10 | Segmentation of center brains and optic lobes in 3D confocal images of adult fruit fly brains.
    Lam SC, Ruan Z, Zhao T, Long F, Jenett A, Simpson J, Myers EW, Peng H
    Methods. 2010 Feb;50(2):63-9. doi: 10.1016/j.ymeth.2009.08.004

    Automatic alignment (registration) of 3D images of adult fruit fly brains is often influenced by the significant displacement of the relative locations of the two optic lobes (OLs) and the center brain (CB). In one of our ongoing efforts to produce a better image alignment pipeline for adult fruit fly brains, we consider separating the CB and OLs and aligning them independently. This paper reports our automatic method to segregate the CB and OLs, in particular under conditions where the signal-to-noise ratio (SNR) is low, the variation in image intensity is large, and the relative displacement of the OLs and CB is substantial. We design an algorithm to find a minimum-cost 3D surface in a 3D image stack that best separates an OL (of one side, either left or right) from the CB. This surface is defined as an aggregation of the respective minimum-cost curves detected in each individual 2D image slice. Each curve is defined by a list of control points that best segregate the OL and CB. To obtain the locations of these control points, we derive an energy function that includes an image energy term defined by local pixel intensities and two internal energy terms that constrain the curve's smoothness and length. A gradient descent method is used to optimize this energy function. To improve both the speed and robustness of the method, for each stack, the locations of the optimized control points in one slice are taken as the initialization prior for the next slice. We have tested this approach on simulated and real 3D fly brain image stacks and demonstrated that this method can reasonably segregate OLs from CBs despite the aforementioned difficulties.

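The energy-minimization step described in the segmentation abstract — an image term from local pixel intensities plus internal terms penalizing curve length and curvature, optimized by gradient descent — can be sketched as a simple active-contour update on one 2D slice. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the `alpha`/`beta` weights, the finite-difference descent, and the synthetic intensity-gradient image below are all assumptions made for the sketch.

```python
import numpy as np

def curve_energy(ys, xs, image, alpha=0.1, beta=0.1):
    """Energy of a curve with one control point (row y) per column x.

    image term  : pixel intensity along the curve (darker = cheaper,
                  so the curve settles into low-intensity gaps)
    length term : first differences of y (penalizes stretching)
    smooth term : second differences of y (penalizes bending)
    alpha/beta weights are illustrative, not taken from the paper.
    """
    img_term = image[ys.astype(int), xs].sum()
    length_term = alpha * np.sum(np.diff(ys) ** 2)
    smooth_term = beta * np.sum(np.diff(ys, 2) ** 2)
    return img_term + length_term + smooth_term

def descend(ys, xs, image, step=1.0, iters=200):
    """Coordinate-wise descent: move each control point up or down by
    `step` whenever that strictly lowers the total energy."""
    ys = ys.astype(float).copy()
    h = image.shape[0]
    for _ in range(iters):
        for i in range(len(ys)):
            e0 = curve_energy(ys, xs, image)
            for dy in (-step, step):
                trial = ys.copy()
                trial[i] = np.clip(trial[i] + dy, 0, h - 1)
                if curve_energy(trial, xs, image) < e0:
                    ys = trial
                    break
    return ys
```

On a synthetic slice whose intensity decreases linearly toward a dark seam at row 10 (`np.abs(np.arange(20)[:, None] - 10) * 10.0`, broadcast across 8 columns), a curve initialized at row 5 descends to the seam. Per-slice initialization from the previous slice's optimum, as the abstract describes, would just mean passing the returned `ys` as the starting curve for the next slice.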