2 Janelia Publications

Showing 1-2 of 2 results
    04/01/17 | Optogenetic control with a photocleavable protein, PhoCl.
    Zhang W, Lohman AW, Zhuravlova Y, Lu X, Wiens MD, Hoi H, Yaganoglu S, Mohr MA, Kitova EN, Klassen JS, Pantazis P, Thompson RJ, Campbell RE
    Nature Methods. 2017 Apr;14(4):391-394. doi: 10.1038/nmeth.4222

    To expand the range of experiments that are accessible with optogenetics, we developed a photocleavable protein (PhoCl) that spontaneously dissociates into two fragments after violet-light-induced cleavage of a specific bond in the protein backbone. We demonstrated that PhoCl can be used to engineer light-activatable Cre recombinase, Gal4 transcription factor, and a viral protease that in turn was used to activate opening of the large-pore ion channel Pannexin-1.

    04/01/17 | Time-accuracy tradeoffs in kernel prediction: controlling prediction quality.
    Kpotufe S, Verma N
    Journal of Machine Learning Research. 2017 Apr 1;18(44):1-29

    Kernel regression or classification (also referred to as weighted ε-NN methods in Machine Learning) are appealing for their simplicity and therefore ubiquitous in data analysis. However, practical implementations of kernel regression or classification consist of quantizing or sub-sampling data for improving time efficiency, often at the cost of prediction quality. While such tradeoffs are necessary in practice, their statistical implications are generally not well understood, hence practical implementations come with few performance guarantees. In particular, it is unclear whether it is possible to maintain the statistical accuracy of kernel prediction—crucial in some applications—while improving prediction time.
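
    As a minimal sketch of the setup (not the paper's construction), the Python/NumPy snippet below implements Nadaraya-Watson kernel regression and contrasts prediction from the full dataset against prediction from a uniform subsample; the Gaussian kernel, bandwidth, and subsample size are illustrative assumptions.

    import numpy as np

    def kernel_predict(x_query, X, y, bandwidth=0.1):
        # Nadaraya-Watson estimate: kernel-weighted average of the labels.
        w = np.exp(-0.5 * ((X - x_query) / bandwidth) ** 2)  # Gaussian kernel weights
        return np.sum(w * y) / (np.sum(w) + 1e-12)

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, 10_000)
    y = np.sin(2 * np.pi * X) + rng.normal(0.0, 0.1, X.size)

    # Vanilla prediction touches all n training points per query: O(n) time.
    full = kernel_predict(0.3, X, y)

    # Sub-sampling ("quantizing") the data down to m << n points buys speed,
    # generally at some cost in prediction quality.
    idx = rng.choice(X.size, size=500, replace=False)
    fast = kernel_predict(0.3, X[idx], y[idx])
    print(full, fast)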

    The present work provides guiding principles for combining kernel prediction with data-quantization so as to guarantee good tradeoffs between prediction time and accuracy, and in particular so as to approximately maintain the good accuracy of vanilla kernel prediction.

    Furthermore, our tradeoff guarantees are worked out explicitly in terms of a tuning parameter which acts as a knob that favors either time or accuracy depending on practical needs. On one end of the knob, prediction time is of the same order as that of single-nearest-neighbor prediction (which is statistically inconsistent) while maintaining consistency; on the other end of the knob, the prediction risk is nearly minimax-optimal (in terms of the original data size) while still reducing time complexity. The analysis thus reveals the interaction between the data-quantization approach and the kernel prediction method, and most importantly gives explicit control of the tradeoff to the practitioner rather than fixing the tradeoff in advance or leaving it opaque.
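
    The sketch below treats the subsample size m as such a knob, sweeping it on synthetic data and printing prediction time against error relative to the true regression function. Uniform sub-sampling here stands in for the paper's quantization scheme and is an assumption made purely for illustration.

    import time
    import numpy as np

    def kernel_predict(queries, X, y, bandwidth=0.05):
        # Pairwise Gaussian weights: rows index queries, columns index data points.
        W = np.exp(-0.5 * ((queries[:, None] - X[None, :]) / bandwidth) ** 2)
        return (W @ y) / (W.sum(axis=1) + 1e-12)

    rng = np.random.default_rng(1)
    X = rng.uniform(0.0, 1.0, 20_000)
    y = np.sin(2 * np.pi * X) + rng.normal(0.0, 0.1, X.size)
    queries = np.linspace(0.0, 1.0, 200)
    truth = np.sin(2 * np.pi * queries)

    # Turning the knob: larger subsamples cost more prediction time but
    # track the vanilla predictor (m = n) more closely.
    for m in (100, 1_000, 10_000, X.size):
        idx = rng.choice(X.size, size=m, replace=False)
        t0 = time.perf_counter()
        pred = kernel_predict(queries, X[idx], y[idx])
        elapsed = time.perf_counter() - t0
        rmse = np.sqrt(np.mean((pred - truth) ** 2))
        print(f"m={m:6d}  time={elapsed:.4f}s  rmse={rmse:.4f}")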

    The theoretical results are validated on data from a range of real-world application domains; in particular we demonstrate that the theoretical knob performs as expected. 
