Preprocessing for Optimization of Probabilistic-Logic Models for Sequence Analysis

Publication: Contribution to journal › Conference article › Research › Peer-reviewed

Abstract

A class of probabilistic-logic models is considered which increases the expressive power from the regular and context-free languages of HMMs and SCFGs to, in principle, Turing-complete languages. In general, such models are computationally far too complex for direct use, so optimization by pruning and approximation is needed. The first steps are taken towards a methodology for optimizing such models by approximation, using auxiliary models for preprocessing or splitting them into submodels. Evaluating such approximating models is challenging, as authoritative test data may be sparse. On the other hand, the original complex models can be used to generate artificial evaluation data by efficient sampling; such data can be used in the evaluation, although this does not constitute a foolproof test procedure. These models and evaluation processes are illustrated in the PRISM system developed by other authors, and we discuss their applicability and limitations.
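To make the sampling-based evaluation idea concrete, the following is a minimal sketch in PRISM (the probabilistic extension of Prolog referred to in the abstract). It follows the standard two-state HMM example from the PRISM documentation; the switch names (init, tr/1, out/1), the fixed sequence length, and the queries are illustrative assumptions and are not taken from the paper itself.

    % Minimal PRISM HMM sketch (illustrative; not from the paper).
    % Random switches: init chooses the initial state, tr(S) the next
    % state from state S, and out(S) the symbol emitted in state S.
    values(init,   [s0, s1]).
    values(tr(_),  [s0, s1]).
    values(out(_), [a, b]).

    str_length(10).               % fixed sequence length for the sketch

    hmm(L) :-                     % L is a sampled or scored sequence
        str_length(N),
        msw(init, S),             % probabilistic choice of initial state
        hmm(1, N, S, L).

    hmm(T, N, _, []) :- T > N, !.
    hmm(T, N, S, [Ob|Rest]) :-
        msw(out(S), Ob),          % emit a symbol in state S
        msw(tr(S), Next),         % choose the next state
        T1 is T + 1,
        hmm(T1, N, Next, Rest).

Repeated calls to the built-in sample/1, e.g. ?- sample(hmm(L)), draw artificial sequences from the generative model, giving a synthetic corpus of the kind the abstract suggests for evaluating a pruned or split approximating model; the full model can then score such data with prob/2 for comparison.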

Original language: English
Book series: Lecture Notes in Computer Science
Pages (from-to): 70-83
Number of pages: 14
ISSN: 0302-9743
Status: Published - 2009
Event: International Conference on Logic Programming 2009 - Pasadena, USA
Duration: 11 Jul 2009 - 17 Jul 2009
Conference number: 25

Conference

Conference: International Conference on Logic Programming 2009
Number: 25
Country/Territory: USA
City: Pasadena
Period: 11/07/2009 - 17/07/2009

Bibliographical note

Volume: 5649
