Constraint-Satisfaction Inference for Entity Recognition

S. Canisius*, A. Van den Bosch, W. Daelemans

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Academic › peer-review

Abstract

One approach to question answering is to match a question to candidate answers in a background corpus based on semantic overlap, possibly in combination with other levels of matching, such as lexical vector space similarity and syntactic similarity. While the computation of deep semantic similarity is as yet generally infeasible, semantic analysis in a specific domain is feasible if the analysis is constrained to finding domain-specific entities and basic relations. Finding domain-specific entities, the focus of this chapter, is still not a trivial task due to the ambiguity of terms. This problem, like many others in Natural Language Processing, is a sequence labelling task. We describe the development of a new approach to sequence labelling in general, based on constraint satisfaction inference. The outputs of the machine-learning-based classifiers that solve aspects of the task (such as predicting successive elements of the label sequence) are treated as constraints on the global structured output. The constraint-satisfaction inference method is compared to other state-of-the-art sequence labelling approaches, showing competitive performance.
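To make the abstract's idea concrete, below is a minimal sketch of constraint-satisfaction inference for sequence labelling. It is not the chapter's implementation: the hard-coded predictions, the trigram encoding, the scoring function, and the exhaustive search are all simplifying assumptions made for illustration. The sketch assumes a per-token classifier that predicts a trigram of labels (previous, current, next) with a confidence score; inference then selects the label sequence that satisfies the largest weighted set of these overlapping constraints.

```python
# Toy sketch of constraint-satisfaction inference for sequence labelling.
# Assumption: a base classifier has already produced, for each token, a
# scored set of label trigrams (label[i-1], label[i], label[i+1]); here
# those predictions are hard-coded. The chapter's actual classifier,
# features, and search strategy are not reproduced.
from itertools import product

LABELS = ["O", "B-ENT", "I-ENT"]
PAD = "_"  # padding label outside the sentence boundaries

# Per-token candidate trigrams with classifier confidences, for a
# hypothetical three-token sentence (e.g. "visit New York").
predictions = [
    [(("_", "O", "B-ENT"), 0.9), (("_", "O", "O"), 0.1)],      # token 0
    [(("O", "B-ENT", "I-ENT"), 0.8), (("O", "O", "O"), 0.2)],  # token 1
    [(("B-ENT", "I-ENT", "_"), 0.7), (("O", "O", "_"), 0.3)],  # token 2
]

def score(sequence, predictions):
    """Sum the confidence of every predicted trigram constraint that
    the candidate label sequence fully satisfies."""
    padded = (PAD,) + tuple(sequence) + (PAD,)
    total = 0.0
    for i, candidates in enumerate(predictions):
        window = padded[i:i + 3]  # (label[i-1], label[i], label[i+1])
        for trigram, confidence in candidates:
            if trigram == window:
                total += confidence
    return total

def csi_decode(predictions):
    """Exhaustively search all label sequences for the one satisfying
    the maximum weight of constraints. Fine for a toy example; a real
    system would use dynamic programming over label bigram states."""
    n = len(predictions)
    return max(product(LABELS, repeat=n),
               key=lambda seq: score(seq, predictions))

if __name__ == "__main__":
    best = csi_decode(predictions)
    print(best, score(best, predictions))  # ('O', 'B-ENT', 'I-ENT') 2.4
```

The exhaustive search is exponential in sentence length and is only there to keep the sketch short; the essential point is the scoring step, in which each local classifier prediction acts as a soft constraint that the global label sequence may satisfy or violate.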
Original language: English
Title of host publication: Interactive Multi-modal Question-Answering
Editors: Antal van den Bosch, Gosse Bouma
Place of publication: Berlin, Heidelberg
Publisher: Springer
Pages: 199–221
Edition: 1
ISBN (Electronic): 978-3-642-17525-1
ISBN (Print): 978-3-642-17524-4
DOIs
Publication status: Published - 12 May 2011
Externally published: Yes

Publication series

Name: Theory and Applications of Natural Language Processing
Publisher: Springer
ISSN (Print): 2192-032X
ISSN (Electronic): 2192-0338
