
Contribution Details

Type Conference or Workshop Paper
Scope Discipline-based scholarship
Published in Proceedings No
Title Answer Extraction: Towards better Evaluations of NLP Systems
Authors
  • Rolf Schwitter
  • Diego Mollà Aliod
  • Rachel Fournier
  • Michael Hess
Item Subtype Original Work
Refereed Yes
Status Published in final form
Page Range 20 - 27
Event Title Workshop on Reading Comprehension Tests as Evaluation for Computer-Based Language Understanding Systems, ANLP-NAACL
Place of Publication Seattle, Washington, US
Abstract Text We argue that reading comprehension tests are not particularly suited for the evaluation of NLP systems. Reading comprehension tests are specifically designed to evaluate human reading skills, and these require vast amounts of world knowledge and common-sense reasoning capabilities. Experience has shown that this kind of full-fledged question answering (QA) over texts from a wide range of domains is so difficult for machines as to be far beyond the present state of the art of NLP. To advance the field we propose a much more modest evaluation set-up, viz. Answer Extraction (AE) over texts from highly restricted domains. AE aims at retrieving those sentences from documents that contain the explicit answer to a user query. AE is less ambitious than full-fledged QA but has a number of important advantages over QA. It relies mainly on linguistic knowledge and needs only a very limited amount of world knowledge and few inference rules. However, it requires the solution of a number of key linguistic ...
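The Answer Extraction set-up described in the abstract can be pictured as sentence-level retrieval: return the sentences of a document that explicitly contain the answer to a user query. The following is a minimal illustrative sketch only; the bag-of-words overlap scoring and all identifiers below are assumptions for illustration and are not the linguistically informed method the paper describes.

    # Illustrative sketch: sentence-level Answer Extraction via term overlap.
    # Not the paper's method, which relies mainly on linguistic knowledge.
    import re

    def sentences(text: str) -> list[str]:
        # Naive sentence splitter; a real system would use proper linguistic analysis.
        return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

    def score(query: str, sentence: str) -> float:
        # Fraction of query terms that also appear in the candidate sentence.
        q_terms = set(re.findall(r"\w+", query.lower()))
        s_terms = set(re.findall(r"\w+", sentence.lower()))
        return len(q_terms & s_terms) / len(q_terms) if q_terms else 0.0

    def extract_answers(query: str, document: str, threshold: float = 0.5) -> list[str]:
        # Return candidate answer sentences, best-scoring first.
        ranked = sorted(sentences(document), key=lambda s: score(query, s), reverse=True)
        return [s for s in ranked if score(query, s) >= threshold]

    if __name__ == "__main__":
        doc = ("The compiler translates source code into machine code. "
               "It was released in 1998. The parser builds a syntax tree.")
        print(extract_answers("Which tool builds a syntax tree?", doc))
        # ['The parser builds a syntax tree.']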