Crowdsourcing discourse interpretations: On the influence of context and the reliability of a connective insertion task

Merel C.J. Scholman, Vera Demberg

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Academic › peer-review

Abstract

Traditional discourse annotation tasks are considered costly and time-consuming, and the reliability and validity of these tasks are in question. In this paper, we investigate whether crowdsourcing can be used to obtain reliable discourse relation annotations. We also examine the influence of context on the reliability of the data. The results of the crowdsourced connective insertion task showed that the majority of the inserted connectives converged with the original label. Further, the distribution of inserted connectives revealed that multiple senses can often be inferred for a single relation. Regarding the presence of context, the results show no significant difference in the distributions of insertions between conditions overall. However, a by-item comparison revealed several characteristics of segments that determine whether the presence of context makes a difference in annotations. The findings discussed in this paper can be taken as preliminary evidence that crowdsourcing is a valuable method for obtaining insights into the sense(s) of relations.
Original language: English
Title of host publication: LAW 2017 - 11th Linguistic Annotation Workshop, Proceedings of the Workshop
Publisher: Association for Computational Linguistics
Pages: 24-33
Number of pages: 10
ISBN (Print): 9781945626395
DOIs
Publication status: Published - 2017
Externally published: Yes

Publication series

Name: LAW 2017 - 11th Linguistic Annotation Workshop, Proceedings of the Workshop
