TY - CHAP
T1 - Crowdsourcing discourse interpretations
T2 - On the influence of context and the reliability of a connective insertion task
AU - Scholman, Merel C.J.
AU - Demberg, Vera
PY - 2017
Y1 - 2017
N2 - Traditional discourse annotation tasks are considered costly and time-consuming, and the reliability and validity of these tasks are in question. In this paper, we investigate whether crowdsourcing can be used to obtain reliable discourse relation annotations. We also examine the influence of context on the reliability of the data. The results of the crowdsourced connective insertion task showed that the majority of the inserted connectives converged with the original label. Further, the distribution of inserted connectives revealed that multiple senses can often be inferred for a single relation. Regarding the presence of context, the results show no significant difference in the distributions of insertions between conditions overall. However, a by-item comparison revealed several characteristics of segments that determine whether the presence of context makes a difference in annotations. The findings discussed in this paper can be taken as preliminary evidence that crowdsourcing is a valuable method for gaining insight into the sense(s) of relations.
UR - https://www.mendeley.com/catalogue/dbd51e1a-ce3e-37e0-8070-f82a8f36fded/
U2 - 10.18653/v1/W17-0803
DO - 10.18653/v1/W17-0803
M3 - Chapter
SN - 9781945626395
T3 - Proceedings of the 11th Linguistic Annotation Workshop (LAW 2017)
SP - 24
EP - 33
BT - Proceedings of the 11th Linguistic Annotation Workshop (LAW 2017)
PB - Association for Computational Linguistics
ER -