Taming our wild data: On intercoder reliability in discourse research

Renske van Enschot*, Wilbert Spooren, A. van den Bosch, Christian Burgers, Liesbeth Degand, Jacqueline Evers-Vermeul, Florian Kunneman, Christine Liebrecht, Yvette Linders, Alfons Maes

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Many research questions in the field of applied linguistics are answered by manually analyzing data collections or corpora: collections of spoken, written and/or visual communicative messages. In this kind of quantitative content analysis, the coding of subjective language data often leads to disagreement among raters. In this paper, we discuss causes of and solutions to disagreement problems in the analysis of discourse. We discuss crucial factors determining the quality and outcome of corpus analyses, and focus on the sometimes tense relation between reliability and validity. We evaluate formal assessments of intercoder reliability. We suggest a number of ways to improve intercoder reliability, such as precisely specifying the variables and their coding categories and carving up the coding process into smaller substeps. The paper ends with a reflection on challenges for future work in discourse analysis, with special attention to big data and multimodal discourse.
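The "formal assessments of intercoder reliability" mentioned in the abstract are chance-corrected agreement coefficients, of which Cohen's kappa for two coders is the most familiar. A minimal sketch of the computation follows; the coding categories ("ironic"/"literal") and the data are invented for illustration and are not taken from the article.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders assigning nominal codes to the same items."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: proportion of items the two coders coded identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's marginal category frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two coders labelling ten discourse segments (hypothetical example data).
a = ["ironic", "literal", "ironic", "ironic", "literal",
     "literal", "ironic", "literal", "ironic", "literal"]
b = ["ironic", "literal", "literal", "ironic", "literal",
     "literal", "ironic", "literal", "ironic", "ironic"]
print(round(cohens_kappa(a, b), 2))  # 8/10 observed agreement, 0.5 by chance → 0.6
```

Kappa corrects raw percentage agreement for the agreement two coders would reach by chance given their category distributions; for more than two coders, ordinal or missing data, Krippendorff's alpha is the usual generalization.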
Original language: English
Pages (from-to): 1–24
Journal: Dutch Journal of Applied Linguistics
Volume: 13
Publication status: Published - 25 Mar 2024

Bibliographical note

Publisher Copyright:
© Author(s).

Keywords

  • complex discourse data
  • discourse
  • hands-on procedures
  • intercoder reliability
  • quantitative content analysis
