Communicating the cultural other: trust and bias in generative AI and large language models

Christopher J. Jenks*

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

This paper is concerned with issues of trust and bias in generative AI in general, and in chatbots based on large language models in particular (e.g. ChatGPT). The discussion argues that intercultural communication scholars must do more to understand generative AI, and large language models specifically, because such technologies produce and circulate discourse in an ostensibly impartial way, reinforcing the widespread assumption that machines are objective resources from which societies can learn about important intercultural issues, such as racism and discrimination. Consequently, there is an urgent need to understand how trust and bias factor into the ways in which such technologies deal with topics and themes central to intercultural communication. It is equally important to scrutinize the ways in which societies use AI and large language models to carry out significant social actions and practices, such as teaching and learning about historical or political issues.

Original language: English
Number of pages: 9
Journal: Applied Linguistics Review
Publication status: E-pub ahead of print - 28 Jun 2024

Keywords

  • AI
  • bias
  • intercultural communication
  • large language models
  • trust
