Abstract
This paper is concerned with issues of trust and bias in generative AI in general, and in chatbots based on large language models in particular (e.g. ChatGPT). The discussion argues that intercultural communication scholars must do more to understand generative AI, and large language models specifically, as such technologies produce and circulate discourse in an ostensibly impartial way, reinforcing the widespread assumption that machines are objective resources for societies to learn about important intercultural issues, such as racism and discrimination. Consequently, there is an urgent need to understand how trust and bias factor into the ways in which such technologies deal with topics and themes central to intercultural communication. It is also important to scrutinize the ways in which societies use AI and large language models to carry out important social actions and practices, such as teaching and learning about historical or political issues.
| Original language | English |
|---|---|
| Pages (from-to) | 787–795 |
| Number of pages | 9 |
| Journal | Applied Linguistics Review |
| Volume | 16 |
| Issue number | 2 |
| Early online date | 28 Jun 2024 |
| Publication status | Published - 26 Mar 2025 |
Bibliographical note
Publisher Copyright: © 2024 the author(s), published by De Gruyter, Berlin/Boston.
Keywords
- AI
- bias
- intercultural communication
- large language models
- trust
Title
Communicating the cultural other: trust and bias in generative AI and large language models