TY - JOUR
T1 - Perceptions of artificial intelligence system's aptitude to judge morality and competence amidst the rise of Chatbots
AU - Oliveira, Manuel
AU - Brands, Justus
AU - Mashudi, Judith
AU - Liefooghe, Baptist
AU - Hortensius, Ruud
N1 - Publisher Copyright:
© The Author(s) 2024.
PY - 2024/7/18
Y1 - 2024/7/18
N2 - This paper examines how humans judge the capabilities of artificial intelligence (AI) to evaluate human attributes, specifically focusing on two key dimensions of human social evaluation: morality and competence. Furthermore, it investigates the impact of exposure to advanced Large Language Models on these perceptions. In three studies (combined N = 200), we tested the hypothesis that people would find it less plausible that AI is capable of judging the morality conveyed by a behavior compared to judging its competence. Participants estimated the plausibility of AI origin for a set of written impressions of positive and negative behaviors related to morality and competence. Studies 1 and 3 supported our hypothesis that people would be more inclined to attribute AI origin to competence-related impressions compared to morality-related ones. In Study 2, we found this effect only for impressions of positive behaviors. Additional exploratory analyses clarified that the differentiation between the AI origin of competence and morality judgments persisted throughout the first half year after the public launch of a popular AI chatbot (i.e., ChatGPT) and could not be explained by participants' general attitudes toward AI, or the actual source of the impressions (i.e., AI or human). These findings suggest an enduring belief that AI is less adept at assessing the morality compared to the competence of human behavior, even as AI capabilities continue to advance.
KW - Artificial intelligence
KW - Chatbots
KW - Competence
KW - Impression formation
KW - Large language models
KW - Morality
KW - Social evaluation
UR - http://www.scopus.com/inward/record.url?scp=85198859871&partnerID=8YFLogxK
U2 - 10.1186/s41235-024-00573-7
DO - 10.1186/s41235-024-00573-7
M3 - Article
C2 - 39019988
AN - SCOPUS:85198859871
SN - 2365-7464
VL - 9
JO - Cognitive Research: Principles and Implications
JF - Cognitive Research: Principles and Implications
IS - 1
M1 - 47
ER -