
Authority Bias in Human-AI Decision Making: The Effects of AI Appraisals and Journal Cues in Abstract Screening

Research output: Working paper › Preprint › Academic

Abstract

Human-AI collaboration is becoming increasingly embedded in decision-making tasks, including systematic-review workflows such as title and abstract screening. Yet humans, as final arbiters, remain susceptible to influence from peripheral cues, leaving these hybrid workflows vulnerable to new forms of bias. This paper examines how two authority cues influence screening judgements: AI appraisals and journal prestige. Using preregistered experiments, we investigated how these cues shape inclusion decisions in a realistic abstract-screening task. We employed a 3 × 3 mixed design. Participants were randomly assigned to receive AI recommendations, AI disapprovals, or no AI input. Across trials, each participant evaluated abstracts that appeared with all three journal cues: prestigious labels, non-prestigious labels, and no journal information. Across Western graduate students, Asian bachelor’s-degree holders, and Western professionals (total N = 977), AI appraisals functioned as a strong and consistent authority cue, systematically biasing screening decisions. Journal prestige exerted limited influence, emerging primarily when irrelevant abstracts were paired with prestigious journals, which increased incorrect inclusion. These findings demonstrate that AI-generated cues can introduce powerful new authority biases into human-AI collaborative screening. Implications for designing and governing (AI-aided) reviewing systems to ensure accurate, unbiased decision making are discussed.
Original language: English
Publisher: PsyArXiv
Number of pages: 100
DOIs
Publication status: Published - 8 Jan 2026

