Abstract
Images generated by diffusion models can appear indistinguishable from authentic photographs, yet they often contain artifacts and implausibilities that reveal their AI-generated provenance. Given the challenge that photorealistic AI-generated images pose to public trust in media, we conducted a large-scale experiment measuring human detection accuracy on 450 diffusion-model-generated images and 149 real images. Drawing on 749,828 observations and 34,675 comments from 50,444 participants, we find that an image's scene complexity, the artifact types it contains, its display time, and human curation of AI-generated images all play significant roles in how accurately people distinguish real from AI-generated images. Additionally, we propose a taxonomy characterizing artifacts that often appear in images generated by diffusion models. Our empirical observations and taxonomy offer nuanced insights into the capabilities and limitations of diffusion models for generating photorealistic images in 2024.
| Original language | English |
|---|---|
| Title of host publication | CHI 2025 - Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems |
| Publisher | Association for Computing Machinery |
| ISBN (Electronic) | 9798400713941 |
| DOIs | |
| Publication status | Published - 26 Apr 2025 |
| Event | 2025 CHI Conference on Human Factors in Computing Systems, CHI 2025 - Yokohama, Japan. Duration: 26 Apr 2025 → 1 May 2025 |
Publication series
| Name | Conference on Human Factors in Computing Systems - Proceedings |
|---|---|
Conference
| Conference | 2025 CHI Conference on Human Factors in Computing Systems, CHI 2025 |
|---|---|
| Country/Territory | Japan |
| City | Yokohama |
| Period | 26/04/25 → 1/05/25 |
Bibliographical note
Publisher Copyright: © 2025 Copyright held by the owner/author(s).
Keywords
- deepfakes
- diffusion models
- generative AI
- misinformation
- photorealism
- synthetic media