
Towards Validating an Artificial Intelligence Concept Inventory for Non-Experts (AICI-NE): Common Misconceptions and Item Development

  • Linda Mannila*
  • Julie Henry
  • Tobias Bahr
  • Christos Chytas
  • Harold Connamacher
  • Barbara C.N. Müller
  • Simone Opel
  • Andreas Scholl

  *Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Artificial intelligence (AI) is an integral part of daily life, yet public understanding of its core concepts remains limited and is often influenced by misconceptions. While AI literacy is increasingly recognized as a key competence for all, efforts to support such knowledge and skills are still in their early stages. A notable gap exists in resources and tools for assessing non-expert understanding of core concepts and uncovering potential misconceptions. Without such insights, it is difficult to determine what curricula and educational initiatives should address. Our working group responds to this gap by developing a research-based AI concept inventory for diverse non-expert audiences, which here refers to individuals engaging with AI technologies without formal training or professional expertise in computer science or AI. A concept inventory is a multiple-choice assessment designed to measure understanding of core concepts in a subject area and to identify common misconceptions, with one correct answer per item and the remaining options serving as distractors. Following established concept inventory methodologies, we first identified key AI concepts and common misconceptions through literature reviews, expert consultations, and empirical data collection. These findings informed the creation of multiple-choice items with empirically derived distractors, refined through iterative evaluation with both experts and non-experts to ensure clarity and applicability across contexts. The resulting instrument is a first draft for assessing AI understanding, supporting benchmarking across populations, and enabling the tracking of changes over time, thus providing an evidence base to inform education, guide policy, and advance the broader goal of AI literacy in everyday contexts.

Original language: English
Title of host publication: ITiCSE-WGR 2025 - Publication of the 2025 Working Group Reports on Innovation and Technology in Computer Science Education
Publisher: Association for Computing Machinery
Pages: 360-406
Number of pages: 47
ISBN (Electronic): 9798400721670
DOIs
Publication status: Published - 12 Feb 2026
Event: 30th Annual Conference on Innovation and Technology in Computer Science Education, ITiCSE 2025 - Nijmegen, Netherlands
Duration: 27 Jun 2025 – 2 Jul 2025

Publication series

Name: ITiCSE-WGR 2025 - Publication of the 2025 Working Group Reports on Innovation and Technology in Computer Science Education

Conference

Conference: 30th Annual Conference on Innovation and Technology in Computer Science Education, ITiCSE 2025
Country/Territory: Netherlands
City: Nijmegen
Period: 27/06/25 – 02/07/25

Bibliographical note

Publisher Copyright:
© 2025 Owner/Author(s).

Keywords

  • ai literacy
  • artificial intelligence
  • concept inventory
  • conceptions
  • distractors
  • key concepts
  • misconceptions
  • non-experts
  • preconceptions
