The case for neurons: a no-go theorem for consciousness on a chip

  • Johannes Kleiner*
  • Tim Ludwig

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

We apply the methodology of no-go theorems, as developed in physics, to the question of artificial consciousness. The result is a no-go theorem which shows that, under a general assumption called dynamical relevance, Artificial Intelligence (AI) systems that run on contemporary computer chips cannot be conscious. Simply put, consciousness is dynamically relevant if, according to a theory of consciousness, it is relevant for the temporal evolution of a system’s states. The no-go theorem rests on facts about semiconductor development: AI systems run on central processing units (CPUs), graphics processing units (GPUs), tensor processing units (TPUs), or other processors which have been designed and verified to adhere to computational dynamics that systematically preclude or suppress deviations. Whether our result resolves the question of AI consciousness on contemporary processors depends on the truth of the theorem’s main assumption, dynamical relevance, which this paper does not establish.

Original language: English
Article number: niae037
Journal: Neuroscience of Consciousness
Volume: 2024
Issue number: 1
DOIs
Publication status: Published - 2024

Bibliographical note

Publisher Copyright:
© The Author(s) 2024. Published by Oxford University Press.

Keywords

  • Artificial Consciousness
  • Artificial Intelligence
  • Artificial Sentience
  • Large Language Model
  • Machine Consciousness
  • No-Go Theorem
  • Synthetic Phenomenology
