The erosion of human(e) judgement in targeting? Quantification logics, AI-enabled decision support systems and proportionality assessments in IHL

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

This article examines the growing use of artificial intelligence (AI)-enabled decision support systems in targeting operations and their implications for proportionality assessments under international humanitarian law (IHL). Emphasizing the primacy of the duty of constant care and precautions in attack as obligations that must be exhausted before and during proportionality assessments, the article advocates for a fuller understanding of civilian harm. It traces the historical trajectory of “quantification logics” in targeting, from the Vietnam War to contemporary AI integration, and analyzes how such systems may reshape decision spaces, cognitive processes and accountability within the context of armed conflict. Specifically, the article argues that over-reliance on computational models risks displacing the contextual, qualitative judgement essential to lawful proportionality determinations, potentially normalizing civilian harm. It concludes with recommendations to preserve the space that human reasoning occupies as central to IHL compliance in targeting operations.

Original language: English
Number of pages: 31
Journal: International Review of the Red Cross
DOIs
Publication status: E-pub ahead of print - 13 Jan 2026

Bibliographical note

Publisher Copyright:
© The Author(s), 2026.

Keywords

  • AI-enabled decision support systems
  • civilian harm
  • joint targeting cycle
  • network-centric warfare
  • non-combatant casualty cut-off value
  • proportionality
  • quantification logics

