Abstract
Generative large language models have the potential, as tools in the legal domain, to improve the justice system. However, the reasoning behavior of current generative models is brittle and poorly understood, and hence these models cannot be responsibly applied in the domains of law and evidence. In this paper, we introduce an approach for creating benchmarks that can be used to evaluate the reasoning capabilities of generative language models. These benchmarks are dynamically varied, scalable in their complexity, and have formally unambiguous interpretations. In this study, we illustrate the approach on the basis of witness testimony, focusing on the underlying argument attack structure. We dynamically generate both linear and non-linear argument attack graphs of varying complexity and translate these into reasoning puzzles about witness testimony expressed in natural language. We show that state-of-the-art large language models often fail at these reasoning puzzles, even at low complexity. The models make obvious mistakes, and their inconsistent performance indicates that their reasoning capabilities are brittle. Furthermore, at higher complexity, even state-of-the-art models specifically promoted for their reasoning capabilities make mistakes. We demonstrate the viability of using a parameterized benchmark with varying complexity to evaluate the reasoning capabilities of generative language models. As such, the findings contribute to a better understanding of the limitations of the reasoning capabilities of generative models, which is essential when designing responsible AI systems in the legal domain.
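To make the benchmark construction concrete, here is a minimal sketch in Python of one way the linear case could be generated. It builds an attack chain of configurable length, computes which testimonies are accepted under grounded semantics (an assumption on our part; the abstract only states that the interpretations are formally unambiguous), and renders the chain as a natural-language puzzle. All function names and the puzzle wording are illustrative, not the authors' implementation, and the non-linear attack graphs from the paper are omitted.

```python
# Illustrative sketch (not the authors' code): generate a linear argument
# attack chain, compute its grounded labelling, and render the chain as a
# witness-testimony reasoning puzzle with a known ground-truth answer.

def grounded_labelling(attacks: dict[int, list[int]], n: int) -> dict[int, str]:
    """Label each argument 'in' (accepted), 'out' (rejected) or 'undec'.

    attacks[i] lists the arguments that attack argument i.
    """
    labels = {i: "undec" for i in range(n)}
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if labels[i] != "undec":
                continue
            attackers = attacks.get(i, [])
            if all(labels[a] == "out" for a in attackers):
                labels[i] = "in"   # every attacker is defeated -> accept
                changed = True
            elif any(labels[a] == "in" for a in attackers):
                labels[i] = "out"  # an accepted attacker remains -> reject
                changed = True
    return labels


def linear_chain(n: int) -> dict[int, list[int]]:
    """Argument i+1 attacks argument i, for i = 0 .. n-2."""
    return {i: [i + 1] for i in range(n - 1)}


def render_puzzle(n: int) -> str:
    """Translate the attack chain into a witness-testimony puzzle."""
    lines = ["Witness 1 testifies that the suspect was at the scene."]
    for i in range(1, n):
        lines.append(f"Witness {i + 1} undermines the testimony of witness {i}.")
    lines.append("Question: should the testimony of witness 1 be accepted?")
    return "\n".join(lines)


if __name__ == "__main__":
    n = 4  # chain length is the complexity parameter of the benchmark
    labels = grounded_labelling(linear_chain(n), n)
    print(render_puzzle(n))
    # In a linear chain the unattacked last argument is 'in' and labels
    # alternate back from it, so witness 1 is accepted iff n is odd.
    print("Ground-truth answer:", "yes" if labels[0] == "in" else "no")
```

Because acceptance of the first testimony alternates with chain length, each generated puzzle has a single formally correct answer against which a model's response can be scored, and the chain length provides a natural complexity dial.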
| Original language | English |
|---|---|
| Publisher | arXiv |
| Number of pages | 10 |
| Publication status | Published - 2 May 2025 |
Bibliographical note
This manuscript has been accepted for presentation as a short paper at the 20th International Conference on Artificial Intelligence and Law (ICAIL) in Chicago, June 16-20, 2025.
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs):
- SDG 16: Peace, Justice and Strong Institutions
Keywords
- cs.AI
- cs.LG
Research output
- Parameterized Argumentation-based Reasoning Tasks for Benchmarking Generative Language Models
  Steging, C., Renooij, S. & Verheij, B., 13 Jan 2026. In: Maranhão, J. (ed.), Proceedings of the Twentieth International Conference on Artificial Intelligence and Law. Association for Computing Machinery, pp. 455-459.
  Research output: Conference contribution › Academic › peer-review. Open Access.