Abstract
Data-driven AI systems can make the right decisions for the wrong reasons, which can lead to irresponsible behavior. The rationale of such machine learning models can be evaluated and improved using a previously introduced hybrid method. This method, however, was tested using synthetic data under ideal circumstances, whereas labeled datasets in the legal domain are usually relatively small and often contain missing facts or inconsistencies. In this paper, we therefore investigate rationales under such imperfect conditions. We apply the hybrid method to machine learning models that are trained on court cases generated from a structured representation of Article 6 of the ECHR, as designed by legal experts. We first evaluate the rationale of our models, and then improve it by creating tailored training datasets. We show that applying the rationale evaluation and improvement method can yield meaningful improvements in both performance and soundness of rationale, even under imperfect conditions.
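As a rough illustration of what a rationale evaluation of this kind can look like in practice, the sketch below trains a simple classifier on synthetically generated cases and compares the features the model actually relies on against an expert-defined set of relevant facts. This is a minimal, hypothetical example only: the feature names, the expert set, the data generator, and the importance threshold are assumptions for illustration, not the method, model, or dataset used in the paper.

```python
# Hypothetical sketch of a rationale check: does the model base its decisions
# on the facts that legal experts consider relevant, or on spurious features?
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Illustrative fact names; not the actual Article 6 representation.
FEATURES = ["fair_hearing", "independent_tribunal", "reasonable_time", "irrelevant_fact"]
EXPERT_RELEVANT = {"fair_hearing", "independent_tribunal", "reasonable_time"}  # assumed

# Generate toy "cases": the label depends only on the legally relevant facts.
X = rng.integers(0, 2, size=(500, len(FEATURES)))
y = (X[:, 0] & X[:, 1] & X[:, 2]).astype(int)

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# Rationale evaluation: which features carry non-negligible importance?
used = {name for name, imp in zip(FEATURES, model.feature_importances_) if imp > 0.01}
print("features the model relies on:", used)
print("spurious features used:", used - EXPERT_RELEVANT)
print("relevant features ignored:", EXPERT_RELEVANT - used)
```

Under imperfect conditions (small datasets, missing facts), the two printed difference sets would be the signal that the training data needs to be tailored, which is the kind of improvement step the abstract refers to.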
Original language | English |
---|---|
Title of host publication | Legal Knowledge and Information Systems - JURIX 2023 |
Subtitle of host publication | 36th Annual Conference |
Editors | Giovanni Sileno, Jerry Spanakis, Gijs van Dijck |
Publisher | IOS Press |
Pages | 53-62 |
Number of pages | 10 |
ISBN (Electronic) | 978-1-64368-473-4 |
ISBN (Print) | 978-1-64368-472-7 |
DOIs | |
Publication status | Published - 2023 |
Event | International Conference on Legal Knowledge and Information Systems - Maastricht, Netherlands. Duration: 18 Dec 2023 → 20 Dec 2023. Conference number: 36
Publication series
Name | Frontiers in Artificial Intelligence and Applications |
---|---|
Volume | 379 |
ISSN (Print) | 0922-6389 |
Conference
Conference | International Conference on Legal Knowledge and Information Systems |
---|---|
Abbreviated title | JURIX |
Country/Territory | Netherlands |
City | Maastricht |
Period | 18/12/23 → 20/12/23 |
Keywords
- Data
- Explainable AI
- Knowledge
- Machine Learning
- Responsible AI