Disentangled Representation Learning for Privacy-Preserving Case-Based Explanations

Helena Montenegro*, Wilson Silva, Jaime S. Cardoso

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Academic › peer-review

Abstract

The lack of interpretability of Deep Learning models hinders their deployment in clinical contexts. Case-based explanations can be used to justify these models’ decisions and improve their trustworthiness. However, providing medical cases as explanations may threaten the privacy of patients. We propose a generative adversarial network to disentangle identity and medical features from images. Using this network, we can alter the identity of an image to anonymize it while preserving relevant explanatory features. As a proof of concept, we apply the proposed model to biometric and medical datasets, demonstrating its capacity to anonymize medical images while preserving explanatory evidence and a reasonable level of intelligibility. Finally, we demonstrate that the model is inherently capable of generating counterfactual explanations.
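The abstract describes disentangling an image into an identity code and a medical-feature code, then swapping the identity code to anonymize the image while keeping its explanatory content. A minimal sketch of that idea is below; the architecture, layer sizes, and variable names are illustrative assumptions, not the authors' actual network, and the adversarial training that enforces the disentanglement is omitted.

```python
# Hypothetical sketch (NOT the paper's architecture): an encoder splits an
# image into an identity code and a medical code; a generator recombines
# them. Replacing the identity code anonymizes the image while preserving
# the medical features used as explanatory evidence.
import torch
import torch.nn as nn

class DisentangledAutoencoder(nn.Module):
    def __init__(self, img_dim=64 * 64, id_dim=16, med_dim=16):
        super().__init__()
        self.id_dim = id_dim
        self.encoder = nn.Sequential(
            nn.Linear(img_dim, 128), nn.ReLU(),
            nn.Linear(128, id_dim + med_dim))
        self.generator = nn.Sequential(
            nn.Linear(id_dim + med_dim, 128), nn.ReLU(),
            nn.Linear(128, img_dim), nn.Sigmoid())

    def encode(self, x):
        z = self.encoder(x)
        # Split the latent vector into identity and medical parts
        return z[:, :self.id_dim], z[:, self.id_dim:]

    def decode(self, z_id, z_med):
        return self.generator(torch.cat([z_id, z_med], dim=1))

# Anonymization: keep the patient's medical code, substitute another identity
model = DisentangledAutoencoder()
x_patient = torch.rand(1, 64 * 64)  # flattened image to anonymize
x_donor = torch.rand(1, 64 * 64)    # image supplying a replacement identity
id_p, med_p = model.encode(x_patient)
id_d, _ = model.encode(x_donor)
anonymized = model.decode(id_d, med_p)  # donor identity, patient pathology
```

Counterfactual explanations fall out of the same mechanism: holding the identity code fixed and editing the medical code moves the generated image across the decision boundary.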
Original language: English
Title of host publication: Medical Applications with Disentanglements
Subtitle of host publication: MAD 2022
Publisher: Springer Nature
Pages: 33-45
ISBN (Electronic): 978-3-031-25046-0
ISBN (Print): 978-3-031-25045-3
Publication status: Published - Feb 2023

Publication series

Name: Lecture Notes in Computer Science
Volume: 13823
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349
