Can We Explain Privacy?

Gonul Ayci, Arzucan Ozgur, Murat K. Sensoy, Pinar P. Yolum

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Web users want to protect their privacy while sharing content online. This can be done through automated privacy assistants that are capable of taking actions by detecting privacy violations and recommending privacy settings for content that the user intends to share. While these approaches are promising in terms of the accuracy of their privacy decisions, they lack the ability to explain to the end user why certain decisions are being made. In this work, we study how privacy assistants can be enhanced through explanations generated in the context of privacy decisions for the user's content. We outline a methodology to create explanations of privacy decisions, discuss core challenges, and show example explanations that are generated by our approach.
Original language: English
Pages (from-to): 75-80
Number of pages: 6
Journal: IEEE Internet Computing
Volume: 27
Issue number: 4
DOIs
Publication status: Published - 1 Jul 2023

Bibliographical note

Publisher Copyright:
© 2023 IEEE.

Funding

The first author is supported by the Scientific and Technological Research Council of Turkey (TUBITAK) and Turkish Directorate of Strategy and Budget under the TAM Project number 2007K12 - 873. This research was partially funded by the Hybrid Intelligence Center, a 10-year program funded by the Dutch Ministry of Education, Culture, and Science through the Netherlands Organization for Scientific Research. This work does not relate to Şensoy's position at Amazon.

Funders (funder number)

Dutch Ministry of Education, Culture, and Science
Nederlandse Organisatie voor Wetenschappelijk Onderzoek
Türkiye Bilimsel ve Teknolojik Araştırma Kurumu (2007K12 - 873)

Keywords

• Content management
• Internet
• Privacy
