TY - GEN
T1 - VR, Deepfakes and Epistemic Security
AU - Aliman, Nadisha-Marie
AU - Kester, Leon
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - In recent years, technological advancements in the AI and VR fields have increasingly been paired with considerations on ethics and safety aimed at mitigating unintentional design failures. However, cybersecurity-oriented AI and VR safety research has emphasized the need to additionally appraise instantiations of intentional malice exhibited by unethical actors at pre- and post-deployment stages. Moreover, in view of ongoing malicious deepfake developments that can represent a threat to the epistemic security of a society, security-aware AI and VR design strategies require an epistemically-sensitive stance. In this vein, this paper provides a theoretical basis for two novel AIVR safety research directions: 1) VR as an immersive testbed for VR-deepfake-aided epistemic security training and 2) AI as a catalyst within a deepfake-aided so-called cyborgnetic creativity augmentation facilitating epistemically-sensitive threat modelling. For illustration, we focus our use case on deepfake text, an underestimated deepfake modality. Overall, the two proposed transdisciplinary lines of research exemplify how AIVR safety efforts to defend against unethical actors could naturally converge toward AIVR ethics whilst counteracting epistemic security threats.
AB - In recent years, technological advancements in the AI and VR fields have increasingly been paired with considerations on ethics and safety aimed at mitigating unintentional design failures. However, cybersecurity-oriented AI and VR safety research has emphasized the need to additionally appraise instantiations of intentional malice exhibited by unethical actors at pre- and post-deployment stages. Moreover, in view of ongoing malicious deepfake developments that can represent a threat to the epistemic security of a society, security-aware AI and VR design strategies require an epistemically-sensitive stance. In this vein, this paper provides a theoretical basis for two novel AIVR safety research directions: 1) VR as an immersive testbed for VR-deepfake-aided epistemic security training and 2) AI as a catalyst within a deepfake-aided so-called cyborgnetic creativity augmentation facilitating epistemically-sensitive threat modelling. For illustration, we focus our use case on deepfake text, an underestimated deepfake modality. Overall, the two proposed transdisciplinary lines of research exemplify how AIVR safety efforts to defend against unethical actors could naturally converge toward AIVR ethics whilst counteracting epistemic security threats.
KW - AIVR Ethics
KW - Deepfakes
KW - Epistemic Security
KW - VR
UR - http://www.scopus.com/inward/record.url?scp=85147845940&partnerID=8YFLogxK
U2 - 10.1109/AIVR56993.2022.00019
DO - 10.1109/AIVR56993.2022.00019
M3 - Conference contribution
AN - SCOPUS:85147845940
T3 - Proceedings - 2022 IEEE International Conference on Artificial Intelligence and Virtual Reality, AIVR 2022
SP - 93
EP - 98
BT - Proceedings - 2022 IEEE International Conference on Artificial Intelligence and Virtual Reality, AIVR 2022
PB - IEEE
T2 - 5th IEEE International Conference on Artificial Intelligence and Virtual Reality, AIVR 2022
Y2 - 12 December 2022 through 14 December 2022
ER -