Abstract
Fake news is a threat to society and can create considerable confusion about what is true and what is not. Fake news typically contains manipulated content, such as text or images, designed to attract readers' interest and convince them of its truthfulness. In this article, we propose SceneFND (Scene Fake News Detection), a system that combines textual, contextual scene and visual representations to address the problem of multimodal fake news detection. The textual representation is based on word embeddings that are passed into a bidirectional long short-term memory (BiLSTM) network. Both the contextual scene and the visual representations are derived from the images contained in the news post; place, weather and season scenes are extracted from the image. Our statistical analysis of the scenes showed statistically significant differences in their frequency between fake and real news. In addition, our experimental results on two real-world datasets show that integrating the contextual scenes is effective for fake news detection. In particular, SceneFND improved the performance of the textual baseline by 3.48% on the PolitiFact dataset and by 3.32% on the GossipCop dataset. Finally, we show the suitability of scene information for the task and present examples that explain its effectiveness in capturing the relevance between images and text.
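The fusion described in the abstract (a text vector from a BiLSTM, one-hot scene attributes extracted from the image, and visual features, concatenated and fed to a classifier) can be illustrated with a minimal sketch. This is not the paper's implementation: the feature dimensions, scene vocabularies (`PLACES`, `WEATHER`, `SEASONS`) and the random linear head below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-post feature vectors (dimensions are illustrative):
text_vec = rng.normal(size=128)    # e.g. BiLSTM summary of the post text
visual_vec = rng.normal(size=64)   # e.g. CNN features of the attached image

# Contextual scene attributes extracted from the image, one-hot encoded.
# These vocabularies are placeholders, not the categories used in the paper.
PLACES = ["indoor", "outdoor_natural", "outdoor_manmade"]
WEATHER = ["clear", "cloudy", "rainy", "snowy"]
SEASONS = ["spring", "summer", "autumn", "winter"]

def one_hot(value, vocab):
    """Return a one-hot vector for `value` over the given vocabulary."""
    vec = np.zeros(len(vocab))
    vec[vocab.index(value)] = 1.0
    return vec

scene_vec = np.concatenate([
    one_hot("outdoor_natural", PLACES),
    one_hot("clear", WEATHER),
    one_hot("summer", SEASONS),
])

# Late fusion: concatenate the three modalities, then apply a linear
# classification head (weights here are random placeholders, not trained).
fused = np.concatenate([text_vec, scene_vec, visual_vec])
W = rng.normal(size=fused.shape[0])
p_fake = 1.0 / (1.0 + np.exp(-(fused @ W)))  # sigmoid -> fake probability
print(fused.shape, 0.0 <= p_fake <= 1.0)
```

In a trained system the head's weights would be learned jointly with the encoders; the sketch only shows how the three representations combine into a single input.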
Original language | English |
---|---|
Pages (from-to) | 355-367 |
Number of pages | 13 |
Journal | Journal of Information Science |
Volume | 50 |
Issue number | 2 |
Early online date | 23 Apr 2022 |
DOIs | |
Publication status | Published - Apr 2024 |
Bibliographical note
Funding Information: The author(s) disclosed receipt of the following financial support for the research, authorship and/or publication of this article: The work of Anastasia Giachanou is funded by the Dutch Research Council (grant VI.Vidi.195.152). The work of Paolo Rosso was in the framework of the Iberian Digital Media Research and Fact-Checking Hub (IBERIFIER), funded by the European Digital Media Observatory (2020-EU-IA0252), and of the XAI-DisInfodemics research project on eXplainable AI for disinformation and conspiracy detection during infodemics, funded by the Spanish Ministry of Science and Innovation (PLEC2021-007681).
Publisher Copyright:
© The Author(s) 2022.
Funding
Funders | Funder number |
---|---|
European Digital Media Observatory | 2020-EU-IA0252 |
Iberian Digital Media Research and Fact-Checking Hub | |
Nederlandse Organisatie voor Wetenschappelijk Onderzoek | VI.Vidi.195.152 |
Ministerio de Ciencia e Innovación | PLEC2021-007681 |
Keywords
- Fake news detection
- multimodal feature fusion
- social media
- visual scene information