Abstract
Language use differs between truthful and deceptive statements, but not all differences are consistent across people and contexts, complicating the identification of deceit in individuals. By relying on fact-checked tweets, we showed in three studies (Study 1: 469 tweets; Study 2: 484 tweets; Study 3: 24 models) how well personalized linguistic deception detection performs by developing the first deception model tailored to an individual: the 45th U.S. president. First, we found substantial linguistic differences between factually correct and factually incorrect tweets. We developed a quantitative model and achieved 73% overall accuracy. Second, we tested out-of-sample prediction and achieved 74% overall accuracy. Third, we compared our personalized model with linguistic models previously reported in the literature. Our model outperformed existing models by 5 percentage points, demonstrating the added value of personalized linguistic analysis in real-world settings. Our results indicate that factually incorrect tweets by the U.S. president are not random mistakes of the sender.
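The abstract describes the approach only at a high level. As an illustration of what a personalized linguistic deception model of this kind might look like, here is a minimal sketch assuming scikit-learn, a pandas DataFrame of fact-checked tweets, hypothetical LIWC-style feature columns ("i", "negemo", "certainty"), and a binary "incorrect" label; it is not the authors' actual pipeline or feature set.

```python
# Minimal sketch (not the authors' pipeline): fit a simple classifier on
# LIWC-style linguistic features of one sender's fact-checked tweets and
# evaluate in-sample (development) and out-of-sample accuracy.
# Assumption: a DataFrame with hypothetical feature columns and an
# "incorrect" column marking factually incorrect tweets.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

FEATURES = ["i", "negemo", "certainty"]  # hypothetical LIWC categories


def fit_personalized_model(train: pd.DataFrame) -> LogisticRegression:
    """Fit a logistic-regression deception model on one sender's tweets."""
    model = LogisticRegression(max_iter=1000)
    model.fit(train[FEATURES], train["incorrect"])
    return model


def evaluate(model: LogisticRegression, data: pd.DataFrame) -> float:
    """Return overall classification accuracy on a tweet set."""
    predictions = model.predict(data[FEATURES])
    return accuracy_score(data["incorrect"], predictions)


# Example usage with hypothetical CSV exports of the two tweet samples:
# study1 = pd.read_csv("study1_tweets.csv")   # development sample
# study2 = pd.read_csv("study2_tweets.csv")   # out-of-sample test
# model = fit_personalized_model(study1)
# print(evaluate(model, study1), evaluate(model, study2))
```

A train/test split across separately collected samples, as sketched above, mirrors the paper's distinction between in-sample accuracy (Study 1) and out-of-sample prediction (Study 2).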
Original language | English |
---|---|
Pages (from-to) | 3-17 |
Number of pages | 15 |
Journal | Psychological Science |
Volume | 33 |
Issue number | 1 |
DOIs | |
Publication status | Published - Jan 2022 |
Bibliographical note
Funding Information: We thank the Washington Post Fact Checker team for providing their fact-checked data set of Trump's communications, Benjamin Tereick for methodological suggestions, and Jozien Bensing and Annelies Vredeveldt for providing feedback on the manuscript. For a website discussing the themes of this research, see https://www.apersonalmodeloftrumpery.com/.
Publisher Copyright:
© The Author(s) 2021.
Keywords
- LIWC
- deception detection
- linguistic analysis
- open data
- open materials
- tailored model