This is the code to reproduce the results in the paper:
"The Language of Fake News: Opening the Black-Box of Deep Learning Based Detectors", O'Brien et al. (2018)
- Pre-requisites: Python 3 plus the following packages: nltk, numpy, sklearn, and TensorFlow (tested with version 1.12.0rc0)
- Download and ungzip GoogleNews-vectors-negative300.bin.gz, and save the uncompressed GoogleNews-vectors-negative300.bin in the root directory of the repository (the same directory as train.py). You can get the file here:
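To sanity-check the download, the vectors can be loaded with gensim, as in the minimal sketch below. Note that gensim is not listed in the pre-requisites above, so this is an optional extra, and the repository's own scripts may load the file differently:

```python
# Optional sanity check for the downloaded embeddings, assuming gensim
# is installed (pip install gensim); the repo's own loading code may differ.
from gensim.models import KeyedVectors

# Load the 300-dimensional Google News embeddings from the repo root.
vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

print(vectors["news"].shape)  # -> (300,)
```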
- Run the pattern removal script, clean_data.py:
python clean_data.py
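For reference, the sketch below illustrates the kind of cleaning this step performs (dropping advertisement and announcement lines, stripping punctuation). It is not the actual clean_data.py, and the AD_MARKERS list is a hypothetical placeholder; the real script's rules may differ:

```python
# Illustrative sketch only -- not the real clean_data.py.
import re

AD_MARKERS = ["Advertisement", "Read more:", "Sign up"]  # assumed markers

def clean_article(text):
    # Drop lines that look like advertisements or announcements.
    lines = [line for line in text.splitlines()
             if not any(marker in line for marker in AD_MARKERS)]
    text = " ".join(lines)
    # Remove punctuation, then collapse runs of whitespace.
    text = re.sub(r"[^\w\s]", " ", text)
    return re.sub(r"\s+", " ", text).strip()

print(clean_article("Advertisement\nTrump spoke, again!"))
# -> "Trump spoke again"
```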
- Train the Neural Network with train.py (--experiment can be either Trump or all):
python train.py --experiment=Trump
Stop the training once the validation accuracy no longer increases; it is displayed every 100 training steps. A directory containing the network parameters is created in 'run'.
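Since stopping is manual, a small patience-based check like the sketch below could automate it. The EarlyStopper class and its patience of 5 evaluations are illustrative assumptions, not part of the repository:

```python
# Sketch of a patience-based early-stopping check; names and the
# patience value are assumptions, not the repository's code.
class EarlyStopper:
    def __init__(self, patience=5):
        self.patience = patience  # evaluations to wait for improvement
        self.best_acc = 0.0
        self.since_best = 0

    def should_stop(self, val_acc):
        """Return True once validation accuracy has stalled for
        `patience` consecutive evaluations."""
        if val_acc > self.best_acc:
            self.best_acc, self.since_best = val_acc, 0
        else:
            self.since_best += 1
        return self.since_best >= self.patience

# Usage inside the training loop (evaluated every 100 steps, as above):
# stopper = EarlyStopper()
# if step % 100 == 0 and stopper.should_stop(validation_accuracy):
#     break
```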
- Test the Neural Network with eval.py:
python eval.py --experiment=Trump
- Get the most relevant patterns for each article:
python get_patterns.py --experiment=Trump
- Display the most relevant patterns across the whole dataset, grouped by part of speech:
python parts_of_speech.py --experiment=Trump
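As a rough illustration of the grouping, the sketch below tallies the words of each pattern by POS tag with nltk. It is not the repository's parts_of_speech.py, whose exact grouping may differ:

```python
# Illustrative POS tally with nltk; not the repo's parts_of_speech.py.
# Requires the nltk data packages 'punkt' and
# 'averaged_perceptron_tagger' (via nltk.download()).
from collections import Counter
import nltk

def pos_counts(patterns):
    """Count POS tags over the words of the given pattern strings."""
    tags = Counter()
    for pattern in patterns:
        for _, tag in nltk.pos_tag(nltk.word_tokenize(pattern)):
            tags[tag] += 1
    return tags

print(pos_counts(["breaking news", "she said"]))
# e.g. Counter({'VBG': 1, 'NN': 1, 'PRP': 1, 'VBD': 1})
```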
- We obtained the following patterns:
[Figure: frequent patterns useful in fake and real news]
The dataset consists of the Kaggle Fake News Dataset, collected by the BS Detector (7,401 articles), plus articles collected from "The Guardian" and "The New York Times" (8,999 articles).
The data directory contains:
- data/raw: the original articles
- data/processed: the articles after removing words that are not in the English dictionary via PyEnchant (see the sketch after this list)
- data/clean: the articles after removing advertisements and announcements, punctuation, etc. This is the data that is fed to the detector.
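As referenced above, here is a minimal sketch of the PyEnchant dictionary filter used to produce data/processed. The "en_US" dictionary and whitespace tokenization are assumptions; the actual script may differ:

```python
# Minimal sketch of an English-dictionary filter with PyEnchant,
# assuming the en_US dictionary and whitespace tokenization.
import enchant

en = enchant.Dict("en_US")

def keep_english(text):
    """Drop tokens that are not in the English dictionary."""
    return " ".join(word for word in text.split() if en.check(word))

print(keep_english("the qzxwv senator spoke"))  # -> "the senator spoke"
```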