In the first part of the project we implemented the spam classifier using a Naive Bayes classifier built from scratch. Our objective was to thoroughly understand how the Naive Bayes classifier works for email classification.
In the second part of the project we implemented the spam classifier using the Naive Bayes classifier from the scikit-learn library.
We collected the data set from the SpamAssassin public mail corpus, containing around 1900 spam emails and 3900 legitimate (ham) emails.
An email is divided into three parts: the header, the subject line, and the body. The header contains information about the sender, the subject line contains the subject of the email, and the body contains the actual content. Before an email can be classified by a filter, it must be preprocessed to extract the desired features.
Preprocessing the Emails
- The body of each email is extracted.
- A pandas data frame is created containing the email bodies and their category (spam or ham).
- Morphological analysis of the messages in the data set is performed, converting each message body into individual words. We used the Natural Language Toolkit (nltk) for this purpose. It involves:
- Removing the HTML Tags from the body of the email.
- Stemming of the words.
- Removal of stop words.
- A pandas series is created containing all the words from all the emails.
- A vocabulary is created with the most frequent 2500 words.
- A feature matrix is created with an entry for each vocabulary word in each email.
- Data is split into 70% training and 30% test set.
- Sparse matrices are created for the training and the testing data.
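The cleaning steps above (HTML removal, stop-word removal, stemming) can be sketched as follows. This is a minimal illustration: the project uses nltk's stemmer and stop-word list, which are replaced here by a tiny hypothetical stop-word set and a naive suffix stripper so the snippet is self-contained.

```python
import re

# Illustrative stand-in for nltk's stop-word list.
STOP_WORDS = {"the", "a", "an", "is", "to", "and", "of", "in"}

def naive_stem(word):
    # Crude stand-in for nltk.stem.PorterStemmer().stem, for illustration only.
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def clean_body(body):
    text = re.sub(r"<[^>]+>", " ", body)          # strip HTML tags
    tokens = re.findall(r"[a-z]+", text.lower())  # lowercase word tokens
    return [naive_stem(t) for t in tokens if t not in STOP_WORDS]

print(clean_body("<p>Click the winning link</p>"))
```

The resulting word lists feed the vocabulary and feature-matrix steps above.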
We assume that each word in an email is independent of every other word, and hence the name Naive Bayes.
If a word is present in an email, we can write the probability of the email being spam given that it contains that word as:

$$P(\text{Spam} \mid \text{Word}) = \frac{P(\text{Word} \mid \text{Spam})\,P(\text{Spam})}{P(\text{Word})}$$

and the email being ham as:

$$P(\text{Ham} \mid \text{Word}) = \frac{P(\text{Word} \mid \text{Ham})\,P(\text{Ham})}{P(\text{Word})}$$

For all the words in our vocabulary, we will find $P(\text{Word})$, the probability of a given word, and $P(\text{Word} \mid \text{Spam})$, the probability of the word occurring in spam emails.
We will also calculate the probability of spam emails in the training data as:

$$P(\text{Spam}) = \frac{\text{number of spam emails}}{\text{total number of emails}}$$
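These probabilities can be estimated from the tokenised training emails as sketched below. Laplace (add-one) smoothing is included here as a common safeguard against zero counts; it is an assumption on my part and is not discussed in the write-up.

```python
from collections import Counter

def estimate(spam_emails, ham_emails, vocabulary):
    # Count how often each word occurs in spam and in ham training emails.
    spam_counts = Counter(t for email in spam_emails for t in email)
    ham_counts = Counter(t for email in ham_emails for t in email)
    n_spam_tokens = sum(spam_counts[w] for w in vocabulary)
    n_ham_tokens = sum(ham_counts[w] for w in vocabulary)
    V = len(vocabulary)
    # P(Word|Spam) and P(Word|Ham) with add-one smoothing.
    p_word_spam = {w: (spam_counts[w] + 1) / (n_spam_tokens + V) for w in vocabulary}
    p_word_ham = {w: (ham_counts[w] + 1) / (n_ham_tokens + V) for w in vocabulary}
    # P(Spam): fraction of spam emails in the training data.
    p_spam = len(spam_emails) / (len(spam_emails) + len(ham_emails))
    return p_word_spam, p_word_ham, p_spam
```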
Training the Naive Bayes Classifier
Let's say an email contains words X and Y.
The probability of the email being spam is:

$$P(\text{Spam} \mid X, Y) = \frac{P(X \mid \text{Spam})\,P(Y \mid \text{Spam})\,P(\text{Spam})}{P(X)\,P(Y)}$$

The probability of the email being ham is:

$$P(\text{Ham} \mid X, Y) = \frac{P(X \mid \text{Ham})\,P(Y \mid \text{Ham})\,P(\text{Ham})}{P(X)\,P(Y)}$$

For prediction we can remove the term $P(X)\,P(Y)$ from the denominator of our expression because it is common to both cases.
If $P(\text{Spam} \mid X, Y) > P(\text{Ham} \mid X, Y)$, the email will be classified as spam; otherwise it will be classified as non-spam or ham.
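A tiny numeric illustration of this comparison, using made-up word probabilities (the numbers are hypothetical, chosen only to show the arithmetic):

```python
# Hypothetical probabilities for a two-word email containing X and Y.
p_spam, p_ham = 0.3, 0.7
p_x_spam, p_y_spam = 0.20, 0.10   # P(X|Spam), P(Y|Spam)
p_x_ham, p_y_ham = 0.02, 0.01     # P(X|Ham),  P(Y|Ham)

# Numerators of Bayes' rule; the common denominator P(X)P(Y) is dropped.
spam_score = p_x_spam * p_y_spam * p_spam   # ~ 0.006
ham_score = p_x_ham * p_y_ham * p_ham       # ~ 0.00014

label = "spam" if spam_score > ham_score else "ham"
print(label)
```

Since the spam score is larger, this email would be labelled spam.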
Testing the Naive Bayes Classifier
- Set the prior to $P(\text{Spam})$, the fraction of spam emails in the training data.
- Calculate the probability of an email being spam and non-spam by taking the product of the conditional probabilities of the individual words present in the test email, then multiplying by the prior.
- An email is classified by comparing the two probabilities.
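The prediction steps above can be sketched as a small function. The probability tables here are hypothetical stand-ins for the values learned during training:

```python
def classify(tokens, p_word_spam, p_word_ham, p_spam):
    # Start from the priors, then multiply in each in-vocabulary word.
    spam_score, ham_score = p_spam, 1.0 - p_spam
    for t in tokens:
        if t in p_word_spam:            # ignore out-of-vocabulary words
            spam_score *= p_word_spam[t]
            ham_score *= p_word_ham[t]
    return "spam" if spam_score > ham_score else "ham"

# Hypothetical trained probabilities for illustration.
p_word_spam = {"win": 0.5, "prize": 0.3, "meeting": 0.01}
p_word_ham = {"win": 0.05, "prize": 0.02, "meeting": 0.4}
print(classify(["win", "prize"], p_word_spam, p_word_ham, p_spam=0.33))
```

In practice the product of many small probabilities can underflow, so implementations often sum log-probabilities instead; the comparison is unchanged.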
Results

Confusion matrix for the 1722 emails in the test set:

| | Predicted Spam | Predicted Ham |
| --- | --- | --- |
| Actual Spam | True Positives = 557 | False Negatives = 30 |
| Actual Ham | False Positives = 15 | True Negatives = 1120 |
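The usual summary metrics follow directly from the confusion matrix counts above:

```python
# Counts from the confusion matrix for the 1722 test emails.
tp, fn, fp, tn = 557, 30, 15, 1120
total = tp + fn + fp + tn

accuracy = (tp + tn) / total      # fraction of all emails classified correctly
precision = tp / (tp + fp)        # fraction of predicted spam that is spam
recall = tp / (tp + fn)           # fraction of actual spam that is caught

print(f"accuracy={accuracy:.4f} precision={precision:.4f} recall={recall:.4f}")
```

This gives roughly 97.4% accuracy on the test set.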
Implementation of the spam classifier using the Naive Bayes classifier from the scikit-learn library. The vocabulary was generated using CountVectorizer from the sklearn library instead of generating the features manually.
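A minimal sketch of this scikit-learn version, assuming a `MultinomialNB` model (the write-up does not name the specific Naive Bayes variant used) and a toy corpus in place of the real data set:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy corpus for illustration; the project uses the SpamAssassin emails.
emails = ["win a free prize now", "claim your free money",
          "meeting agenda attached", "lunch at noon tomorrow"]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

# CountVectorizer builds the vocabulary (capped at 2500 words, as in the
# manual version) and the sparse feature matrix in one step.
model = make_pipeline(CountVectorizer(max_features=2500), MultinomialNB())
model.fit(emails, labels)
print(model.predict(["free prize money"]))
```

`CountVectorizer` replaces the manual vocabulary and feature-matrix construction from the first part of the project.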
Results