Data Preprocessing Steps:
- Resizing the images to 48x48 (single grayscale channel)
- Manually cleaning the datasets to remove incorrect expressions
- Splitting the data into train, validation, and test sets (80:10:10)
- Applying image augmentation using ImageDataGenerator
- Using Haar cascades to crop faces out of the live-feed frames during real-time prediction
The data comes from https://www.kaggle.com/jonathanoheix/face-expression-recognition-dataset. We did not use the complete dataset: because the classes were imbalanced, we picked only 4 of them, manually went through all the images to clean them, and finally split the result 80:10:10 into train, test, and validation sets. The images are 48x48 grayscale, cropped to the face using Haar cascades. From Kaggle we took 28,275 train, 3,530 test, and 3,532 validation images, though the effective number of training images varies because of augmentation via ImageDataGenerator and the manual cleaning. The parameters used for ImageDataGenerator are listed in model.ipynb.
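The augmentation step might look like the following; the specific transform parameters here are illustrative placeholders, since the actual values live in model.ipynb:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Illustrative augmentation parameters -- the real ones are in model.ipynb.
datagen = ImageDataGenerator(
    rescale=1.0 / 255,       # normalize pixel values to [0, 1]
    rotation_range=10,       # small random rotations
    width_shift_range=0.1,   # random horizontal shifts
    height_shift_range=0.1,  # random vertical shifts
    horizontal_flip=True,    # faces are roughly left-right symmetric
)

# Augment a dummy batch of 48x48 single-channel images.
images = np.random.randint(0, 256, size=(8, 48, 48, 1)).astype("float32")
batch = next(datagen.flow(images, batch_size=8, shuffle=False))
```

Each epoch then sees a slightly different version of every training image, which helps compensate for the modest dataset size.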
Deep Learning Model:
After manually pre-processing the dataset by deleting duplicates and wrongly classified images, we use Convolutional Neural Networks and transfer learning to build and train a model that predicts a person's facial emotion.
The four face emotions are: Happy, Sad, Neutral and Angry.
The data is split into training and validation sets (80% training, 20% validation) and then augmented accordingly using ImageDataGenerator.
VGG-16 was used as the transfer-learning model. After importing it, we set `trainable = False` on its layers and select a favorable output layer, in this case 'block5_conv1'. This freezes the transfer-learning model so that we can pre-train, or 'warm up', the layers of our sequential head on the given data before starting the actual training. This lets the sequential model adjust its weights by training at a lower learning rate.
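The freezing and truncation described above can be sketched as follows. `weights=None` is used here only to avoid downloading the ImageNet weights; the project would presumably load the pretrained ones:

```python
from tensorflow.keras import Model
from tensorflow.keras.applications import VGG16

# Load VGG-16 without its classifier head. The project would use
# weights="imagenet"; weights=None here just skips the download.
vgg = VGG16(weights=None, include_top=False, input_shape=(48, 48, 3))

# Freeze every layer so the convolutional base is not updated during warm-up.
for layer in vgg.layers:
    layer.trainable = False

# Truncate the network at the chosen output layer, 'block5_conv1'.
base = Model(inputs=vgg.input, outputs=vgg.get_layer("block5_conv1").output)
```

Note that the grayscale 48x48 crops have to be stacked into 3 channels to match VGG-16's expected input shape of 48 x 48 x 3.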
Setting the hyperparameters and constants (only the best parameters are displayed below):
• Batch size: 64
• Image size: 48 x 48 x 3
• Optimizers: RMSprop (pre-train), Adam
• Learning rates: lr1 = 1e-5 (pre-train), lr2 = 1e-4
• Epochs: 30 (pre-train), 25
• Loss: categorical crossentropy
Defining the model: using Sequential, the layers in the model are as follows:
• GlobalAveragePooling2D
• Flatten
• Dense (256, activation: 'relu')
• Dropout (0.4)
• Dense (128, activation: 'relu')
• Dropout (0.2)
• Dense (4, activation: 'softmax')
The pre-training uses RMSprop at learning rate 1e-5 for 30 epochs. After pre-training, we set `trainable = True` for the whole model and start the actual training, using the Adam optimizer at learning rate 1e-4 for 25 epochs. We achieved a decent validation accuracy of 75% and a training accuracy of 85%. All the metrics observed during training are displayed on one plot:
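Putting the head and the two-phase schedule together, a minimal sketch might look like this (again with `weights=None` to skip the ImageNet download; the `fit` calls are left commented out because they need the actual data generators):

```python
from tensorflow.keras import Model, Sequential, layers, optimizers
from tensorflow.keras.applications import VGG16

# Frozen VGG-16 base truncated at 'block5_conv1' (weights=None avoids a
# download; the project would load pretrained weights).
vgg = VGG16(weights=None, include_top=False, input_shape=(48, 48, 3))
base = Model(inputs=vgg.input, outputs=vgg.get_layer("block5_conv1").output)
base.trainable = False

# Classification head exactly as listed above.
model = Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.4),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.2),
    layers.Dense(4, activation="softmax"),
])

# Phase 1: warm up the head with RMSprop at 1e-5 (30 epochs in the project).
model.compile(optimizer=optimizers.RMSprop(learning_rate=1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_gen, validation_data=val_gen, epochs=30)

# Phase 2: unfreeze everything and fine-tune with Adam at 1e-4 (25 epochs).
base.trainable = True
model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_gen, validation_data=val_gen, epochs=25)
```

The four-unit softmax output corresponds to the four classes Happy, Sad, Neutral, and Angry.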
Different Emotions Detected:
Emousic ~ Selenium automation in Python for music-video recommendation based on the detected facial emotion
Using Selenium automation in Python, each prediction from the constructed VGG-16 model yields an emotion label: 'ANGRY 😡', 'HAPPY 😀', 'NEUTRAL 😐', or 'SAD 🙁'. This label drives the automation: a driver called 'chromedriver' parses the YouTube web pages, automatically clicks the buttons matching your detected facial emotion, and redirects you to a recommended YouTube video.
- So if you're "SAD 🙁", uplift your mood with song videos like "I'll Meet You There - Sapajou", "Relax - Markvard", "Wake Up (feat. ROMY DYA) - Wataboi"
- If you're "ANGRY 😡", calm down with "Maroon 5 - Memories", "The Weeknd - Blinding Lights", "Lloyd P White - Burst Part 2"
- If you're "HAPPY 😀", groove to songs like "The Cure - Friday I'm in Love", "The Beatles - I Want to Hold Your Hand", "Beautiful Day"
- For "NEUTRAL 😐" emotions detected, tune in to "Becky Hill - Space", "Justin Bieber & benny blanco - Lonely", "Little Mix - Happiness"
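The routing above can be sketched as an emotion-to-query map. The query strings below and the function name are illustrative assumptions; the actual project drives chromedriver to click through the YouTube pages as described:

```python
from urllib.parse import quote_plus

# Illustrative mapping from detected emotion label to a YouTube search query;
# the real automation clicks through to a specific recommended video.
QUERIES = {
    "ANGRY": "Maroon 5 Memories",
    "HAPPY": "The Cure Friday I'm in Love",
    "NEUTRAL": "Becky Hill Space",
    "SAD": "I'll Meet You There Sapajou",
}

def youtube_search_url(emotion):
    """Build a YouTube search URL for a detected emotion label."""
    query = QUERIES[emotion.upper()]
    return "https://www.youtube.com/results?search_query=" + quote_plus(query)

# With Selenium (sketch only; requires chromedriver on PATH):
# from selenium import webdriver
# driver = webdriver.Chrome()
# driver.get(youtube_search_url("SAD"))

url = youtube_search_url("sad")
```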
Requirements:
The software requirements are listed below:
• pillow
• numpy==1.16.0
• opencv-python-headless==4.2.0.32
• streamlit
• tensorflow
- Download the zip file from our repository and unzip it at your desired location.
- STEP 1:
$ pip install -r requirements.txt
- STEP 2:
$ streamlit run app.py
- STEP 3: You can now view your Streamlit app in your browser.
Local URL: http://localhost:8501
Made with 💜 by DS Community SRM
Rakesh |
Stuti Sehgal |
Bhavya |
Shubhangi Soni |
Sheel |
Soumya |
Krish |
Vignesh |