This project was forked from data-science-community-srm/moodycat.

Real-time facial emotions recognition model for music recommendation deployed as a Streamlit application

License: MIT License

Jupyter Notebook 99.33% Python 0.67%


Face-Emotion-Recognition

Real-time facial emotions recognition model for music recommendation

How we built and preprocessed the dataset

Data preprocessing steps:

  1. Resizing the images to 48x48 (single grayscale channel)
  2. Manually cleaning the datasets to remove incorrectly labelled expressions
  3. Splitting the data into train, validation, and test sets (80:10:10)
  4. Applying image augmentation using ImageDataGenerator
  5. Using Haar cascades to crop only the faces out of the live feed when making real-time predictions

The data comes from https://www.kaggle.com/jonathanoheix/face-expression-recognition-dataset. We did not use the complete dataset: because it was imbalanced, we picked only 4 classes, manually went through all the images to clean them, and finally split them in an 80:10:10 ratio into train, test, and validation sets. The images are 48x48 grayscale, cropped to the face using Haar cascades. We took 28,275 training, 3,530 test, and 3,532 validation images from Kaggle, but the number of images actually used for training varies, since we applied the image data generator and also cleaned manually. The parameters used for the image data generator can be found in model.ipynb.

Model construction

Deep Learning Model:

After manually preprocessing the dataset by deleting duplicates and wrongly classified images, we use convolutional neural networks and transfer learning to build and train a model that predicts a person's facial emotion. The four emotions are: Happy, Sad, Neutral and Angry.

The data is split into training and validation sets (80% training, 20% validation) and then augmented accordingly using ImageDataGenerator.
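The augmentation step can be sketched as below; the parameter values are illustrative placeholders (the actual values are in model.ipynb):

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Illustrative augmentation settings -- the exact parameters used by the
# project are in model.ipynb, not reproduced here.
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=10,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.1,
    horizontal_flip=True,
)
val_datagen = ImageDataGenerator(rescale=1.0 / 255)  # validation: rescale only
```

The validation generator only rescales, so validation accuracy is measured on undistorted images.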

VGG-16 was used as the transfer-learning base. After importing it, we set layer.trainable to False and select a favorable output layer, in this case 'block5_conv1'. Freezing the transfer-learning model lets us pre-train, or 'warm up', the layers of our sequential model on the given data before the actual training starts, which helps the sequential model adjust its weights while training at a lower learning rate.
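That setup can be sketched as follows; `weights=None` keeps the sketch offline, whereas the project would pass `weights="imagenet"` to get the pretrained filters:

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Model

# weights=None keeps this sketch self-contained and offline; in practice
# the project would use weights="imagenet".
base = VGG16(weights=None, include_top=False, input_shape=(48, 48, 3))
base.trainable = False  # freeze the base for the warm-up phase

# Cut the network at 'block5_conv1' and use it as the feature extractor
feature_extractor = Model(
    inputs=base.input, outputs=base.get_layer("block5_conv1").output
)
```

With 48x48 inputs, 'block5_conv1' emits a 3x3x512 feature map, which the sequential head below consumes.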

H5 files of the model

Setting the hyperparameters and constants (only the best parameters are displayed below):

  • Batch size: 64

  • Image size: 48 x 48 x 3

  • Optimizers: RMSProp (pre-train), Adam

  • Learning rates: lr1 = 1e-5 (pre-train), lr2 = 1e-4

  • Epochs: 30 (pre-train), 25

  • Loss: categorical crossentropy

Defining the model: using Sequential, the layers in the model are as follows:

  • GlobalAveragePooling2D

  • Flatten

  • Dense (256, activation: 'relu')

  • Dropout (0.4)

  • Dense (128, activation: 'relu')

  • Dropout (0.2)

  • Dense (4, activation: 'softmax')

Pre-training is done with RMSProp at learning rate 1e-5 for 30 epochs. After pre-training, we set layer.trainable to True for the whole model and start the actual training, using the Adam optimizer at learning rate 1e-4 for 25 epochs. We achieved a decent validation accuracy of 75% and a training accuracy of 85%. All the metrics observed during model training are displayed on one plot:
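The head and the two-phase training schedule can be sketched like this. For simplicity the sketch stacks the head on the full convolutional base rather than cutting at 'block5_conv1', and `weights=None` plus the commented-out fit calls keep it self-contained:

```python
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import VGG16

# Frozen base (weights=None keeps the sketch offline; use "imagenet" in practice)
base = VGG16(weights=None, include_top=False, input_shape=(48, 48, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.4),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.2),
    layers.Dense(4, activation="softmax"),  # Happy, Sad, Neutral, Angry
])

# Phase 1: warm up the head with the base frozen
model.compile(optimizer=optimizers.RMSprop(learning_rate=1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_gen, validation_data=val_gen, epochs=30)

# Phase 2: unfreeze everything and fine-tune at a higher learning rate
base.trainable = True
model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_gen, validation_data=val_gen, epochs=25)
```

Recompiling after flipping `trainable` is required for the change to take effect in Keras.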

Different Emotions Detected:

Emousic ~ Selenium automation in Python for music-video recommendation based on the detected facial emotion

Whenever the constructed VGG-16 model makes a prediction, it returns an emotion label - 'ANGRY 😡', 'HAPPY 😀', 'NEUTRAL 😐' or 'SAD 🙁'. Selenium automation in Python then uses this label to parse YouTube webpages through a driver called 'chromedriver', which automatically clicks the buttons matching your detected facial emotion and redirects you to a recommended YouTube video.
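A minimal sketch of that flow, assuming chromedriver is on the PATH; the emotion-to-query mapping below is hypothetical, not the project's exact automation:

```python
from urllib.parse import quote_plus

# Hypothetical emotion-to-search mapping -- illustrative only, not the
# project's actual recommendation logic.
EMOTION_QUERIES = {
    "HAPPY": "upbeat happy songs",
    "SAD": "comforting sad songs",
    "ANGRY": "calming music",
    "NEUTRAL": "lofi chill beats",
}

def search_url(emotion: str) -> str:
    """Build a YouTube search URL for the predicted emotion label."""
    query = EMOTION_QUERIES[emotion.upper()]
    return "https://www.youtube.com/results?search_query=" + quote_plus(query)

def open_recommendation(emotion: str):
    """Open the search results in Chrome via Selenium (requires chromedriver)."""
    from selenium import webdriver  # imported lazily: no browser needed above
    driver = webdriver.Chrome()
    driver.get(search_url(emotion))
    return driver
```

From there, Selenium can locate and click a result element to start playback.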

Instructions to run

  • The software requirements are listed below:

    • pillow

    • numpy==1.16.0

    • opencv-python-headless==4.2.0.32

    • streamlit

    • tensorflow

  • Download the zip file from our repository and unzip it at your desired location.

Enter the following lines of code in your terminal to run the Streamlit script:

  • STEP 1:
$ pip install -r requirements.txt


  • STEP 2:
$ streamlit run app.py


  • STEP 3: You can now view your Streamlit app in your browser at the local URL: http://localhost:8501

Output

Implementation of Face emotion recognition

Contributors

License


Made with 💜 by DS Community SRM

Rakesh

Stuti Sehgal

Bhavya

Shubhangi Soni

Sheel

Soumya

Krish

vignesh

