A Facial Expression Recognition classifier model that takes real-time video input and predicts the emotion of users present in front of the webcam. It also gives a graphical visualization of expressions when an image is fed in via webcam or uploaded manually!
- Deep Learning Techniques: Convolutional Neural Networks (CNN)
- Python
- Flask
You can watch the Project Demo Video here
- Fork this repository.
- Clone the repository to your system using `git clone`
- Example:
git clone https://github.com/<your-github-username>/Facial-Expression-Recognition-Classifier-Model
- Open any Python IDE and run the `fer_main.py` file in the `FER_on_Real_Time_Video_Input` folder to perform Facial Expression Recognition on a live image taken using your webcam!
- If you want to try with images you already have on your system, follow the steps below:
  - Upload them to the `static` folder inside the `FER_on_Manually_Uploaded_Image` folder.
  - Now open any Python IDE and run the `fer_main_Manual_U.py` file.
- Import the required packages and libraries.
- Analyse the data and create training and validation batches.
- Create a CNN using 4 convolutional blocks (each with Batch Normalization, Activation, Max Pooling and Dropout layers), followed by a Flatten layer, 2 fully connected Dense layers, and finally a Dense layer with a SoftMax activation function.
- Compile the model using the `Adam` optimizer and the categorical cross-entropy loss function.
- Train the model for 15 epochs, evaluate it, and save the model weights as `.h5` values.
- Save the model as a `JSON` string.
- Create a class in a separate file to reload the model and its weights, make predictions, and return the probabilities of each emotion.
- Create one more class in a separate file which takes in the real-time video input and returns frames of images with a circle detecting the face and the predicted emotion as text on it.
- A Python script is also created which, upon running, yields the graphical visualization of the emotions present in the provided image.
- Finally, create a file which inherits from all the classes defined above and deploys the application using Flask.
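The 4-block CNN described above can be sketched as follows. This is a minimal sketch assuming TensorFlow/Keras; the filter counts, dropout rates, dense-layer sizes, and the 48×48 grayscale input shape are illustrative assumptions, not the project's exact values:

```python
# Sketch of the 4-block CNN described above (assumed hyperparameters).
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv2D, BatchNormalization, Activation,
                                     MaxPooling2D, Dropout, Flatten, Dense)
from tensorflow.keras.optimizers import Adam

def build_model(num_classes=7, input_shape=(48, 48, 1)):
    model = Sequential()
    # 4 convolutional blocks: Conv -> BatchNorm -> ReLU -> MaxPool -> Dropout
    for i, filters in enumerate([64, 128, 256, 512]):
        if i == 0:
            model.add(Conv2D(filters, (3, 3), padding="same",
                             input_shape=input_shape))
        else:
            model.add(Conv2D(filters, (3, 3), padding="same"))
        model.add(BatchNormalization())
        model.add(Activation("relu"))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))
    model.add(Flatten())
    # 2 fully connected Dense layers
    for units in [256, 512]:
        model.add(Dense(units))
        model.add(BatchNormalization())
        model.add(Activation("relu"))
        model.add(Dropout(0.25))
    # Final Dense layer with SoftMax activation
    model.add(Dense(num_classes, activation="softmax"))
    model.compile(optimizer=Adam(learning_rate=0.001),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
# After training, the architecture can be serialized with model.to_json()
# and the weights saved with model.save_weights(...), as described above.
```

After the four pooling stages a 48×48 input is reduced to 3×3 feature maps before the Flatten layer; the SoftMax output yields one probability per emotion class.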
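The prediction step that returns per-emotion probabilities can be sketched as below. The emotion label list and the helper names are assumptions for illustration (seven FER-style classes are typical, but the actual label order depends on how the training batches encoded the classes):

```python
import numpy as np

# Hypothetical label order; a real model's output order depends on the
# class encoding used when the training batches were created.
EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

def probabilities_to_emotions(probs):
    """Map a SoftMax output vector to a {emotion: probability} dict."""
    probs = np.asarray(probs, dtype=float)
    assert probs.shape == (len(EMOTIONS),)
    return dict(zip(EMOTIONS, probs))

def top_emotion(probs):
    """Return the most likely emotion label for a SoftMax output vector."""
    return EMOTIONS[int(np.argmax(probs))]
```

For example, `top_emotion([0.05, 0.05, 0.1, 0.6, 0.1, 0.05, 0.05])` returns `"Happy"` under this assumed label order.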
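The per-frame step of the real-time video class (detect the face, draw a circle, put the emotion text on the frame) can be sketched as below. This assumes OpenCV with its bundled Haar cascade for face detection; `predict_emotion` is a hypothetical callback wrapping the model, not a function from this project:

```python
import cv2
import numpy as np

# Haar cascade bundled with OpenCV; an assumption about the detector used.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def annotate_frame(frame, predict_emotion=None):
    """Detect faces, draw a circle around each, and label the emotion."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        center = (x + w // 2, y + h // 2)
        radius = max(w, h) // 2
        cv2.circle(frame, center, radius, (0, 255, 0), 2)
        if predict_emotion is not None:
            # predict_emotion is a hypothetical callback around the model
            label = predict_emotion(gray[y:y + h, x:x + w])
            cv2.putText(frame, label, (x, y - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    return frame
```

In the full application, frames read from the webcam would be passed through `annotate_frame` before being streamed to the Flask frontend.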
We can further improve the validation accuracy of the model by tuning hyperparameters like:
- Learning Rate
- Epochs
- Batch Size
- Number of Layers in CNN
- Number of filters
- Size of filters
- Dropout rate in the Dropout layers
- Optimizers
- Making the frontend attractive
- Suggesting music based on the predicted facial emotion
- Deploying the model
If you are new to Open Source contribution, go through the link here on making your first contribution!
- Fork this repository
- Clone the repository to your System using
git clone https://github.com/<your-github-username>/Facial-Expression-Recognition-Classifier-Model
- Create a branch:
- Change to the repository directory on your computer
cd Facial-Expression-Recognition-Classifier-Model
- Now create a branch using the git checkout command:
git checkout -b your-new-branch-name
- Make changes as per your requirement to solve the issues mentioned in the `Future Scope of the Project` and commit those changes.
- If you go to the project directory and execute the command `git status`, you'll see there are changes. Add those changes to the branch you just created using the `git add` command.
- Now commit those changes using the git commit command:
git commit -m "Added the feature of Suggesting Music"
- Push your changes to GitHub using the command
git push origin <add-your-branch-name>
- If you go to your repository on GitHub, you'll see a Compare & pull request button. Click on that button.
- Now describe the changes you made and submit the pull request.
- Wait for the maintainers to review :)