
mauryaritesh / facial-expression-detection


Facial Expression or Facial Emotion Detector can be used to tell whether a person is sad, happy, angry, and so on from his/her face alone. This repository can be used to carry out such a task.

License: MIT License

Python 43.29% Jupyter Notebook 56.71%
deep-learning opencv-python ai face-detection hacktoberfest hactoberfest learn

facial-expression-detection's Introduction

Read this in other languages

हिंदी   日本語

Facial-Expression-Detection

This is the code for this video by Ritesh on YouTube.

Facial Expression or Facial Emotion Detector can be used to tell whether a person is sad, happy, angry, and so on from his/her face alone. This repository can be used to carry out such a task. It uses your web camera and then identifies your expression in real time. Yeah, in real time!

PLAN

This is a three-step process. In the first step, we load the XML file for detecting the presence of faces, and then we retrain our network on our images across five different categories. After that, we import the label_image.py program from the last video and set everything up to run in real time.

DEPENDENCIES

Run the following in CMD/Terminal if you don't have them installed already:

pip install tensorflow
pip install opencv-python

That's it for now.

So let's take a brief look at each Step.

STEP 1 - Implementation of OpenCV HAAR CASCADES

I'm using the "Frontal Face Alt" classifier for detecting the presence of a face in the webcam feed. This file is included with this repository. You can find the other classifiers in the OpenCV repository.

Next, we need to load this file, which is done in the label.py program. E.g.:

# We load the xml file
classifier = cv2.CascadeClassifier('haarcascade_frontalface_alt.xml')

Now everything can be set up with the label.py program. So, let's move to the next step.
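For context, detectMultiScale on a loaded cascade returns (x, y, w, h) boxes, one per detected face; when several faces appear in a frame, a common choice is to keep the largest one. A minimal sketch of that selection step, assuming this approach (the helper largest_face is hypothetical, not taken from label.py):

```python
# Pick the most prominent detection from cv2.CascadeClassifier.detectMultiScale,
# which returns one (x, y, w, h) box per face. Helper name is hypothetical.
def largest_face(detections):
    """Return the (x, y, w, h) box with the largest area, or None if empty."""
    if len(detections) == 0:
        return None
    return max(detections, key=lambda box: box[2] * box[3])

# Typical OpenCV usage (sketch, commented out so the snippet stays self-contained):
#   gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
#   faces = classifier.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
#   face = largest_face(faces)
```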

STEP 2 - Retraining the Network - TensorFlow Image Classifier

We are going to create an image classifier that identifies whether a person is sad, happy, and so on, and then show this text on the OpenCV window. This step consists of several sub-steps:

  • First, create a directory named images. In this directory, create five or six sub-directories with names like Happy, Sad, Angry, Calm and Neutral. You can add more if you like.

  • Now fill these directories with the respective images by downloading them from the Internet. E.g., in the "Happy" directory, put only images of happy faces.

  • Now run the "face-crop.py" program as suggested in the video.

  • Once you have the cleaned images, you are ready to retrain the network. For this purpose, I'm using the MobileNet model, which is quite fast and accurate. To run the training, go to the parent folder, open CMD/Terminal there, and run the following:

    python retrain.py --output_graph=retrained_graph.pb --output_labels=retrained_labels.txt --architecture=MobileNet_1.0_224 --image_dir=images
    

That's it for this Step.
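The images/ layout that retrain.py's --image_dir flag expects (one sub-directory per label) can also be created with a short script. A minimal sketch, assuming the five category names from the sub-steps above (the helper make_image_dirs is hypothetical):

```python
import os

# Create the images/ tree expected by retrain.py's --image_dir flag:
# one sub-directory per emotion label, each to be filled with example photos.
LABELS = ["Happy", "Sad", "Angry", "Calm", "Neutral"]

def make_image_dirs(root="images", labels=LABELS):
    """Create root/<Label> for every label and return the created paths."""
    paths = []
    for label in labels:
        path = os.path.join(root, label)
        os.makedirs(path, exist_ok=True)  # no error if it already exists
        paths.append(path)
    return paths
```

Note that retrain.py fails with a ZeroDivisionError if a category directory ends up empty, so make sure each folder actually contains images before training.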

STEP 3 - Importing the ReTrained Model and Setting Everything Up

Finally, I've put everything together in the "label_image.py" file. Now run the "label.py" program by typing the following in CMD/Terminal:

 python label.py

It'll open a new OpenCV window and then identify your facial expression. We are done now!
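Under the hood, the retrained graph emits one score per line of retrained_labels.txt, and the text shown on the OpenCV window is simply the top-scoring label. A minimal sketch of that final mapping (top_label is a hypothetical helper, not taken from label.py):

```python
def top_label(scores, labels):
    """Pair each softmax score with its label and return the best match."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return labels[best], scores[best]

# Sketch of the display step with labels loaded from retrained_labels.txt:
#   label, confidence = top_label(predictions[0], labels)
#   cv2.putText(frame, label, (10, 30),
#               cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
```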

Contributing Guidelines

Thank you for considering contributing to the "Facial-Expression-Detection" project!

Code of Conduct

Before you start contributing, please read and adhere to our Code of Conduct. We expect all contributors to follow these rules and maintain a respectful and inclusive environment for everyone.

Getting Started

1. Fork the Repository: To contribute, fork the main repository to your GitHub account.

2. Clone the Repository: Clone your forked repository to your local machine:

git clone https://github.com/your-username/Facial-Expression-Detection.git

3. Set Up Development Environment: Install the necessary dependencies if you haven't already. You can do this by running the following commands:

pip install tensorflow
pip install opencv-python

4. Create a Branch: Create a new branch for your contribution. Choose a descriptive name for the branch that reflects the nature of your contribution.

git checkout -b feature/your-feature-name

5. Make Your Changes: Make the necessary changes and additions in your branch.

6. Commit Your Changes: Write clear, concise, and well-documented commit messages. Reference any relevant issues or pull requests in your commits.

git commit -m "Add new feature"

7. Push Your Changes: Push your branch to your GitHub repository:

git push origin feature/your-feature-name

8. Create a Pull Request: Create a pull request from your forked repository to the main repository.

PLEASE DO STAR THIS REPO IF YOU FOUND SOMETHING INTERESTING. <3

facial-expression-detection's People

Contributors

bhargav-44, dalvishruti14, debghs, durgesh4993, mauryaritesh, rahulk4102, smty2018


facial-expression-detection's Issues

Add license

The repository currently does not have a license file.
I would like to work on this.

face_crop.py is not doing anything

Ramendra Pathak

Is there any specific kind of image (size or grayscale) I need to download, or will any image work?
When I run face_crop.py, it does nothing, with no error in the console.

I have just downloaded normal images from Google by searching "happy person face images hd".

Please revert ASAP.

This is the console output:

Python 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 18:50:55) [MSC v.1915 64 bit (AMD64)]
Type "copyright", "credits" or "license" for more information.

IPython 7.2.0 -- An enhanced Interactive Python.

runfile('D:/python/my code/face_crop.py', wdir='D:/python/my code')
0
1
2
3
4

OpenCV

How do I install OpenCV in Spyder?

Trained file

Hello,
Could you please share or commit your retrained_graph.pb file?
I do not want to train the model again; I can use your already trained model and assume it is OK.

Another option:
Could you zip and share your images folder? It would save a lot of time downloading each image from the Internet.

Thanks.

Add comments to label_image.py

Currently, there are very few comments in label_image.py. I want to add comments that explain its functionality for better understanding of the code.

Error in tokenize.py

buffer = _builtin_open(filename, 'rb')
I am getting an error on this line. I was working in the PyCharm IDE.

I also can't install the tensorflow module, and I get a main module error.

Error while executing terminal command

I get an error while executing this command in the command prompt:
python retrain.py --output_graph=retrained_graph.pb --output_labels=retrained_labels.txt --architecture=MobileNet_1.0_224 --image_dir=images
Error: File "retrain.py", line 234, in get_image_path
mod_index = index % len(category_list)
ZeroDivisionError: integer division or modulo by zero

Research approach concerning emotions of people with PIMD using physiological parameters and facial expressions

Dear Maurya,

For a research project, I want to take a deeper focus on the combination of physiological parameters and facial expression to analyse the emotional expression of people with profound intellectual and multiple disabilities (PIMD).
During my search, I came across your ideas and codes and, maybe, they are suitable to my approach.

Short explanation of the target group:
First, each person with profound intellectual and multiple disabilities is very individual concerning his/her competencies and impairments. However, there are some characteristics that apply to a large number of affected persons:

  • profound intellectual disability (IQ < 20) combined with other disabilities (e.g., motor impairment, sensorial disabilities (hearing or visual impairment))
  • communication: usually no verbal language
  • usually no understanding of symbols
  • maybe no use of common behaviour signals (e.g., different showing of facial expression in comparison to people without disabilities) -> for example, “smiling” is not always a signal for happiness

So, the problem is that this target group cannot tell us directly how they feel. Therefore, I created the following plan, and maybe you can tell me if this is possible (with your software):

  1. I want to trigger special emotional situations for the person with disabilities based on the information of her/his parents and caregivers.
  2. These situations should be recorded with a focus on the face.
  3. Afterwards, the personal facial expression of an emotion can be extracted -> several pictures of several situations that show the same facial expression, which stands for one emotion. The same procedure applies for other emotions.
  4. The last step includes a field trial, in which these emotions should be recognisable using machine processing/software in daily life.

Do you think that I can train your software with the pictures that I get from the special emotional situations and use this trained software to recognize the specific facial expression in a totally new recording?
So, is it possible that the software can detect the shown facial expression (which will stand in the final analysis for an emotion) in a video?
Moreover, is it possible to get some further details (like keypoints etc.) of the shown facial expression to use them in a further analysis?

To sum up, an answer would be really helpful.
Thanks!

You can also contact me directly: [email protected]

Best regards

Not able to retrain a locally stored model

I have a trained model stored in my local storage. My model can recognise a person. I want to retrain it to detect the expressions of that person. How can I use your code to do so? Please help!

error

I'm getting an error when I run the "label.py" code: it says no module found "retrained_graph.pb".
How can I solve that error? Can you please help me out with that?
Thank you

error

Traceback (most recent call last):
File "C:/Users/Angshuman Bardhan/Desktop/exp.py", line 1, in
import matplotlib.pylab as plt
ModuleNotFoundError: No module named 'matplotlib'

retrain.py ?

Will running retrain.py twice merge into the existing .pb file, or will it delete and recreate the file?

Error

I am getting this problem:

runfile('C:/Users/Angshuman Bardhan/.spyder-py3/temp.py', wdir='C:/Users/Angshuman Bardhan/.spyder-py3')
Traceback (most recent call last):

File "", line 1, in
runfile('C:/Users/Angshuman Bardhan/.spyder-py3/temp.py', wdir='C:/Users/Angshuman Bardhan/.spyder-py3')

File "C:\Users\Angshuman Bardhan\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 866, in runfile
execfile(filename, namespace)

File "C:\Users\Angshuman Bardhan\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)

File "C:/Users/Angshuman Bardhan/.spyder-py3/temp.py", line 1, in
import cv2

File "C:\Users\Angshuman Bardhan\Anaconda3\lib\site-packages\cv2\__init__.py", line 89, in
bootstrap()

File "C:\Users\Angshuman Bardhan\Anaconda3\lib\site-packages\cv2\__init__.py", line 79, in bootstrap
import cv2

ImportError: DLL load failed: The specified module could not be found.
