justinshenk / fer

Facial Expression Recognition with a deep neural network as a PyPI package

License: MIT License

fer2013 emotion-detection emotion-recognition python facial-expression-recognition tensorflow

fer's Introduction

FER

Facial expression recognition.



INSTALLATION

FER currently supports Python 3.6 and above. It can be installed with pip:

$ pip install fer

This implementation requires OpenCV >= 3.2 and TensorFlow >= 1.7.0 installed on the system, with Python 3 bindings.

They can be installed through pip (if pip version >= 9.0.1):

$ pip install "tensorflow>=1.7" opencv-contrib-python==3.3.0.9

or compiled directly from source (OpenCV 3, TensorFlow).

Note that tensorflow-gpu can be used instead if a GPU device is available on the system, which will speed up inference. It can be installed with pip:

$ pip install "tensorflow-gpu>=1.7.0"

To analyze videos that include sound, the ffmpeg and moviepy packages must be installed with pip:

$ pip install ffmpeg moviepy 

USAGE

The following example illustrates the ease of use of this package:

from fer import FER
import cv2

img = cv2.imread("justin.jpg")
detector = FER()
detector.detect_emotions(img)

Sample output:

[{'box': [277, 90, 48, 63], 'emotions': {'angry': 0.02, 'disgust': 0.0, 'fear': 0.05, 'happy': 0.16, 'neutral': 0.09, 'sad': 0.27, 'surprise': 0.41}}]

Pretty-print it with import pprint; pprint.pprint(result).

Just want the top emotion? Try:

emotion, score = detector.top_emotion(img) # 'happy', 0.99

MTCNN Facial Recognition

By default, faces are detected using OpenCV's Haar Cascade classifier. To use the more accurate MTCNN network, add the parameter:

detector = FER(mtcnn=True)

Video

For recognizing facial expressions in video, the Video class splits the video into frames. It can use a local Keras model (the default) or the Peltarion API as the backend:

from fer import Video
from fer import FER

video_filename = "tests/woman2.mp4"
video = Video(video_filename)

# Analyze video, displaying the output
detector = FER(mtcnn=True)
raw_data = video.analyze(detector, display=True)
df = video.to_pandas(raw_data)

The detector returns a list of JSON objects. Each JSON object contains two keys: 'box' and 'emotions':

  • The bounding box is formatted as [x, y, width, height] under the key 'box'.
  • The emotions are formatted into a JSON object with the keys 'angry', 'disgust', 'fear', 'happy', 'sad', 'surprise', and 'neutral'.
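
For illustration, a minimal sketch (reusing img and detector from the usage example above) that unpacks these keys from the output:

result = detector.detect_emotions(img)
for face in result:
    x, y, w, h = face["box"]  # top-left corner plus width and height
    for emotion, score in face["emotions"].items():
        print(f"({x}, {y}, {w}x{h}) {emotion}: {score:.2f}")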

Other good examples of usage can be found in demo.py, located in the root of this repository.

To run the examples, install click for the command line with pip install click, then run python demo.py [image|video|webcam] --help.

TF-SERVING

FER supports running against an online TensorFlow Serving Docker image.

To use: Run docker-compose up and initialize FER with FER(..., tfserving=True).

MODEL

FER bundles a Keras model.

The model is a convolutional neural network with weights saved to an HDF5 file in the data folder relative to the module's path. It can be overridden by passing a different model to the FER() constructor via the emotion_model parameter.
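
For example, a hedged sketch of overriding the model (the weights path here is hypothetical; per the constructor, emotion_model is loaded internally with Keras):

from fer import FER

# "my_emotion_model.hdf5" is a hypothetical path to your own trained weights.
detector = FER(emotion_model="my_emotion_model.hdf5")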

LICENSE

MIT License.

CREDIT

This code includes methods and package structure copied or derived from Iván de Paz Centeno's implementation of MTCNN and Octavio Arriaga's facial expression recognition repo.

REFERENCE

FER 2013 dataset curated by Pierre Luc Carrier and Aaron Courville, described in:

"Challenges in Representation Learning: A report on three machine learning contests," by Ian J. Goodfellow, Dumitru Erhan, Pierre Luc Carrier, Aaron Courville, Mehdi Mirza, Ben Hamner, Will Cukierski, Yichuan Tang, David Thaler, Dong-Hyun Lee, Yingbo Zhou, Chetan Ramaiah, Fangxiang Feng, Ruifan Li, Xiaojie Wang, Dimitris Athanasakis, John Shawe-Taylor, Maxim Milakov, John Park, Radu Ionescu, Marius Popescu, Cristian Grozea, James Bergstra, Jingjing Xie, Lukasz Romaszko, Bing Xu, Zhang Chuang, and Yoshua Bengio, arXiv:1307.0414.

fer's People

Contributors

ahardreset, gshubham533, habeebrahmankt, julia-imlauer, justinshenk, oarriaga, owlwasrowk, tekyaygilfethi, tharun-anand

fer's Issues

IndexError: list index out of range

line 302, in top_emotion
top_emotion = top_emotions[0]
IndexError: list index out of range

I am using the top_emotion and detect_emotions functions, but on some of the videos I get this error. Is there anything I can do to solve this?
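
One defensive workaround (a sketch, not an official fix) is to check detect_emotions() first and skip frames with no detected face:

import cv2
from fer import FER

detector = FER()
frame = cv2.imread("frame.jpg")  # hypothetical frame extracted from the video
if detector.detect_emotions(frame):
    emotion, score = detector.top_emotion(frame)
else:
    emotion, score = None, None  # no face in this frame; skip it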

Didn't you share the trained models?

Hello, I'm glad to see your code, but when I execute python video-example.py, the probability predicted for each expression is almost the same; there is no obvious difference. In the results shown below, it seems that the probability of 'happy' should be greater, but it isn't:

[screenshot: test79]

So I suspect the published model is not well trained. Could you publish your trained model?

Analysis for specific time interval of the video

Hi,

is it possible at the moment to run the analysis only for a specific segment of the video? For instance, analyse the video only from second 3.5 to second 7.8. If it is possible, could you let me know how to do that?
thanks
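
Not answering for the maintainer, but one hedged workaround is to bypass the Video class and seek with OpenCV directly, running the detector only on frames in the 3.5 s to 7.8 s window:

import cv2
from fer import FER

detector = FER()
cap = cv2.VideoCapture("tests/woman2.mp4")
cap.set(cv2.CAP_PROP_POS_MSEC, 3500)           # jump to second 3.5
while cap.get(cv2.CAP_PROP_POS_MSEC) <= 7800:  # stop after second 7.8
    ok, frame = cap.read()
    if not ok:
        break
    print(detector.detect_emotions(frame))
cap.release()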

Having trouble importing the library

Hi,

I'm having some trouble while trying to use the library. I've installed it with pip install fer. I get the following error when trying the example script:
ImportError: cannot import name 'FER' from partially initialized module 'fer' (most likely due to a circular import) (C:\Users\mxmdu\Desktop\Programmation\Python\FaceRecognizer\fer.py)

The script

from fer import FER
import cv2

img = cv2.imread("justin.jpg")
detector = FER()
detector.detect_emotions(img)

Python version : 3.9.7

Image with many faces : top_emotion

Hello
When the image has multiple faces with different emotions, detector.top_emotion(image) gives only the top emotion of the first face.
How can we get the top emotion for every face?
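
There is no built-in call for this as far as I know, but a short sketch (reusing detector and image) that derives the top emotion for every face from detect_emotions():

tops = [
    (face["box"], max(face["emotions"], key=face["emotions"].get))
    for face in detector.detect_emotions(image)
]
print(tops)  # one (box, top_emotion) pair per detected face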

Circular import

Hey,

I'm just trying to get your lib working but am facing some circular dependency issues...

ImportError: cannot import name 'FER' from partially initialized module 'fer' (most likely due to a circular import) (\FaceRecognition\fer.py)

Code is as follows:

from fer import FER
import cv2

img = cv2.imread("justin.jpg")
detector = FER()
detector.detect_emotions(img)

Python version (running under windows) is

python --version
Python 3.8.5

Requirements look good, I think:

Requirement already satisfied: fer in c:\python\lib\site-packages (20.1.1)
Requirement already satisfied: pandas in c:\python\lib\site-packages (from fer) (1.2.0)
Requirement already satisfied: mtcnn>=0.1.0 in c:\python\lib\site-packages (from fer) (0.1.0)
Requirement already satisfied: keras in c:\python\lib\site-packages (from fer) (2.4.3)
Requirement already satisfied: tensorflow~=2.0 in c:\python\lib\site-packages (from fer) (2.3.0)
Requirement already satisfied: opencv-contrib-python in c:\python\lib\site-packages (from fer) (4.5.1.48)
Requirement already satisfied: requests in c:\python\lib\site-packages (from fer) (2.24.0)
Requirement already satisfied: matplotlib in c:\python\lib\site-packages (from fer) (3.3.1)
Requirement already satisfied: opencv-python>=4.1.0 in c:\python\lib\site-packages (from mtcnn>=0.1.0->fer) (4.5.1.48)
Requirement already satisfied: pyyaml in c:\python\lib\site-packages (from keras->fer) (5.3.1)
Requirement already satisfied: scipy>=0.14 in c:\python\lib\site-packages (from keras->fer) (1.4.1)
Requirement already satisfied: h5py in c:\python\lib\site-packages (from keras->fer) (2.10.0)
Requirement already satisfied: google-pasta>=0.1.8 in c:\python\lib\site-packages (from tensorflow~=2.0->fer) (0.2.0)
Requirement already satisfied: protobuf>=3.9.2 in c:\python\lib\site-packages (from tensorflow~=2.0->fer) (3.13.0)
Requirement already satisfied: grpcio>=1.8.6 in c:\python\lib\site-packages (from tensorflow~=2.0->fer) (1.32.0)
Requirement already satisfied: six>=1.12.0 in c:\python\lib\site-packages (from tensorflow~=2.0->fer) (1.15.0)
Requirement already satisfied: wheel>=0.26 in c:\python\lib\site-packages (from tensorflow~=2.0->fer) (0.35.1)
Requirement already satisfied: keras-preprocessing<1.2,>=1.1.1 in c:\python\lib\site-packages (from tensorflow~=2.0->fer) (1.1.2)
Requirement already satisfied: tensorboard<3,>=2.3.0 in c:\python\lib\site-packages (from tensorflow~=2.0->fer) (2.4.0)
Requirement already satisfied: wrapt>=1.11.1 in c:\python\lib\site-packages (from tensorflow~=2.0->fer) (1.12.1)
Requirement already satisfied: opt-einsum>=2.3.2 in c:\python\lib\site-packages (from tensorflow~=2.0->fer) (3.3.0)
Requirement already satisfied: termcolor>=1.1.0 in c:\python\lib\site-packages (from tensorflow~=2.0->fer) (1.1.0)
Requirement already satisfied: tensorflow-estimator<2.4.0,>=2.3.0 in c:\python\lib\site-packages (from tensorflow~=2.0->fer) (2.3.0)
Requirement already satisfied: gast==0.3.3 in c:\python\lib\site-packages (from tensorflow~=2.0->fer) (0.3.3)
Requirement already satisfied: absl-py>=0.7.0 in c:\python\lib\site-packages (from tensorflow~=2.0->fer) (0.10.0)
Requirement already satisfied: setuptools in c:\python\lib\site-packages (from protobuf>=3.9.2->tensorflow~=2.0->fer) (47.1.0)
Requirement already satisfied: markdown>=2.6.8 in c:\python\lib\site-packages (from tensorboard<3,>=2.3.0->tensorflow~=2.0->fer) (3.2.2)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in c:\python\lib\site-packages (from tensorboard<3,>=2.3.0->tensorflow~=2.0->fer) (1.7.0)
Requirement already satisfied: werkzeug>=0.11.15 in c:\python\lib\site-packages (from tensorboard<3,>=2.3.0->tensorflow~=2.0->fer) (1.0.1)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in c:\python\lib\site-packages (from tensorboard<3,>=2.3.0->tensorflow~=2.0->fer) (0.4.1)
Requirement already satisfied: google-auth<2,>=1.6.3 in c:\python\lib\site-packages (from tensorboard<3,>=2.3.0->tensorflow~=2.0->fer) (1.21.1)
Requirement already satisfied: pyasn1-modules>=0.2.1 in c:\python\lib\site-packages (from google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow~=2.0->fer) (0.2.8)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in c:\python\lib\site-packages (from google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow~=2.0->fer) (4.1.1)
Requirement already satisfied: rsa<5,>=3.1.4 in c:\python\lib\site-packages (from google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow~=2.0->fer) (4.6)
Requirement already satisfied: requests-oauthlib>=0.7.0 in c:\python\lib\site-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<3,>=2.3.0->tensorflow~=2.0->fer) (1.3.0)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in c:\python\lib\site-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow~=2.0->fer) (0.4.8)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in c:\python\lib\site-packages (from requests->fer) (1.25.10)
Requirement already satisfied: idna<3,>=2.5 in c:\python\lib\site-packages (from requests->fer) (2.10)
Requirement already satisfied: certifi>=2017.4.17 in c:\python\lib\site-packages (from requests->fer) (2020.6.20)
Requirement already satisfied: chardet<4,>=3.0.2 in c:\python\lib\site-packages (from requests->fer) (3.0.4)
Requirement already satisfied: oauthlib>=3.0.0 in c:\python\lib\site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<3,>=2.3.0->tensorflow~=2.0->fer) (3.1.0)
Requirement already satisfied: pillow>=6.2.0 in c:\python\lib\site-packages (from matplotlib->fer) (7.2.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.3 in c:\python\lib\site-packages (from matplotlib->fer) (2.4.7)
Requirement already satisfied: kiwisolver>=1.0.1 in c:\python\lib\site-packages (from matplotlib->fer) (1.2.0)
Requirement already satisfied: python-dateutil>=2.1 in c:\python\lib\site-packages (from matplotlib->fer) (2.8.1)
Requirement already satisfied: cycler>=0.10 in c:\python\lib\site-packages (from matplotlib->fer) (0.10.0)
Requirement already satisfied: pytz>=2017.3 in c:\python\lib\site-packages (from pandas->fer) (2020.5)

video cropping

I've encountered issues with using the detection_box argument in the Video.analyze() function. With a little digging, I've found that self._crop(frame, detection_box) results in the following error:

{TypeError}_crop() missing 1 required positional argument: 'detection_box'

This problem stems from the combination of a @staticmethod decorator and "self" as an input argument to the _crop() function, and can be resolved by removing the "self" input argument from the _crop function definition.


    @staticmethod
    def _crop(self, frame, detection_box):

For the time being I've gotten around this with a monkey patch:

from fer import Video
Video._crop = lambda self, frame, _detection_box: frame[
                                                  _detection_box.get("y_min"): _detection_box.get("y_max"),
                                                  _detection_box.get("x_min"): _detection_box.get("x_max")]

'Model' object has no attribute 'make_predict_function'

I'm getting this error: 'Model' object has no attribute 'make_predict_function'. I took a look at the fer.py file in the repo: line 117 has self.__emotion_classifier.make_predict_function(), but there doesn't seem to be a make_predict_function() anywhere, so I think that is causing the problem. Has anyone else run into this?

TensorFlow 2.0 has no attribute 'ConfigProto'; fix proposed

Nice project! Thanks for putting it together!

I found a breaking issue today, with a simple fix. The versioning strings ask for TensorFlow >=1.14, but there's some internal code that relies on a TF configuration object that was changed in TF2.0. A workaround is to change the tensorflow requirement strings in setup.py and requirements.txt from "tensorflow>=1.14", to "tensorflow>=1.14,<2.0",

I did a pip install fer in a new virtualenv and then started following the usage instructions. Here's a session dump:

In [1]: import tensorflow    
In [2]: tensorflow.__version__  
Out[2]: '2.0.0'
In [3]: from fer import FER   
In [4]: detector = FER()      
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-2-a308cfd708c0> in <module>
----> 1 detector = FER()

~/td_venv/lib/python3.7/site-packages/fer/fer.py in __init__(self, cascade_file, mtcnn, emotion_model, scale_factor, min_face_size, min_neighbors, offsets, compile)
    110                 "fer", "data/emotion_model.hdf5"
    111             )
--> 112             self.config = tf.ConfigProto(log_device_placement=False)
    113             self.config.gpu_options.allow_growth = True
    114 

AttributeError: module 'tensorflow' has no attribute 'ConfigProto'

Here's a workaround for TF1/TF2 compatibility that I think would also resolve things.

I'll send a PR later on if I can get something working smoothly. Cheers!

Unexpected error during expression prediction

Hello, I hit another unexpected bug when testing your code. I executed python example.py and got the following error:

Traceback (most recent call last):
  File "/home/zh/sda1/人脸识别/微表情分析/fer/example.py", line 11, in <module>
    result = detector.detect_emotions(image)
  File "/home/zh/sda1/人脸识别/微表情分析/fer/src/fer/fer.py", line 250, in detect_emotions
    x1 = np.clip(x1, a_min=0)
TypeError: clip() missing 1 required positional argument: 'a_max'

My test picture is:
[test image: 恐惧2]

I think the fix for this bug is to modify the detect_emotions function of fer.py.

def detect_emotions(self, img: np.ndarray) -> list:
    if img is None or not hasattr(img, "shape"):
        raise InvalidImage("Image not valid.")

    emotion_labels = self._get_labels()
    face_rectangles = self.find_faces(img, bgr=True)
    gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    emotions = []
    for face_coordinates in face_rectangles:
        face_coordinates = self.tosquare(face_coordinates)
        x1, x2, y1, y2 = self.__apply_offsets(face_coordinates)

        if y1 < 0 or x1 < 0:
            gray_img = self.pad(gray_img)
            x1 += 40
            x2 += 40
            y1 += 40
            y2 += 40
            # a_max=None supplies the missing argument without capping the value
            x1 = np.clip(x1, a_min=0, a_max=None)
            y1 = np.clip(y1, a_min=0, a_max=None)

Do you agree?

Another problem: when I set mtcnn=True, I sometimes encounter a situation where no face is detected:

Traceback (most recent call last):
  File "/home/zh/sda1/人脸识别/微表情分析/fer/example.py", line 14, in <module>
    bounding_box = result[0]["box"]
IndexError: list index out of range

So is mtcnn not accurate enough, and therefore not recommended?
The test image I am using is:
[test image: 成龙1]

In addition, I found another problem: if there are multiple faces in my picture, mtcnn=True detects all of them, while mtcnn=False detects only one face.

Logger progress bar bug.


If frequency is set to 2 (or any integer), the total frame count provided to tqdm is the full length of the video; instead, it should be divided by the frequency (length/frequency).

Padding only applied to first bounding box if needed

From scanning the code I can detect a possible error that could result in subtle mispredictions.
When a bounding box overlaps the top or left edge of the input image (why not the bottom and right?), the image is padded with 40 pixels at https://github.com/justinshenk/fer/blob/1810e3e895bc10853872689c88961f11c19f6a00/src/fer/fer.py#L219 and the corresponding bounding box is adjusted accordingly. Unfortunately, the padding is permanent, but the adjustment is only applied to this one bounding box, so all following bounding boxes end up out of alignment with the padded image. Also, if another bounding box overlaps the former top or left edge, the image is padded again.
A solution would be to do an in-place adjustment to all bounding boxes when the padding happens.
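
To illustrate the suggestion, a simplified sketch of the idea (not fer's actual internals):

import numpy as np

PAD = 40

def pad_once_and_shift(gray_img, boxes):
    # Proposed fix: pad the image a single time, then shift every
    # (x1, x2, y1, y2) box by the same offset so they all stay aligned.
    if any(x1 < 0 or y1 < 0 for (x1, x2, y1, y2) in boxes):
        gray_img = np.pad(gray_img, PAD, mode="constant")
        boxes = [(x1 + PAD, x2 + PAD, y1 + PAD, y2 + PAD)
                 for (x1, x2, y1, y2) in boxes]
    return gray_img, boxes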

FER not working with GPU

Hi,
I was able to use FER on CPU, but cannot make it work on GPU.
Based on this link (https://www.tensorflow.org/install/source#tested_build_configurations) I have checked my configuration, and it looks fine and supported:
tensorflow 2.4.0 / python 3.8.10 / cuda 11.0.3 / cudnn 8.0.5
(I have tried other setups as well, but the results were even worse...)

When I try to run example.py, the GPU device is detected and the CUDA libraries open successfully, but after that I get the following errors:
2021-08-27 11:12:16.722491: E tensorflow/stream_executor/cuda/cuda_blas.cc:226] failed to create cublas handle: CUBLAS_STATUS_NOT_INITIALIZED
2021-08-27 11:12:16.722689: W tensorflow/core/framework/op_kernel.cc:1763] OP_REQUIRES failed at conv_ops.cc:1106 : Not found: No algorithm worked!
Traceback (most recent call last):
File "test.py", line 11, in
result = detector.detect_emotions(image)
File "/usr/local/lib/python3.8/dist-packages/fer/fer.py", line 225, in detect_emotions
face_rectangles = self.find_faces(img, bgr=True)
File "/usr/local/lib/python3.8/dist-packages/fer/fer.py", line 182, in find_faces
results = self._mtcnn.detect_faces(img)
File "/usr/local/lib/python3.8/dist-packages/mtcnn/mtcnn.py", line 300, in detect_faces
result = stage(img, result[0], result[1])
File "/usr/local/lib/python3.8/dist-packages/mtcnn/mtcnn.py", line 342, in __stage1
out = self._pnet.predict(img_y)
File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/training_v1.py", line 982, in predict
return func.predict(
File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/training_arrays_v1.py", line 706, in predict
return predict_loop(
File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/training_arrays_v1.py", line 384, in model_iteration
batch_outs = f(ins_batch)
File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/backend.py", line 3956, in call
fetched = self._callable_fn(*array_vals,
File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/client/session.py", line 1480, in call
ret = tf_session.TF_SessionRunCallable(self._session._session,
tensorflow.python.framework.errors_impl.NotFoundError: 2 root error(s) found.
(0) Not found: No algorithm worked!
[[{{node conv2d/Conv2D}}]]
(1) Not found: No algorithm worked!
[[{{node conv2d/Conv2D}}]]
[[conv2d_4/BiasAdd/_783]]
0 successful operations.
0 derived errors ignored.

Any suggestions on what I should do?
(By the way, there is no difference whether I install tensorflow==2.4.0 or tensorflow-gpu==2.4.0...)

Thanks!

I can't instantiate FER object

I'm trying to use fer with OpenCV, but these errors occur again and again:

This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
WARNING:py.warnings:C:\Users\IMORE\AppData\Local\Programs\Python\Python312\Lib\site-packages\keras\src\optimizers\base_optimizer.py:34: UserWarning: Argument decay is no longer supported and will be ignored.
warnings.warn(

Traceback (most recent call last):
File "C:\Users\IMORE\Documents\pythonAIs\test2.py", line 11, in
detector = FER()
^^^^^
File "C:\Users\IMORE\AppData\Local\Programs\Python\Python312\Lib\site-packages\fer\fer.py", line 104, in init
self._initialize_model()
File "C:\Users\IMORE\AppData\Local\Programs\Python\Python312\Lib\site-packages\fer\fer.py", line 115, in _initialize_model
self._emotion_classifier = load_model(emotion_model, compile=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\IMORE\AppData\Local\Programs\Python\Python312\Lib\site-packages\keras\src\saving\saving_api.py", line 183, in load_model
return legacy_h5_format.load_model_from_hdf5(filepath)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\IMORE\AppData\Local\Programs\Python\Python312\Lib\site-packages\keras\src\legacy\saving\legacy_h5_format.py", line 155, in load_model_from_hdf5
**saving_utils.compile_args_from_training_config(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\IMORE\AppData\Local\Programs\Python\Python312\Lib\site-packages\keras\src\legacy\saving\saving_utils.py", line 133, in compile_args_from_training_config
optimizer = optimizers.deserialize(optimizer_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\IMORE\AppData\Local\Programs\Python\Python312\Lib\site-packages\keras\src\optimizers_init
.py", line 65, in deserialize
return serialization_lib.deserialize_keras_object(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\IMORE\AppData\Local\Programs\Python\Python312\Lib\site-packages\keras\src\saving\serialization_lib.py", line 576, in deserialize_keras_object
return deserialize_keras_object(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\IMORE\AppData\Local\Programs\Python\Python312\Lib\site-packages\keras\src\saving\serialization_lib.py", line 711, in deserialize_keras_object
instance = cls.from_config(inner_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\IMORE\AppData\Local\Programs\Python\Python312\Lib\site-packages\keras\src\optimizers\base_optimizer.py", line 809, in from_config
return cls(**config)
^^^^^^^^^^^^^
File "C:\Users\IMORE\AppData\Local\Programs\Python\Python312\Lib\site-packages\keras\src\optimizers\adam.py", line 60, in init
super().init(
File "C:\Users\IMORE\AppData\Local\Programs\Python\Python312\Lib\site-packages\keras\src\backend\tensorflow\optimizer.py", line 22, in init
super().init(*args, **kwargs)
File "C:\Users\IMORE\AppData\Local\Programs\Python\Python312\Lib\site-packages\keras\src\optimizers\base_optimizer.py", line 38, in init
raise ValueError(f"Argument(s) not recognized: {kwargs}")
ValueError: Argument(s) not recognized: {'lr': 0.00010000000474974513}

Import Error on running detector = FER(mtcnn=True) in docker

Error

detector = FER(mtcnn=True)
2021-05-29T05:52:59.658861+00:00 app[web.1]: File "/usr/local/lib/python3.7/site-packages/fer/fer.py", line 98, in __init__
2021-05-29T05:52:59.658861+00:00 app[web.1]: from mtcnn.mtcnn import MTCNN
2021-05-29T05:52:59.658862+00:00 app[web.1]: File "/usr/local/lib/python3.7/site-packages/mtcnn/__init__.py", line 26, in <module>
2021-05-29T05:52:59.658862+00:00 app[web.1]: from mtcnn.mtcnn import MTCNN
2021-05-29T05:52:59.658863+00:00 app[web.1]: File "/usr/local/lib/python3.7/site-packages/mtcnn/mtcnn.py", line 37, in <module>
2021-05-29T05:52:59.658863+00:00 app[web.1]: from mtcnn.network.factory import NetworkFactory
2021-05-29T05:52:59.658863+00:00 app[web.1]: File "/usr/local/lib/python3.7/site-packages/mtcnn/network/factory.py", line 26, in <module>
2021-05-29T05:52:59.658864+00:00 app[web.1]: from keras.layers import Input, Dense, Conv2D, MaxPooling2D, PReLU, Flatten, Softmax
2021-05-29T05:52:59.658864+00:00 app[web.1]: File "/usr/local/lib/python3.7/site-packages/keras/__init__.py", line 20, in <module>
2021-05-29T05:52:59.658865+00:00 app[web.1]: from . import initializers
2021-05-29T05:52:59.658865+00:00 app[web.1]: File "/usr/local/lib/python3.7/site-packages/keras/initializers/__init__.py", line 124, in <module>
2021-05-29T05:52:59.658865+00:00 app[web.1]: populate_deserializable_objects()
2021-05-29T05:52:59.658873+00:00 app[web.1]: File "/usr/local/lib/python3.7/site-packages/keras/initializers/__init__.py", line 112, in populate_deserializable_objects
2021-05-29T05:52:59.658873+00:00 app[web.1]: LOCAL.ALL_OBJECTS[generic_utils.to_snake_case(key)] = value
2021-05-29T05:52:59.658874+00:00 app[web.1]: AttributeError: module 'keras.utils.generic_utils' has no attribute 'to_snake_case'

My Docker image's requirements.txt

fastapi[all]==0.60.1
requests==2.25.0
redis==3.0.1
boto3
environs
pydantic==1.8.1
tensorflow==2.5.0
mtcnn==0.1.0
Keras==2.4.3
keras-nightly==2.5.0.dev2021032900
fer==20.1.3
opencv-contrib-python==4.1.2.30
opencv-python==4.1.2.30
tqdm==4.45.0

My Dockerfile

FROM python:3.7-slim

LABEL description="Test"

ENV LC_ALL C.UTF-8
ENV LANG C.UTF-8

ARG ENVIRONMENT
ENV ENVIRONMENT=${ENVIRONMENT}

RUN mkdir -p /usr/share/man/man1 \
    && apt-get update \
    && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
    libgtk2.0-dev\
    libglib2.0-0\
    ffmpeg\
    libsm6\
    libxext6\
    git\
    wget\
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt /tmp/
RUN pip3 install -r /tmp/requirements.txt && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* /root/.cache/*

COPY . /srv/testing
WORKDIR /srv/testing

# set environment variable
ENV PYTHONDONTWRITEBYTECODE 1

EXPOSE 8000
CMD ["bash", "run.sh"]

Doesn't use GPU

I'm running the package on Google Colab and the GPU is detected and available; however, the FER package doesn't seem to use it. I've run it before on the same setup and it used the GPU, but now it doesn't. Has anyone had the same problem?

Can I install fer on Ubuntu?

After installing with pip install fer on Ubuntu, when my code tries to import it, I get a "no module found" error. Please help.

Error running on multiple images in a row - workaround with TF session context manager

Hi there

Wonderful library - I really enjoy the api design - makes it really easy (and fun) to use. Good work 👍

But I did run into one (minor) issue:

It doesn't work if running on more than one discrete image in a row.

The first image analyzed with FER().detect_emotions(img) always works fine, but running the function again with a different image results in a tensorflow error (paraphrased, I lost the original): "Tensorflow Tensor is not element of this graph".

I'm not sure exactly what went wrong (I'm not too familiar with tensorflow), but wrapping the call in a with tf.Session(): context manager resolved the issue.

I hope you'll look into fixing this in the library, as it was really enjoyable to use (apart from this haunting bug).
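
For anyone hitting the same thing, a sketch of the workaround described above (file names are hypothetical; TF 1.x API):

import cv2
import tensorflow as tf  # TF 1.x; under TF 2.x this would be tf.compat.v1.Session
from fer import FER

detector = FER()
for path in ["img1.jpg", "img2.jpg"]:
    img = cv2.imread(path)
    with tf.Session():  # give each detect_emotions() call its own session
        print(detector.detect_emotions(img))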

Batch processing

My GPU utilisation is very low when processing a video file. I need to process the entire video with batch processing, but I don't know how to detect emotions that way.

Json format of the output

Justin,

applying fer to an image, the result I obtain does not seem to comply with the JSON standard.
This is an example:

[OrderedDict([('box', (316, -3, 516, 516)), ('emotions', {'angry': 0.88, 'disgust': 0.0, 'fear': 0.06, 'happy': 0.0, 'sad': 0.03, 'surprise': 0.0, 'neutral': 0.03})])]

Using the json.dumps function to pretty-print the above result, I get an error message.
Any idea?
Thanks.
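
Not a confirmed diagnosis, but the scores may be NumPy scalars, which the standard json encoder rejects; a sketch of a defensive converter (assuming result is the list shown above):

import json

def to_builtin(value):
    # json.dumps rejects NumPy scalars; fall back to .item() when available.
    return value.item() if hasattr(value, "item") else str(value)

print(json.dumps(result, default=to_builtin, indent=2))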

ValueError: Tensor Tensor("conv2d_4/BiasAdd:0", shape=(?, ?, ?, 4), dtype=float32) is not an element of this graph.

Hi Sir,
First of all, thank you for this great package.

I am using this package in Google Cloud Functions with tensorflow==2.4.1. When I use FER with mtcnn=True, I get this error:

File "/workspace/system/face_emotion_model_api.py", line 66, in face_emotion_model
result = FER_MODEL.detect_emotions(image)
File "/layers/google.python.pip/pip/lib/python3.7/site-packages/fer/fer.py", line 225, in detect_emotions
face_rectangles = self.find_faces(img, bgr=True)
File "/layers/google.python.pip/pip/lib/python3.7/site-packages/fer/fer.py", line 182, in find_faces
results = self._mtcnn.detect_faces(img)
ValueError: Tensor Tensor("conv2d_4/BiasAdd:0", shape=(?, ?, ?, 4), dtype=float32) is not an element of this graph.

Could you please help me?

Colab broken?

If running on a video, I never get any detections (empty zip, empty video).

when running the basic example:

from fer import FER
import cv2

img = cv2.imread("justin.jpg")
detector = FER()
detector.detect_emotions(img)

crashes with:

WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/compat/v2_compat.py:96: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/layers/normalization.py:534: _colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.

20-09-2021:10:21:50,660 WARNING  [deprecation.py:336] From /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/layers/normalization.py:534: _colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:2424: UserWarning: `Model.state_updates` will be removed in a future version. This property should not be used in TensorFlow 2.0, as `updates` are applied automatically.
  warnings.warn('`Model.state_updates` will be removed in a future version. '

---------------------------------------------------------------------------

UnknownError                              Traceback (most recent call last)

<ipython-input-2-1c74acac5d76> in <module>()
      4 img = cv2.imread("justin.jpg")
      5 detector = FER()
----> 6 detector.detect_emotions(img)

5 frames

/usr/local/lib/python3.7/dist-packages/fer/fer.py in detect_emotions(self, img, face_rectangles)
    254             gray_face = np.expand_dims(gray_face, -1)
    255 
--> 256             emotion_prediction = self.__emotion_classifier.predict(gray_face)[0]
    257             labelled_emotions = {
    258                 emotion_labels[idx]: round(float(score), 2)

/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training_v1.py in predict(self, x, batch_size, verbose, steps, callbacks, max_queue_size, workers, use_multiprocessing)
    995         max_queue_size=max_queue_size,
    996         workers=workers,
--> 997         use_multiprocessing=use_multiprocessing)
    998 
    999   def reset_metrics(self):

/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training_arrays_v1.py in predict(self, model, x, batch_size, verbose, steps, callbacks, **kwargs)
    707         verbose=verbose,
    708         steps=steps,
--> 709         callbacks=callbacks)

/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training_arrays_v1.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs)
    378 
    379         # Get outputs.
--> 380         batch_outs = f(ins_batch)
    381         if not isinstance(batch_outs, list):
    382           batch_outs = [batch_outs]

/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/backend.py in __call__(self, inputs)
   4053 
   4054     fetched = self._callable_fn(*array_vals,
-> 4055                                 run_metadata=self.run_metadata)
   4056     self._call_fetch_callbacks(fetched[-len(self._fetches):])
   4057     output_structure = nest.pack_sequence_as(

/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py in __call__(self, *args, **kwargs)
   1480         ret = tf_session.TF_SessionRunCallable(self._session._session,
   1481                                                self._handle, args,
-> 1482                                                run_metadata_ptr)
   1483         if run_metadata:
   1484           proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

UnknownError: 2 root error(s) found.
  (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
	 [[{{node conv2d_1/Conv2D}}]]
  (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
	 [[{{node conv2d_1/Conv2D}}]]
	 [[predictions/Softmax/_485]]
0 successful operations.
0 derived errors ignored.

The repo says TF 1.7, but the Colab specifies 2.x? Switching to 1.x results in the same error.

Video Capture Not Opening error on Docker

Hey,

Great package. Thank you for developing this!

I'm trying to get this package up and running in a docker image (Ubuntu 18.04), and everything seems to go smoothly until I try and run the video example. When I run it I get:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.6/dist-packages/fer/classes.py", line 176, in analyze
    assert self.cap.open(self.filepath), "Video capture not opening"
AssertionError: Video capture not opening

I'm assuming this is because Docker can't open an X window by default, so I changed display=True to display=False, but I still get the error. The image example works fine. Outside of Docker, I can run the video example OK, and can even run:

raw_data = video.analyze(detector, display=False, output="pandas", annotate_frames=False, save_frames=False)

Without it opening an additional window. Any ideas on a work-around?

Thanks

PS - I noticed when I built my docker image that you are missing requests from requirements.txt, and some of the code seems to only be compatible with python >= 3.6, not 3.4. Just a heads up!

Feature: Analyse only the part of an image/video

I'm currently trying to analyse Let's Play videos. It would be very useful if I could provide an additional parameter for the video analysis, something like a detection box, to ensure that I always analyse only the overlay with the streamer's face. I see two possible workarounds here:

  1. Perform the video analysis for full images and only return the emotions, where the face box is inside a given box
  2. Perform the video analysis only for the given box to reduce unnecessary calculations.

My current workaround for option 2 is the following snippet:

DETECTION_BOX = {"x_min": 0, "x_max": 150, "y_min": 100, "y_max": 275}
def analyse_emotions(self, detection_box, frequency=None, detector=None):
       #...
        for fno in range(0, total_frames, frequency):
            self.cap.set(cv2.CAP_PROP_POS_FRAMES, fno)
            _, img = self.cap.read()
            detections = self._get_emotions_image(image=img, detection_box=detection_box)
       #...
def _get_emotions_image(self, image, detection_box):
        crop_img = image[
                   detection_box.get("y_min"): detection_box.get("y_max"),
                   detection_box.get("x_min"): detection_box.get("x_max")]
        emotions = self.detector.detect_emotions(crop_img)
        for emotion in emotions:
            original_box = emotion.get("box")
            emotion["box"] = (
                original_box[0] + detection_box.get("x_min"), original_box[1] + detection_box.get("y_min"),
                original_box[2], original_box[3])
        return emotions

Issue depending on mtcnn or simple Cascade Classifier

Hi,

I tried to run the simple justin.jpg example and got issues with the detector:

If I try:

detector = FER(mtcnn=True)

I get:

  File "[]/opt/miniconda3/envs/fer/lib/python3.6/site-packages/keras/initializers/__init__.py", line 49, in populate_deserializable_objects
    LOCAL.GENERATED_WITH_V2 = tf.__internal__.tf2.enabled()
AttributeError: module 'tensorflow.compat.v2.__internal__' has no attribute 'tf2'

If I try:

detector = FER(mtcnn=False)

I get:

  File "[]/opt/miniconda3/envs/fer/lib/python3.6/site-packages/fer/fer.py", line 168, in find_faces
    if isinstance(self.__face_detector, cv2.CascadeClassifier):
TypeError: isinstance() arg 2 must be a type or tuple of types

Using miniconda 4.10
and these packages:

Package                 Version
----------------------- -------------------
absl-py                 0.13.0
astor                   0.8.1
astunparse              1.6.3
bleach                  1.5.0
cached-property         1.5.2
cachetools              4.2.2
certifi                 2021.5.30
chardet                 4.0.0
cycler                  0.10.0
dataclasses             0.8
fer                     20.1.3
flatbuffers             1.12
gast                    0.3.3
google-auth             1.32.1
google-auth-oauthlib    0.4.4
google-pasta            0.2.0
grpcio                  1.32.0
h5py                    2.10.0
html5lib                0.9999999
idna                    2.10
importlib-metadata      4.6.1
Keras                   2.4.3
Keras-Applications      1.0.8
keras-nightly           2.5.0.dev2021032900
Keras-Preprocessing     1.1.2
kiwisolver              1.3.1
Markdown                3.3.4
matplotlib              3.3.4
mtcnn                   0.1.0
numpy                   1.19.5
oauthlib                3.1.1
opencv-contrib-python   3.3.0.9
opencv-python           4.5.2.54
opt-einsum              3.3.0
pandas                  1.1.5
Pillow                  8.3.0
pip                     21.1.3
protobuf                3.17.3
pyasn1                  0.4.8
pyasn1-modules          0.2.8
pyparsing               2.4.7
python-dateutil         2.8.1
pytz                    2021.1
PyYAML                  5.4.1
requests                2.25.1
requests-oauthlib       1.3.0
rsa                     4.7.2
scipy                   1.5.4
setuptools              52.0.0.post20210125
six                     1.15.0
tensorboard             2.5.0
tensorboard-data-server 0.6.1
tensorboard-plugin-wit  1.8.0
tensorflow              2.4.1
tensorflow-estimator    2.4.0
termcolor               1.1.0
typing-extensions       3.7.4.3
urllib3                 1.26.6
Werkzeug                2.0.1
wheel                   0.36.2
wrapt                   1.12.1
zipp                    3.5.0

Thanks in advance

Face selection heuristic

Hi! Awesome project! I have been using it for the past days and I wonder something: what do you think is the best heuristic to select a face, when multiple are detected but only one is in the picture?

Basically, when running through a video, FER sometimes detects "blobs" as faces, which in reality are not. I was wondering if you have any criteria to pick the right one. I saw that in the method top_emotion you just take the first one; is that always the case?

GPU usage of emotion classification

Hi Justin,

Great project, it really is saving my life! I have a question about GPU usage (it's not really an issue, so sorry if this is not the right way to ask):

I'm using the detect_emotions() function, already providing the bounding boxes of the faces, which come from previous data. I'm wondering, does the emotion classification part of the function use GPU acceleration? I've checked the source code and it's not clear to me, since it uses TensorFlow, which I know can support the GPU directly in recent versions. I know that face detection using mtcnn does use GPU acceleration, but I'm avoiding that part of the function by providing the bounding boxes.

In case it does use GPU acceleration to classify the emotions, I really don't think it provides any noteworthy improvements, since it's a very fast process and cannot be done in batches, am I right?

Thanks again!

unable to import FER from fer

While running the 'from fer import FER' command,
I get the error: "imageio.ffmpeg.download() has been deprecated. Use 'pip install imageio-ffmpeg' instead."

Accuracy about the model

Hi justin,

Great work! I wonder, do you have the accuracy results from when you trained the model? Thanks.

Two issues - one with mtcnn one with face_rectangles

Hello.

Thanks for your lib. I'm currently trying to use fer to detect emotions in webcam video. I've installed fer using pip install fer, but I've hit two issues:

  1. When I'm initializing your detector as
    detector = FER(mtcnn=True)
    I'm getting
    AttributeError: module 'keras.utils.generic_utils' has no attribute 'to_snake_case'
    (tensorflow 2.5.0, keras 2.4.3)

  2. Since I wanted to use your model only as an emotion detector, not a face detector, I wanted to test detector.detect_emotions with the face_rectangles parameter, so the code looks like this:

video_capture = cv2.VideoCapture(0)
detector = FER()
while True:
    _, frame = video_capture.read() 
    box = detector.find_faces(frame)
    emotions  = detector.detect_emotions(frame, face_rectangles=box)

So I'm trying to use fer's find_faces to test how it performs for emotion detection alone. Unfortunately, on the line
emotions = detector.detect_emotions(frame, face_rectangles=box)
I'm getting
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
on the line
if not face_rectangles:
in the file fer.py.

My understanding is that this if should instead be
if (face_rectangles is None):
With this if, it works fine. Could you please change this line in your project so other people who installed fer with pip can use it with their own face detector?

trouble with Video() with webcam

Hi @JustinShenk, I am trying to use fer with a webcam, but I get an error when passing a webcam instead of a video file to Video():

vid = cv2.VideoCapture(0)

while True:
    ret, frame = vid.read()
    face_detector = FER(mtcnn=True)
    input_video = Video(vid)
    processing_data = input_video.analyze(face_detector, display=False)

The following error occurs at input_video = Video(vid):

TypeError: stat: path should be string, bytes, os.PathLike or integer, not VideoCapture
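
Since Video expects a file path, a hedged alternative for webcams is to skip Video entirely and run the detector on each captured frame:

import cv2
from fer import FER

detector = FER(mtcnn=True)  # build the detector once, outside the loop
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    print(detector.detect_emotions(frame))
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to stop
        break
cap.release()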

Error when trying to run the detect_emotions method

When I try to run this code, I get an exception:

from fer import FER
import cv2 as cv

img = cv.imread(temp_file_name)
detector = FER()
detector.detect_emotions(img=img)
Traceback (most recent call last):
  File "c:/Dev/Python/vocacional-server/photo/emotion.py", line 17, in <module>
    detector.detect_emotions(img=img)
  File "C:\Users\lucas.postingher\AppData\Local\Programs\Python\Python36\lib\site-packages\fer\fer.py", line 236, in detect_emotions
    face_rectangles = self.find_faces(img, bgr=True)
  File "C:\Users\lucas.postingher\AppData\Local\Programs\Python\Python36\lib\site-packages\fer\fer.py", line 180, in find_faces
    if isinstance(self.__face_detector, cv2.CascadeClassifier):
TypeError: isinstance() arg 2 must be a type or tuple of types

Issue while using audio=True parameter in analyze function

Testing a video file that has audio in it.

I have followed the pre-requisite of installing:
!pip install ffmpeg moviepy

Tried comparing the output using two approaches:

  1. include_audio = True
     [screenshots: Screenshot_1, Screenshot_2]
  2. include_audio = False
     [screenshots: Screenshot_4, Screenshot_3]

Could you confirm if the observed differences in emotion detection results are expected when comparing video files with and without audio? Shouldn't the presence or absence of audio have no impact on the percentage of emotions detected?

Thanks,
Ashwini

Improve Processing time per frame in video while testing

Firstly, amazing work by the contributors.

I have installed the fer library on google colab (through pip).
I wanted to know if there is a way to improve the processing time per frame; my aim is to reduce the processing time while testing, say, 4 videos at once.

I have already tried multithreading and multiprocessing; neither seems to reduce the processing time. I understand that your model sees every frame of the video that is sent, but is there a way to run it on more than one video in parallel so as to reduce the overall execution time? One hedged pattern is sketched below.
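
A sketch of that pattern (untested here; assumes each worker process can hold its own TensorFlow state): process-level parallelism across whole videos rather than frames.

from multiprocessing import Pool

from fer import FER, Video

def analyze_one(path):
    # Each worker builds its own detector so TF sessions aren't shared.
    detector = FER()
    video = Video(path)
    return video.analyze(detector, display=False)

if __name__ == "__main__":
    paths = ["a.mp4", "b.mp4", "c.mp4", "d.mp4"]  # hypothetical files
    with Pool(processes=4) as pool:
        results = pool.map(analyze_one, paths)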
