
gazetracking's People

Contributors

antoinelame, balezz, dependabot[bot], kesharis, ruvt, vinesmsuic

gazetracking's Issues

How does the calibration method find the best threshold for the eye frame?

Nice job! However, I'd like to know how the calibration method works for finding the best threshold. When I use a near-infrared camera to detect the iris, the contour of the iris becomes the contour of the whole eye region, which leads to a wrong iris location when I compute the iris centre of mass.
Is there any suggestion for this?
Many thanks!
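
Not the author, but a common auto-calibration pattern, and roughly what this one appears to do, is a threshold sweep: binarize the eye crop at a range of candidate thresholds and keep the one whose dark (iris) area is closest to a target fraction of the eye surface. A minimal sketch under that assumption (the 0.48 target and the 5-100 sweep are illustrative values, not necessarily the library's):

    import cv2

    def iris_fraction(eye_frame_gray, threshold):
        """Fraction of the eye crop that stays dark (iris-like) after binarization."""
        _, binarized = cv2.threshold(eye_frame_gray, threshold, 255, cv2.THRESH_BINARY)
        n_pixels = binarized.shape[0] * binarized.shape[1]
        n_dark = n_pixels - cv2.countNonZero(binarized)
        return n_dark / n_pixels

    def best_threshold(eye_frame_gray, target=0.48):
        """Pick the threshold whose iris fraction is closest to the target."""
        trials = {t: iris_fraction(eye_frame_gray, t) for t in range(5, 100, 5)}
        return min(trials.items(), key=lambda kv: abs(kv[1] - target))[0]

With a near-infrared camera the whole eye region can fall below the usual thresholds, which would explain the iris contour swallowing the eye; lowering the target fraction or cropping the eye region more tightly are the first things I would try.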

OpenCV Error

frame=cv2.cvtColor(self.frame,cv2.COLOR_BGR2GRAY)
cv2.error: OpenCV(3.4.5) /io/opencv/modules/imgproc/src/color.cpp:181: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'
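
That assertion means the frame handed to cvtColor is empty, which usually happens when cap.read() fails (webcam not opened, wrong device index, or the end of a video file). A minimal guard, assuming the usual example.py capture loop:

    import cv2

    cap = cv2.VideoCapture(0)  # or a video file path
    while True:
        ret, frame = cap.read()
        if not ret or frame is None:
            break  # nothing captured; stop instead of passing an empty frame downstream
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)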

ZeroDivisionError: float division by zero in detect_iris

@antoinelame Thanks for your awesome work! But when I try example.py, the log gives me this error:

Traceback (most recent call last):
  File "example.py", line 18, in <module>
    gaze.refresh(frame)
  File "C:\Users\Red\.spyder\dolist\pyimage\GazeTracking\gaze_tracking\gaze_tracking.py", line 64, in refresh
    self._analyze()
  File "C:\Users\Red\.spyder\dolist\pyimage\GazeTracking\gaze_tracking\gaze_tracking.py", line 51, in _analyze
    self.eye_right = Eye(frame, landmarks, 1, self.calibration)
  File "C:\Users\Red\.spyder\dolist\pyimage\GazeTracking\gaze_tracking\eye.py", line 22, in __init__
    self._analyze(original_frame, landmarks, side, calibration)
  File "C:\Users\Red\.spyder\dolist\pyimage\GazeTracking\gaze_tracking\eye.py", line 112, in _analyze
    self.pupil = Pupil(self.frame, threshold)
  File "C:\Users\Red\.spyder\dolist\pyimage\GazeTracking\gaze_tracking\pupil.py", line 17, in __init__
    self.detect_iris(eye_frame)
  File "C:\Users\Red\.spyder\dolist\pyimage\GazeTracking\gaze_tracking\pupil.py", line 51, in detect_iris
    self.x = int(moments['m10'] / moments['m00'])
ZeroDivisionError: float division by zero

I'm on OpenCV 3.1.0. By the way, eye.py line 86 may have the same issue, but I can't reproduce it, so that log is lost.
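
The division by zero means the selected contour has zero area, so moments['m00'] is 0; this can happen when the thresholded eye frame contains no usable blob, for example during a blink or with a bad threshold. A small defensive helper (not the author's fix) that detect_iris could use instead of dividing directly:

    import cv2

    def safe_centroid(contour):
        """Return the contour centroid as (x, y), or None when the contour has zero area."""
        moments = cv2.moments(contour)
        if moments['m00'] == 0:
            return None  # degenerate contour: nothing reliable to report for this frame
        return int(moments['m10'] / moments['m00']), int(moments['m01'] / moments['m00'])

When it returns None, the pupil coordinates can simply be left undetected for that frame.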

Large error margin

Hello @antoinelame

What do you suggest to reduce the margin of error, especially for the top and bottom positions? I implemented them in example.py and removed the blinking function, but the margin of error is still very large.

iris_size() can also suggest eye blink

Thanks for sharing your awesome work, appreciate it!

This is not a bug but a suggestion. We could also detect whether the eye blinked or not from the percentage of the eye surface covered by the iris, as reported by iris_size().
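
To make the suggestion concrete, here is a rough sketch of the idea, assuming iris_size() returns the fraction of the eye surface covered by the detected iris; the 0.1 cutoff is an arbitrary illustration, not a tuned value:

    def looks_like_blink(iris_fraction, cutoff=0.1):
        """Treat a frame as a blink when almost no iris area is visible in the eye region."""
        return iris_fraction is not None and iris_fraction < cutoff

Requiring a few consecutive "blink" frames before reporting a blink would also help avoid flicker.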

LeftPupil and Right pupil None

Hello,

Thanks for initiating this effort.

I have a stationary video taken from my phone in which I am not moving. I changed the code to read from a file; the rest of the code remains the same as example.py.

I see the video being played, but the left and right pupil coordinates are None. Does this have to do with ambient light?

I do see that the video is clear. Please see a sample image.
(Attached screenshot: Screen Shot 2020-10-12 at 9 24 41 AM)

What am I doing wrong?

S

import cv2
from gaze_tracking import GazeTracking

gaze = GazeTracking()
cap = cv2.VideoCapture('/Users/shekartippur/playground/tflite/myvideo.mp4')

while True:
    ret, frame = cap.read()
    if not ret:
        # Stop when the video ends or a frame cannot be read
        break

    # We send this frame to GazeTracking to analyze it
    gaze.refresh(frame)

    frame = gaze.annotated_frame()
    text = ""

    if gaze.is_blinking():
        text = "Blinking"
    elif gaze.is_right():
        text = "Looking right"
    elif gaze.is_left():
        text = "Looking left"
    elif gaze.is_center():
        text = "Looking center"

    cv2.putText(frame, text, (90, 60), cv2.FONT_HERSHEY_DUPLEX, 1.6, (147, 58, 31), 2)

    left_pupil = gaze.pupil_left_coords()
    right_pupil = gaze.pupil_right_coords()
    cv2.putText(frame, "Left pupil:  " + str(left_pupil), (90, 130), cv2.FONT_HERSHEY_DUPLEX, 0.9, (147, 58, 31), 1)
    cv2.putText(frame, "Right pupil: " + str(right_pupil), (90, 165), cv2.FONT_HERSHEY_DUPLEX, 0.9, (147, 58, 31), 1)

    cv2.imshow("Demo", frame)

    if cv2.waitKey(1) == 27:
        break

cap.release()
cv2.destroyAllWindows()
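
One thing worth checking before blaming the lighting: phone videos are often recorded at high resolution and stored with a rotation flag, and the dlib face detector can miss faces under those conditions, which leaves both pupils at None. A hedged preprocessing sketch to try before gaze.refresh() (the 640-pixel width and the optional rotation are guesses to experiment with, not known requirements):

    import cv2

    def preprocess(frame, target_width=640, rotate=None):
        """Downscale (and optionally rotate) a frame before passing it to GazeTracking."""
        if rotate is not None:  # e.g. cv2.ROTATE_90_CLOCKWISE if the video plays sideways
            frame = cv2.rotate(frame, rotate)
        h, w = frame.shape[:2]
        scale = target_width / float(w)
        return cv2.resize(frame, (target_width, int(h * scale)))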

Run Error

I didn't modify anything in the code and I get this error.
My dlib version is 19.9, but the package requirements ask for 19.16.0. Is that important?

Traceback (most recent call last):
  File "C:/Users/XXXXX/PycharmProjects/untitled2/example.py", line 9, in <module>
    gaze = GazeTracking()
  File "C:\Users\XXXXX\PycharmProjects\untitled2\gaze_tracking\gaze_tracking.py", line 28, in __init__
    self._predictor = dlib.shape_predictor(model_path)
RuntimeError: Error deserializing object of type int

Process finished with exit code 1

Import Error for both Python 2 and 3

c4rr0t@carrot ~/C/e/GazeTracking (master) [1]> python3 example.py
Traceback (most recent call last):
  File "example.py", line 7, in <module>
    from gaze_tracking import GazeTracking
  File "/home/c4rr0t/CS Development/eyemouse/GazeTracking/gaze_tracking/__init__.py", line 1, in <module>
    from .gaze_tracking import GazeTracking
  File "/home/c4rr0t/CS Development/eyemouse/GazeTracking/gaze_tracking/gaze_tracking.py", line 4, in <module>
    import dlib
ImportError: /home/c4rr0t/.local/lib/python3.8/site-packages/dlib.cpython-38-x86_64-linux-gnu.so: undefined symbol: cblas_dtrsm
c4rr0t@carrot ~/C/e/GazeTracking (master) [1]> python2 example.py
Traceback (most recent call last):
  File "example.py", line 7, in <module>
    from gaze_tracking import GazeTracking
  File "/home/c4rr0t/CS Development/eyemouse/GazeTracking/gaze_tracking/__init__.py", line 1, in <module>
    from .gaze_tracking import GazeTracking
  File "/home/c4rr0t/CS Development/eyemouse/GazeTracking/gaze_tracking/gaze_tracking.py", line 4, in <module>
    import dlib
ImportError: /home/c4rr0t/.local/lib/python2.7/site-packages/dlib.so: undefined symbol: cblas_dtrsm

If any other info is needed, let me know. I'm using Arch Linux.

Unable to detect pupil

@antoinelame
I have run this code on Python 3.7 and 2.6, but it does not detect the pupil in either version, and it gives no error in the console.
Please see the attached photo for reference.
Can you please help me?

(Attached screenshot: untitled)

Import Error

ImportError: /home/c4rr0t/.local/lib/python2.7/site-packages/dlib.so: undefined symbol: cblas_dtrsm

I'm using Arch, not sure why it's undefined.

qt plugin macOS

I get this error:
qt.qpa.plugin: Could not find the Qt platform plugin "cocoa" in ""
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

I tried every solution that came up when I searched for it and couldn't fix it.
Can anyone help?

Contours sorting and mouse movement

First of all, your work runs very well. I'm new to this and trying to understand the code.

What does the [-2] mean in moments = cv2.moments(contours[-2])?

I also want to ask how to map the coordinates from the webcam onto the PC screen. Thanks.
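
Not the author, but the contours appear to be sorted by area just before that line, so contours[-1] is the largest blob (often the whole thresholded eye region) and contours[-2] is the second largest, which is usually the iris. A standalone sketch of the same idea:

    import cv2

    def iris_centroid(binary_eye_frame):
        """Iris centre as the centroid of the second-largest contour in a thresholded eye crop."""
        contours, _ = cv2.findContours(binary_eye_frame, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)[-2:]
        contours = sorted(contours, key=cv2.contourArea)  # largest blob last
        if len(contours) < 2:
            return None  # not enough blobs to separate the eye region from the iris
        moments = cv2.moments(contours[-2])  # second largest: usually the iris, not the whole eye
        if moments['m00'] == 0:
            return None
        return int(moments['m10'] / moments['m00']), int(moments['m01'] / moments['m00'])

As for mapping webcam coordinates to the PC screen: there is no built-in mapping, so people usually add a calibration step where the user looks at a few known screen points and a transform is fitted from the measured gaze ratios to screen positions.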

unable to run example.py

Hello, I am trying to run your algorithm but got this error. Please take a look. I am using Windows with Python 3.5. Can you give me a detailed view of this? @antoinelame

Traceback (most recent call last):
  File "example.py", line 9, in <module>
    gaze = GazeTracking()
  File "C:\Users\Sravani\Desktop\GazeTracking-master\gaze_tracking\gaze_tracking.py", line 16, in __init__
    self.eyes = EyesDetector()
  File "C:\Users\Sravani\Desktop\GazeTracking-master\gaze_tracking\eyes.py", line 26, in __init__
    self._predictor = dlib.shape_predictor(model_path)
RuntimeError: Error deserializing object of type int
[ WARN:0] terminating async callback

Method to track Gaze

Hey Antoine,
Very good solution there. Could you please explain on what method/basis you track the gaze from the iris centre(s)? Thank you.
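
Not the author, but from the code the direction appears to come from the pupil centre normalized by the size of the eye region: a horizontal ratio near 0 means the pupil sits at one end of the eye, near 1 the other end, and thresholds turn that into left/centre/right. A rough illustration (the 0.35/0.65 cutoffs are illustrative assumptions, not necessarily the library's exact values):

    def classify_gaze(horizontal_ratio, low=0.35, high=0.65):
        """Map a normalized pupil position (0.0 = one side of the eye, 1.0 = the other) to a label."""
        if horizontal_ratio is None:
            return "unknown"
        if horizontal_ratio <= low:
            return "right"  # which side counts as right depends on whether the frame is mirrored
        if horizontal_ratio >= high:
            return "left"
        return "center"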

Calibrate

Hi antoinelame,
I'm running the program without errors, but do the laptop screen and webcam (Logitech C920) need to be calibrated? Are the given coordinates relative to the laptop screen or to the webcam frame? Thanks.

Automatic Calibration

Hi @antoinelame

I would like to know how to disable the automatic calibration and use threshold values that I supply. What should I do for this? I am having trouble with the automatic calibration. I saw that @Blackhorse1988 made this change and was able to find a threshold value that met his need. I'm a little lost on how to do it.
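
Not the author, but one way to experiment is to pick a threshold by hand and binarize the eye crop with it, bypassing the calibrated value. A minimal sketch under that assumption (the value 30 is made up; sweep it for your own camera and lighting):

    import cv2

    FIXED_THRESHOLD = 30  # made-up starting value; tune it manually

    def binarize_eye(eye_frame_gray, threshold=FIXED_THRESHOLD):
        """Binarize a grayscale eye crop with a hand-picked threshold instead of the calibrated one."""
        _, binary = cv2.threshold(eye_frame_gray, threshold, 255, cv2.THRESH_BINARY)
        return binary

Inside the library, the equivalent experiment would be to hard-code the threshold that gets passed to Pupil(self.frame, threshold) in eye.py (the call visible in the tracebacks elsewhere on this page) instead of taking it from the calibration object.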

When I move my head partly towards the left edge of the frame, an error occurs as follows. What is happening? Has anyone met this problem?

myPATH\python.exe example.py
[ WARN:0] terminating async callback
Traceback (most recent call last):
  File "example.py", line 17, in <module>
    gaze.refresh(frame)
  File "GazeTracking-master\gaze_tracking\gaze_tracking.py", line 63, in refresh
    self._analyze()
  File "GazeTracking-master\gaze_tracking\gaze_tracking.py", line 49, in _analyze
    self.eye_left = Eye(frame, landmarks, 0, self.calibration)
  File "GazeTracking-master\gaze_tracking\eye.py", line 22, in __init__
    self._analyze(original_frame, landmarks, side, calibration)
  File "GazeTracking-master\gaze_tracking\eye.py", line 117, in _analyze
    self.pupil = Pupil(self.frame, threshold)
  File "GazeTracking-master\gaze_tracking\pupil.py", line 17, in __init__
    self.detect_iris(eye_frame)
  File "GazeTracking-master\gaze_tracking\pupil.py", line 45, in detect_iris
    self.iris_frame = self.image_processing(eye_frame, self.threshold)
  File "GazeTracking-master\gaze_tracking\pupil.py", line 33, in image_processing
    new_frame = cv2.erode(new_frame, kernel)
cv2.error: OpenCV(3.4.5) opencv-python\opencv\modules\core\src\matrix.cpp:757: error: (-215:Assertion failed) dims <= 2 && step[0] > 0 in function 'cv::Mat::locateROI'
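
The crash is consistent with the cropped eye region becoming empty once part of the face leaves the frame, so erode receives a degenerate matrix. A defensive sketch, not the author's fix, that skips processing when the crop has no pixels (the filter and threshold steps mirror the calls visible in the tracebacks on this page):

    import cv2
    import numpy as np

    def binarize_eye_safely(eye_frame, threshold):
        """Binarize an eye crop, skipping frames where the crop is empty."""
        if eye_frame is None or eye_frame.size == 0:
            return None  # eye landmarks fell outside the image; report "no pupil" instead of crashing
        kernel = np.ones((3, 3), np.uint8)
        new_frame = cv2.bilateralFilter(eye_frame, 10, 15, 15)
        new_frame = cv2.erode(new_frame, kernel)
        return cv2.threshold(new_frame, threshold, 255, cv2.THRESH_BINARY)[1]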

I get the error "RuntimeError: Error deserializing object of type int"

Traceback (most recent call last):
  File "C:/Users/Administrator/Desktop/GazeTracking-master/example.py", line 9, in <module>
    gaze = GazeTracking()
  File "C:\Users\Administrator\Desktop\GazeTracking-master\gaze_tracking\gaze_tracking.py", line 28, in __init__
    self._predictor = dlib.shape_predictor(model_path)
RuntimeError: Error deserializing object of type int

Vertical Ratio exceeds 1.0

According to the README, the return value of gaze.vertical_ratio() is supposed to be bounded between 0 (top) and 1 (bottom).

However during my testing I receive return values up to around 1.6.

At first glance at the source I couldn't determine why this would be. I haven't investigated further yet.
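
I don't know the cause either, but if downstream code needs a strictly bounded value, clamping is a cheap workaround while the underlying ratio is investigated (this sketch assumes vertical_ratio() returns None when no pupil was located):

    def clamped_vertical_ratio(gaze):
        """Clamp gaze.vertical_ratio() into [0, 1]; pass through None when no pupil was located."""
        ratio = gaze.vertical_ratio()
        if ratio is None:
            return None
        return max(0.0, min(1.0, ratio))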

Question: how to draw the line of gaze?

Hi sir,
I can run the example, and I am porting it to a C++ version with dlib/OpenCV.
Everything works well, but I want to draw a line to indicate the gaze.
Could you show me how to draw the gaze line?
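
One simple approach (sketched in Python, since that is what the repo uses; translating it to C++ is mechanical): take the pupil coordinates from the library and draw a segment from the pupil in the direction implied by the horizontal and vertical ratios. The 100-pixel length and the 0.5 centring are arbitrary illustration choices:

    import cv2

    def draw_gaze_line(frame, pupil_xy, horizontal_ratio, vertical_ratio, length=100):
        """Draw a segment from the pupil centre in the direction suggested by the gaze ratios."""
        if pupil_xy is None or horizontal_ratio is None or vertical_ratio is None:
            return frame  # no pupil detected in this frame
        x, y = pupil_xy
        # Both ratios sit near 0.5 when looking straight ahead; the offset from 0.5 gives a direction.
        dx = int((0.5 - horizontal_ratio) * 2 * length)
        dy = int((vertical_ratio - 0.5) * 2 * length)
        # Flip the signs of dx/dy if the line points the wrong way for your setup (mirrored frames, etc.).
        cv2.line(frame, (x, y), (x + dx, y + dy), (0, 255, 0), 2)
        return frame

After gaze.refresh(frame), it can be called with gaze.pupil_left_coords() (or the right one), gaze.horizontal_ratio() and gaze.vertical_ratio().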

What kind of webcam are you using?

I am currently trying to track the gaze (using the horizontal_ratio() method) with your program, but the results are not up to my expectations: it very often says "Looking left" and only seldom "Looking right". I have changed the number of frames used for calibration and tweaked the pupil size, but the results are roughly the same. I wonder whether this is related to the webcam or to the lighting conditions. I am currently using the webcam shipped with my ASUS X456U, on the Trisquel 8.0 Linux distro.

Output when using v4l2-ctl --list-devices:

USB2.0 VGA UVC WebCam (usb-0000:00:14.0-6):
	/dev/video0

Thank you.

Iris detection

First of all, thanks for sharing the project on your GitHub. I am doing a project related to eye gaze: a survey of the state of the art of available methods, testing the accuracy of iris detection on the BioID dataset. The metrics used are AEC, BEC and WEC, well described in this paper: https://arxiv.org/abs/1907.04325 . Those metrics are based on computing the EC-IC vector, where EC stands for eye corner and IC for iris centre. I would like to know whether I can use the iris contour given in pupil.py to do that.

Thanks!

How to use it to detect pupils in pictures

Hello, I have used your code to detect the pupil direction in a picture, but I can only detect the left and right directions, not up and down. The pointer for the left eye is also slightly off. Should I modify the cutoff values used with self.horizontal_ratio()?

C++ Version

I'm planning to port this library to C++

Does anyone know if there is already a C++ port of this library?

Thank you.

error in dlib

Hi antoinelame, your project is really super cool and I love it. Inspired by it, I am trying to build something similar, but I am getting the error "module dlib has no 'get_frontal_face_detector' member". Please help me resolve this; I haven't been able to solve it myself. I hope you can reply as soon as possible!

What is the algorithm based on, or what is the name of your paper?

Hello,
These days I am working on a project about gaze tracking, and I saw your code on GitHub; it is fast and precise. I know you were about to publish a paper, and I want to ask whether you did. If so, what is the title of the paper? If not, could you point me to the papers your code makes use of, so that I can cite them?

problems to run

Hello, how can I run the test? Sorry, I'm a beginner!
Thanks for everything!

ValueError: not enough values to unpack (expected 3, got 2)

I'm trying to implement your code and I came across this error. I pass in a picture and want to check whether the detected iris is correct or not.

import cv2
from gaze_tracking import GazeTracking

image = cv2.imread('data/kimjisoo/3.jpg')

gaze = GazeTracking()
gaze.refresh(image)

new_frame = gaze.annotated_frame()
cv2.imwrite('data/output/0.png', new_frame)

if gaze.is_right():
    print("Looking right")
elif gaze.is_left():
    print("Looking left")
elif gaze.is_center():
    print("Looking center")

And the error in the terminal:

  File "main.py", line 63, in <module>
    gaze.refresh(image)
  File "/home/tienhv/Project/LivelyFace/gaze_tracking/gaze_tracking.py", line 64, in refresh
    self._analyze()
  File "/home/tienhv/Project/LivelyFace/gaze_tracking/gaze_tracking.py", line 50, in _analyze
    self.eye_left = Eye(frame, landmarks, 0, self.calibration)
  File "/home/tienhv/Project/LivelyFace/gaze_tracking/eye.py", line 22, in __init__
    self._analyze(original_frame, landmarks, side, calibration)
  File "/home/tienhv/Project/LivelyFace/gaze_tracking/eye.py", line 117, in _analyze
    self.pupil = Pupil(self.frame, threshold)
  File "/home/tienhv/Project/LivelyFace/gaze_tracking/pupil.py", line 17, in __init__
    self.detect_iris(eye_frame)
  File "/home/tienhv/Project/LivelyFace/gaze_tracking/pupil.py", line 46, in detect_iris
    _, contours, _ = cv2.findContours(self.iris_frame, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
ValueError: not enough values to unpack (expected 3, got 2)
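
This unpacking error is the well-known OpenCV API change: cv2.findContours returns three values (image, contours, hierarchy) on OpenCV 3.x but only two (contours, hierarchy) on 4.x, and the line in pupil.py assumes the 3.x form. A version-agnostic sketch:

    import cv2

    def find_contours_compat(binary_image):
        """cv2.findContours returns 3 values on OpenCV 3.x and 2 on 4.x; this works on both."""
        result = cv2.findContours(binary_image, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
        return result[-2]  # the contours are always the second-to-last return value

Replacing the unpacking line with contours = find_contours_compat(self.iris_frame), or simply taking the second-to-last element of the returned tuple, avoids pinning the OpenCV version.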

Opencv bilateral filter error

Hello, I am trying to run your algorithm but got this error. Please take a look. I am using Ubuntu 16.04 with Python 3.5.2.

Traceback (most recent call last):
  File "example.py", line 12, in <module>
    gaze.refresh()
  File "/home/rohit/Desktop/drowsiness/GazeTracking/gaze_tracking/gaze_tracking.py", line 24, in refresh
    self.pupil_left.process(self.eyes.frame_left)
  File "/home/rohit/Desktop/drowsiness/GazeTracking/gaze_tracking/pupil.py", line 36, in process
    self.modified_frame = self.image_processing(frame)
  File "/home/rohit/Desktop/drowsiness/GazeTracking/gaze_tracking/pupil.py", line 28, in image_processing
    new_frame = cv2.bilateralFilter(eye_frame, 10, 15, 15)
cv2.error: OpenCV(3.4.5) /io/opencv/modules/imgproc/src/bilateral_filter.cpp:642: error: (-215:Assertion failed) (src.type() == CV_8UC1 || src.type() == CV_8UC3) && src.data != dst.data in function 'bilateralFilter_8u'

Any help will be appreciated.

detect multi gaze & up/down

Hello,
Many thanks for sharing this project!
Is it possible to use this code to detect the gaze of multiple faces in the same frame? What would we need to adjust in your code?
Also, when looking up or down it detects looking left and blinking, respectively. Any way to adjust that?

Thanks ,

Question: why doesn't it show "right"?

Hi @antoinelame
I would like to know why it doesn't show "right" when my eyes look to the right.
I didn't change the program.
I hope you can help me answer this.
Thank you.

is_right() never returns true

After trying to rewrite this myself and testing a clone, I haven't been able to understand why is_right() never returns True when looking to the right; it defaults to None. From my understanding, it may not be detecting the eyes once I look right.
Might it be the bilateralFilter call in pupil.py?
Not entirely sure; if anyone has solved this, any direction would be greatly appreciated!

Import Error

I keep getting this error:
ImportError: attempted relative import with no known parent package

How do I fix this?

Pip install

I got the following error when running pip install -r requirements.txt:

Command "/usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-RdOiWs/dlib/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-7ke0IL-record/install-record.txt --single-version-externally-managed --compile --user --prefix=" failed with error code 1 in /tmp/pip-build-RdOiWs/dlib/

I was able to fix it by running pip install CMake, so I'd suggest including this in requirements.txt.

Multiple faces gaze detection

Hi there! Is there a way to access the gaze of several pairs of eyes at the same time?
I see in the _analyze method that it only uses the first face: landmarks = self._predictor(frame, faces[0]). How could I modify the script to get a pair of eyes for every face detected?
Thanks!!
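
Not the author, but since _analyze only takes faces[0], the natural change is to loop over all detections. A rough sketch under the assumptions that the detector, predictor and calibration objects can be passed in and that Eye is importable from gaze_tracking/eye.py (the constructor arguments are taken from the tracebacks quoted in other issues):

    import cv2
    from gaze_tracking.eye import Eye  # import path assumed from the repository layout

    def eyes_for_all_faces(frame, face_detector, predictor, calibration):
        """Build a (left_eye, right_eye) pair for every detected face instead of only faces[0]."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pairs = []
        for face in face_detector(gray):
            landmarks = predictor(gray, face)
            left = Eye(gray, landmarks, 0, calibration)   # same constructor arguments as in _analyze
            right = Eye(gray, landmarks, 1, calibration)
            pairs.append((left, right))
        return pairs

The helpers such as is_left() and is_right() assume a single pair of eyes, so they would also need to iterate over these pairs.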

Trained Models folder

What exactly is the "shape_predictor_68_face_landmarks.dat" file inside the trained models folder for? When I look at it with a .dat file viewer, it's just a perfect pyramid of points. I'm confused about how that helps predict facial landmarks.

Sorry to bother you again

Well, the eye tracking works fine when I set

        moments = cv2.moments(contours[-1])

With -2, no pupils are detected.

But the tracking doesn't follow the pupils when I move just my eyes.
When I move my head, it works.
Do you have any idea why?

Build into .exe

Hi, I'm trying to build example.py into a .exe using PyInstaller, but I'm not very familiar with PyInstaller and don't know which flags to use. I'm using the program strictly for educational, non-commercial purposes, so don't worry. Any suggestions?

Pupil's detection problem

Hello Sir,
I am doing a project on eye tracking, and for that purpose I downloaded your eye-tracking repository. But when I run the code, my pupils aren't detected. It would be very kind of you to have a look at this problem and to tell me what webcam configuration you used for your eye tracking. I attached a photo of the display I got when I ran the code.
I have sent a photo of the problem to your email.
