
eyeLike's Introduction

eyeLike

An OpenCV-based webcam gaze tracker built on Fabian Timm's simple image gradient-based eye center algorithm.

DISCLAIMER

This does not track gaze yet. It is basically just a developer reference implementation of Fabian Timm's algorithm that shows some debugging windows with points on your pupils.

If you want cheap gaze tracking and don't mind hardware, check out The Eye Tribe. If you want webcam-based eye tracking, contact Xlabs or use their Chrome plugin and SDK. If you're looking for open source, your only real bet is Pupil, but that requires an expensive hardware headset.

Status

The eye center tracking works well, but I don't have a reference point (like an eye corner) yet, so it can't actually track where the user is looking.

If anyone with more experience than me has ideas on how to effectively track a reference point or head pose, so that the gaze point on the screen can be calculated, please contact me.

Building

CMake is required to build eyeLike.

OSX or Linux with Make

# do things in the build directory so that we don't clog up the main directory
mkdir build
cd build
cmake ../
make
./bin/eyeLike # the executable file

On OSX with Xcode

mkdir build
./cmakeBuild.sh

Then open the Xcode project in the build folder and run from there.

On Windows

CMake can also generate Visual Studio projects on Windows, but I am not familiar with the process.
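For what it's worth, the generic CMake flow below usually works on Windows too (untested for this project; assumes CMake and Visual Studio are installed):

# from a Visual Studio command prompt, in the repository root
mkdir build
cd build
cmake ..
cmake --build . --config Release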

Blog Article:

Paper:

Timm and Barth. Accurate eye centre localisation by means of gradients. In Proceedings of the International Conference on Computer Vision Theory and Applications (VISAPP), volume 1, pages 125-130, Algarve, Portugal, 2011. INSTICC.

(also see the YouTube video at http://www.youtube.com/watch?feature=player_embedded&v=aGmGyFLQAFM)

eyeLike's People

Contributors: hhirsch, riroaki, trishume


eyeLike's Issues

Issues with pathing to haarcascade_frontalface_alt.xml

Hey Tristan, great eye tracking system first of all. 😄

Just a heads up: your current implementation as it is in the repository won't run, due to the path given to the haarcascade_frontalface_alt.xml file in main.cpp. Since eyeLike's post-build executable location is under build/bin/, the number of ../ parent-directory references should be two, not three, e.g.:

cv::String face_cascade_name = "../../res/haarcascade_frontalface_alt.xml";

I might suggest turning the path of said XML file into a command-line argument, so the user can specify where the XML file is without needing to recompile.
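A minimal sketch of that suggestion (the argument handling here is my own illustration, not code from the repository):

// Sketch: take the cascade path from the first command-line argument,
// falling back to the relative default suggested above.
#include <opencv2/objdetect/objdetect.hpp>
#include <cstdio>

int main(int argc, char** argv) {
    cv::String face_cascade_name = "../../res/haarcascade_frontalface_alt.xml";
    if (argc > 1) {
        face_cascade_name = argv[1];  // e.g. ./bin/eyeLike /path/to/haarcascade_frontalface_alt.xml
    }
    cv::CascadeClassifier face_cascade;
    if (!face_cascade.load(face_cascade_name)) {
        printf("--(!)Error loading face cascade, please change face_cascade_name in source code.\n");
        return -1;
    }
    // ... rest of main unchanged ...
    return 0;
}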

Error loading face cascade

I am looking for a little guidance resolving an error when trying to run eyeLike. I am new-ish to the OpenCV world, so I may be missing something very simple. I followed the install directions and did not receive any errors, but when attempting to run the eyeLike executable it gives me the following error:

--(!)Error loading face cascade, please change face_cascade_name in source code.

I have reviewed the open issues here, and #17 in particular. I tried changing the file path in main.cpp, I tried moving the face cascade to the same folder (with the source updated to reflect that), and I've tried using the full file path, all with no luck.

I am running OpenCV 3.3.0 on a Raspberry Pi 2 running Raspbian Stretch. Could my issue lie with using OpenCV 3 instead of OpenCV 2?

Any help would be much appreciated!
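For anyone hitting the same error: one quick way to separate path problems from OpenCV-version problems (a standalone check, not project code; the path below is a placeholder for your system) is to try loading the cascade by absolute path in isolation:

// Standalone cascade-load check.
#include <opencv2/objdetect/objdetect.hpp>
#include <cstdio>

int main() {
    cv::CascadeClassifier face_cascade;
    if (!face_cascade.load("/home/pi/eyeLike/res/haarcascade_frontalface_alt.xml")) {
        printf("cascade failed to load\n");
        return 1;
    }
    printf("cascade loaded\n");
    return 0;
}

If this loads, the problem is the relative path and working directory rather than the OpenCV version; OpenCV 3 can generally still read the 2.x Haar cascade XML files.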

Integration with head pose estimation

--- This is not an issue per se, more a notice of re-use of your code ---

Hello!
Thanks for your implementation, it seems to perform quite well in my preliminary tests.

I'm in the process of integrating your code into my head pose estimation application, to perform head + eye gaze tracking. Pupil detection is nicely working. The integration with the overall head orientation will be done in the coming days.

Because I'm using the dlib face tracker, I get a pretty accurate contour of the eyes, which I'm using to get rid of many false detections (for instance on eyebrows).

The branch is here:
https://github.com/severin-lemaignan/attention-tracker/tree/pupils

Thanks for your contribution!

EyeLike Python Replicability

UPDATE: This is more or less fixed. The only remaining issue is that I have to lower the threshold values for it to be accurate. I am pasting the final Python code here for future reference; see below.
Hello,
This week I took the initiative to replicate eyeLike in Python (at least the ability to detect the center of the eye given an image of the eye itself). The problems I am facing right now are:
a) the post-processing threshold is more accurate at lower levels (much lower than 0.97), and
b) eye center tracking is focusing on dark regions, but the wrong dark objects.
Example screenshots (images not preserved in this mirror; their captions were: threshold postprocessing 0.70 vs 0.97, each at k_fast_width 50).

The only thing I can think of is that the numpy gradient function may not yield the same results as yours. (Edit: added the gradient implementation. Edit 2: various fixes and more comments.)
My code for reference.

from __future__ import division
import cv2
import numpy as np
import math
from queue import Queue

# test_possible_centers_formula (below): (x, y) are the coords of a point
# whose gradient has components gx and gy; every pixel of arr is scored
# as a candidate centre against that gradient.
K_WEIGHT_DIVISOR = 1.0
K_FAST_WIDTH = 50
K_GRADIENT_THRESHOLD = 50.0
K_WEIGHT_BLUR_SIZE = 5
K_THRESHOLD_VALUE = 0.60
K_ENABLE_WEIGHT = True
K_POST_PROCESSING = True

# Helpers
def unscale_point(p, orig):
    px, py = p
    height, width = orig.shape
    ratio = K_FAST_WIDTH/width
    x = int(round(px / ratio))
    y = int(round(py / ratio))
    return (x,y)


def scale_to_fast_size(src):
    rows, cols = src.shape
    return cv2.resize(src, (K_FAST_WIDTH, int((K_FAST_WIDTH / cols) * rows)))


def test_possible_centers_formula(x, y, weight, gx, gy, arr):
    rows, cols = np.shape(arr)
    for cy in range(rows):
        for cx in range(cols):
            if x == cx and y == cy:
                continue
            dx = x - cx
            dy = y - cy

            magnitude = math.sqrt((dx * dx) + (dy * dy))
            dx = dx / magnitude
            dy = dy / magnitude
            dot_product = dx * gx + dy * gy
            dot_product = max(0.0, dot_product)
            if K_ENABLE_WEIGHT:
                arr[cy][cx] += dot_product * dot_product * (weight[cy][cx]/K_WEIGHT_DIVISOR)
            else:
                arr[cy][cx] += dot_product * dot_product
    return arr


def matrix_magnitude(mat_x, mat_y):
    rows, cols = np.shape(mat_x)
    res_arr = np.zeros((rows, cols))
    for y in range(rows):
        for x in range(cols):
            gX = mat_x[y][x]
            gY = mat_y[y][x]
            magnitude = math.sqrt((gX * gX) + (gY * gY))
            res_arr[y][x] = magnitude
    return res_arr


def compute_dynamic_threshold(mags_mat, std_dev_factor):
    mean_magn_grad, std_magn_grad = cv2.meanStdDev(mags_mat)
    rows, cols = np.shape(mags_mat)
    stddev = std_magn_grad[0] / math.sqrt(rows * cols)
    return std_dev_factor * stddev + mean_magn_grad[0]


def flood_should_push_point(pt, mat):
    # bounds check for the flood fill
    px, py = pt
    rows, cols = np.shape(mat)
    return 0 <= px < cols and 0 <= py < rows


def flood_kill_edges(mat):
    rows, cols = np.shape(mat)
    cv2.rectangle(mat, (0,0), (cols, rows), 255)
    mask = np.ones((rows, cols), dtype=np.uint8)
    mask = mask * 255
    to_do = Queue()
    to_do.put((0,0))
    while to_do.qsize() > 0:
        px,py = to_do.get()
        if mat[py][px] == 0:
            continue
        right = (px + 1, py)
        if flood_should_push_point(right, mat):
            to_do.put(right)
        left = (px - 1, py)
        if flood_should_push_point(left, mat):
            to_do.put(left)
        down = (px, py + 1)
        if flood_should_push_point(down, mat):
            to_do.put(down)
        top = (px, py - 1)
        if flood_should_push_point(top, mat):
            to_do.put(top)
        mat[py][px] = 0.0
        mask[py][px] = 0
    return mask


def compute_mat_x_gradient(mat):
    rows, cols = mat.shape
    out = np.zeros((rows, cols), dtype='float64')
    mat = mat.astype(float)
    for y in range(rows):
        # one-sided difference at the left edge
        out[y][0] = mat[y][1] - mat[y][0]
        # central differences in the interior
        for x in range(1, cols - 1):
            out[y][x] = (mat[y][x + 1] - mat[y][x - 1]) / 2.0
        # one-sided difference at the right edge
        out[y][cols - 1] = mat[y][cols - 1] - mat[y][cols - 2]
    return out



def find_eye_center(img):
    # get row and column lengths
    rows, cols = np.asarray(img).shape

    # scale down eye image to manageable size
    resized = scale_to_fast_size(img)
    resized_arr = np.asarray(resized)
    res_rows, res_cols = np.shape(resized_arr)

    # compute gradients for x and y components of each point
    grad_arr_x = compute_mat_x_gradient(resized_arr)
    grad_arr_y = np.transpose(compute_mat_x_gradient(np.transpose(resized_arr)))

    # create a matrix composed of the magnitudes of the x and y gradients
    mags_mat = matrix_magnitude(grad_arr_x, grad_arr_y)

    # find a threshold value to get rid of gradients that are below the gradient threshold
    gradient_threshold = compute_dynamic_threshold(mags_mat, K_GRADIENT_THRESHOLD)
    # and now set those gradients to 0 if < gradient threshold and scale down other
    # gradients

    for y in range(res_rows):
        for x in range(res_cols):
            gX = grad_arr_x[y][x]
            gY = grad_arr_y[y][x]
            mag = mags_mat[y][x]
            if mag > gradient_threshold: 
                grad_arr_x[y][x] = gX/mag
                grad_arr_y[y][x] = gY/mag
            else:
                grad_arr_x[y][x] = 0.0
                grad_arr_y[y][x] = 0.0

    # create a weighted image that has a Gaussian blur
    weight = cv2.GaussianBlur(resized, (K_WEIGHT_BLUR_SIZE, K_WEIGHT_BLUR_SIZE), 0, 0)
    weight_arr = np.asarray(weight)
    weight_rows, weight_cols = np.shape(weight_arr)
    # invert the weight matrix
    for y in range(weight_rows):
        for x in range(weight_cols):
            weight_arr[y][x] = 255-weight_arr[y][x]

    # create a matrix to store the results from test_possible_centers_formula
    out_sum = np.zeros((res_rows, res_cols))
    out_sum_rows, out_sum_cols = np.shape(out_sum)

    # call test_possible_centers for each point
    for y in range(weight_rows):
        for x in range(weight_cols):
            gX = grad_arr_x[y][x]
            gY = grad_arr_y[y][x]
            if gX == 0.0 and gY == 0.0:
                continue
            test_possible_centers_formula(x, y, weight_arr, gX, gY, out_sum)
    # average all values in out_sum and convert to float32. assign to 'out' matrix
    num_gradients = weight_rows * weight_cols
    out = out_sum.astype(np.float32)*(1/num_gradients)
    _, max_val, _, max_p = cv2.minMaxLoc(out)
    print(max_p)
    if K_POST_PROCESSING:
        flood_thresh = max_val * K_THRESHOLD_VALUE
        retval, flood_clone = cv2.threshold(out, flood_thresh, 0.0, cv2.THRESH_TOZERO)
        mask = flood_kill_edges(flood_clone)
        _, max_val, _, max_p = cv2.minMaxLoc(out, mask)
        print(max_p)
    x, y = unscale_point(max_p, img)
    return x, y


img = cv2.imread('eyeold.jpg',0)
center = find_eye_center(img)
cv2.circle(img, center, 5, (255,0,0))
cv2.imshow('final', img)
cv2.waitKey(0)

Implement Gradient Ascent

Currently the main tracker algorithm is quite slow, which necessitates the image being scaled down, which reduces accuracy. There is a method proposed by Dr. Timm (creator of the algorithm) in his thesis for using gradient ascent to speed up the algorithm from O(n^2) to O(n) where n is the number of pixels.

The basic idea is that the eye centre-ness field can be sampled at any pixel independently in O(n) time. Currently eyeLike samples all pixels and finds the one with maximum centre-ness. Instead of doing this it is possible to rearrange the formula to be able to compute the gradient (slope) of the centre-ness field at any point. This direction can then be used to "climb" the gradient towards the maximum in a small number of iterations.
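For reference, the quantity being maximised (the objective from Timm and Barth's paper, with the intensity weighting omitted) is

\[
c^{*} = \arg\max_{c} \; \frac{1}{N} \sum_{i=1}^{N} \left( d_i^{\top} g_i \right)^{2},
\qquad d_i = \frac{x_i - c}{\lVert x_i - c \rVert},
\]

where the g_i are the (normalised) image gradients. Evaluating the sum at one candidate c is O(n) in the number of gradient pixels, so scoring every pixel is O(n^2); gradient ascent instead differentiates the objective with respect to c and takes a few uphill steps from an initial guess.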

The method for doing this is detailed in his thesis; I might be able to email it to anyone who is interested in working on this. The method in his thesis is stated in terms of general circle identification, which would need some tweaking to make it work for eyes. It wouldn't require much code, but it would require a good level of understanding.

Is it possible to integrate your eye-center implementation into opencv_contrib?

Our team currently works heavily with the OpenCV "face" module (hosted at https://github.com/Itseez/opencv_contrib).
Whenever possible we try to contribute to it, since we think it's better to keep core CV logic in libraries close to OpenCV itself, managed by its large community.
We would like to use your eye-center implementation, which looks simple and effective.
If you don't object, we are thinking of copying and rewriting your code and then submitting it as a pull request to https://github.com/Itseez/opencv_contrib as part of the "face" module.
You haven't applied any license to your repository, so I think we should apply the usual OpenCV license to it and note after it: "Initially based on https://github.com/trishume/eyeLike".

Windows are showing blank screen

Required Info:
Camera Model: SR300
Firmware Version: Open RealSense
Operating System & Version: Linux (Ubuntu 14)
Kernel Version (Linux Only): (e.g. 4.4.0)
Platform: PC (Laptop)
SDK Version: 1.12.0
Language: Python

I am trying to interface my Intel® RealSense™ Depth Camera (SR300) with the code given above, but the windows are showing a blank screen.

Note:

Since my PC doesn't have a USB 3.0 port, only a USB 3.1 Type-C port, I am using a Type-C to USB 3.0 converter to connect the camera.

Any help on this issue is highly appreciated.

eyelike with eyetribe

If I use the eyeLike project with the Eye Tribe, what are the right values in the findEyes function? It works if I use a gain value of 40-45 in the Eye Tribe, but the result is not optimal. My goal is to find and track pupils with the Eye Tribe. I have problems using your solution with Halide, so I am working with your simplest solution.

windows are not updating

I only get a static image once I resize; it doesn't seem to be actively updating. Using OpenCV 3.

Also, could I use this for things like anti-spoofing / detection of eye blinks?
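A common cause of OpenCV windows freezing like this (a guess, since the reporter's loop isn't shown) is a capture loop that never calls cv::waitKey, which is what pumps HighGUI's event loop and triggers repaints:

// Minimal working display loop sketch; assumes a camera at index 0.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);
    cv::Mat frame;
    while (cap.read(frame)) {
        cv::imshow("eyeLike", frame);
        // Without a periodic waitKey the window never updates.
        if (cv::waitKey(10) == 27) break;  // ESC quits
    }
    return 0;
}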

eyegaze

How would one implement an axis that shows the direction the eyes are looking?
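eyeLike doesn't compute a real gaze vector (see the disclaimer above), but as a crude visualisation sketch (my own illustration, not the project's method) you could draw the pupil's offset from the eye-region centre as a direction line:

// Crude sketch, not true gaze estimation: draw the pupil's offset from the
// eye-region centre. eyeRegion and pupil are assumed to be in frame coordinates.
#include <opencv2/opencv.hpp>

void drawEyeDirection(cv::Mat& frame, const cv::Rect& eyeRegion, cv::Point pupil) {
    cv::Point eyeCentre(eyeRegion.x + eyeRegion.width / 2,
                        eyeRegion.y + eyeRegion.height / 2);
    cv::Point dir = (pupil - eyeCentre) * 5;  // exaggerate the offset for visibility
    cv::line(frame, eyeCentre, eyeCentre + dir, cv::Scalar(0, 255, 0), 2);
}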

errors with the latest version of OpenCV 3 gold

Hi all, I'm on Ubuntu 14.04 with OpenCV 3 gold. I found your code here for eye detection: https://github.com/trishume/eyeLike

When compiling on a Raspberry Pi I got a lot of errors:

/home/pi/opencv-3.0.0/samples/cpp/test/eyeLike-master/src/main.cpp: In function 'int main(int, const char**)':
/home/pi/opencv-3.0.0/samples/cpp/test/eyeLike-master/src/main.cpp:82:37: error: no match for 'operator=' in 'frame = cvQueryFrame(capture)'
/home/pi/opencv-3.0.0/samples/cpp/test/eyeLike-master/src/main.cpp:82:37: note: candidates are:
/usr/local/include/opencv2/core/mat.inl.hpp:560:6: note: cv::Mat& cv::Mat::operator=(const cv::Mat&)
/usr/local/include/opencv2/core/mat.inl.hpp:560:6: note: no known conversion for argument 1 from 'IplImage* {aka _IplImage*}' to 'const cv::Mat&'
/usr/local/include/opencv2/core/mat.inl.hpp:2878:6: note: cv::Mat& cv::Mat::operator=(const cv::MatExpr&)
/usr/local/include/opencv2/core/mat.inl.hpp:2878:6: note: no known conversion for argument 1 from 'IplImage* {aka _IplImage*}' to 'const cv::MatExpr&'
/usr/local/include/opencv2/core/mat.hpp:1102:10: note: cv::Mat& cv::Mat::operator=(const Scalar&)
/usr/local/include/opencv2/core/mat.hpp:1102:10: note: no known conversion for argument 1 from 'IplImage* {aka _IplImage*}' to 'const Scalar& {aka const cv::Scalar&}'
src/CMakeFiles/eyeLike.dir/build.make:54: recipe for target 'src/CMakeFiles/eyeLike.dir/main.cpp.o' failed
make[2]: *** [src/CMakeFiles/eyeLike.dir/main.cpp.o] Error 1
CMakeFiles/Makefile2:75: recipe for target 'src/CMakeFiles/eyeLike.dir/all' failed
make[1]: *** [src/CMakeFiles/eyeLike.dir/all] Error 2
Makefile:72: recipe for target 'all' failed
make: *** [all] Error 2

NB: I have tested the program with OpenCV 2.4.10 and it worked very well :)

thanks for your help
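The error occurs because OpenCV 3 removed the implicit conversion from the legacy C API's IplImage* to cv::Mat, so frame = cvQueryFrame(capture) no longer compiles. A likely fix (a sketch; adapt to the actual main.cpp) is to switch to the C++ capture API:

// OpenCV 3 sketch: cv::VideoCapture replaces CvCapture* and cvQueryFrame.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture capture(0);   // instead of cvCaptureFromCAM(0)
    if (!capture.isOpened()) return -1;
    cv::Mat frame;
    capture.read(frame);           // instead of frame = cvQueryFrame(capture)
    return frame.empty() ? 1 : 0;
}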

Eye corner

Any new ideas for eye corner detection?

pupilFilter.h won't run if image is continuous

If the image is continuous, minVector is never populated, because nRows is set to 1 and the outer loop condition i < nRows - 2 is then false from the start. Is setting nRows = 1 necessary?

Thanks,
Jan-Michael

if (I.isContinuous())
{
    nCols *= nRows;
    nRows = 1;
}

int i, j;
uchar* p;
for (i = 2; i < nRows - 2; i = i + 5)
{
    p = I.ptr<uchar>(i);
    for (j = 2; j < nCols - 2; j = j + 5)
    {
        minVector.push_back(p[j]);
        if ((p[j] + p[j - 2] + p[j + 2]) / 3 < min) {
            min = p[j];
        }
    }
}
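A possible fix (a sketch, assuming the intent is to sample every fifth pixel together with its ±2 horizontal neighbours) is to drop the continuity collapse entirely, since the row structure matters here:

// Sketch of a fix: keep 2D indexing so that row boundaries stay meaningful.
// Assumes I is a CV_8UC1 cv::Mat and min/minVector are as in the original.
for (int i = 2; i < I.rows - 2; i += 5) {
    const uchar* p = I.ptr<uchar>(i);
    for (int j = 2; j < I.cols - 2; j += 5) {
        minVector.push_back(p[j]);
        if ((p[j] + p[j - 2] + p[j + 2]) / 3 < min) {
            min = p[j];
        }
    }
}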

python bindings

Could you add a way to call the findEyes function from Python? That would be very helpful.
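There are no bindings in the repository, but as a rough illustration of one possible route (a pybind11 sketch; the findEyeCenter signature is assumed from findEyeCenter.h, and the cv::Mat <-> numpy conversion, which needs a type caster, is omitted):

// Hypothetical pybind11 wrapper, not part of the repo. Assumes
// cv::Point findEyeCenter(cv::Mat face, cv::Rect eye, std::string debugWindow)
// and an available cv::Mat <-> numpy type caster (omitted here).
#include <pybind11/pybind11.h>
#include "findEyeCenter.h"

namespace py = pybind11;

PYBIND11_MODULE(eyelike, m) {
    m.def("find_eye_center",
          [](cv::Mat face, int x, int y, int w, int h) {
              cv::Point p = findEyeCenter(face, cv::Rect(x, y, w, h), "");
              return py::make_tuple(p.x, p.y);
          },
          "Locate the eye centre within the given eye rectangle (x, y, w, h).");
}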

build failing on linux 16.04

I'm not sure what's happening, but after running cmake in the build dir and executing make -j4, it gave me:

make -j4
Scanning dependencies of target eyeLike
[ 20%] Building CXX object src/CMakeFiles/eyeLike.dir/findEyeCenter.cpp.o
[ 40%] Building CXX object src/CMakeFiles/eyeLike.dir/findEyeCorner.cpp.o
[ 60%] Building CXX object src/CMakeFiles/eyeLike.dir/main.cpp.o
[ 80%] Building CXX object src/CMakeFiles/eyeLike.dir/helpers.cpp.o
[100%] Linking CXX executable ../bin/eyeLike
//usr/lib/x86_64-linux-gnu/libSM.so.6: undefined reference to `uuid_unparse_lower@UUID_1.0'
//usr/lib/x86_64-linux-gnu/libSM.so.6: undefined reference to `uuid_generate@UUID_1.0'
collect2: error: ld returned 1 exit status
src/CMakeFiles/eyeLike.dir/build.make:212: recipe for target 'bin/eyeLike' failed
make[2]: *** [bin/eyeLike] Error 1
CMakeFiles/Makefile2:85: recipe for target 'src/CMakeFiles/eyeLike.dir/all' failed
make[1]: *** [src/CMakeFiles/eyeLike.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2

Then, I tried again with make (and make -j4 again) and it gave me:

make -j4
[ 20%] Linking CXX executable ../bin/eyeLike
//usr/lib/x86_64-linux-gnu/libSM.so.6: undefined reference to `uuid_unparse_lower@UUID_1.0'
//usr/lib/x86_64-linux-gnu/libSM.so.6: undefined reference to `uuid_generate@UUID_1.0'
collect2: error: ld returned 1 exit status
src/CMakeFiles/eyeLike.dir/build.make:212: recipe for target 'bin/eyeLike' failed
make[2]: *** [bin/eyeLike] Error 1
CMakeFiles/Makefile2:85: recipe for target 'src/CMakeFiles/eyeLike.dir/all' failed
make[1]: *** [src/CMakeFiles/eyeLike.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2

I haven't been able to solve the problem by googling. I also tried doing make clean and rebuilding with cmake; same problem both times.

Youtube Video Implementation

I was wondering what settings I would need to adjust to get performance and appearance similar to those seen in the YouTube video on the blog post.
