jeeliz / jeelizfacefilter

JavaScript/WebGL lightweight face tracking library designed for augmented reality webcam filters. Features: multiple face detection, rotation, mouth opening. Various integration examples are provided (Three.js, Babylon.js, FaceSwap, Canvas2D, CSS3D...).

Home Page: https://jeeliz.com

License: Apache License 2.0


jeelizfacefilter's Introduction

JavaScript/WebGL lightweight and robust face tracking library designed for augmented reality face filters

This JavaScript library detects and tracks the face in real time from the camera video feed captured with WebRTC. It then becomes possible to overlay 3D content for augmented reality applications. We provide various demonstrations using the main WebGL 3D engines. We have included in this repository release builds of these engines, so the demos run against known working versions (they are in /libs/<name of the engine>/).

This library is lightweight and does not include any 3D engine or third-party library. We want to keep it framework agnostic, so the outputs of the library are raw: whether the face is detected or not, the position and scale of the detected face, and its rotation Euler angles. But thanks to the featured helpers, examples and boilerplates, you can quickly work in a higher-level context (head motion tracking, face filters, face replacement...). We continuously add new demonstrations, so stay tuned!


Features

Here are the main features of the library:

  • face detection,
  • face tracking,
  • face rotation detection,
  • mouth opening detection,
  • multiple face detection and tracking,
  • very robust to all lighting conditions,
  • video acquisition with HD video support,
  • mobile friendly,
  • interfaced with 3D engines like THREE.JS, BABYLON.JS, A-FRAME,
  • interfaced with more accessible APIs like CANVAS, CSS3D.

Architecture

  • /demos/: source code of the demonstrations, sorted by 2D/3D engine used,
  • /dist/: core scripts of the library:
    • jeelizFaceFilter.js: main minified script,
    • jeelizFaceFilter.module.js: main minified script for use as a module (with import or require),
  • /neuralNets/: trained neural network models:
    • NN_DEFAULT.json: file storing the neural network parameters, loaded by the main script,
    • NN_<xxx>.json: alternative neural network models,
  • /helpers/: scripts that help you use this library in specific use cases,
  • /libs/: 3rd party libraries and 3D engines used in the demos,
  • /reactThreeFiberDemo/: NPM/React/Webpack/Three-Fiber boilerplate.

Demonstrations and apps

Included in this repository

These demonstrations are included in this repository, so they are released under the FaceFilter license. You will probably find among them the perfect starting point to build your own face-based augmented reality application:

Some screencast videos are available on YouTube. You can also subscribe to the Jeeliz YouTube channel or to the @WebARRocks Twitter account to be kept informed of our cutting-edge developments.

Third party

These amazing applications rely on this library for face detection and tracking:

  • Vertebrae VTO: Vertebrae relies on this library for face detection and tracking for some of its virtual try-on products. You can check it out on:

    • Moscot: click on the VIRTUAL TRY-ON button on the top-left of the product picture,
    • Goodr: click on the VIRTUAL TRY-ON button on the top-left of the product picture,
    • Tenth Street: click on the Try it on button.

If you have developed an application or a fun demo using this library, we would love to see it and add a link here! Just contact us on Twitter @WebARRocks or LinkedIn.

Specifications

Here we describe how to use this library. Although we plan to add new features, we will keep it backward compatible.

Get started

On your HTML page, you first need to include the main script between the tags <head> and </head>:

 <script src="dist/jeelizFaceFilter.js"></script>

Then you should include a <canvas> HTML element in the DOM, between the tags <body> and </body>. The width and height properties of the <canvas> element should be set: they define the resolution of the canvas, and the final rendering will be computed at this resolution. Be careful not to enlarge the canvas too much through its CSS properties without increasing its resolution, otherwise it may look blurry or pixelated. We advise fixing the resolution to the actual canvas size. Do not forget to call JEELIZFACEFILTER.resize() if you resize the canvas after the initialization step. We strongly encourage you to use our helper, /helpers/JeelizResizer.js, to set the width and height of the canvas (see the Optimization/Canvas and video resolutions section).

<canvas width="600" height="600" id='jeeFaceFilterCanvas'></canvas>

This canvas will be used by WebGL both for the computation and the 3D rendering. When your page is loaded you should launch this function:

JEELIZFACEFILTER.init({
  canvasId: 'jeeFaceFilterCanvas',
  NNCPath: '../../../neuralNets/', // path to JSON neural network model (NN_DEFAULT.json by default)
  callbackReady: function(errCode, spec){
    if (errCode){
      console.log('AN ERROR HAPPENS. ERROR CODE =', errCode);
      return;
    }
    // [init scene with spec...]
    console.log('INFO: JEELIZFACEFILTER IS READY');
  }, //end callbackReady()

  // called at each render iteration (drawing loop)
  callbackTrack: function(detectState){
    // Render your scene here
    // [... do something with detectState]
  } //end callbackTrack()
});

Optional init arguments

  • <boolean> followZRot: Allow full rotation around depth axis. Default value: false. See Issue 42 for more details,
  • <integer> maxFacesDetected: Only for multiple face detection - maximum number of faces which can be detected and tracked. Should be between 1 (no multiple detection) and 8,
  • <integer> animateDelay: used only in normal rendering mode (not in slow rendering mode). It sets the number of milliseconds the browser waits at the end of the rendering loop before starting another detection. If you use the canvas of this library as a secondary element (for example in the PACMAN or EARTH NAVIGATION demos) you should set a small animateDelay value (for example 2 milliseconds) in order to avoid rendering lags,
  • <function> onWebcamAsk: function launched just before asking the user to allow access to their camera,
  • <function> onWebcamGet: function launched just after the user has granted camera access. It is called with the video element as argument,
  • <dict> videoSettings: override the WebRTC video settings, which are by default:
{
  'videoElement' // not set by default. <video> element used
   // WARN: If you specify this parameter:
   //       1. all other settings will be useless
   //       2. it means that you fully handle the video aspect
   //       3. if you use a webcam device, make sure the initialization
   //          happens after the `loadeddata` event of the `videoElement`,
   //          otherwise the face detector will yield very low `detectState.detected` values
   //          (to be safer, also await the first `timeupdate` event)

  'deviceId'            // not set by default
  'facingMode': 'user', // to use the rear camera, set to 'environment'

  'idealWidth': 800,  // ideal video width in pixels
  'idealHeight': 600, // ideal video height in pixels
  'minWidth': 480,    // min video width in pixels
  'maxWidth': 1920,   // max video width in pixels
  'minHeight': 480,   // min video height in pixels
  'maxHeight': 1920,  // max video height in pixels,
  'rotate': 0,        // rotation in degrees; possible values: 0, 90, -90, 180
  'flipX': false      // whether to flip the video horizontally. Default: false
},

If the user has a mobile device in portrait display mode, the width and height of these parameters are automatically swapped for the first camera request. If that request does not succeed, we swap the width and height back.

  • <dict> scanSettings: override face scan settings - see set_scanSettings(...) method for more information.
  • <dict> stabilizationSettings: override tracking stabilization settings - see set_stabilizationSettings(...) method for more information.
  • <boolean> isKeepRunningOnWinFocusLost: whether the detection loop should keep running even if the user switches the browser tab or minimizes the browser window. Default value is false. This option is useful for a videoconferencing app, where a face mask should still be computed even if the FaceFilter window is not the active window. Even with this option enabled, face tracking is still slowed down when the FaceFilter window is not active.
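For illustration, here is a hedged sketch combining several of these optional arguments (all values are illustrative, adapt them to your application):

JEELIZFACEFILTER.init({
  canvasId: 'jeeFaceFilterCanvas',
  NNCPath: '../../../neuralNets/',
  followZRot: true,        // allow full rotation around the depth axis
  maxFacesDetected: 2,     // track up to 2 faces simultaneously
  animateDelay: 2,         // small delay, useful if the canvas is a secondary element
  videoSettings: {
    idealWidth: 1280,
    idealHeight: 720,
    facingMode: 'user'     // set to 'environment' for the rear camera
  },
  callbackReady: function(errCode, spec){ /* ... */ },
  callbackTrack: function(detectState){ /* ... */ }
});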

Error codes

The initialization function (callbackReady in the code snippet) will be called with an error code (errCode). It can have these values:

  • false: no error occurred,
  • "GL_INCOMPATIBLE": WebGL is not available, or this WebGL configuration is not sufficient (there is no WebGL2, or there is WebGL1 without the OES_TEXTURE_FLOAT or OES_TEXTURE_HALF_FLOAT extension),
  • "ALREADY_INITIALIZED": the library has already been initialized,
  • "NO_CANVASID": no canvas or canvas ID was specified,
  • "INVALID_CANVASID": cannot find the <canvas> element in the DOM,
  • "INVALID_CANVASDIMENSIONS": the width and height of the canvas are not specified,
  • "WEBCAM_UNAVAILABLE": cannot get access to the camera (the user has no camera, or has not accepted to share the device, or the camera is already busy),
  • "GLCONTEXT_LOST": The WebGL context was lost. If the context is lost after the initialization, the callbackReady function will be launched a second time with this value as error code,
  • "MAXFACES_TOOHIGH": The maximum number of detected and tracked faces, specified by the optional init argument maxFacesDetected, is too high.

The returned objects

We detail here the arguments of the callback functions like callbackReady or callbackTrack. For memory optimization purposes, the references of these objects do not change, so you should copy their property values if you want to keep them unchanged outside the callback function scopes.

The initialization returned object

The initialization callback function (callbackReady in the code snippet) is called with a second argument, spec, if there is no error. spec is a dictionary with these properties:

  • <WebGLRenderingContext> GL: the WebGL context. The rendering 3D engine should use this WebGL context,
  • <canvas> canvasElement: the <canvas> element,
  • <WebGLTexture> videoTexture: a WebGL texture displaying the camera video. It has the same resolution as the camera video,
  • [<float>, <float>, <float>, <float>] videoTransformMat2: flattened 2x2 matrix encoding a scaling and a rotation. We should apply this matrix to viewport coordinates to render videoTexture in the viewport,
  • <HTMLVideoElement> videoElement: the video used as source for the webgl texture videoTexture,
  • <int> maxFacesDetected: the maximum number of detected faces.

The detection state

At each render iteration a callback function is executed (callbackTrack in the code snippet). It has one argument (detectState), which is a dictionary with these properties:

  • <float> detected: the face detection probability, between 0 and 1,
  • <float> x, <float> y: The 2D coordinates of the center of the detection frame in the viewport (each between -1 and 1, x from left to right and y from bottom to top),
  • <float> s: the scale along the horizontal axis of the detection frame, between 0 and 1 (1 for the full width). The detection frame is always square,
  • <float> rx, <float> ry, <float> rz: the Euler angles of the head rotation in radians.
  • <Float32Array> expressions: array listing the facial expression coefficients:
    • expressions[0]: mouth opening coefficient (0 → mouth closed, 1 → mouth fully opened)

In multiface detection mode, detectState is an array. Its size is equal to the maximum number of detected faces and each element of this array has the format described just before.
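As an illustration, here is a minimal callbackTrack sketch reading these fields (the 0.6 detection threshold is an illustrative value, not part of the API):

JEELIZFACEFILTER.init({
  // ... other init parameters
  callbackTrack: function(detectState){
    // in multiface mode detectState is an array (see the paragraph above) - take the first face here:
    const state = Array.isArray(detectState) ? detectState[0] : detectState;
    if (state.detected < 0.6){
      // no face reliably detected: hide the overlaid 3D content here
      return;
    }
    const x = state.x, y = state.y; // center of the detection frame, in [-1, 1]
    const scale = state.s;          // width of the detection frame (1 = full width)
    const rx = state.rx, ry = state.ry, rz = state.rz; // head rotation in radians
    const mouthOpening = state.expressions[0]; // 0 -> closed, 1 -> fully opened
    // [... update the transform of your 3D object with these values]
  }
});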

Miscellaneous methods

After initialization (i.e. after callbackReady has been called), these methods are available:

  • JEELIZFACEFILTER.resize(): should be called after resizing the <canvas> element to adapt the cut of the video. It should also be called if the device orientation is changed to take account of new video dimensions,

  • JEELIZFACEFILTER.toggle_pause(<boolean> isPause, <boolean> isShutOffVideo): pause/resume. This method will completely stop the rendering/detection loop. If isShutOffVideo is set to true, the media stream track will be stopped and the camera light will turn off. It returns a Promise object,

  • JEELIZFACEFILTER.toggle_slow(<boolean> isSlow): toggle the slow rendering mode. Because this library can consume a lot of GPU resources, it may slow down other elements of the application. If the user opens a CSS menu for example, the CSS transitions and the DOM update can be slow. With this function you can slow down the rendering in order to relieve the GPU. Unfortunately the tracking and the 3D rendering will also be slower, but this is not a problem if the user is focusing on other elements of the application. We encourage enabling the slow mode as soon as the user's attention moves away from the canvas,

  • JEELIZFACEFILTER.set_animateDelay(<integer> delay): Change the animateDelay (see init() arguments),

  • JEELIZFACEFILTER.set_inputTexture(<WebGLTexture> tex, <integer> width, <integer> height): Change the video input by a WebGL Texture instance. The dimensions of the texture, in pixels, should be provided,

  • JEELIZFACEFILTER.reset_inputTexture(): Come back to the user's video as input texture,

  • JEELIZFACEFILTER.get_videoDevices(<function> callback): should be called before the init method (see the sketch after this list). 2 arguments are provided to the callback function:

    • <array> mediaDevices: an array with all the devices found. Each device is a JavaScript object having a deviceId string attribute. This value can be provided to the init method to use a specific camera. If an error happens, this value is set to false,
    • <string> errorLabel: if an error happens, the label of the error. It can be: NOTSUPPORTED, NODEVICESFOUND or PROMISEREJECTED.
  • JEELIZFACEFILTER.set_scanSettings(<object> scanSettings): Override scan settings. scanSettings is a dictionnary with the following properties:

    • <float> scale0Factor: Relative width (1 -> full width) of the searching window at the largest scale level. Default value is 0.8,
    • <int> nScaleLevels: Number of scale levels. Default is 3,
    • [<float>, <float>, <float>] overlapFactors: relative overlap along the X, Y and scale axes between 2 successive searching window positions. Higher values make the scan faster, but it may miss some positions. Set to [1, 1, 1] for no overlap. Default value is [2, 2, 3],
    • <int> nDetectsPerLoop: number of detections per drawing loop. -1 for an adaptive value. Default: -1,
    • <boolean> enableAsyncReadPixels: enable asynchronous GPU reading. Default is false. It frees a lot of CPU resources but may add latency on some devices,
  • JEELIZFACEFILTER.set_stabilizationSettings(<object> stabilizationSettings): override detection stabilization settings. The output of the neural network is always noisy, so we need to stabilize it using a running average to avoid shaking artifacts. The internal algorithm first computes a stabilization factor k between 0 and 1. If k==0.0, the detection is poor and we favor responsiveness over stabilization. This happens when the user is moving quickly or rotating the head, or when the detection is bad. On the contrary, if k is close to 1, the detection is good and the user is not moving much, so we can stabilize strongly. stabilizationSettings is a dictionary with the following properties:

    • [<float> minValue, <float> maxValue] translationFactorRange: multiply k by a factor kTranslation depending on the translation speed of the head (relative to the viewport). kTranslation=0 if translationSpeed<minValue and kTranslation=1 if translationSpeed>maxValue. The regression is linear. Default value: [0.002, 0.005],
    • [<float> minValue, <float> maxValue] rotationFactorRange: analogous to translationFactorRange but for rotation speed. Default value: [0.015, 0.1],
    • [<float> minValue, <float> maxValue] qualityFactorRange: analogous to translationFactorRange but for the head detection coefficient. Default value: [0.9, 0.98],
    • [<float> minValue, <float> maxValue] alphaRange: specifies how k is applied. Between 2 successive detections, we blend the previous detectState values with the current detection values using a mixing factor alpha. alpha=<minValue> if k<0.0 and alpha=<maxValue> if k>1.0. Between the 2 values, the variation is quadratic. Default value: [0.05, 1.0].
  • JEELIZFACEFILTER.update_videoElement(<video> vid, <function|False> callback): change the video element used for face detection (which can be provided via videoSettings.videoElement) to another video element. A callback function can be called when it is done.

  • JEELIZFACEFILTER.update_videoSettings(<object> videoSettings): dynamically change the video settings (see Optional init arguments for the properties of videoSettings). It is useful to switch from the selfie camera (user) to the back (environment) camera, as shown in the sketch after this list. A Promise is returned.

  • JEELIZFACEFILTER.set_videoOrientation(<integer> angle, <boolean> flipX): Dynamically change videoSettings.rotate and videoSettings.flipX. This method should be called after initialization. The default values are 0 and false. The angle should be chosen among these values: 0, 90, 180, -90,

  • JEELIZFACEFILTER.destroy(): Clean both graphic memory and JavaScript memory, uninit the library. After that you need to init the library again. A Promise is returned,

  • JEELIZFACEFILTER.reset_GLState(): reset the WebGL context,

  • JEELIZFACEFILTER.render_video(): render the video on the <canvas> element.
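As a hedged sketch (error handling kept minimal, all values illustrative), here is how the camera selection and switching methods above can be combined: list the cameras before init, start with a specific deviceId, then switch to the rear camera later:

JEELIZFACEFILTER.get_videoDevices(function(mediaDevices, errorLabel){
  if (!mediaDevices){
    console.log('Cannot list cameras:', errorLabel); // NOTSUPPORTED, NODEVICESFOUND or PROMISEREJECTED
    return;
  }
  JEELIZFACEFILTER.init({
    canvasId: 'jeeFaceFilterCanvas',
    NNCPath: '../../../neuralNets/',
    videoSettings: {
      deviceId: mediaDevices[0].deviceId // use the first camera found
    },
    callbackReady: function(errCode, spec){ /* ... */ },
    callbackTrack: function(detectState){ /* ... */ }
  });
});

// later, e.g. on a button click - switch to the rear camera:
JEELIZFACEFILTER.update_videoSettings({
  facingMode: 'environment'
}).then(function(){
  console.log('INFO: camera switched');
});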

Optimization

1 or 2 Canvas?

You can either:

  1. Use 1 <canvas> with 1 WebGL context, shared by facefilter and THREE.js (or another 3D engine),
  2. Use 2 separate <canvas> elements, aligned using CSS, 1 canvas for AR, and the second one to display the video and to run this library.

Option 1 is often more efficient, but the newest versions of THREE.js are not well suited to sharing the WebGL context, and some weird bugs can occur. So we strongly advise using 2 separate canvases, for example laid out as sketched below.
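A minimal sketch of the two-canvas markup (the threeCanvas id and the fixed sizes are illustrative, not part of the library):

<!-- FaceFilter renders the video on the bottom canvas,
     the 3D engine draws on the transparent top canvas: -->
<div style="position: relative; width: 600px; height: 600px">
  <canvas width="600" height="600" id="jeeFaceFilterCanvas" style="position: absolute; top: 0; left: 0"></canvas>
  <canvas width="600" height="600" id="threeCanvas" style="position: absolute; top: 0; left: 0"></canvas>
</div>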

Canvas and video resolutions

We strongly recommend using the JeelizResizer helper to size the canvas to the display size, in order not to compute more pixels than required. This helper also computes the best camera resolution, which is the closest to the actual canvas size. If the camera resolution is too high compared to the canvas resolution, your application will be unnecessarily slowed down, because it is quite costly to refresh the WebGL texture for each video frame. And if the video resolution is too low compared to the canvas resolution, the image will be blurry. You can take a look at the THREE.js boilerplate to see how it is used. To use the helper, you first need to include it in the HTML code:

<script src="https://appstatic.jeeliz.com/faceFilter/JeelizResizer.js"></script>

Then in your main script, before initializing Jeeliz FaceFilter, you should call it to size the canvas to the best resolution and to find the optimal video resolution:

JeelizResizer.size_canvas({
  canvasId: 'jeeFaceFilterCanvas',
  callback: function(isError, bestVideoSettings){
    JEELIZFACEFILTER.init({
      videoSettings: bestVideoSettings,
      // ...
    });
  }
});

Take a look at the source code of this helper (in helpers/JeelizResizer.js) for more information.

Misc

A few tips:

  • In terms of optimization, the WebGL-based demos are more optimized than the Canvas2D demos, which are themselves more optimized than the CSS3D demos.
  • Use the lightest resources possible: each texture image should have the lowest acceptable resolution, and use mipmapping for texture minification filtering.
  • The more effects you use, the slower the rendering will be. Add the 3D effects gradually to check that they do not penalize the frame rate too much.
  • Use low-polygon meshes.

Multiple faces

It is possible to detect and track several faces at the same time. To enable this feature, just set the optional init parameter maxFacesDetected. Its maximum value is 8. Note that if you are tracking, for example, 8 faces at the same time, the detection will be slower because there is 8 times less computing power per tracked face. If you have set this value to 8 but only 1 face is detected, it should not be much slower than single face tracking.

If multiple face tracking is enabled, the callbackTrack function is called with an array of detection states (instead of a single detection state), as sketched below. The detection state format is still the same.
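A minimal sketch of a multiface callbackTrack (the maxFacesDetected value and the 0.6 threshold are illustrative):

JEELIZFACEFILTER.init({
  maxFacesDetected: 4, // illustrative value
  // ... other init parameters
  callbackTrack: function(detectStates){
    detectStates.forEach(function(detectState, faceIndex){
      if (detectState.detected > 0.6){
        // [... update the 3D object attached to face number faceIndex]
      }
    });
  }
});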

You can use our Three.js multiple faces detection helper, helpers/JeelizThreeHelper.js, to get started and test this example. The main script has only 60 lines of code!

Multiple videos

To create a new JEELIZFACEFILTER instance, you need to call:

const JEELIZFACEFILTER2 = JEELIZFACEFILTER.create_new();

Be aware that:

  • Each instance uses a new WebGL context. Depending on the configuration, the number of WebGL contexts is limited. We advise not to use more than 16 contexts simultaneously,
  • The computing power will be shared between the contexts. Using multiple instances may increase the latency.

Check out this demo for an example of how it works: source code, live demo
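A minimal sketch of running two independent instances, each bound to its own <canvas> element (the canvas ids are illustrative):

const JEELIZFACEFILTER2 = JEELIZFACEFILTER.create_new();

// first instance, first canvas:
JEELIZFACEFILTER.init({
  canvasId: 'faceFilterCanvasA',
  NNCPath: '../../../neuralNets/',
  callbackReady: function(errCode, spec){ /* ... */ },
  callbackTrack: function(detectState){ /* ... */ }
});

// second instance, second canvas:
JEELIZFACEFILTER2.init({
  canvasId: 'faceFilterCanvasB',
  NNCPath: '../../../neuralNets/',
  callbackReady: function(errCode, spec){ /* ... */ },
  callbackTrack: function(detectState){ /* ... */ }
});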

Changing the 3D engine

It is possible to use a 3D engine other than BABYLON.JS or THREE.JS. If you have done this work, we would be interested in adding your demonstration to this repository (or linking to your code). Just open a pull request.

The 3D engine can either share the WebGL context and the canvas with FaceFilter, or use a second canvas overlaid on the FaceFilter canvas (the FaceFilter canvas is just used to render the video). In the first case, the WebGL context is created by Jeeliz Face Filter. We strongly encourage the second approach, even if the first one may be a bit more optimized.

Changing the neural network

Since July 2018 it is possible to change the neural network. When calling JEELIZFACEFILTER.init({...}), set the NNCPath value to a specific neural network file (NN_DEFAULT.json is used by default):

  JEELIZFACEFILTER.init({
    NNCPath: '../../neuralNets/NN_LIGHT_1.json'
    // ...
  })

It is also possible to provide the neural network model JSON file content directly, using the NNC property instead of NNCPath.
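For instance, a sketch fetching the model manually at runtime and passing its content (we assume here, as in the bundler example further below, that NNC accepts the parsed JSON content):

fetch('../../neuralNets/NN_LIGHT_1.json')
  .then(function(response){ return response.json(); })
  .then(function(neuralNetworkModel){
    JEELIZFACEFILTER.init({
      NNC: neuralNetworkModel, // instead of NNCPath
      // ... other init parameters
    });
  });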

We provide several neural network models:

  • neuralNets/NN_DEFAULT.json: the default neural network. Good tradeoff between size and performance,
  • neuralNets/NN_WIDEANGLES_<X>.json: better at detecting wide head angles (but less accurate for small angles),
  • neuralNets/NN_LIGHT_<X>.json: a light version of the neural network. The file is half the size and it runs faster, but it is less accurate for large head rotation angles,
  • neuralNets/NNC_VERYLIGHT_<X>.json: even lighter than the previous version (250 KB) and very fast, but less accurate and less robust to difficult lighting conditions,
  • neuralNets/NN_VIEWTOP_<X>.json: perfect if the camera has a bird's eye view (for example if you use this library for a kiosk setup),
  • neuralNets/NN_INTEL1536.json: neural network working with Intel 1536 Iris GPUs (there is a graphic driver bug, see #85),
  • neuralNets/NN_4EXPR_<X>.json: also detects 4 facial expressions (mouth opening, smile, frowned eyebrows, raised eyebrows).

Using module

/dist/jeelizFaceFilter.module.js is exactly the same as /dist/jeelizFaceFilter.js except that it works as a JavaScript module, so you can import it directly using:

import 'dist/jeelizFaceFilter.module.js'

or using require (see issue #72):

const faceFilter = require('./lib/jeelizFaceFilter.module.js').JEELIZFACEFILTER;

faceFilter.init({
  // you can also provide the canvas directly
  // using the canvas property instead of canvasId:
  canvasId: 'jeeFaceFilterCanvas',
  NNCPath: '../../../neuralNets/', // path to JSON neural network model (NN_DEFAULT.json by default)
  callbackReady: function(errCode, spec){
    if (errCode){
      console.log('AN ERROR HAPPENS. ERROR CODE =', errCode);
      return;
    }
    // [init scene with spec...]
    console.log('INFO: JEELIZFACEFILTER IS READY');
  }, //end callbackReady()

  // called at each render iteration (drawing loop)
  callbackTrack: function(detectState){
      // Render your scene here
      // [... do something with detectState]
  } //end callbackTrack()
});

Integration

With a bundler

If you use this library with a bundler (typically Webpack or Parcel), first you should use the module version.

Then, note that the standard library loads the neural network model (specified by the NNCPath initialization parameter) using AJAX, for the following reasons:

  • if the user does not accept to share their camera, or if WebGL is not enabled, we don't have to load the neural network model,
  • we assume that the library is deployed using a static HTTPS server.

With a bundler it is a bit more complicated. The easiest approach is to load the neural network model using a classical import or require call and to provide it via the NNC init parameter:

const faceFilter = require('./lib/jeelizFaceFilter.module.js').JEELIZFACEFILTER
const neuralNetworkModel = require('./neuralNets/NN_DEFAULT.json')

faceFilter.init({
  NNC:  neuralNetworkModel, // instead of NNCPath
  // ... other init parameters
});

You can check out the amazing work of @jackbilestech, jackbilestech/jeelizFaceFilter, if you are interested in using this library in an NPM / ES6 / Webpack environment.

With JavaScript frontend frameworks

With REACT and THREE Fiber

Since October 2020, there is a React/THREE Fiber/Webpack boilerplate in /reactThreeFiberDemo path.

See also

We don't officially cover here integration with mainstream JavaScript frontend frameworks (React, Vue, Angular). Feel free to submit a Pull Request to add a boilerplate or a demo for a specific framework. Here is a bunch of submitted issues dealing with React integration:

You can also take a look at these Github code repositories:

Native

It is possible to execute a JavaScript application using this library inside a WebView for native app integration. On iOS, camera access is disabled inside the WKWebView component before iOS 14.3. If you want your application to run on devices running iOS <= 14.2, you have to implement a hack to stream the camera video into the WebView using WebSockets.

This hack has not been implemented in this repository, but in a similar Jeeliz library, Jeeliz Weboji. Here are the links:

But it is still a dirty hack introducing a bottleneck. It still runs pretty well on a high-end device (tested on an iPhone XR), but it is better to stick to a full web environment.

There is also this GitHub issue detailing how to embed the library into a WebView component for React Native. It is for Android only:

Unity

Since September 2023, Marks has developed a Unity plugin to create face filters using Unity and export them for the web. You can buy it on the Unity Asset Store here: Augmented Reality WebGL - Face Tracking Virtual Try On

Hosting

This library requires the user's camera video feed through the MediaStream API, so your application should be hosted by an HTTPS server (even with a self-signed certificate). It won't work at all over insecure HTTP, even locally with some web browsers.

The development server

For development purposes we provide a simple and minimalist HTTPS server so that you can check out the demos or develop your very own filters. To launch it, execute in the bash console:

With Python 2:

  python2 httpsServer.py

It requires Python 2.X. Then open https://localhost:4443 in your web browser.

With Node.js:

  npm install
  npm run dev

Then go to https://127.0.0.1:8000/demos/threejs/cube/index.html. Your browser will warn that the connection is not secure (the certificate is self-signed): click Advanced, then Proceed.

Hosting optimization

You can use our hosted and up-to-date version of the library, available here:

https://appstatic.jeeliz.com/faceFilter/jeelizFaceFilter.js

It uses the neural network NN_DEFAULT.json hosted in the same path. The helpers used in these demos (all scripts in /helpers/) are also hosted on https://appstatic.jeeliz.com/faceFilter/.

It is served through a content delivery network (CDN) using gzip compression. If you host the scripts yourself, be careful to enable gzip HTTP/HTTPS compression for JSON and JS files. Indeed, the neural network JSON file, neuralNets/NN_DEFAULT.json, is quite heavy but compresses very well with gzip. You can check the gzip compression of your server here.

The neural network file, neuralNets/NN_DEFAULT.json, is loaded using an AJAX XMLHttpRequest after calling JEELIZFACEFILTER.init(). This loading happens after the user has accepted to share their camera, so we won't load this quite heavy file if the user refuses to share it or if there is no camera available. The loading can be faster if you systematically preload neuralNets/NN_DEFAULT.json using a service worker or a simple raw XMLHttpRequest just after the HTML page loads. The file will then already be in the browser cache when Jeeliz FaceFilter requests it.
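For example, a minimal preloading sketch (we assume here that the server sends cacheable response headers for the JSON file, and the path is illustrative):

// warm the browser cache right after page load, so that the
// XMLHttpRequest issued later by the library hits the cache:
window.addEventListener('load', function(){
  fetch('./neuralNets/NN_DEFAULT.json').catch(function(){
    // ignore errors here - the library will report them at init time
  });
});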

About the tech

Under the hood

This library relies on Jeeliz WebGL Deep Learning technology to detect and track the user's face using a neural network. The accuracy is adaptive: the better the hardware, the more detections are processed per second. Everything is done client-side.

Compatibility

  • If WebGL2 is available, it uses WebGL2 and no specific extension is required,
  • If WebGL2 is not available but WebGL1, we require either OES_TEXTURE_FLOAT extension or OES_TEXTURE_HALF_FLOAT extension,
  • If WebGL2 is not available, and WebGL1 is either not available or implements neither OES_TEXTURE_FLOAT nor OES_TEXTURE_HALF_FLOAT, the user's browser is not compatible.

If a compatibility error occurs, please post an issue on this repository. If it is a problem with camera access, please first retry after closing all applications which could use the camera (Skype, Messenger, other browser tabs and windows, ...). Please include:

  • the browser, the version of the browser, the operating system, the version of the operating system, the device model and the GPU if it is a desktop computer,
  • a screenshot of webglreport.com - WebGL1 (about your WebGL1 implementation),
  • a screenshot of webglreport.com - WebGL2 (about your WebGL2 implementation),
  • the log from the web console,
  • the steps to reproduce the bug, and screenshots.

Articles and tutorials

Have you written a tutorial using this library? Submit a pull request or send us the link, we would be glad to add it.

In English

In French

In Japanese

License

Apache 2.0. This application is free for both commercial and non-commercial use.


jeelizfacefilter's People

Contributors

airinterface, bjlaa, dependabot[bot], dirkk0, dyon048, emmanuelasmfun, hadrichouki, joe-palmer, lacastorine, patriciaarnedo, polezo, rostyq, sebftw, skdevgig07, thorstenbux, tylindberg, wenqingl, xavierjs


jeelizfacefilter's Issues

Canvas video stream freezes on mobile safari

Describe the bug
On iPhone 7 and 8, face tracking in mobile Safari seems to be broken.

Specifically, if you look at the 2d face paint demo, the canvas loads, shows the first few images, then freezes. Console shows the following error: WebGL: INVALID_OPERATION: texImage2D: type HALF_FLOAT_OES but ArrayBufferView is not NULL

To Reproduce
Steps to reproduce the behavior:

  1. You can reproduce using an iPhone on iOS 11, mobile safari.
  2. Open https://jeeliz.com/demos/faceFilter/demos/canvas2D/faceDraw/
  3. Your picture will load, but video stream will freeze. You'll still be able to draw.

Expected behavior
I'd expect the video stream / canvas not to freeze while tracking the face. It works properly on desktop and Android mobile.

Screenshots
n/a

Smartphone (please complete the following information):

  • Device: iPhone7
  • OS: iOS11
  • Browser safari

How to capture gif face replacement as video

Thanks for this awesome face filter library.
I created a mobile app with GIF face replacement, but how do I save the output?
It has 2 canvases, and I can only record video from 1 canvas at a time.

Also, the GIF file size is too high and the quality is very low. Is it possible to do face replacement with a video format such as WebM or MP4 instead of GIF?

Head horizontal rotation.

Is your feature request related to a problem? Please describe.
In your examples the face detection is broken when the head is rotated horizontally.

Describe the solution you'd like
The in plane face rotation should be properly detected.

Describe alternatives you've considered
There is no good alternative. Some random samples are:
https://github.com/auduno/clmtrackr
https://github.com/YadiraF/PRNet
https://github.com/cmusatyalab/openface
https://github.com/MarekKowalski/DeepAlignmentNetwork

Additional context
The rotation detection should not hinder accuracy or performance much. It could also be a separate pre-processing step.

Multiple filters using .class instead of #ids.

First of all, thank you for the hard work you did and for sharing it :)

Is your feature request related to a problem? Please describe.
I am trying to make a photobooth where users can choose multiple filters (like hats, glasses, jewelry), and activate or deactivate some of them.
I'm using CSS3, which targets an id:
DIV = document.getElementById('jeelizFaceFilterFollow');

Describe the solution you'd like
It would be nice to be able to create as many tags as wanted, and to target them with a class, or with multiple ids.

using existing video element

I'm trying to get the face detection to run with an existing video element that gets video from the local webcam. I don't see an error, but for some reason it doesn't detect anything. I'm using Chrome 70 on Windows.
(If I remove the videoSettings, then it starts working right away.)

example.zip

Errors with Canvas2D on mobile devices

Hey there,
Thanks for the continued great work with the demos! Unfortunately, the Canvas2D demo doesn't work on my iPhone. Here's what I'm getting in the console:
[Error] WebGL: INVALID_OPERATION: texImage2D: type HALF_FLOAT_OES but ArrayBufferView is not NULL
        texImage2D b (jeelizFaceFilter.js:86:280) (anonymous function) (jeelizFaceFilter.js:88) l (jeelizFaceFilter.js:88:100) l (jeelizFaceFilter.js:46:152) pc (jeelizFaceFilter.js:53:355) N (jeelizFaceFilter.js:125:260) (anonymous function) (jeelizFaceFilter.js:131) (anonymous function) (jeelizFaceFilter.js:125:118) onreadystatechange (jeelizFaceFilter.js:19:241)
[Error] WebGL: INVALID_OPERATION: generateMipmap: level 0 not power of 2 or not all the same size
        generateMipmap b (jeelizFaceFilter.js:86:332) (anonymous function) (jeelizFaceFilter.js:88) l (jeelizFaceFilter.js:88:100) l (jeelizFaceFilter.js:46:152) pc (jeelizFaceFilter.js:53:355) N (jeelizFaceFilter.js:125:260) (anonymous function) (jeelizFaceFilter.js:131) (anonymous function) (jeelizFaceFilter.js:125:118) onreadystatechange (jeelizFaceFilter.js:19:241)
[Error] WebGL: INVALID_OPERATION: texImage2D: type HALF_FLOAT_OES but ArrayBufferView is not NULL
        texImage2D f (jeelizFaceFilter.js:67:474) a (jeelizFaceFilter.js:79) d (jeelizFaceFilter.js:62) a (jeelizFaceFilter.js:79) a (jeelizFaceFilter.js:96:505) a (jeelizFaceFilter.js:89:142) (anonymous function) (jeelizFaceFilter.js:103:145) map Ac (jeelizFaceFilter.js:103:115) (anonymous function) (jeelizFaceFilter.js:131) (anonymous function) (jeelizFaceFilter.js:125:118) onreadystatechange (jeelizFaceFilter.js:19:241)
[Error] WebGL: INVALID_OPERATION: drawElements: attempt to access out of bounds arrays
        drawElements f (jeelizFaceFilter.js:56:469) ha (jeelizFaceFilter.js:120:220) fa (jeelizFaceFilter.js:119:302) k (jeelizFaceFilter.js:129:453) (anonymous function) (jeelizFaceFilter.js:135:445) (anonymous function) (jeelizFaceFilter.js:125:118) onreadystatechange (jeelizFaceFilter.js:19:241)
[Error] WebGL: INVALID_OPERATION: drawElements: attempt to access out of bounds arrays
        drawElements f (jeelizFaceFilter.js:56:469) ha (jeelizFaceFilter.js:120:303) fa (jeelizFaceFilter.js:119:302) k (jeelizFaceFilter.js:129:453) (anonymous function) (jeelizFaceFilter.js:135:445) (anonymous function) (jeelizFaceFilter.js:125:118) onreadystatechange (jeelizFaceFilter.js:19:241)
[Error] WebGL: INVALID_OPERATION: drawElements: attempt to access out of bounds arrays
        drawElements f (jeelizFaceFilter.js:56:469) G (jeelizFaceFilter.js:92:180) (anonymous function) (jeelizFaceFilter.js:103:300) forEach G (jeelizFaceFilter.js:103:282) ha (jeelizFaceFilter.js:120:322) fa (jeelizFaceFilter.js:119:302) k (jeelizFaceFilter.js:129:453) (anonymous function) (jeelizFaceFilter.js:135:445) (anonymous function) (jeelizFaceFilter.js:125:118) onreadystatechange (jeelizFaceFilter.js:19:241)
[Error] WebGL: INVALID_OPERATION: drawElements: attempt to access out of bounds arrays
        drawElements f (jeelizFaceFilter.js:56:469) wa (jeelizFaceFilter.js:112) G (jeelizFaceFilter.js:92:202) (anonymous function) (jeelizFaceFilter.js:103:300) forEach G (jeelizFaceFilter.js:103:282) ha (jeelizFaceFilter.js:120:322) fa (jeelizFaceFilter.js:119:302) k (jeelizFaceFilter.js:129:453) (anonymous function) (jeelizFaceFilter.js:135:445) (anonymous function) (jeelizFaceFilter.js:125:118) onreadystatechange (jeelizFaceFilter.js:19:241)
[Error] WebGL: INVALID_OPERATION: drawElements: attempt to access out of bounds arrays
        drawElements f (jeelizFaceFilter.js:56:469) wa (jeelizFaceFilter.js:112) G (jeelizFaceFilter.js:92:202) (anonymous function) (jeelizFaceFilter.js:103:300) forEach G (jeelizFaceFilter.js:103:282) ha (jeelizFaceFilter.js:120:322) fa (jeelizFaceFilter.js:119:302) k (jeelizFaceFilter.js:129:453) (anonymous function) (jeelizFaceFilter.js:135:445) (anonymous function) (jeelizFaceFilter.js:125:118) onreadystatechange (jeelizFaceFilter.js:19:241)

Multiple faces detection - Delay on another faces detection

Hello people, I'm facing a strange problem with multiple face detection. After detecting the first face, the API takes a long time to detect the second face, and sometimes it detects only the second face and ignores the first one.

I don't know if I'm doing something wrong; there's no difference between my code and the demos, except that I'm using a plane geometry and loading .png images instead of 3D models.

var threeStuffs = THREE.JeelizHelper.init(spec, SETTINGS.maxFaces, detect_callback);

// Load the first sticker:
geometry = new THREE.PlaneGeometry(1, 1, 1);
material = new THREE.MeshBasicMaterial({
  map: THREE.ImageUtils.loadTexture('assets/stickers/sticker1.png'),
  transparent: true,
  //color: 0x0000ff,
  side: THREE.DoubleSide
});

plane = new THREE.Mesh(geometry, material);
plane.scale.set(3, 3, 3);
plane.position.set(0.0, 1.1, 0.0);
plane.frustumCulled = false;

threeStuffs.faceObjects.forEach(function(faceObject) {
  faceObject.add(plane.clone());
});

// CREATE THE CAMERA
var aspectRatio = spec.canvasElement.width / spec.canvasElement.height;
THREECAMERA = new THREE.PerspectiveCamera(SETTINGS.cameraFOV, aspectRatio, 0.1, 100);

Thank you very much!!!

Changing Filter Source

Describe the bug
The filter currently uses the user's camera. In my case I already have a video to apply the filter to, but I can't manage to change the library source to take that video instead of the camera. I tried with the dog face filter demo and the Babylon cube, but no luck. Where can I change that source in those demos?

To Reproduce
Steps to reproduce the behavior:
This is actually not an error/bug.

Expected behavior
What I expect to happen is the dog face filter or the Babylon cube working with an actual video tag as source. I can send that video to a canvas if necessary.


Desktop (please complete the following information):

  • Browser: Chrome, Safari, Firefox


Mapping face size and 3d model size

First, thanks for this good library.

  1. Can I map the tracked face size to the Three.js model size? People's face sizes are different, but there is only one Three.js model size.
    ex) https://tastenkunst.github.io/brfv4_javascript_examples

  2. I tried your other project. I want to make the 3D model look nicer.

  • I did the modeling (3ds Max) and exported it (.obj or .json),
  • I imported it into the three.js editor and created a material, but the three.js editor has limited functionality. I tried scripting in the three.js editor, but it's not efficient.
  • How did you make it? https://jeeliz.com/sunglasses/

Can I get face recognition speed up?

I researched jeelizFaceFilter and jeelizGlassesVTOWidget, and I developed a web application with jeelizFaceFilter, because the jeelizGlassesVTOWidget library is minified.

But jeelizFaceFilter is slower than jeelizGlassesVTOWidget at face recognition. How can I make it as fast as jeelizGlassesVTOWidget at face recognition?

IMG_4472.MOV.zip

I uploaded a video.
Thanks, Jeeliz developers!

Face placement?

Hey guys,
Again, loving what you've done! Looking forward to sharing what I've been doing with it with you soon. :)
Do you have any projects in the works (or know of any APIs) that allow you to put a user's face in another environment? Like this example :P: https://imgur.com/a/oDdS2N5
Thanks!

Is it possible to use this library in a React Native project?


Multiple face tracking.

Amazing library!
Nice to finally see an alternative to the existing (slow) ones.

Is there any intent to add the option to track multiple faces in the future?

Thanks.

Video does not play when opening a new window from another page

First, thanks for this good library.

Describe the bug
demos/threejs/miel_pops/index.html

I made a new page, then created an action that goes to the video page. However, I found that the video page sometimes plays and sometimes does not play when opened in a new window.

To Reproduce
Steps to reproduce the behavior:

  1. Create page 'test.html'
<html>
    <head>
        <script>
        </script>
    </head>
    <body>
        <button style='font-size:10vw'; onclick='window.open("demos/threejs/miel_pops/index.html")'>new window</button>
    </body>
</html>
  2. Click the 'new window' button
  3. Go to 'demos/threejs/miel_pops/index.html'
  4. Camera permission check -> yes
  5. The video plays
  6. Return to page 'test.html'
  7. Repeat steps 3->4->5 -> the video can't play (returns errCode = 'WEBCAM_UNAVAILABLE')

Repeating the attempt (steps 1~7):
First attempt = video plays
Second attempt = video doesn't play
Third attempt = video plays
Fourth attempt = video doesn't play

I tried other cases.

  1. I tried with 'location.href' instead of 'window.open'; then the video always plays. I think it's a problem with Safari and 'window.open'.

  2. When the video plays, it keeps playing even if the page is refreshed. However, when the video is not playing, it does not play after a page refresh either.

Expected behavior
The video should play every time.

Desktop (please complete the following information):
Does not happen with Desktop

  • OS: Mac os
  • Browser: chrome 69.0.3497.100, safari 12

Smartphone (please complete the following information):
Occurs only on Mobile Safari (does not happen with developer mobile tools)

  • Device: iPhone6s, iPhoneX
  • OS: iOS 12.0
  • Browser : safari 12

Additional context
I was trying to find the point where the video permission is obtained, but it's in jeelizFaceFilterES6. I'm sorry that the situation is complicated.

Face scale on Football demo

Hey Jeeliz,
I've been using the Football Fan effect as a foundation for a project (it's very, very good) and would love to be able to add some elements to the chin. When the mouth opens, though, the texture image doesn't scale. Can you recommend the most efficient way of making the y scale of the 'mask' respond to the mouth movement?
Thanks so much!

Portrait mode does not work?

Describe the bug
I have tried fiddling with the optional init arguments by adding idealWidth and idealHeight in videoSettings, but it does not seem to work.

function init_faceFilter(videoSettings){

    JEEFACEFILTERAPI.init({
        canvasId: 'jeeFaceFilterCanvas',
        NNCpath: '../jeelizFaceFilter/dist/', // root of NNC.json file
        callbackReady: function(errCode, spec){
          if (errCode){
            console.log('AN ERROR HAPPENS. ERR =', errCode);
            return;
          }

          console.log('INFO : JEEFACEFILTERAPI IS READY');
          init_threeScene(spec);
        }, //end callbackReady()

        //called at each render iteration (drawing loop) :
        callbackTrack: function(detectState){
          onUpdate();
          THREE.JeelizHelper.render(detectState, THREECAMERA);
        }, //end callbackTrack()
        videoSetting:{'idealWidth':720,'idealHeight':1280}
        // videoSetting:{'idealWidth':1280,'idealHeight':720}
    
    }); //end JEEFACEFILTERAPI.init call

   
} // end main()
<body onload="main()" style='color: white'>
        <!-- CANVAS BEHIND : VIDEO FACEFILTER ONLY -->
        <canvas  id='jeeFaceFilterCanvas' style="width :100vw; height: auto;"></canvas>


        <div id="reso" style ="font-size:2em; color: black"></div>

        <!-- CANVAS ABOVE : AR WITH THREE.JS -->
        <!-- <canvas width="600" height="600" id='threejsCanvas'></canvas> -->

        <button id="btn" style="position: relative; font-size:inherit;">Record</button>
    </body>

Result
The webcam texture seems to still be in landscape mode. [screenshot]

Webcam texture in landscape mode. [screenshot]

Doesn't work for Safari (desktop or iOS). Getting 'Webcam Unavailable' Error.

Describe the bug
I'm unable to run jeeliz face filter on safari. I'm getting the 'webcam_unavailable' error.

To Reproduce
Steps to reproduce the behavior:

  1. Go to https://jsfiddle.net/jeeliz/2p34hbeh/
  2. Open the browser console to see the error.

Expected behavior
You should see the camera; instead, you only see a blank page.


Desktop (please complete the following information):

  • OS: iOS / macbook pro
  • Browser: Safari

Smartphone (please complete the following information):

  • Device: iPhone.
  • Browser: Safari mobile.

gltf_fullScreen demo cannot go full screen when showing a shadow.

Hello, thanks for this good library.

I have a question about one of the demos.

Environment

Device: Mac OS
Browser: Chrome
I think it occurs in most environments.

demo file

gltf_fullScreen/index.html

Describe

I added a spotLight (castShadow on) in this demo. When I do, the renderer has a border and is not full screen.

Step

  1. Go to the demo
  2. In the browser developer console, enter this code:
THREERENDERER.shadowMap.enabled = true;
var spotLight = new THREE.SpotLight( 0xffffff );
spotLight.castShadow = true;
THREESCENE.add(spotLight);
  3. At this point, a border appears and the demo is not shown full screen

Additional context

I tried several things: it is caused by the shadow, and it only occurs in this demo (maybe because it is fullscreen). I hesitated a lot over whether to put this issue here or on the three.js tracker. I am sorry if it is a matter of three.js.

Best,
whilemouse

Video stops in fullscreen mode with viewport

First, thanks for this good library.

Describe the bug
I used the three.js demo and modified it. The video stops in fullscreen mode with a viewport set. It probably happens only on old iPhones.

To Reproduce
Steps to reproduce the behavior:

  1. Go to source file 'demos/threejs/miel_pops/index.html'
  2. Change Code
  • demos/threejs/miel_pops/index.html
    • add viewport
<head>
    <meta name="viewport" content="width=device-width, initial-scale=1">
</head>
  • demos/threejs/miel_pops/demo.js
    • add isFullScreen: true,
function main(){
    JeelizResizer.size_canvas({
        canvasId: 'jeeFaceFilterCanvas',
        isFullScreen: true,
        callback: function(isError, bestVideoSettings){
            init_faceFilter(bestVideoSettings);
        }
    })
}
  3. Open Safari -> responsive design mode -> select iPhone SE
  4. Open 'demos/threejs/miel_pops/index.html'
  5. The video is stopped

Expected behavior
The video should play.

Screenshots
None uploaded, because a screenshot can't show whether the video is playing or stopped.

Desktop (please complete the following information):

  • OS: MacOS High Sierra 10.13.3
  • Browser [responsive design mode iPhone SE in safari(11.0.3)]

Smartphone (please complete the following information):

  • Device: [iPhone6s, iPhone SE]
  • OS: [iOS 12.0.1]
  • Browser [safari 11.0.3]

Additional context
My test device

  • Video playing device
    • iPhone 8(responsive design mode in mac os safari) : video playing
    • Mac OS(safari) : video playing
  • Video stopped device
    • iPhone 6s(mobile safari): video stopped
    • iPhone SE(responsive design mode in mac os safari) : video stopped
    • galaxy note 9(mobile chrome): video playing

3D model shakes on iOS 12

First, thanks for this good library.

My environment:

  • Device: iPhone6s, iPhoneX
  • OS: iOS 12

NNC Version:

  • NNC.json(default neural network in jeelizFaceFilter tag 1.1)

I was using it fine on iOS 11, but I updated to iOS 12 and found the 3D model shaking. Is it a problem with iOS 12?

Loading materials via the JSON model

Thanks so much for this API, it's great!
I'm curious (and not very experienced with threeJS) - do you know a way to import materials via the JSON model as opposed to loading a texture image? I'm trying to determine a workflow that's as streamlined as possible.
Also, do you use JSON models purely as a preference?

CSS3 DIV ~ image position in relation to camera

Hey guys,
I'm playing with the div project, replacing the rectangle with an image. That was my only change.
I notice when rotating the camera away from my face, the image flies off wildly. I've played with the sensitivity settings and it still happens. Do you have any suggestions?

Access to public api through functions

When using facefilter through npm, I realized that it is not convenient to import and use the library in my project, which is made up of many modules and components. To access the API I need to reference the window element like this:
window['JEEFACEFILTERAPI']
which is not really an elegant solution.

In my opinion it would be much better if the lib exported a function which could be utilized within other scripts, so it could be accessed like this:

import facefilter from 'dist/jeelizFaceFilter';

function foo(){
  facefilter.init(...);
}

Can we have multiple camera previews on the same page?


Face detection using IP camera

Is your feature request related to a problem? Please describe.
We have surveillance IP cameras to detect faces from a 10-20 m distance. We want to detect faces in low lighting conditions using a JavaScript framework.

Describe the solution you'd like
We use dlib for face recognition (using a Python framework). We would like to take the faces detected by the Jeeliz framework and process them with the Python framework.

Describe alternatives you've considered
We tried using haarcascade.xml for face recognition; it detects non-faces as faces, and the image quality is very low.

Please suggest how we can use your JS framework along with a Python framework.

Color facial features, like eyes, lips, cheek bones

Firstly, this is a fantastic library, really well implemented and documented. However, I did get stuck in the following places:

  1. Using faceLowPoly.json, can I highlight the mesh lines in the video? Currently when I try to do it, I get the entire mesh as 1 solid object.

  2. As a further extension of the one above, can I color certain triangles (facial markers), like the lips or eyes? Basically, detect the lips and then color them.

If so, how do I go about doing this?

How to run jeeliz on a static image or custom video stream?

I noticed that using JEEFACEFILTER.init() only allows detection of the face on the video stream accessed through getUserMedia(). As far as I understand, there is no way to control the image/video which Jeeliz analyses. I found out however that your sunglasses example allows uploading a custom picture.

Is it possible to input a custom image/video stream to jeelizFaceFilter? If yes, how can I do it? I've looked through the readme and tutorials but with no luck :(

The greatest work I've ever found...! But one question

Is your feature request related to a problem? Please describe.
How does the set-position thing work?
Can you please explain how we can put sunglasses and a hat in their exact place?

On the other hand, can you please explain how I can come up with these three values for the sunglasses?

  hatMesh.scale.multiplyScalar(1.2);
  hatMesh.rotation.set(0, -40, 0);
  hatMesh.position.set(0.0, 0.6, 0.0);

Describe the solution you'd like
Should we create more tutorials?

Additional context

I am interested in helping you guys; let me know anything I can help you with.

Detect eyes, nose and mouth precisely.

Congratulations for this incredible library!

Can I get precise coordinates of the eyes, mouth and nose?

Let me explain what I want to do... I'm not a Blender/Three.js professional, and I want to use CSS3 capabilities to position objects on the camera view using coordinates delivered by the jeelizFaceFilter library. Is this possible?

Lag on placement when moving

Hey again,
I'm curious if there's a way to modify the easing on elements when you move, using the Football Fan demo. I didn't see any speed variables and thought it might be a three.js value. As it currently works, it seems the mesh takes a second to catch up with me.

Also, when I turn my head left or right, the position of the elements scales in a strange way: to the left, everything gets smaller; to the right, everything gets bigger.

Thanks again!

Simple demo using VueJS

I'm rebuilding the HeadCursor demo, using head tracking to emulate mouse movement of a fake cursor I have on the page. I have code running, but the STABI.xy values never update.

Describe the solution you'd like
See some method of this demo using something like Vue or React.

Describe alternatives you've considered
I could always create this project the "old school" way, not using a framework like Vue, but I really wanted to attempt recreating this using VueJS.

Additional context
To see what I mean, please run git clone https://github.com/tetreault/face-filter-experiments, then cd face-filter-experiments, run npm i, go to localhost:3000/headmouse and check out the console log.

The small isolated code snippet can be found inside face-filter-experiments/pages/headmouse.vue

Inconsistent div element placement across devices

Hello again,
I'm doing some tests with 2D masks for prototypes and am getting inconsistent eye placement across devices. I'm using the jeeFaceFilterCanvas to hold it. Here are my tweaks for the object. This image is closely cropped.
var SETTINGS = {
  rotationOffsetX: 0,      // negative -> look upper. In radians
  cameraFOV: 40,           // in degrees, 3D camera FOV
  pivotOffsetYZ: [0, -0],  // position the rotation pivot along Y and Z axis
  detectionThreshold: 0.9, // sensitivity, between 0 and 1. Less -> more sensitive
  detectionHysteresis: 0.02,
  mouthOpeningThreshold: 0.5, // sensitivity of mouth opening, between 0 and 1
  mouthOpeningHysteresis: 0.05,
  scale: [4.5, 4.5], // scale of the DIV along horizontal and vertical axis
  positionOffset: [-1.55, -0.9, -0.2] // set a 3D position offset to the div
};

Any suggestions?

Thanks!

Error when I host it in server. Uncaught DOMException: Failed to execute 'texImage2D' on 'WebGL2RenderingContext': Tainted canvases may not be loaded

Uncaught DOMException: Failed to execute 'texImage2D' on 'WebGL2RenderingContext': Tainted canvases may not be loaded


GL_INCOMPATIBLE error on Chrome 69, Android 9

Describe the bug
I've upgraded to Android 9. Jeeliz projects now throw a "GL_INCOMPATIBLE" error in Chrome 69. They work as expected in Firefox on Android, so it seems to be Chrome-specific. I have tested and seen the same result on two Pixel 2s.

To Reproduce
Steps to reproduce the behavior:

  1. Load any Jeeliz demo on Chrome 69 running on Android 9.

Expected behavior
Jeeliz projects to load


Smartphone (please complete the following information):

  • Device: Pixel 2
  • OS: Android 9
  • Browser Chrome
  • Version 69

As usual, I'm a big fan of your work and hope it's a simple resolution.

iOS/Safari does not work if camera facingMode has been changed

Describe the bug
NOTE: Using an established video element

  1. Error thrown (twice) on iOS (as seen in issue #14), but tracking initiates and works if facingMode has not changed:
    WebGL: INVALID_OPERATION: texImage2D: type HALF_FLOAT_OES but ArrayBufferView is not NULL

  2. If the facingMode has been changed ("user" or "environment", as seen below), tracking fails to start.

    this.mediaDevices.getUserMedia({
      "audio": false,
      "video": this.params.constraints || {
        minWidth: this.params.dest_width,
        minHeight: this.params.dest_height,
        facingMode: "user"
      }
    })

To Reproduce
Steps to reproduce the behavior:

  1. Instantiate new video element and use getUserMedia to start it up
  2. Change the facingMode of the video element and re-attach to the camera
  3. Attempt to init jeelizFaceFilter on the established video element
  4. Errors are thrown and tracking fails

Expected behavior
Face tracking should start regardless of whether the camera has been re-bound with new constraints prior to init.

Smartphone (please complete the following information):

  • Device: iPhone 6
  • OS: iOS 11.4
  • Browser: Safari
  • Version: 11.4
