
Comments (9)

nonam4 commented on June 3, 2024

If you still need centerX/Y, you can compute them as x + (width / 2) or y + (height / 2).
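That computation can be wrapped in a small helper. This is only a sketch assuming the newer bounds shape where x/y is the top-left corner; `Bounds` and `getFaceCenter` are hypothetical names, not exports of the library:

```typescript
// Hypothetical shape of the bounds object (x/y = top-left corner).
type Bounds = { x: number; y: number; width: number; height: number };

// Derive the center point from a top-left-anchored bounding box.
function getFaceCenter(bounds: Bounds): { centerX: number; centerY: number } {
  return {
    centerX: bounds.x + bounds.width / 2,
    centerY: bounds.y + bounds.height / 2,
  };
}
```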

from react-native-vision-camera-face-detector.

nonam4 commented on June 3, 2024

Are you sure you’re using the right package?
There’s no centerX|Y exported from this library.

Can you provide me your package.json file?


nonam4 commented on June 3, 2024

Closing due to lack of response.


luanfvieira commented on June 3, 2024
{
  "name": "expo-face",
  "version": "1.0.0",
  "main": "node_modules/expo/AppEntry.js",
  "scripts": {
    "start": "expo start",
    "android": "expo run:android",
    "ios": "expo run:ios",
    "web": "expo start --web"
  },
  "dependencies": {
    "@react-native-community/hooks": "^3.0.0",
    "@react-native-masked-view/masked-view": "0.3.0",
    "@react-navigation/core": "^6.4.16",
    "@react-navigation/native": "^6.1.17",
    "expo": "~50.0.14",
    "expo-status-bar": "~1.11.1",
    "react": "18.2.0",
    "react-native": "0.73.6",
    "react-native-circular-progress": "^1.3.9",
    "react-native-reanimated": "~3.6.2",
    "react-native-safe-area-context": "4.8.2",
    "react-native-vision-camera": "3.9.0",
    "react-native-vision-camera-face-detector": "^1.3.5",
    "react-native-worklets-core": "^0.4.0",
    "react-native-svg": "14.1.0"
  },
  "devDependencies": {
    "@babel/core": "^7.20.0",
    "@types/react": "~18.2.45",
    "typescript": "^5.1.3"
  },
  "private": true
}


luanfvieira commented on June 3, 2024
import { Dimensions, StyleSheet, Text, View } from "react-native";
import { useEffect, useReducer, useRef, useState } from "react";
import {
  useCameraDevice,
  Camera as VisionCamera,
} from "react-native-vision-camera";
import {
  Camera,
  DetectionResult,
} from "react-native-vision-camera-face-detector";
import { Worklets } from "react-native-worklets-core";
import MaskedView from "@react-native-masked-view/masked-view";
import { AnimatedCircularProgress } from "react-native-circular-progress";
// import { Camera } from "./Camera";

const { width: windowWidth } = Dimensions.get("window");

const PREVIEW_SIZE = 305;
const PREVIEW_RECT = {
  minX: (windowWidth - PREVIEW_SIZE) / 2,
  minY: 100,
  width: PREVIEW_SIZE,
  height: PREVIEW_SIZE,
};

const instructionsText = {
  initialPrompt: "Posicione seu rosto no círculo", // "Position your face in the circle"
  performActions: "Mantenha o dispositivo parado e execute as seguintes ações:", // "Keep the device still and perform the following actions:"
  tooClose: "Você está muito perto. Segure o dispositivo mais distante.", // "You are too close. Hold the device farther away."
};

const detections = {
  BLINK: { instruction: "Pisque os dois olhos", minProbability: 0.3 }, // "Blink both eyes"
  // TURN_HEAD_LEFT: { instruction: "Turn head left", maxAngle: -15 },
  // TURN_HEAD_RIGHT: { instruction: "Turn head right", minAngle: 15 },
  // NOD: { instruction: "Nod", minDiff: 1.5 },
  SMILE: { instruction: "Sorria", minProbability: 0.7 }, // "Smile"
};

// type DetectionActions = keyof typeof detections
const detectionsList = [
  "BLINK",
  // "TURN_HEAD_LEFT",
  // "TURN_HEAD_RIGHT",
  // "NOD",
  "SMILE",
];

const initialState = {
  faceDetected: "no",
  faceTooBig: "no",
  detectionsList,
  currentDetectionIndex: 0,
  progressFill: 0,
  processComplete: false,
};

function contains({ outside, inside }) {
  const outsideMaxX = outside.minX + outside.width;
  const insideMaxX = inside.minX + inside.width;

  const outsideMaxY = outside.minY + outside.height;
  const insideMaxY = inside.minY + inside.height;

  console.log(
    inside.minX < outside.minX,
    insideMaxX > outsideMaxX,
    inside.minY < outside.minY,
    insideMaxY > outsideMaxY
  );

  console.log({ outside, inside });
  console.log({ outsideMaxX, insideMaxX, outsideMaxY, insideMaxY });

  if (inside.minX < outside.minX) {
    return false;
  }
  if (insideMaxX > outsideMaxX) {
    return false;
  }
  if (inside.minY < outside.minY) {
    return false;
  }
  if (insideMaxY > outsideMaxY) {
    return false;
  }

  return true;
}

function contains3({ outside, inside }) {
  const outsideMaxX = outside.minX + outside.width;
  const insideMaxX = inside.minX + inside.width;

  const outsideMaxY = outside.minY + outside.height;
  const insideMaxY = inside.minY + inside.height;
  console.log({ outside, inside });
  const xIntersect = Math.max(
    0,
    Math.min(insideMaxX, outsideMaxX) - Math.max(inside.minX, outside.minX)
  );
  const yIntersect = Math.max(
    0,
    Math.min(insideMaxY, outsideMaxY) - Math.max(inside.minY, outside.minY)
  );
  const intersectArea = xIntersect * yIntersect;

  const insideArea = inside.width * inside.height;
  const outsideArea = outside.width * outside.height;

  const unionArea = insideArea + outsideArea - intersectArea;

  return unionArea === outsideArea;
}

const detectionReducer = (state, action) => {
  switch (action.type) {
    case "FACE_DETECTED":
      if (action.payload === "yes") {
        return {
          ...state,
          faceDetected: action.payload,
          progressFill: 100 / (state.detectionsList.length + 1),
        };
      } else {
        // Reset
        return initialState;
      }
    case "FACE_TOO_BIG":
      return { ...state, faceTooBig: action.payload };
    case "NEXT_DETECTION":
      // next detection index
      const nextDetectionIndex = state.currentDetectionIndex + 1;

      // skip 0 index
      const progressMultiplier = nextDetectionIndex + 1;

      const newProgressFill =
        (100 / (state.detectionsList.length + 1)) * progressMultiplier;

      if (nextDetectionIndex === state.detectionsList.length) {
        // success

        return {
          ...state,
          processComplete: true,
          progressFill: newProgressFill,
        };
      }
      // next
      return {
        ...state,
        currentDetectionIndex: nextDetectionIndex,
        progressFill: newProgressFill,
      };
    default:
      throw new Error("Unexpected action type.");
  }
};

export default function App() {
  const rollAngles = useRef([]);
  const camera = useRef<VisionCamera>(null);
  const device = useCameraDevice("front", {
    physicalDevices: ["wide-angle-camera"],
  });
  const [state, dispatch] = useReducer(detectionReducer, initialState);

  const onFacesDetected = (result) => {
    // 1. There is only a single face in the detection results.

    if (result.faces.length !== 1) {
      dispatch({ type: "FACE_DETECTED", payload: "no" });
      return;
    }

    const face = result.faces[0];

    console.log(face.bounds);

    const faceRect = {
      minX: face.bounds.centerX,
      minY: face.bounds.centerY,
      width: face.bounds.width,
      height: face.bounds.height,
    };

    // 2. The face is almost fully contained within the camera preview.
    const edgeOffset = 50;
    const faceRectSmaller = {
      width: faceRect.width - edgeOffset,
      height: faceRect.height - edgeOffset,
      minY: faceRect.minY + edgeOffset / 2,
      minX: faceRect.minX + edgeOffset / 2,
    };
    const previewContainsFace = contains({
      outside: PREVIEW_RECT,
      inside: faceRectSmaller,
    });

    console.log({ previewContainsFace });
    if (!previewContainsFace) {
      dispatch({ type: "FACE_DETECTED", payload: "no" });
      return;
    }

    if (state.faceDetected === "no") {
      // 3. The face is not as big as the camera preview.
      const faceMaxSize = PREVIEW_SIZE - 90;
      if (faceRect.width >= faceMaxSize && faceRect.height >= faceMaxSize) {
        dispatch({ type: "FACE_TOO_BIG", payload: "yes" });
        return;
      }

      if (state.faceTooBig === "yes") {
        dispatch({ type: "FACE_TOO_BIG", payload: "no" });
      }
    }

    if (state.faceDetected === "no") {
      dispatch({ type: "FACE_DETECTED", payload: "yes" });
    }

    const detectionAction = state.detectionsList[state.currentDetectionIndex];

    switch (detectionAction) {
      case "BLINK":
        // Lower probability means the eyes are closed
        const leftEyeClosed =
          face.leftEyeOpenProbability <= detections.BLINK.minProbability;
        const rightEyeClosed =
          face.rightEyeOpenProbability <= detections.BLINK.minProbability;
        if (leftEyeClosed && rightEyeClosed) {
          dispatch({ type: "NEXT_DETECTION", payload: null });
        }
        return;
      case "NOD":
        // Collect roll angle data
        rollAngles.current.push(face.rollAngle);

        // Don't keep more than 10 roll angles (10 detection frames)
        if (rollAngles.current.length > 10) {
          rollAngles.current.shift();
        }

        // If not enough roll angle data, then don't process
        if (rollAngles.current.length < 10) return;

        // Calculate avg from collected data, except current angle data
        const rollAnglesExceptCurrent = [...rollAngles.current].splice(
          0,
          rollAngles.current.length - 1
        );

        // Summation
        const rollAnglesSum = rollAnglesExceptCurrent.reduce((prev, curr) => {
          return prev + Math.abs(curr);
        }, 0);

        // Average
        const avgAngle = rollAnglesSum / rollAnglesExceptCurrent.length;

        // If the difference between the current angle and the average is above threshold, pass.
        const diff = Math.abs(avgAngle - Math.abs(face.rollAngle));

        if (diff >= detections.NOD.minDiff) {
          dispatch({ type: "NEXT_DETECTION", payload: null });
        }
        return;
      case "TURN_HEAD_LEFT":
        // Negative angle is when the face turns left
        if (face.yawAngle <= detections.TURN_HEAD_LEFT.maxAngle) {
          dispatch({ type: "NEXT_DETECTION", payload: null });
        }
        return;
      case "TURN_HEAD_RIGHT":
        // Positive angle is when the face turns right
        if (face.yawAngle >= detections.TURN_HEAD_RIGHT.minAngle) {
          dispatch({ type: "NEXT_DETECTION", payload: null });
        }
        return;
      case "SMILE":
        // Higher probability means the face is smiling
        if (face.smilingProbability >= detections.SMILE.minProbability) {
          dispatch({ type: "NEXT_DETECTION", payload: null });
        }
        return;
    }
  };

  const handleFacesDetection = Worklets.createRunInJsFn(
    (result: DetectionResult) => {
      onFacesDetected(result);
    }
  );

  if (device == null) {
    return (
      <View
        style={{
          flex: 1,
          justifyContent: "center",
          alignItems: "center",
          borderBlockColor: "red",
        }}
      >
        <Text>Camera not available</Text>
      </View>
    );
  }

  return (
    <View style={styles.container}>
      <MaskedView
        style={StyleSheet.absoluteFill}
        maskElement={<View style={styles.mask} />}
      >
        <Camera
          // @ts-ignore
          ref={camera}
          style={StyleSheet.absoluteFill}
          device={device}
          faceDetectionCallback={handleFacesDetection}
          faceDetectionOptions={{
            // detection settings
            landmarkMode: "none",
            performanceMode: "accurate",
            classificationMode: "all",
            trackingEnabled: true,
            contourMode: "all",
            // minFaceSize: 0.8,
            // convertFrame: true,
            // contourMode: "all",
          }}
          isActive={true}
        >
          <AnimatedCircularProgress
            style={styles.circularProgress}
            size={PREVIEW_SIZE}
            width={5}
            backgroundWidth={7}
            fill={state.progressFill}
            tintColor="#3485FF"
            backgroundColor="#e8e8e8"
          />
        </Camera>
      </MaskedView>
      <View style={styles.instructionsContainer}>
        <Text style={styles.instructions}>
          {state.faceDetected === "no" &&
            state.faceTooBig === "no" &&
            instructionsText.initialPrompt}

          {state.faceTooBig === "yes" && instructionsText.tooClose}

          {state.faceDetected === "yes" &&
            state.faceTooBig === "no" &&
            instructionsText.performActions}
        </Text>
        <Text style={styles.action}>
          {state.faceDetected === "yes" &&
            state.faceTooBig === "no" &&
            detections[state.detectionsList[state.currentDetectionIndex]]
              .instruction}
        </Text>
      </View>
    </View>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    // backgroundColor: "#18181b",
  },
  mask: {
    borderRadius: PREVIEW_SIZE / 2,
    height: PREVIEW_SIZE,
    width: PREVIEW_SIZE,
    marginTop: PREVIEW_RECT.minY,
    alignSelf: "center",
    backgroundColor: "white",
  },
  circularProgress: {
    width: PREVIEW_SIZE,
    height: PREVIEW_SIZE,
    marginTop: PREVIEW_RECT.minY,
    marginLeft: PREVIEW_RECT.minX,
  },
  instructions: {
    fontSize: 20,
    textAlign: "center",
    top: 25,
    position: "absolute",
    color: "#fb923c",
  },
  instructionsContainer: {
    flex: 1,
    justifyContent: "center",
    alignItems: "center",
    marginTop: PREVIEW_RECT.minY + PREVIEW_SIZE,
  },
  action: {
    fontSize: 24,
    textAlign: "center",
    fontWeight: "bold",
    color: "#e11d48",
  },
});


luanfvieira commented on June 3, 2024

I sent the package.json and an App.tsx example above. Would it be possible to reopen the issue?


nonam4 commented on June 3, 2024

As you can see below, there's no centerX or centerY returned.


nonam4 commented on June 3, 2024

Looking into version 1.3.5 I did find centerX|Y 😅

In that version you must use left for x and top for y.
In the new 1.4.1 version they were renamed to x and y.
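If code needs to run against both versions, one option is a tiny compatibility shim. This is only a sketch; `AnyBounds` and `getOrigin` are hypothetical names, assuming the bounds expose either left/top (1.3.5) or x/y (1.4.1):

```typescript
// Hypothetical union of the two bounds shapes seen across versions.
type AnyBounds = {
  x?: number;
  y?: number;
  left?: number;
  top?: number;
  width: number;
  height: number;
};

// Prefer the 1.4.1 field names (x/y), fall back to the 1.3.5 ones (left/top).
function getOrigin(b: AnyBounds): { x: number; y: number } {
  return { x: b.x ?? b.left ?? 0, y: b.y ?? b.top ?? 0 };
}
```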


luanfvieira commented on June 3, 2024

Thank you very much!
