
61315 / mediapipe-prebuilt

Stars: 32, Watchers: 3, Forks: 9, Size: 213.38 MB

Prebuilt mediapipe packages and demos, ready to deploy on your device in one go. 💨

License: MIT License

Languages: Objective-C++ 42.41%, Starlark 27.32%, C++ 25.10%, Objective-C 5.16%
Topics: iris, mediapipe, eye tracking, ios, gaze, swift, multipose, pose


mediapipe-prebuilt's Issues

Undefined symbol: _OBJC_CLASS_$_MPPBPose

Hi,
On macOS 12 with Xcode 14.2, building mppb-ios-pose.xcodeproj always fails with this linker error:
Undefined symbol: _OBJC_CLASS_$_MPPBPose
MPPBPose.framework is in fact linked into the project, so why does this happen?
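
An error of the form Undefined symbol: _OBJC_CLASS_$_SomeClass means the Objective-C class was never linked into the final binary, even if the framework appears in the project. Two common causes worth ruling out (both assumptions here, since the project settings aren't shown): the framework is present in the project navigator but missing from the target's "Link Binary With Libraries" / "Frameworks and Libraries" build phase, or the prebuilt framework contains only device (arm64) slices while Xcode 14.2 is building for the simulator.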

How to get face, hand, and body landmarks?

Hello, I'm a developer working on an iOS application.
I need to use MediaPipe to track the human body, face, and hands. I ran the example demo and can get the real-time pixel buffer, but I can't retrieve the face, hand, and body landmarks from the code.

I need your help to get those landmarks.

Language: Swift/Objective-C
Xcode version: 14.0.1
Device: iPhone 12 Pro Max
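
For what it's worth, the pattern used elsewhere in this repository (see MPPIrisTracker.mm in the issue further down) is to register each landmark stream as a raw frame output on the MPPGraph and read the NormalizedLandmarkList out of the packet in the MPPGraphDelegate callback. A minimal Objective-C++ sketch, assuming the loaded graph exposes a stream named "face_landmarks" (the stream name is an assumption and depends on the graph in use):

#include "mediapipe/framework/formats/landmark.pb.h"

// Hypothetical stream name; it must match an output_stream of the loaded graph.
static const char* kFaceLandmarksStream = "face_landmarks";

// After loading the graph, request the landmark packets as raw frame output.
[self.mediapipeGraph addFrameOutputStream:kFaceLandmarksStream
                         outputPacketType:MPPPacketTypeRaw];

// MPPGraphDelegate method, invoked on a MediaPipe worker thread.
- (void)mediapipeGraph:(MPPGraph*)graph
       didOutputPacket:(const ::mediapipe::Packet&)packet
            fromStream:(const std::string&)streamName {
    if (streamName == kFaceLandmarksStream && !packet.IsEmpty()) {
        const auto& landmarks = packet.Get<::mediapipe::NormalizedLandmarkList>();
        for (int i = 0; i < landmarks.landmark_size(); ++i) {
            // x/y are normalized to [0, 1] in image coordinates.
            NSLog(@"landmark[%d]: (%f, %f, %f)", i, landmarks.landmark(i).x(),
                  landmarks.landmark(i).y(), landmarks.landmark(i).z());
        }
    }
}

Hand and pose landmarks follow the same pattern with their respective stream names; note that multi-hand graphs typically emit a std::vector<NormalizedLandmarkList> packet rather than a single list.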

LandmarksSmoothingCalculator

Hey,

I am looking to apply the LandmarksSmoothingCalculator to the face landmarks. I tried simply adding the calculator as you mention in the readme, but when I try to build the project I get this error:

[libprotobuf ERROR external/com_google_protobuf/src/google/protobuf/text_format.cc:309] Error parsing text-format mediapipe.CalculatorGraphConfig: 141:63: Extension "mediapipe.LandmarksSmoothingCalculatorOptions.ext" is not defined or is not an extension of "mediapipe.CalculatorOptions".

Could you please post your pbtxt file and all the other changes you applied to mediapipe in order to use the LandmarksSmoothingCalculator?

Thank you.

diff --git a/mediapipe/modules/face_landmark/face_landmark_cpu.pbtxt b/mediapipe/modules/face_landmark/face_landmark_cpu.pbtxt
index a94a8c8..e196afb 100644
--- a/mediapipe/modules/face_landmark/face_landmark_cpu.pbtxt
+++ b/mediapipe/modules/face_landmark/face_landmark_cpu.pbtxt
@@ -124,11 +124,35 @@ node {
   }
 }
 
+# Extracts the input image frame dimensions as a separate packet.
+node {
+  calculator: "ImagePropertiesCalculator"
+  input_stream: "IMAGE:input_image"
+  output_stream: "SIZE:input_image_size"
+}
+
+# Applies smoothing to the face landmarks.
+node {
+  calculator: "LandmarksSmoothingCalculator"
+  input_stream: "NORM_LANDMARKS:landmarks"
+  input_stream: "IMAGE_SIZE:input_image_size"
+  output_stream: "NORM_FILTERED_LANDMARKS:filtered_landmarks"
+  options: {
+    [mediapipe.LandmarksSmoothingCalculatorOptions.ext] {
+      one_euro_filter {
+        min_cutoff: 0.01
+        beta: 10.0
+        derivate_cutoff: 1.0
+      }
+    }
+  }
+}
+
 # Projects the landmarks from the cropped face image to the corresponding
 # locations on the full image before cropping (input to the graph).
 node {
   calculator: "LandmarkProjectionCalculator"
-  input_stream: "NORM_LANDMARKS:landmarks"
+  input_stream: "NORM_FILTERED_LANDMARKS:filtered_landmarks"
   input_stream: "NORM_RECT:roi"
   output_stream: "NORM_LANDMARKS:face_landmarks"
 }
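
A note on the error itself: "Extension ... is not defined or is not an extension of mediapipe.CalculatorOptions" generally means the calculator and its options proto were never linked into the binary, so the text-format parser cannot resolve the extension. Besides the pbtxt change above, the Bazel target that builds the graph needs //mediapipe/calculators/util:landmarks_smoothing_calculator in its deps, as the BUILD diff in the next issue does for the GPU variant.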

No pixel buffer output on current mediapipe, transform code still missing

Two issues: 1. I'm trying to build this against the current mediapipe, as shown below, and I'm not getting any pixel buffer output. 2. Can you add or post the code that converts the SingleFaceGeometry to the simd_float4x4 passed to the delegate method? (One possible conversion is sketched right after this paragraph.)
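
On point 2, a minimal sketch of that conversion, to be placed in the didOutputPacket handler of MPPIrisTracker.mm below where the kSingleFaceGeometryStream branch is commented out. It assumes the stream carries a std::vector<mediapipe::face_geometry::FaceGeometry> (the MULTI_FACE_GEOMETRY output) and that the 4x4 MatrixData is packed column-major, which is the MatrixData default; both are worth verifying against the mediapipe version in use:

#include "mediapipe/modules/face_geometry/protos/face_geometry.pb.h"

if (streamName == kSingleFaceGeometryStream) {
    if (packet.IsEmpty()) return;
    const auto& geometries =
        packet.Get<std::vector<::mediapipe::face_geometry::FaceGeometry>>();
    if (geometries.empty()) return;
    // pose_transform_matrix is a 4x4 MatrixData (16 packed floats).
    const auto& m = geometries[0].pose_transform_matrix();
    simd_float4x4 transform;
    for (int col = 0; col < 4; ++col) {
        for (int row = 0; row < 4; ++row) {
            // Assumes COLUMN_MAJOR packing; swap the indexing if
            // m.layout() reports ROW_MAJOR.
            transform.columns[col][row] = m.packed_data(col * 4 + row);
        }
    }
    [_delegate irisTracker:self didOutputTransform:transform];
}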

Here are my modifications to a stock mediapipe, largely comprising the changes from the two pinned issues (but I smooth the right eye landmarks and irises as well):

diff --git a/mediapipe/examples/ios/iristrackinggpuframework/BUILD b/mediapipe/examples/ios/iristrackinggpuframework/BUILD
new file mode 100644
index 0000000..ad559f5
--- /dev/null
+++ b/mediapipe/examples/ios/iristrackinggpuframework/BUILD
@@ -0,0 +1,51 @@
+load("@build_bazel_rules_apple//apple:ios.bzl", "ios_framework")
+
+ios_framework(
+    name = "MPPIrisTracking",
+    hdrs = [
+        "MPPIrisTracker.h",
+    ],
+    infoplists = ["Info.plist"],
+    bundle_id = "com.example.MPPIrisTraking",
+    families = ["iphone", "ipad"],
+    minimum_os_version = "12.0",
+    deps = [
+        ":MPPIrisTrackingLibrary",
+        "@ios_opencv//:OpencvFramework",
+    ],
+)
+
+objc_library(
+    name = "MPPIrisTrackingLibrary",
+    srcs = [
+        "MPPIrisTracker.mm",
+    ],
+    hdrs = [
+        "MPPIrisTracker.h",
+    ],
+    copts = ["-std=c++17"],
+    data = [
+        "//mediapipe/graphs/iris_tracking:iris_tracking_gpu.binarypb",
+        "//mediapipe/modules/face_detection:face_detection_short_range.tflite",
+        "//mediapipe/modules/face_landmark:face_landmark.tflite",
+        "//mediapipe/modules/iris_landmark:iris_landmark.tflite",
+    ],
+    sdk_frameworks = [
+        "AVFoundation",
+        "CoreGraphics",
+        "CoreMedia",
+        "UIKit"
+    ],
+    deps = [
+        "//mediapipe/objc:mediapipe_framework_ios",
+        "//mediapipe/objc:mediapipe_input_sources_ios",
+        "//mediapipe/objc:mediapipe_layer_renderer",
+    ] + select({
+        "//mediapipe:ios_i386": [],
+        "//mediapipe:ios_x86_64": [],
+        "//conditions:default": [
+            "//mediapipe/graphs/iris_tracking:iris_tracking_gpu_deps",
+            "//mediapipe/framework/formats:landmark_cc_proto",
+        ],
+    }),
+)
\ No newline at end of file
diff --git a/mediapipe/examples/ios/iristrackinggpuframework/Info.plist b/mediapipe/examples/ios/iristrackinggpuframework/Info.plist
new file mode 100644
index 0000000..aadef04
--- /dev/null
+++ b/mediapipe/examples/ios/iristrackinggpuframework/Info.plist
@@ -0,0 +1,16 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
+<plist version="1.0">
+<dict>
+  <key>CameraPosition</key>
+  <string>front</string>
+  <key>MainViewController</key>
+  <string>IrisTrackingViewController</string>
+  <key>GraphOutputStream</key>
+  <string>output_video</string>
+  <key>GraphInputStream</key>
+  <string>input_video</string>
+  <key>GraphName</key>
+  <string>iris_tracking_gpu</string>
+</dict>
+</plist>
diff --git a/mediapipe/examples/ios/iristrackinggpuframework/MPPIrisTracker.h b/mediapipe/examples/ios/iristrackinggpuframework/MPPIrisTracker.h
new file mode 100644
index 0000000..75762d9
--- /dev/null
+++ b/mediapipe/examples/ios/iristrackinggpuframework/MPPIrisTracker.h
@@ -0,0 +1,19 @@
+#import <Foundation/Foundation.h>
+#import <CoreVideo/CoreVideo.h>
+#import <CoreMedia/CoreMedia.h>
+
+#include <simd/simd.h>
+
+@class MPPIrisTracker;
+
+@protocol MPPIrisTrackerDelegate <NSObject>
+- (void)irisTracker: (MPPIrisTracker *)irisTracker didOutputPixelBuffer: (CVPixelBufferRef)pixelBuffer;
+- (void)irisTracker: (MPPIrisTracker *)irisTracker didOutputTransform: (simd_float4x4)transform;
+@end
+
+@interface MPPIrisTracker : NSObject
+- (instancetype)init;
+- (void)startGraph;
+- (void)processVideoFrame: (CVPixelBufferRef)imageBuffer timestamp: (CMTime)timestamp;
+@property (weak, nonatomic) id <MPPIrisTrackerDelegate> delegate;
+@end
diff --git a/mediapipe/examples/ios/iristrackinggpuframework/MPPIrisTracker.mm b/mediapipe/examples/ios/iristrackinggpuframework/MPPIrisTracker.mm
new file mode 100644
index 0000000..0785027
--- /dev/null
+++ b/mediapipe/examples/ios/iristrackinggpuframework/MPPIrisTracker.mm
@@ -0,0 +1,142 @@
+#import "MPPIrisTracker.h"
+#import "mediapipe/objc/MPPGraph.h"
+#import "mediapipe/objc/MPPCameraInputSource.h"
+#import "mediapipe/objc/MPPLayerRenderer.h"
+#include "mediapipe/framework/formats/landmark.pb.h"
+
+static NSString* const kGraphName = @"iris_tracking_gpu";
+
+static const char* kInputStream = "input_video";
+static const char* kOutputStream = "output_video";
+
+static const char* kLandmarksOutputStream = "iris_landmarks";
+static const char* kSingleFaceGeometryStream = "single_face_geometry";
+static const char* kVideoQueueLabel = "com.google.mediapipe.example.videoQueue";
+
+
+/// Input side packet for focal length parameter.
+std::map<std::string, mediapipe::Packet> _input_side_packets;
+mediapipe::Packet _focal_length_side_packet;
+
+@interface MPPIrisTracker() <MPPGraphDelegate>
+@property(nonatomic) MPPGraph* mediapipeGraph;
+@end
+
+
+@implementation MPPIrisTracker { }
+
+#pragma mark - Cleanup methods
+
+- (void)dealloc {
+    self.mediapipeGraph.delegate = nil;
+    [self.mediapipeGraph cancel];
+    // Ignore errors since we're cleaning up.
+    [self.mediapipeGraph closeAllInputStreamsWithError:nil];
+    [self.mediapipeGraph waitUntilDoneWithError:nil];
+}
+
+#pragma mark - MediaPipe graph methods
+// https://google.github.io/mediapipe/getting_started/hello_world_ios.html#using-a-mediapipe-graph-in-ios
+
++ (MPPGraph*)loadGraphFromResource:(NSString*)resource {
+    // Load the graph config resource.
+    NSError* configLoadError = nil;
+    NSBundle* bundle = [NSBundle bundleForClass:[self class]];
+    if (!resource || resource.length == 0) {
+        return nil;
+    }
+    NSURL* graphURL = [bundle URLForResource:resource withExtension:@"binarypb"];
+    NSData* data = [NSData dataWithContentsOfURL:graphURL options:0 error:&configLoadError];
+    if (!data) {
+        NSLog(@"Failed to load MediaPipe graph config: %@", configLoadError);
+        return nil;
+    }
+    
+    // Parse the graph config resource into mediapipe::CalculatorGraphConfig proto object.
+    mediapipe::CalculatorGraphConfig config;
+    config.ParseFromArray(data.bytes, data.length);
+    
+    // Create MediaPipe graph with mediapipe::CalculatorGraphConfig proto object.
+    MPPGraph* newGraph = [[MPPGraph alloc] initWithGraphConfig:config];
+    
+    _focal_length_side_packet =
+    mediapipe::MakePacket<std::unique_ptr<float>>(absl::make_unique<float>(0.0));
+    _input_side_packets = {
+        {"focal_length_pixel", _focal_length_side_packet},
+    };
+    [newGraph addSidePackets:_input_side_packets];
+    [newGraph addFrameOutputStream:kLandmarksOutputStream outputPacketType:MPPPacketTypeRaw];
+    [newGraph addFrameOutputStream:kSingleFaceGeometryStream outputPacketType:MPPPacketTypeRaw];
+    [newGraph addFrameOutputStream:kOutputStream outputPacketType:MPPPacketTypePixelBuffer];
+    
+    return newGraph;
+}
+
+- (instancetype)init
+{
+    self = [super init];
+    if (self) {
+        self.mediapipeGraph = [[self class] loadGraphFromResource:kGraphName];
+        self.mediapipeGraph.delegate = self;
+        self.mediapipeGraph.maxFramesInFlight = 2;
+    }
+    return self;
+}
+
+- (void)startGraph {
+    // Start running self.mediapipeGraph.
+    NSError* error;
+    if (![self.mediapipeGraph startWithError:&error]) {
+        NSLog(@"Failed to start graph: %@", error);
+    }
+}
+
+#pragma mark - MPPInputSourceDelegate methods
+
+- (void)processVideoFrame:(CVPixelBufferRef)imageBuffer
+                timestamp:(CMTime)timestamp {
+    
+    mediapipe::Timestamp graphTimestamp(static_cast<mediapipe::TimestampBaseType>(
+        mediapipe::Timestamp::kTimestampUnitsPerSecond * CMTimeGetSeconds(timestamp)));
+    
+    [self.mediapipeGraph sendPixelBuffer:imageBuffer
+                              intoStream:kInputStream
+                              packetType:MPPPacketTypePixelBuffer
+                               timestamp:graphTimestamp];
+}
+
+#pragma mark - MPPGraphDelegate methods
+
+// Receives CVPixelBufferRef from the MediaPipe graph. Invoked on a MediaPipe worker thread.
+- (void)mediapipeGraph:(MPPGraph*)graph
+  didOutputPixelBuffer:(CVPixelBufferRef)pixelBuffer
+            fromStream:(const std::string&)streamName {
+    if (streamName == kOutputStream) {
+        [_delegate irisTracker: self didOutputPixelBuffer: pixelBuffer];
+    }
+}
+
+// Receives a raw packet from the MediaPipe graph. Invoked on a MediaPipe worker thread.
+- (void)mediapipeGraph:(MPPGraph*)graph
+       didOutputPacket:(const ::mediapipe::Packet&)packet
+            fromStream:(const std::string&)streamName {
+    if (streamName == kLandmarksOutputStream) {
+        if (packet.IsEmpty()) {
+            NSLog(@"[TS:%lld] No iris landmarks", packet.Timestamp().Value());
+            return;
+        }
+        
+        const auto& landmarks = packet.Get<::mediapipe::NormalizedLandmarkList>();
+        NSLog(@"[TS:%lld] Number of landmarks on iris: %d", packet.Timestamp().Value(),
+              landmarks.landmark_size());
+        for (int i = 0; i < landmarks.landmark_size(); ++i) {
+          NSLog(@"\tLandmark[%d]: (%f, %f, %f)", i, landmarks.landmark(i).x(),
+                landmarks.landmark(i).y(), landmarks.landmark(i).z());
+        }
+    }
+    // if (streamName == kSingleFaceGeometryStream) {
+
+    // }
+}
+
+@end
diff --git a/mediapipe/graphs/iris_tracking/BUILD b/mediapipe/graphs/iris_tracking/BUILD
index 86e667b..8e56d90 100644
--- a/mediapipe/graphs/iris_tracking/BUILD
+++ b/mediapipe/graphs/iris_tracking/BUILD
@@ -75,6 +75,8 @@ cc_library(
         "//mediapipe/graphs/iris_tracking/subgraphs:iris_and_depth_renderer_gpu",
         "//mediapipe/modules/face_landmark:face_landmark_front_gpu",
         "//mediapipe/modules/iris_landmark:iris_landmark_left_and_right_gpu",
+        "//mediapipe/modules/face_geometry:env_generator_calculator",
+        "//mediapipe/modules/face_geometry:face_geometry_from_landmarks",
     ],
 )
 
diff --git a/mediapipe/graphs/iris_tracking/iris_tracking_gpu.pbtxt b/mediapipe/graphs/iris_tracking/iris_tracking_gpu.pbtxt
index 505a951..82de22f 100644
--- a/mediapipe/graphs/iris_tracking/iris_tracking_gpu.pbtxt
+++ b/mediapipe/graphs/iris_tracking/iris_tracking_gpu.pbtxt
@@ -9,6 +9,9 @@ input_stream: "input_video"
 output_stream: "output_video"
 # Face landmarks with iris. (NormalizedLandmarkList)
 output_stream: "face_landmarks_with_iris"
+# A face geometry with estimated pose transform matrix
+output_stream: "single_face_geometry"
+
 
 # Throttles the images flowing downstream for flow control. It passes through
 # the very first incoming image unaltered, and waits for downstream nodes
@@ -31,6 +34,31 @@ node {
   output_stream: "throttled_input_video"
 }
 
+# Generates an environment that describes the current virtual scene.
+node {
+  calculator: "FaceGeometryEnvGeneratorCalculator"
+  output_side_packet: "ENVIRONMENT:environment"
+  node_options: {
+    [type.googleapis.com/mediapipe.FaceGeometryEnvGeneratorCalculatorOptions] {
+      environment: {
+        origin_point_location: TOP_LEFT_CORNER
+        perspective_camera: {
+          vertical_fov_degrees: 63.0  # 63 degrees
+          near: 1.0  # 1cm
+          far: 10000.0  # 100m
+        }
+      }
+    }
+  }
+}
+
+# Extracts the input image frame dimensions as a separate packet.
+node {
+  calculator: "ImagePropertiesCalculator"
+  input_stream: "IMAGE_GPU:throttled_input_video"
+  output_stream: "SIZE:input_image_size"
+}
+
 # Defines how many faces to detect. Iris tracking currently only handles one
 # face (left and right eye), and therefore this should always be set to 1.
 node {
@@ -67,6 +95,23 @@ node {
   }
 }
 
+# Applies smoothing to the single set of face landmarks.
+node {
+  calculator: "LandmarksSmoothingCalculator"
+  input_stream: "NORM_LANDMARKS:face_landmarks"
+  input_stream: "IMAGE_SIZE:input_image_size"
+  output_stream: "NORM_FILTERED_LANDMARKS:smoothed_face_landmarks"
+  options: {
+    [mediapipe.LandmarksSmoothingCalculatorOptions.ext] {
+      one_euro_filter {
+        min_cutoff: 0.1
+        beta: 40.0
+        derivate_cutoff: 1.0
+      }
+    }
+  }
+}
+
 # Gets the very first and only face rect from "face_rects_from_landmarks"
 # vector.
 node {
@@ -84,7 +129,7 @@ node {
 # Gets two landmarks which define left eye boundary.
 node {
   calculator: "SplitNormalizedLandmarkListCalculator"
-  input_stream: "face_landmarks"
+  input_stream: "smoothed_face_landmarks"
   output_stream: "left_eye_boundary_landmarks"
   node_options: {
     [type.googleapis.com/mediapipe.SplitVectorCalculatorOptions] {
@@ -95,10 +140,27 @@ node {
   }
 }
 
+# Applies smoothing to the left eye boundary landmarks.
+node {
+  calculator: "LandmarksSmoothingCalculator"
+  input_stream: "NORM_LANDMARKS:left_eye_boundary_landmarks"
+  input_stream: "IMAGE_SIZE:input_image_size"
+  output_stream: "NORM_FILTERED_LANDMARKS:smoothed_left_eye_boundary_landmarks"
+  options: {
+    [mediapipe.LandmarksSmoothingCalculatorOptions.ext] {
+      one_euro_filter {
+        min_cutoff: 0.1
+        beta: 40.0
+        derivate_cutoff: 1.0
+      }
+    }
+  }
+}
+
 # Gets two landmarks which define right eye boundary.
 node {
   calculator: "SplitNormalizedLandmarkListCalculator"
-  input_stream: "face_landmarks"
+  input_stream: "smoothed_face_landmarks"
   output_stream: "right_eye_boundary_landmarks"
   node_options: {
     [type.googleapis.com/mediapipe.SplitVectorCalculatorOptions] {
@@ -109,12 +171,28 @@ node {
   }
 }
 
+node {
+  calculator: "LandmarksSmoothingCalculator"
+  input_stream: "NORM_LANDMARKS:right_eye_boundary_landmarks"
+  input_stream: "IMAGE_SIZE:input_image_size"
+  output_stream: "NORM_FILTERED_LANDMARKS:smoothed_right_eye_boundary_landmarks"
+  options: {
+    [mediapipe.LandmarksSmoothingCalculatorOptions.ext] {
+      one_euro_filter {
+        min_cutoff: 0.1
+        beta: 40.0
+        derivate_cutoff: 1.0
+      }
+    }
+  }
+}
+
 # Detects iris landmarks, eye contour landmarks, and corresponding rect (ROI).
 node {
   calculator: "IrisLandmarkLeftAndRightGpu"
   input_stream: "IMAGE:throttled_input_video"
-  input_stream: "LEFT_EYE_BOUNDARY_LANDMARKS:left_eye_boundary_landmarks"
-  input_stream: "RIGHT_EYE_BOUNDARY_LANDMARKS:right_eye_boundary_landmarks"
+  input_stream: "LEFT_EYE_BOUNDARY_LANDMARKS:smoothed_left_eye_boundary_landmarks"
+  input_stream: "RIGHT_EYE_BOUNDARY_LANDMARKS:smoothed_right_eye_boundary_landmarks"
   output_stream: "LEFT_EYE_CONTOUR_LANDMARKS:left_eye_contour_landmarks"
   output_stream: "LEFT_EYE_IRIS_LANDMARKS:left_iris_landmarks"
   output_stream: "LEFT_EYE_ROI:left_eye_rect_from_landmarks"
@@ -123,29 +201,114 @@ node {
   output_stream: "RIGHT_EYE_ROI:right_eye_rect_from_landmarks"
 }
 
+# Applies smoothing to the left eye contour landmarks.
+node {
+  calculator: "LandmarksSmoothingCalculator"
+  input_stream: "NORM_LANDMARKS:left_eye_contour_landmarks"
+  input_stream: "IMAGE_SIZE:input_image_size"
+  output_stream: "NORM_FILTERED_LANDMARKS:smoothed_left_eye_contour_landmarks"
+  options: {
+    [mediapipe.LandmarksSmoothingCalculatorOptions.ext] {
+      one_euro_filter {
+        min_cutoff: 0.01
+        beta: 10.0
+        derivate_cutoff: 1.0
+      }
+    }
+  }
+}
+
+# Applies smoothing to the right eye contour landmarks.
+node {
+  calculator: "LandmarksSmoothingCalculator"
+  input_stream: "NORM_LANDMARKS:right_eye_contour_landmarks"
+  input_stream: "IMAGE_SIZE:input_image_size"
+  output_stream: "NORM_FILTERED_LANDMARKS:smoothed_right_eye_contour_landmarks"
+  options: {
+    [mediapipe.LandmarksSmoothingCalculatorOptions.ext] {
+      one_euro_filter {
+        min_cutoff: 0.01
+        beta: 10.0
+        derivate_cutoff: 1.0
+      }
+    }
+  }
+}
+
 node {
   calculator: "ConcatenateNormalizedLandmarkListCalculator"
-  input_stream: "left_eye_contour_landmarks"
-  input_stream: "right_eye_contour_landmarks"
+  input_stream: "smoothed_left_eye_contour_landmarks"
+  input_stream: "smoothed_right_eye_contour_landmarks"
   output_stream: "refined_eye_landmarks"
 }
 
+# Applies smoothing to the left iris landmarks.
+node {
+  calculator: "LandmarksSmoothingCalculator"
+  input_stream: "NORM_LANDMARKS:left_iris_landmarks"
+  input_stream: "IMAGE_SIZE:input_image_size"
+  output_stream: "NORM_FILTERED_LANDMARKS:smoothed_left_iris_landmarks"
+  options: {
+    [mediapipe.LandmarksSmoothingCalculatorOptions.ext] {
+      one_euro_filter {
+        min_cutoff: 0.01
+        beta: 10.0
+        derivate_cutoff: 1.0
+      }
+    }
+  }
+}
+
+# Applies smoothing to the right iris landmarks.
+node {
+  calculator: "LandmarksSmoothingCalculator"
+  input_stream: "NORM_LANDMARKS:right_iris_landmarks"
+  input_stream: "IMAGE_SIZE:input_image_size"
+  output_stream: "NORM_FILTERED_LANDMARKS:smoothed_right_iris_landmarks"
+  options: {
+    [mediapipe.LandmarksSmoothingCalculatorOptions.ext] {
+      one_euro_filter {
+        min_cutoff: 0.01
+        beta: 10.0
+        derivate_cutoff: 1.0
+      }
+    }
+  }
+}
+
 node {
   calculator: "UpdateFaceLandmarksCalculator"
   input_stream: "NEW_EYE_LANDMARKS:refined_eye_landmarks"
-  input_stream: "FACE_LANDMARKS:face_landmarks"
+  input_stream: "FACE_LANDMARKS:smoothed_face_landmarks"
   output_stream: "UPDATED_FACE_LANDMARKS:updated_face_landmarks"
 }
 
+# Puts the single set of smoothed landmarks back into a collection to simplify
+# passing the result into the `FaceGeometryFromLandmarks` subgraph.
+node {
+  calculator: "ConcatenateLandmarListVectorCalculator"
+  input_stream: "updated_face_landmarks"
+  output_stream: "single_smoothed_face_landmarks"
+}
+
+# Computes face geometry from face landmarks for a single face.
+node {
+  calculator: "FaceGeometryFromLandmarks"
+  input_stream: "MULTI_FACE_LANDMARKS:single_smoothed_face_landmarks"
+  input_stream: "IMAGE_SIZE:input_image_size"
+  input_side_packet: "ENVIRONMENT:environment"
+  output_stream: "MULTI_FACE_GEOMETRY:single_face_geometry"
+}
+
 # Renders annotations and overlays them on top of the input images.
 node {
   calculator: "IrisAndDepthRendererGpu"
   input_stream: "IMAGE:throttled_input_video"
   input_stream: "FACE_LANDMARKS:updated_face_landmarks"
-  input_stream: "EYE_LANDMARKS_LEFT:left_eye_contour_landmarks"
-  input_stream: "EYE_LANDMARKS_RIGHT:right_eye_contour_landmarks"
-  input_stream: "IRIS_LANDMARKS_LEFT:left_iris_landmarks"
-  input_stream: "IRIS_LANDMARKS_RIGHT:right_iris_landmarks"
+  input_stream: "EYE_LANDMARKS_LEFT:smoothed_left_eye_contour_landmarks"
+  input_stream: "EYE_LANDMARKS_RIGHT:smoothed_right_eye_contour_landmarks"
+  input_stream: "IRIS_LANDMARKS_LEFT:smoothed_left_iris_landmarks"
+  input_stream: "IRIS_LANDMARKS_RIGHT:smoothed_right_iris_landmarks"
   input_stream: "NORM_RECT:face_rect"
   input_stream: "LEFT_EYE_RECT:left_eye_rect_from_landmarks"
   input_stream: "RIGHT_EYE_RECT:right_eye_rect_from_landmarks"
diff --git a/mediapipe/modules/face_landmark/BUILD b/mediapipe/modules/face_landmark/BUILD
index f155e46..1c49f36 100644
--- a/mediapipe/modules/face_landmark/BUILD
+++ b/mediapipe/modules/face_landmark/BUILD
@@ -63,6 +63,16 @@ mediapipe_simple_subgraph(
     ],
 )
 
+mediapipe_simple_subgraph(
+    name = "face_landmarks_smoothing",
+    graph = "face_landmarks_smoothing.pbtxt",
+    register_as = "FaceLandmarksSmoothing",
+    deps = [
+        "//mediapipe/calculators/util:landmarks_smoothing_calculator",
+    ],
+)
+
+
 mediapipe_simple_subgraph(
     name = "face_landmark_front_cpu",
     graph = "face_landmark_front_cpu.pbtxt",
@@ -92,6 +102,7 @@ mediapipe_simple_subgraph(
         ":face_detection_front_detection_to_roi",
         ":face_landmark_gpu",
         ":face_landmark_landmarks_to_roi",
+        ":face_landmarks_smoothing",
         "//mediapipe/calculators/core:begin_loop_calculator",
         "//mediapipe/calculators/core:clip_vector_size_calculator",
         "//mediapipe/calculators/core:constant_side_packet_calculator",
diff --git a/mediapipe/modules/face_landmark/face_landmark_gpu.pbtxt b/mediapipe/modules/face_landmark/face_landmark_gpu.pbtxt
index 49e597e..f90392c 100644
--- a/mediapipe/modules/face_landmark/face_landmark_gpu.pbtxt
+++ b/mediapipe/modules/face_landmark/face_landmark_gpu.pbtxt
@@ -181,5 +181,22 @@ node {
   calculator: "LandmarkProjectionCalculator"
   input_stream: "NORM_LANDMARKS:landmarks"
   input_stream: "NORM_RECT:roi"
-  output_stream: "NORM_LANDMARKS:face_landmarks"
+  output_stream: "NORM_LANDMARKS:projected_face_landmarks"
+}
+
+
+# Extracts the input image frame dimensions as a separate packet.
+node {
+  calculator: "ImagePropertiesCalculator"
+  input_stream: "IMAGE_GPU:image"
+  output_stream: "SIZE:input_image_size"
+}
+
+# Applies smoothing to the face landmarks previously extracted from the face
+# detection keypoints.
+node {
+  calculator: "FaceLandmarksSmoothing"
+  input_stream: "NORM_LANDMARKS:projected_face_landmarks"
+  input_stream: "IMAGE_SIZE:input_image_size"
+  output_stream: "NORM_FILTERED_LANDMARKS:face_landmarks"
 }
diff --git a/mediapipe/modules/face_landmark/face_landmarks_smoothing.pbtxt b/mediapipe/modules/face_landmark/face_landmarks_smoothing.pbtxt
new file mode 100644
index 0000000..acf6eae
--- /dev/null
+++ b/mediapipe/modules/face_landmark/face_landmarks_smoothing.pbtxt
@@ -0,0 +1,25 @@
+# MediaPipe subgraph that smooths face landmarks.
+
+type: "FaceLandmarksSmoothing"
+
+input_stream: "NORM_LANDMARKS:landmarks"
+input_stream: "IMAGE_SIZE:input_image_size"
+output_stream: "NORM_FILTERED_LANDMARKS:filtered_landmarks"
+
+# Applies smoothing to a face landmark list.
+# TODO find good values
+node {
+  calculator: "LandmarksSmoothingCalculator"
+  input_stream: "NORM_LANDMARKS:landmarks"
+  input_stream: "IMAGE_SIZE:input_image_size"
+  output_stream: "NORM_FILTERED_LANDMARKS:filtered_landmarks"
+  node_options: {
+    [type.googleapis.com/mediapipe.LandmarksSmoothingCalculatorOptions] {
+      one_euro_filter {
+        min_cutoff: 0.01
+        beta: 0.1
+        derivate_cutoff: 0.5
+      }
+    }
+  }
+}
\ No newline at end of file
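
One detail in the diff that looks like a typo but isn't: ConcatenateLandmarListVectorCalculator (with the missing "k") is the name MediaPipe itself registers in concatenate_vector_calculator.cc for the std::vector<NormalizedLandmarkList> specialization, at least in the versions this diff targets, so the graph has to use that spelling.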
