
flutter-ml / google_ml_kit_flutter


A Flutter plugin that implements Google's standalone ML Kit.

License: MIT License

Java 26.06% Objective-C 20.54% Dart 49.22% Ruby 3.64% C 0.07% Shell 0.44% Kotlin 0.03%
hacktoberfest

google_ml_kit_flutter's People

Contributors

bensonarafat, bharat-biradar, dependabot[bot], droplet-js, ersankolay, fbernaly, hovadur, ipcjs, jaredsburrows, jdiazgon55, khjde1207, mitch2na, nachtmaar, om-ha, panmari, penkzhou, romainfranceschini, saahir999, sbis04, sebghatyusuf, syahrezafauzi, teolemon, test0terter0n, theunodb, tneotia, valentin-taleb-looping, vers10ne, x-slayer, younseoryu, zanovis


google_ml_kit_flutter's Issues

Can't run pod install with the latest versions of firebase plugins

When I try to run pod install and run the app on iOS I encounter this error:

[!] CocoaPods could not find compatible versions for pod "GoogleDataTransport":
  In Podfile:
    firebase_performance (from `.symlinks/plugins/firebase_performance/ios`) was resolved to 0.7.0-3, which depends on
      Firebase/Performance (= 8.0.0) was resolved to 8.0.0, which depends on
        FirebasePerformance (~> 8.0.0) was resolved to 8.0.0, which depends on
          GoogleDataTransport (~> 9.0)

    google_ml_vision (from `.symlinks/plugins/google_ml_vision/ios`) was resolved to 0.0.5, which depends on
      GoogleMLKit/ImageLabeling (~> 2.1.0) was resolved to 2.1.0, which depends on
        GoogleMLKit/MLKitCore (= 2.1.0) was resolved to 2.1.0, which depends on
          MLKitCommon (~> 2.1.0) was resolved to 2.1.0, which depends on
            GoogleDataTransport (~> 8.0)

Dependencies I'm using:

  cloud_firestore: ^2.2.1
  cloud_functions: ^1.1.1
  firebase_analytics: ^8.1.1
  firebase_auth: ^1.3.0
  firebase_core: ^1.2.1
  firebase_crashlytics: ^2.0.5
  firebase_messaging: ^10.0.1
  firebase_performance: ^0.7.0+4
  firebase_remote_config: ^0.10.0+1
  firebase_storage: ^8.1.1
  google_ml_kit: ^0.6.0

Now I can't build my iOS app and the build is blocked.

Barcode scan exception

Hi, there is a crash when scanning a specific QR code.
plugin version: 0.6.0

Stacktrace:

java.lang.IllegalArgumentException: Unsupported value: [Ljava.lang.String;@998d0af
    at io.flutter.plugin.common.StandardMessageCodec.writeValue(StandardMessageCodec.java:276)
    at io.flutter.plugin.common.StandardMessageCodec.writeValue(StandardMessageCodec.java:273)
    at io.flutter.plugin.common.StandardMessageCodec.writeValue(StandardMessageCodec.java:265)
    at io.flutter.plugin.common.StandardMessageCodec.writeValue(StandardMessageCodec.java:273)
    at io.flutter.plugin.common.StandardMessageCodec.writeValue(StandardMessageCodec.java:265)
    at io.flutter.plugin.common.StandardMethodCodec.encodeSuccessEnvelope(StandardMethodCodec.java:59)
    at io.flutter.plugin.common.MethodChannel$IncomingMethodCallHandler$1.success(MethodChannel.java:238)
    at com.b.biradar.google_ml_kit.vision.BarcodeDetector$2.onSuccess(BarcodeDetector.java:207)
    at com.b.biradar.google_ml_kit.vision.BarcodeDetector$2.onSuccess(BarcodeDetector.java:85)
    at com.google.android.gms.tasks.zzm.run(com.google.android.gms:play-services-tasks@@17.2.1:1)
    at android.os.Handler.handleCallback(Handler.java:938)
    at android.os.Handler.dispatchMessage(Handler.java:99)
    at android.os.Looper.loop(Looper.java:223)
    at android.app.ActivityThread.main(ActivityThread.java:7656)
    at java.lang.reflect.Method.invoke(Native Method)
    at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:592)
    at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:947)

QR code:
crash qr

Pod install error with latest version of firebase core

Related dependencies

  firebase_storage: ^8.1.2
  firebase_core: ^1.0.2
  cloud_firestore: ^1.0.2
  firebase_auth: ^1.4.0

  google_ml_kit: ^0.6.0

I'm getting the following error while installing the pod

[!] CocoaPods could not find compatible versions for pod "GoogleDataTransport":
  In Podfile:
    firebase_core (from `.symlinks/plugins/firebase_core/ios`) was resolved to 1.3.0, which depends on
      Firebase/CoreOnly (= 8.0.0) was resolved to 8.0.0, which depends on
        FirebaseCore (= 8.0.0) was resolved to 8.0.0, which depends on
          FirebaseCoreDiagnostics (~> 8.0) was resolved to 8.1.0, which depends on
            GoogleDataTransport (~> 9.0)

    google_ml_kit (from `.symlinks/plugins/google_ml_kit/ios`) was resolved to 0.6.0, which depends on
      GoogleMLKit/PoseDetectionAccurate was resolved to 0.64.0, which depends on
        MLKitPoseDetectionAccurate (~> 0.64.0) was resolved to 0.64.0, which depends on
          MLKitCommon (~> 0.64) was resolved to 0.64.0, which depends on
            GoogleDataTransport (~> 7.0)

Podfile:

# Uncomment this line to define a global platform for your project
platform :ios, '12.0'

# CocoaPods analytics sends network stats synchronously affecting flutter build latency.
ENV['COCOAPODS_DISABLE_STATS'] = 'true'

project 'Runner', {
  'Debug' => :debug,
  'Profile' => :release,
  'Release' => :release,
}

def flutter_root
  generated_xcode_build_settings_path = File.expand_path(File.join('..', 'Flutter', 'Generated.xcconfig'), __FILE__)
  unless File.exist?(generated_xcode_build_settings_path)
    raise "#{generated_xcode_build_settings_path} must exist. If you're running pod install manually, make sure flutter pub get is executed first"
  end

  File.foreach(generated_xcode_build_settings_path) do |line|
    matches = line.match(/FLUTTER_ROOT\=(.*)/)
    return matches[1].strip if matches
  end
  raise "FLUTTER_ROOT not found in #{generated_xcode_build_settings_path}. Try deleting Generated.xcconfig, then run flutter pub get"
end

require File.expand_path(File.join('packages', 'flutter_tools', 'bin', 'podhelper'), flutter_root)

flutter_ios_podfile_setup

target 'Runner' do
  use_frameworks!
  use_modular_headers!

  flutter_install_all_ios_pods File.dirname(File.realpath(__FILE__))
end

post_install do |installer|
  installer.pods_project.targets.each do |target|
    flutter_additional_ios_build_settings(target)
  end
end

Thermal problem on iPhone

Hi,

Recently I got an iPhone 6 (2014 model) with iOS 12.5.4 for app testing, so it's quite dated.

I have a thermal issue when scanning QR codes for a minute or longer: the iPhone 6 gets quite warm, while my Samsung A21s (2020 model) only gets slightly warm.

I used both the latest version (0.6.0) from pub.dev and the latest master branch from GitHub.

When using only the camera preview, without the image stream and barcode detection, the iPhone 6 gets a little warm, but with barcode detection it gets quite warm.

I wonder if there is anything that can be done about this. Have you tested this package on a newer iPhone? Do you know what the reason is?
Could it be that the outdated iPhone processor makes constantly processing the image stream just too heavy a task?
Or is it the way in which Flutter talks to native code?

Thanks in advance :)
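For reference, a common mitigation is to feed the detector at most one frame at a time and drop any frames that arrive while a scan is still in flight, so the device is not kept permanently busy. Below is a minimal sketch assuming the 0.6.0 example-app style API (GoogleMlKit.vision.barcodeScanner(), processImage, close); treat those names as assumptions if your plugin version differs.

import 'package:google_ml_kit/google_ml_kit.dart';

// Sketch only: throttles detection by skipping frames while a scan is running.
class ThrottledBarcodeScanner {
  final BarcodeScanner _scanner = GoogleMlKit.vision.barcodeScanner();
  bool _isBusy = false;

  Future<void> onFrame(InputImage inputImage) async {
    if (_isBusy) return; // drop this frame, a scan is still in flight
    _isBusy = true;
    try {
      final barcodes = await _scanner.processImage(inputImage);
      // Handle results here.
      print('found ${barcodes.length} barcode(s)');
    } finally {
      _isBusy = false;
    }
  }

  Future<void> dispose() => _scanner.close();
}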

iOS GoogleDataTransport dependency error when using firebase_crashlytics

Hello,

In my project I have these dependencies:

google_ml_kit: 0.4.0
firebase_analytics: ^8.0.3
firebase_crashlytics: ^2.0.4

On Android this works perfectly :)

But on iOS, when I try to run pod install, I get this error:

[!] CocoaPods could not find compatible versions for pod "GoogleDataTransport":
  In snapshot (Podfile.lock):
    GoogleDataTransport (= 9.0.0, ~> 9.0)

  In Podfile:
    firebase_crashlytics (from `.symlinks/plugins/firebase_crashlytics/ios`) was resolved to 2.0.4, which depends on
      Firebase/Crashlytics (= 8.0.0) was resolved to 8.0.0, which depends on
        FirebaseCrashlytics (~> 8.0.0) was resolved to 8.0.0, which depends on
          GoogleDataTransport (~> 9.0)

    google_ml_kit (from `.symlinks/plugins/google_ml_kit/ios`) was resolved to 0.0.1, which depends on
      GoogleMLKit/ImageLabeling was resolved to 0.60.0, which depends on
        GoogleMLKit/MLKitCore (= 0.60.0) was resolved to 0.60.0, which depends on
          MLKitCommon (~> 0.60.0) was resolved to 0.60.0, which depends on
            GoogleDataTransport (~> 3.2)

Is there a way to have google_ml_kit depend on a higher version of GoogleDataTransport, or any other way to work around this issue?

Thank you in advance

Crash on iOS by memory limit

A simple example with only the barcode scanner crashes on start with this error:

* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_RESOURCE RESOURCE_TYPE_MEMORY (limit=1450 MB, unused=0x0)

Maybe I can exclude the other ML pods that I don't need?

No way to pass image orientation

I'm trying to detect faces in images created by the camera plugin, using the front camera.
The detector returns an empty face list for portrait-orientation images, but it works correctly when I turn the device to landscape.

I have tried to use InputImage.fromBytes with inputImageData, but these fields are required:
required this.size,
required this.imageRotation,
required this.inputImageFormat,
required this.planeData

How can I tell faceDetector.processImage to use the actual orientation?
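For reference, a minimal sketch of filling in those required fields from a CameraImage, in the style of the plugin's example camera view. The helper names (InputImageRotationMethods.fromRawValue, InputImageFormatMethods.fromRawValue) and enum values are taken from that example and may differ between versions, so treat them as assumptions. The key point is to forward the camera's sensorOrientation as imageRotation so the detector sees an upright image.

import 'dart:ui';

import 'package:camera/camera.dart';
import 'package:flutter/foundation.dart';
import 'package:google_ml_kit/google_ml_kit.dart';

// Sketch only: builds an InputImage from a CameraImage, passing the sensor
// orientation along so portrait frames are interpreted correctly.
InputImage inputImageFromCameraImage(
    CameraImage image, CameraDescription camera) {
  // Concatenate the plane bytes into one buffer.
  final WriteBuffer allBytes = WriteBuffer();
  for (final Plane plane in image.planes) {
    allBytes.putUint8List(plane.bytes);
  }
  final bytes = allBytes.done().buffer.asUint8List();

  // Assumed helpers from the example app; fall back to sane defaults.
  final imageRotation =
      InputImageRotationMethods.fromRawValue(camera.sensorOrientation) ??
          InputImageRotation.Rotation_0deg;
  final inputImageFormat =
      InputImageFormatMethods.fromRawValue(image.format.raw) ??
          InputImageFormat.NV21;

  final inputImageData = InputImageData(
    size: Size(image.width.toDouble(), image.height.toDouble()),
    imageRotation: imageRotation,
    inputImageFormat: inputImageFormat,
    planeData: image.planes
        .map((Plane plane) => InputImagePlaneMetadata(
              bytesPerRow: plane.bytesPerRow,
              height: plane.height,
              width: plane.width,
            ))
        .toList(),
  );

  return InputImage.fromBytes(bytes: bytes, inputImageData: inputImageData);
}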

Creating InputImage from Bytes doesn't work

The Pose Detector only seems to work when you create your InputImage from a file path (InputImage.fromFilePath). When I attempt to create the InputImage using the fromBytes() constructor the output is always blank. I've tried all sorts of different configurations, including changing the image rotation and format, but the result is always blank.

Have you had success with this?

Drawing lines that connect joints (body pose detection)

Currently, the body pose detection example paints dots on the joints.
On top of that, I'd like to suggest drawing lines connecting the joints, as in the image below.
Since it was a simple change, I have already implemented this feature. If you are fine with it, I would love to contribute it to the repo.
(example image attached)
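For reference, the gist of the change is an extra CustomPainter pass that calls canvas.drawLine between pairs of joint positions, in addition to the existing dots. A minimal sketch, using a hypothetical map of landmark name to canvas Offset rather than the plugin's actual pose types:

import 'package:flutter/material.dart';

// Sketch only: `landmarks` is a hypothetical input mapping a landmark name to
// its position already translated to canvas coordinates by the existing
// painter code.
class PoseLinePainter extends CustomPainter {
  PoseLinePainter(this.landmarks);

  final Map<String, Offset> landmarks;

  final Paint _linePaint = Paint()
    ..color = Colors.green
    ..strokeWidth = 3;

  void _connect(Canvas canvas, String a, String b) {
    final p1 = landmarks[a];
    final p2 = landmarks[b];
    if (p1 != null && p2 != null) {
      canvas.drawLine(p1, p2, _linePaint);
    }
  }

  @override
  void paint(Canvas canvas, Size size) {
    // A few example connections; extend with the full skeleton as needed.
    _connect(canvas, 'leftShoulder', 'rightShoulder');
    _connect(canvas, 'leftShoulder', 'leftElbow');
    _connect(canvas, 'leftElbow', 'leftWrist');
    _connect(canvas, 'rightShoulder', 'rightElbow');
    _connect(canvas, 'rightElbow', 'rightWrist');
  }

  @override
  bool shouldRepaint(PoseLinePainter oldDelegate) =>
      oldDelegate.landmarks != landmarks;
}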

CocoaPods could not find compatible versions for pod "GoogleDataTransport":

This is my flutter doctor output

[✓] Flutter (Channel stable, 2.2.1, on macOS 11.4 20F71 darwin-x64, locale es-419)
• Flutter version 2.2.1 at /Users/slaniado/desarrollo/flutter
• Framework revision 02c026b03c (13 days ago), 2021-05-27 12:24:44 -0700
• Engine revision 0fdb562ac8
• Dart version 2.13.1

[✓] Android toolchain - develop for Android devices (Android SDK version 29.0.3)
• Android SDK at /Users/slaniado/Library/Android/sdk
• Platform android-30, build-tools 29.0.3
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_212-release-1586-b4-5784211)
• All Android licenses accepted.

[✓] Xcode - develop for iOS and macOS
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 12.5, Build version 12E262
• CocoaPods version 1.10.1

[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome

[✓] Android Studio (version 3.6)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 45.1.1
• Dart plugin version 192.7761
• Java version OpenJDK Runtime Environment (build 1.8.0_212-release-1586-b4-5784211)

[✓] VS Code (version 1.56.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.23.0

[✓] Connected device (2 available)
• iPhone SE (2nd generation) (mobile) • F25E6952-6730-4C74-8AB8-713F1F4F5759 • ios • com.apple.CoreSimulator.SimRuntime.iOS-14-5
(simulator)
• Chrome (web) • chrome • web-javascript • Google Chrome 91.0.4472.77

• No issues found!

flutter run

returns this error:
CocoaPods's specs repository is too out-of-date to satisfy dependencies.

pod install

[!] CocoaPods could not find compatible versions for pod "GoogleDataTransport":
  In snapshot (Podfile.lock):
    GoogleDataTransport (= 9.0.1, ~> 9.0)

  In Podfile:
    firebase_core (from `.symlinks/plugins/firebase_core/ios`) was resolved to 1.2.1, which depends on
      Firebase/CoreOnly (= 8.0.0) was resolved to 8.0.0, which depends on
        FirebaseCore (= 8.0.0) was resolved to 8.0.0, which depends on
          FirebaseCoreDiagnostics (~> 8.0) was resolved to 8.1.0, which depends on
            GoogleDataTransport (~> 9.0)

    google_ml_kit (from `.symlinks/plugins/google_ml_kit/ios`) was resolved to 0.6.0, which depends on
      GoogleMLKit/PoseDetectionAccurate was resolved to 0.64.0, which depends on
        GoogleMLKit/MLKitCore (= 0.64.0) was resolved to 0.64.0, which depends on
          MLKitCommon (~> 0.64.0) was resolved to 0.64.0, which depends on
            GoogleDataTransport (~> 7.0)

You have either:

  • out-of-date source repos which you can update with pod repo update or with pod install --repo-update.
  • changed the constraints of dependency GoogleDataTransport inside your development pod google_ml_kit.
    You should run pod update GoogleDataTransport to apply changes you've made.

I tried every suggestion from every post I could find to clean and reinstall the pods, with the same outcome.

CocoaPods could not find compatible versions for pod "google_ml_kit"

I added the library to my project and ran flutter run on an iOS device, but I ran into a problem.

[!] CocoaPods could not find compatible versions for pod "google_ml_kit":
      In Podfile:
        google_ml_kit (from `.symlinks/plugins/google_ml_kit/ios`)

    Specs satisfying the `google_ml_kit (from `.symlinks/plugins/google_ml_kit/ios`)` dependency were found, but they required a higher minimum
    deployment target.

    /Library/Ruby/Gems/2.6.0/gems/molinillo-0.6.6/lib/molinillo/resolution.rb:328:in `raise_error_unless_state'
    /Library/Ruby/Gems/2.6.0/gems/molinillo-0.6.6/lib/molinillo/resolution.rb:310:in `block in unwind_for_conflict'
    /Library/Ruby/Gems/2.6.0/gems/molinillo-0.6.6/lib/molinillo/resolution.rb:308:in `tap'
    /Library/Ruby/Gems/2.6.0/gems/molinillo-0.6.6/lib/molinillo/resolution.rb:308:in `unwind_for_conflict'
    /Library/Ruby/Gems/2.6.0/gems/molinillo-0.6.6/lib/molinillo/resolution.rb:684:in `attempt_to_activate'
    /Library/Ruby/Gems/2.6.0/gems/molinillo-0.6.6/lib/molinillo/resolution.rb:254:in `process_topmost_state'
    /Library/Ruby/Gems/2.6.0/gems/molinillo-0.6.6/lib/molinillo/resolution.rb:182:in `resolve'
    /Library/Ruby/Gems/2.6.0/gems/molinillo-0.6.6/lib/molinillo/resolver.rb:43:in `resolve'
    /Library/Ruby/Gems/2.6.0/gems/cocoapods-1.10.1/lib/cocoapods/resolver.rb:94:in `resolve'
    /Library/Ruby/Gems/2.6.0/gems/cocoapods-1.10.1/lib/cocoapods/installer/analyzer.rb:1074:in `block in resolve_dependencies'
    /Library/Ruby/Gems/2.6.0/gems/cocoapods-1.10.1/lib/cocoapods/user_interface.rb:64:in `section'
    /Library/Ruby/Gems/2.6.0/gems/cocoapods-1.10.1/lib/cocoapods/installer/analyzer.rb:1072:in `resolve_dependencies'
    /Library/Ruby/Gems/2.6.0/gems/cocoapods-1.10.1/lib/cocoapods/installer/analyzer.rb:124:in `analyze'
    /Library/Ruby/Gems/2.6.0/gems/cocoapods-1.10.1/lib/cocoapods/installer.rb:414:in `analyze'
    /Library/Ruby/Gems/2.6.0/gems/cocoapods-1.10.1/lib/cocoapods/installer.rb:239:in `block in resolve_dependencies'
    /Library/Ruby/Gems/2.6.0/gems/cocoapods-1.10.1/lib/cocoapods/user_interface.rb:64:in `section'
    /Library/Ruby/Gems/2.6.0/gems/cocoapods-1.10.1/lib/cocoapods/installer.rb:238:in `resolve_dependencies'
    /Library/Ruby/Gems/2.6.0/gems/cocoapods-1.10.1/lib/cocoapods/installer.rb:160:in `install!'
    /Library/Ruby/Gems/2.6.0/gems/cocoapods-1.10.1/lib/cocoapods/command/install.rb:52:in `run'
    /Library/Ruby/Gems/2.6.0/gems/claide-1.0.3/lib/claide/command.rb:334:in `run'
    /Library/Ruby/Gems/2.6.0/gems/cocoapods-1.10.1/lib/cocoapods/command.rb:52:in `run'
    /Library/Ruby/Gems/2.6.0/gems/cocoapods-1.10.1/bin/pod:55:in `<top (required)>'
    /usr/local/bin/pod:23:in `load'
    /usr/local/bin/pod:23:in `<main>'

Text recognition for non-English languages

Hi,
I am trying to build a Japanese OCR using this plugin.
It works for English, but I have had no luck with Japanese.
Can you please provide an example of how to do this?

Thank you!

QR code scanning errors

Hello,

I have a problem with two kinds of QR codes:

  1. QR code with a company address info
    qr-address-info
  2. QR code with contact info (ex. made up contact info: John Smith +48 788 233 144 - generated by contacts app on my Samsung phone)
    qr-john-smith

You can reproduce both errors using the example app that comes with google_ml_kit 0.6.0.

When I scan the QR code with company address info, my app shuts down with this console message:

D/AndroidRuntime(22673): Shutting down VM
E/AndroidRuntime(22673): FATAL EXCEPTION: main
E/AndroidRuntime(22673): Process: com.b.biradar.google_ml_kit_example, PID: 22673
E/AndroidRuntime(22673): java.lang.IllegalArgumentException: Unsupported value: [Ljava.lang.String;@14d337e
E/AndroidRuntime(22673): 	at io.flutter.plugin.common.StandardMessageCodec.writeValue(StandardMessageCodec.java:276)
E/AndroidRuntime(22673): 	at io.flutter.plugin.common.StandardMessageCodec.writeValue(StandardMessageCodec.java:273)
E/AndroidRuntime(22673): 	at io.flutter.plugin.common.StandardMessageCodec.writeValue(StandardMessageCodec.java:265)
E/AndroidRuntime(22673): 	at io.flutter.plugin.common.StandardMessageCodec.writeValue(StandardMessageCodec.java:273)
E/AndroidRuntime(22673): 	at io.flutter.plugin.common.StandardMessageCodec.writeValue(StandardMessageCodec.java:265)
E/AndroidRuntime(22673): 	at io.flutter.plugin.common.StandardMethodCodec.encodeSuccessEnvelope(StandardMethodCodec.java:59)
E/AndroidRuntime(22673): 	at io.flutter.plugin.common.MethodChannel$IncomingMethodCallHandler$1.success(MethodChannel.java:238)
E/AndroidRuntime(22673): 	at com.b.biradar.google_ml_kit.vision.BarcodeDetector$2.onSuccess(BarcodeDetector.java:207)
E/AndroidRuntime(22673): 	at com.b.biradar.google_ml_kit.vision.BarcodeDetector$2.onSuccess(BarcodeDetector.java:85)
E/AndroidRuntime(22673): 	at com.google.android.gms.tasks.zzm.run(com.google.android.gms:play-services-tasks@@17.2.1:1)
E/AndroidRuntime(22673): 	at android.os.Handler.handleCallback(Handler.java:938)
E/AndroidRuntime(22673): 	at android.os.Handler.dispatchMessage(Handler.java:99)
E/AndroidRuntime(22673): 	at android.os.Looper.loop(Looper.java:246)
E/AndroidRuntime(22673): 	at android.app.ActivityThread.main(ActivityThread.java:8512)
E/AndroidRuntime(22673): 	at java.lang.reflect.Method.invoke(Native Method)
E/AndroidRuntime(22673): 	at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:602)
E/AndroidRuntime(22673): 	at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1139)

When I scan the QR code with contact info, I get this message in the console:

E/flutter (18475): [ERROR:flutter/lib/ui/ui_dart_state.cc(186)] Unhandled Exception: type 'MappedListIterable<dynamic, BarcodeAddress>' is not a subtype of type 'List<BarcodeAddress>?'
E/flutter (18475): #0      new BarcodeContactInfo._ (package:google_ml_kit/src/vision/barcode_scanner.dart:425:26)
E/flutter (18475): #1      new Barcode._fromMap (package:google_ml_kit/src/vision/barcode_scanner.dart:169:52)
E/flutter (18475): #2      BarcodeScanner.processImage (package:google_ml_kit/src/vision/barcode_scanner.dart:28:32)
E/flutter (18475): <asynchronous suspension>
E/flutter (18475): #3      _BarcodeScannerViewState.processImage (package:google_ml_kit_example/VisionDetectorViews/barcode_scanner_view.dart:37:22)
E/flutter (18475): <asynchronous suspension>

I would appreciate fixing these errors.

Increased App Size

After implementing google_ml_kit, the app size increased by ~85 MB in debug mode and ~44 MB in release mode. I investigated this with the app size tool and found that all models/resources are always included, even if they are not used. So it would be great to be able to configure which resources are used and which can be excluded.

The firebase_ml_vision plugin had the same issue (firebase/flutterfire#4767), and I was able to adapt its workaround of excluding unused resources to google_ml_kit, so I wanted to share it. In my case I am currently only using the language detection feature, therefore I don't need pose detection, etc.

The workaround for Android looks like this and increases the app size by less than 3 MB:

android {

    // some other code 

    buildTypes {
        release {
            // some other code 

            aaptOptions {
                ignoreAssetsPattern '!mlkit_pose:!mlkit_label_default_model:'
            }
        }
        debug {
            // some other code 

            aaptOptions {
                ignoreAssetsPattern '!mlkit_pose:!mlkit_label_default_model:'
            }
        }
    }

    packagingOptions {
        exclude 'lib/**/libtranslate_jni.so'
        exclude 'lib/**/libdigitalink.so'
        exclude 'lib/**/libxeno_native.so'
        exclude 'lib/**/libmlkitcommonpipeline.so'
        exclude 'lib/**/libbarhopper_v2.so'
        exclude 'lib/**/libclassifier_jni.so'
        exclude 'lib/**/libface_detector_v2_jni.so'
        exclude 'lib/**/libtensorflowlite_jni.so'
        // exclude 'lib/**/liblanguage_id_jni.so' → required for language detection
    }
}

Since a workaround does exist, this is not a pressing issue, but I think many developers would appreciate some sort of configuration option :)

Overrides a deprecated API

Note: C:\flutter\.pub-cache\hosted\pub.dartlang.org\google_ml_kit-0.6.0\android\src\main\java\com\b\biradar\google_ml_kit\GoogleMlKitPlugin.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

Error building for iOS: symbol(s) not found for architecture armv7

Hello
I am getting this error when building for iOS; I am targeting version 10.
I am able to run in the simulator; the issue comes when I run flutter build ios --release.
I tried all the possible solutions: cleaning the pods, and also deleting the iOS folder and building again.

flutter doctor -v
[✓] Flutter (Channel stable, 2.2.3, on macOS 11.4 20F71 darwin-x64, locale es-419)
• Flutter version 2.2.3 at /Users/desarrollo/flutter
• Framework revision f4abaa0735 (4 days ago), 2021-07-01 12:46:11 -0700
• Engine revision 241c87ad80
• Dart version 2.13.4

[✓] Android toolchain - develop for Android devices (Android SDK version 29.0.3)
• Android SDK at /Users//Library/Android/sdk
• Platform android-30, build-tools 29.0.3
• Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 1.8.0_212-release-1586-b4-5784211)
• All Android licenses accepted.

[✓] Xcode - develop for iOS and macOS
• Xcode at /Applications/Xcode.app/Contents/Developer
• Xcode 12.5.1, Build version 12E507
• CocoaPods version 1.10.1

[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome

[✓] Android Studio (version 3.6)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin version 45.1.1
• Dart plugin version 192.7761
• Java version OpenJDK Runtime Environment (build 1.8.0_212-release-1586-b4-5784211)

[✓] VS Code (version 1.57.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.24.0

[✓] Connected device (2 available)
• iPhone 12 mini (mobile) • 4EE48E4F-99F2-43C1-8FC3-33C64178C442 • ios • com.apple.CoreSimulator.SimRuntime.iOS-14-5 (simulator)
• Chrome (web) • chrome • web-javascript • Google Chrome 91.0.4472.114

• No issues found!

This is the error

Undefined symbols for architecture armv7:
"OBJC_CLASS$_MLKTextRecognizer", referenced from:
objc-class-ref in google_ml_kit(TextRecognizer.o)
"OBJC_CLASS$_MLKCustomImageLabelerOptions", referenced from:
objc-class-ref in google_ml_kit(ImageLabeler.o)
"OBJC_CLASS$_MLKImageLabeler", referenced from:
objc-class-ref in google_ml_kit(ImageLabeler.o)
"_MLKModelDownloadDidSucceedNotification", referenced from:
-[DigitalInkRecogniser manageInkModel:result:] in google_ml_kit(DigitalInkRecogniser.o)
-[DigitalInkRecogniser receiveTestNotification:] in google_ml_kit(DigitalInkRecogniser.o)
"OBJC_CLASS$_MLKModelDownloadConditions", referenced from:
objc-class-ref in google_ml_kit(DigitalInkRecogniser.o)
"OBJC_CLASS$_MLKInk", referenced from:
objc-class-ref in google_ml_kit(DigitalInkRecogniser.o)
"OBJC_CLASS$_MLKStrokePoint", referenced from:
objc-class-ref in google_ml_kit(DigitalInkRecogniser.o)
"OBJC_CLASS$_MLKDigitalInkRecognizer", referenced from:
objc-class-ref in google_ml_kit(DigitalInkRecogniser.o)
"OBJC_CLASS$_MLKModelManager", referenced from:
objc-class-ref in google_ml_kit(DigitalInkRecogniser.o)
"OBJC_CLASS$_MLKDigitalInkRecognitionModel", referenced from:
objc-class-ref in google_ml_kit(DigitalInkRecogniser.o)
"OBJC_CLASS$_MLKDigitalInkRecognitionModelIdentifier", referenced from:
objc-class-ref in google_ml_kit(DigitalInkRecogniser.o)
"_MLKFaceLandmarkTypeNoseBase", referenced from:
___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
"OBJC_CLASS$_MLKStroke", referenced from:
objc-class-ref in google_ml_kit(DigitalInkRecogniser.o)
"_MLKFaceLandmarkTypeMouthLeft", referenced from:
___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
"_MLKFaceLandmarkTypeLeftEar", referenced from:
___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
"_MLKFaceLandmarkTypeRightEar", referenced from:
___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
"_MLKFaceLandmarkTypeMouthBottom", referenced from:
___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
"_MLKFaceLandmarkTypeLeftEye", referenced from:
___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
"_MLKFaceContourTypeUpperLipBottom", referenced from:
___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
"_MLKFaceLandmarkTypeLeftCheek", referenced from:
___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
"_MLKFaceContourTypeNoseBottom", referenced from:
___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
"_MLKFaceContourTypeLowerLipBottom", referenced from:
___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
"_MLKModelDownloadDidFailNotification", referenced from:
-[DigitalInkRecogniser manageInkModel:result:] in google_ml_kit(DigitalInkRecogniser.o)
-[DigitalInkRecogniser receiveTestNotification:] in google_ml_kit(DigitalInkRecogniser.o)
"_MLKFaceContourTypeLeftEyebrowTop", referenced from:
___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
"_MLKFaceContourTypeLowerLipTop", referenced from:
___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
"_MLKFaceContourTypeLeftEyebrowBottom", referenced from:
___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
"_MLKPoseLandmarkTypeLeftThumb", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"_MLKPoseLandmarkTypeRightEar", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"_MLKPoseLandmarkTypeRightIndexFinger", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"_MLKFaceContourTypeLeftEye", referenced from:
___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
"_MLKFaceContourTypeFace", referenced from:
___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
"_MLKPoseLandmarkTypeRightWrist", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"_MLKPoseLandmarkTypeMouthRight", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"_MLKPoseLandmarkTypeRightAnkle", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"_MLKPoseLandmarkTypeRightShoulder", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"_MLKFaceContourTypeUpperLipTop", referenced from:
___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
"_MLKPoseLandmarkTypeRightToe", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"_MLKPoseLandmarkTypeRightPinkyFinger", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"_MLKFaceLandmarkTypeMouthRight", referenced from:
___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
"OBJC_CLASS$_MLKFaceDetectorOptions", referenced from:
objc-class-ref in google_ml_kit(FaceDetector.o)
"_MLKPoseLandmarkTypeRightKnee", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"_MLKPoseLandmarkTypeRightHip", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"_MLKFaceContourTypeRightEye", referenced from:
___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
"_MLKPoseLandmarkTypeLeftKnee", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"_MLKPoseLandmarkTypeRightEye", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"_MLKPoseLandmarkTypeNose", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"_MLKPoseLandmarkTypeRightThumb", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"_MLKPoseLandmarkTypeLeftWrist", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"OBJC_CLASS$_MLKBarcodeScanner", referenced from:
objc-class-ref in google_ml_kit(BarcodeScanner.o)
"_MLKPoseLandmarkTypeLeftToe", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"OBJC_CLASS$_MLKDigitalInkRecognizerOptions", referenced from:
objc-class-ref in google_ml_kit(DigitalInkRecogniser.o)
"_MLKFaceContourTypeRightCheek", referenced from:
___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
"OBJC_CLASS$_MLKVisionImage", referenced from:
objc-class-ref in google_ml_kit(BarcodeScanner.o)
objc-class-ref in google_ml_kit(PoseDetector.o)
objc-class-ref in google_ml_kit(FaceDetector.o)
objc-class-ref in google_ml_kit(ImageLabeler.o)
objc-class-ref in google_ml_kit(TextRecognizer.o)
objc-class-ref in google_ml_kit(MLKVisionImage+FlutterPlugin.o)
_OBJC$CATEGORY_MLKVisionImage$_FlutterPlugin in google_ml_kit(MLKVisionImage+FlutterPlugin.o)
...
"_MLKPoseLandmarkTypeLeftShoulder", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"_MLKPoseLandmarkTypeLeftPinkyFinger", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"_MLKFaceContourTypeLeftCheek", referenced from:
___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
"OBJC_CLASS$_MLKFaceDetector", referenced from:
objc-class-ref in google_ml_kit(FaceDetector.o)
"_MLKPoseLandmarkTypeMouthLeft", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"_MLKPoseLandmarkTypeRightEyeInner", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"_MLKPoseLandmarkTypeLeftIndexFinger", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"_MLKPoseLandmarkTypeLeftEar", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"_MLKPoseLandmarkTypeRightEyeOuter", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"OBJC_CLASS$_MLKBarcodeScannerOptions", referenced from:
objc-class-ref in google_ml_kit(BarcodeScanner.o)
"_MLKPoseLandmarkTypeLeftHip", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"_MLKPoseLandmarkTypeLeftHeel", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"_MLKPoseLandmarkTypeRightElbow", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"_MLKPoseDetectorModeSingleImage", referenced from:
-[PoseDetector handleDetection:result:] in google_ml_kit(PoseDetector.o)
"_MLKPoseLandmarkTypeLeftElbow", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"OBJC_CLASS$_MLKImageLabelerOptions", referenced from:
objc-class-ref in google_ml_kit(ImageLabeler.o)
"_MLKFaceContourTypeRightEyebrowTop", referenced from:
___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
"_MLKPoseLandmarkTypeLeftEyeOuter", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"OBJC_CLASS$_MLKPoseDetectorOptions", referenced from:
objc-class-ref in google_ml_kit(PoseDetector.o)
"_MLKPoseLandmarkTypeLeftAnkle", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"_MLKFaceContourTypeNoseBridge", referenced from:
___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
"_MLKPoseDetectorModeStream", referenced from:
-[PoseDetector handleDetection:result:] in google_ml_kit(PoseDetector.o)
"_MLKFaceLandmarkTypeRightCheek", referenced from:
___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
"OBJC_CLASS$_MLKAccuratePoseDetectorOptions", referenced from:
objc-class-ref in google_ml_kit(PoseDetector.o)
"_MLKFaceContourTypeRightEyebrowBottom", referenced from:
___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
"_MLKPoseLandmarkTypeRightHeel", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"_MLKFaceLandmarkTypeRightEye", referenced from:
___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
"OBJC_CLASS$_MLKPoseDetector", referenced from:
objc-class-ref in google_ml_kit(PoseDetector.o)
"_MLKPoseLandmarkTypeLeftEyeInner", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
"_MLKPoseLandmarkTypeLeftEye", referenced from:
-[PoseDetector poseLandmarkTypeToNumber:] in google_ml_kit(PoseDetector.o)
ld: symbol(s) not found for architecture armv7
clang: error: linker command failed with exit code 1 (use -v to see invocation)

....
Could not find or use auto-linked library 'stdc++'

Thanks

Android app crash when scanning a contact barcode

How to reproduce

  1. Create a flutter app with google_ml_kit: ^0.6.0
  2. Implement the barcode scanning feature
  3. Run it on an Android device
  4. Try to scan a QR code with contact information (example attached below)

Expected result

The lib can scan this type of code, just as it does with the other types.

Actual result

The app crashes due to a native exception:

2021-06-18 12:40:15.705 13442-13442/com.example.app E/AndroidRuntime: FATAL EXCEPTION: main
    Process: com.example.app, PID: 13442
    java.lang.IllegalArgumentException: Unsupported value: [Ljava.lang.String;@e5c977c
        at io.flutter.plugin.common.StandardMessageCodec.writeValue(StandardMessageCodec.java:276)
        at io.flutter.plugin.common.StandardMessageCodec.writeValue(StandardMessageCodec.java:273)
        at io.flutter.plugin.common.StandardMessageCodec.writeValue(StandardMessageCodec.java:265)
        at io.flutter.plugin.common.StandardMessageCodec.writeValue(StandardMessageCodec.java:273)
        at io.flutter.plugin.common.StandardMessageCodec.writeValue(StandardMessageCodec.java:265)
        at io.flutter.plugin.common.StandardMethodCodec.encodeSuccessEnvelope(StandardMethodCodec.java:59)
        at io.flutter.plugin.common.MethodChannel$IncomingMethodCallHandler$1.success(MethodChannel.java:238)
        at com.b.biradar.google_ml_kit.vision.BarcodeDetector$2.onSuccess(BarcodeDetector.java:207)
        at com.b.biradar.google_ml_kit.vision.BarcodeDetector$2.onSuccess(BarcodeDetector.java:85)
        at com.google.android.gms.tasks.zzm.run(com.google.android.gms:play-services-tasks@@17.2.1:1)
        at android.os.Handler.handleCallback(Handler.java:938)
        at android.os.Handler.dispatchMessage(Handler.java:99)
        at android.os.Looper.loop(Looper.java:246)
        at android.app.ActivityThread.main(ActivityThread.java:8512)
        at java.lang.reflect.Method.invoke(Native Method)
        at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:602)
        at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1130)

Example barcode

This QR code makes the app crash. I have tested with other QR codes that contain different kinds of contact data and the result is the same.

Contact QR Code

minSdkVersion & typo

Since firebase_ml_vision is discontinued, I am migrating to your plugin. In doing so I realized that google_ml_kit currently requires minSdkVersion >= 26, while firebase_ml_vision supported version 16 onwards. I am therefore wondering whether minSdkVersion 26 is really necessary or whether it can be lowered to increase compatibility. If this is not possible, I suggest adding a hint in the README.

Furthermore, during testing I found a typo in the language detection code: identifyLanguange.

Integrating in projects with Firebase causes iOS app to have errors while 'pod install'

I have a project where I am using these Firebase packages:
firebase_core: ^1.2.0
firebase_crashlytics: ^2.0.3
firebase_analytics: ^8.0.0

When I try to build my app for an iOS device, the pod install command fails with this output:

  In Podfile:
    firebase_crashlytics (from `.symlinks/plugins/firebase_crashlytics/ios`) was resolved to 2.0.4, which depends on
      Firebase/Crashlytics (= 8.0.0) was resolved to 8.0.0, which depends on
        FirebaseCrashlytics (~> 8.0.0) was resolved to 8.0.0, which depends on
          GoogleDataTransport (~> 9.0)

    google_ml_vision (from `.symlinks/plugins/google_ml_vision/ios`) was resolved to 0.0.4-2, which depends on
      GoogleMLKit/BarcodeScanning (~> 2.1.0) was resolved to 2.1.0, which depends on
        GoogleMLKit/MLKitCore (= 2.1.0) was resolved to 2.1.0, which depends on
          MLKitCommon (~> 2.1.0) was resolved to 2.1.0, which depends on
            GoogleDataTransport (~> 8.0)

Could you help me with some information about workarounds, or fix this issue?

Face detection attributes return null

              faces[0].leftEyeOpenProbability
              faces[0].headEulerAngleY,
              faces[0].headEulerAngleZ,
              faces[0].rightEyeOpenProbability,
              faces[0].smilingProbability,
              faces[0].trackingID);

Why do all of these attributes return null after a face is detected from the live camera?
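For reference: classification and tracking are opt-in, so the probability and tracking fields stay null unless the corresponding options are enabled (the same enableClassification / enableLandmarks / enableContours / enableTracking switches that appear in the plugin's native code). A minimal sketch, assuming the 0.6.0-style Dart API; the factory call may differ in other versions.

import 'package:google_ml_kit/google_ml_kit.dart';

// Sketch only: enable the opt-in options so the probability/tracking fields
// are populated instead of staying null.
final faceDetector = GoogleMlKit.vision.faceDetector(FaceDetectorOptions(
  enableClassification: true, // smilingProbability, eye-open probabilities
  enableLandmarks: true,
  enableContours: false,
  enableTracking: true, // tracking id across frames
));

Future<void> detect(InputImage inputImage) async {
  final faces = await faceDetector.processImage(inputImage);
  for (final face in faces) {
    print('smiling: ${face.smilingProbability}, '
        'leftEyeOpen: ${face.leftEyeOpenProbability}');
  }
}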

Example doesn't work with normal camera view

Steps to reproduce:

  1. Run the app on a Xiaomi Mi Max, Android 7.0
  2. Select Vision/Face detector
  3. The app can detect eyes, nose, etc., but the camera screen shrinks
  4. Remove line 117 in camera_view.dart (fit: StackFit.expand)
  5. Rerun the app

Actual result:

  • The camera screen ratio is normal, but the app cannot detect and draw the markers for eyes, nose, etc.

Expected result:

  • Custom paint should work with any camera view

Screenshot_2021-07-08-17
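For reference, the underlying fix is to map the detector's image-space coordinates onto whatever size the CustomPaint canvas actually gets, rather than assuming the preview is stretched to fill the screen. A minimal, hypothetical helper (not the plugin's API):

import 'dart:ui';

// Sketch only: scales a point reported in the camera image's coordinate space
// onto the painter's canvas, so the overlay no longer depends on
// StackFit.expand stretching the preview full screen.
Offset scaleToCanvas(Offset point, Size imageSize, Size canvasSize) {
  final double scaleX = canvasSize.width / imageSize.width;
  final double scaleY = canvasSize.height / imageSize.height;
  return Offset(point.dx * scaleX, point.dy * scaleY);
}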

How To Exclude Asset for App Size?

When I use this library the application size is too big. I want to ask how to exclude assets when building in release mode?
For example, I just want to use Text Recognition only.

Thank you for any help making my work easier.

MLKNilLocalModel on iOS

Objective:

Image labelling using a custom model, done in real time. It should run on Android & iOS.

Steps done so far:

Used code here: https://github.com/bharat-biradar/Google-Ml-Kit-plugin/blob/master/example/lib/VisionDetectorViews/label_detector_view.dart

With the stated adaptations for using a custom model (e.g. uncommenting processImageWithCustomModel).

As I am building for iOS, the path of the model is assets/ml/model.tflite and not in the Android assets, as mentioned in a comment at the link above.

I experimented with both CustomTrainedModel.asset and CustomTrainedModel.file with no luck.

Please let me know how to solve the issue below.

Result

When building on iOS, I get the following:

The selected imageFormatGroup is not supported by iOS. Defaulting to brga8888
*** Terminating app due to uncaught exception 'MLKNilLocalModel', reason: 'Local model must not be nil.'
*** First throw call stack:
(0x18135a754 0x195e217a8 0x18124f7e0 0x103655154 0x106199d94 0x10619945c 0x106199190 0x106198f6c 0x108d92d60 0x108aa407c 0x108dad370 0x108d474d4 0x108d49cfc 0x1812d522c 0x1812d4e28 0x1812d4278 0x1812ce02c 0x1812cd360 0x19890b734 0x183d48584 0x183d4ddf4 0x10257275c 0x180f89cf8)
libc++abi: terminating with uncaught exception of type NSException
* thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGABRT
    frame #0: 0x00000001af471334 libsystem_kernel.dylib`__pthread_kill + 8
libsystem_kernel.dylib`__pthread_kill:
->  0x1af471334 <+8>:  b.lo   0x1af471354               ; <+40>
    0x1af471338 <+12>: pacibsp 
    0x1af47133c <+16>: stp    x29, x30, [sp, #-0x10]!
    0x1af471340 <+20>: mov    x29, sp
Target 0: (Runner) stopped.

Add format to BarcodeValue

It is helpful to know what type (aka format or symbology) of barcode has been scanned: UPC-A, Code 128, DataMatrix, etc.

flutter build ios fails when deployment target is lower than 13.0

After today's update I managed to perform a "flutter run" to an iOS device successfully. But when I attempted a "flutter build ios" or "flutter build ipa", the build failed. After countless tests I discovered that this only happens when the deployment target is set lower than 13.0. Increasing this value is not a possibility in my case, since the app was released over 2 years ago.

I created a Flutter test project that you can clone to reproduce it instantly. The only dependency added in the project is the google_ml_kit plugin.

Here's the log too:

In file included from /Users/andres/.pub-cache/hosted/pub.dartlang.org/google_ml_kit-0.5.1/ios/Classes/MLKVisionImage+FlutterPlugin.m:1:
    /Users/andres/.pub-cache/hosted/pub.dartlang.org/google_ml_kit-0.5.1/ios/Classes/GoogleMlKitPlugin.h:32:22: warning: unused function
    'getFlutterError' [-Wunused-function]
    static FlutterError *getFlutterError(NSError *error) {
                         ^
    1 warning generated.
    /Users/andres/.pub-cache/hosted/pub.dartlang.org/google_ml_kit-0.5.1/ios/Classes/vision/ImageLabeler.m:81:15: warning: unused variable
    'modelType' [-Wunused-variable]
        NSString *modelType = optionsData[@"customModel"];
                  ^
    /Users/andres/.pub-cache/hosted/pub.dartlang.org/google_ml_kit-0.5.1/ios/Classes/vision/ImageLabeler.m:82:15: warning: unused variable
    'path' [-Wunused-variable]
        NSString *path = optionsData[@"path"];
                  ^
    2 warnings generated.
    In file included from /Users/andres/.pub-cache/hosted/pub.dartlang.org/google_ml_kit-0.5.1/ios/Classes/GoogleMlKitPlugin.m:1:
    /Users/andres/.pub-cache/hosted/pub.dartlang.org/google_ml_kit-0.5.1/ios/Classes/GoogleMlKitPlugin.h:32:22: warning: unused function
    'getFlutterError' [-Wunused-function]
    static FlutterError *getFlutterError(NSError *error) {
                         ^
    1 warning generated.
    /Users/andres/.pub-cache/hosted/pub.dartlang.org/google_ml_kit-0.5.1/ios/Classes/vision/FaceDetector.m:30:10: warning: incompatible
    pointer to integer conversion initializing 'BOOL' (aka 'signed char') with an expression of type 'id _Nullable' [-Wint-conversion]
        BOOL enableClassification = dictionary[@"enableClassification"];
             ^                      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    /Users/andres/.pub-cache/hosted/pub.dartlang.org/google_ml_kit-0.5.1/ios/Classes/vision/FaceDetector.m:33:10: warning: incompatible
    pointer to integer conversion initializing 'BOOL' (aka 'signed char') with an expression of type 'id _Nullable' [-Wint-conversion]
        BOOL enableLandmarks = dictionary[@"enableLandmarks"];
             ^                 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    /Users/andres/.pub-cache/hosted/pub.dartlang.org/google_ml_kit-0.5.1/ios/Classes/vision/FaceDetector.m:36:10: warning: incompatible
    pointer to integer conversion initializing 'BOOL' (aka 'signed char') with an expression of type 'id _Nullable' [-Wint-conversion]
        BOOL enableContours = dictionary[@"enableContours"];
             ^                ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    /Users/andres/.pub-cache/hosted/pub.dartlang.org/google_ml_kit-0.5.1/ios/Classes/vision/FaceDetector.m:39:10: warning: incompatible
    pointer to integer conversion initializing 'BOOL' (aka 'signed char') with an expression of type 'id _Nullable' [-Wint-conversion]
        BOOL enableTracking = dictionary[@"enableTracking"];
             ^                ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    4 warnings generated.
    In file included from /Users/andres/.pub-cache/hosted/pub.dartlang.org/google_ml_kit-0.5.1/ios/Classes/MLKVisionImage+FlutterPlugin.m:1:
    /Users/andres/.pub-cache/hosted/pub.dartlang.org/google_ml_kit-0.5.1/ios/Classes/GoogleMlKitPlugin.h:32:22: warning: unused function
    'getFlutterError' [-Wunused-function]
    static FlutterError *getFlutterError(NSError *error) {
                         ^
    1 warning generated.
    /Users/andres/.pub-cache/hosted/pub.dartlang.org/google_ml_kit-0.5.1/ios/Classes/vision/ImageLabeler.m:81:15: warning: unused variable
    'modelType' [-Wunused-variable]
        NSString *modelType = optionsData[@"customModel"];
                  ^
    /Users/andres/.pub-cache/hosted/pub.dartlang.org/google_ml_kit-0.5.1/ios/Classes/vision/ImageLabeler.m:82:15: warning: unused variable
    'path' [-Wunused-variable]
        NSString *path = optionsData[@"path"];
                  ^
    2 warnings generated.
    In file included from /Users/andres/.pub-cache/hosted/pub.dartlang.org/google_ml_kit-0.5.1/ios/Classes/GoogleMlKitPlugin.m:1:
    /Users/andres/.pub-cache/hosted/pub.dartlang.org/google_ml_kit-0.5.1/ios/Classes/GoogleMlKitPlugin.h:32:22: warning: unused function
    'getFlutterError' [-Wunused-function]
    static FlutterError *getFlutterError(NSError *error) {
                         ^
    1 warning generated.
    ld: warning: ignoring file
    /Users/andres/development/VIRIDIAN/bugTests/google_ml_kit_bug_test/ios/Pods/MLKitBarcodeScanning/Frameworks/MLKitBarcodeScanning.framewor
    k/MLKitBarcodeScanning, missing required architecture armv7 in file
    /Users/andres/development/VIRIDIAN/bugTests/google_ml_kit_bug_test/ios/Pods/MLKitBarcodeScanning/Frameworks/MLKitBarcodeScanning.framewor
    k/MLKitBarcodeScanning (2 slices)
    ld: warning: ignoring file
    /Users/andres/development/VIRIDIAN/bugTests/google_ml_kit_bug_test/ios/Pods/MLKitFaceDetection/Frameworks/MLKitFaceDetection.framework/ML
    KitFaceDetection, missing required architecture armv7 in file
    /Users/andres/development/VIRIDIAN/bugTests/google_ml_kit_bug_test/ios/Pods/MLKitFaceDetection/Frameworks/MLKitFaceDetection.framework/ML
    KitFaceDetection (2 slices)
    ld: warning: ignoring file
    /Users/andres/development/VIRIDIAN/bugTests/google_ml_kit_bug_test/ios/Pods/MLKitImageLabeling/Frameworks/MLKitImageLabeling.framework/ML
    KitImageLabeling, missing required architecture armv7 in file
    /Users/andres/development/VIRIDIAN/bugTests/google_ml_kit_bug_test/ios/Pods/MLKitImageLabeling/Frameworks/MLKitImageLabeling.framework/ML
    KitImageLabeling (2 slices)
    ld: warning: ignoring file
    /Users/andres/development/VIRIDIAN/bugTests/google_ml_kit_bug_test/ios/Pods/MLKitImageLabelingCommon/Frameworks/MLKitImageLabelingCommon.
    framework/MLKitImageLabelingCommon, missing required architecture armv7 in file
    /Users/andres/development/VIRIDIAN/bugTests/google_ml_kit_bug_test/ios/Pods/MLKitImageLabelingCommon/Frameworks/MLKitImageLabelingCommon.
    framework/MLKitImageLabelingCommon (2 slices)
    ld: warning: ignoring file
    /Users/andres/development/VIRIDIAN/bugTests/google_ml_kit_bug_test/ios/Pods/MLKitImageLabelingCustom/Frameworks/MLKitImageLabelingCustom.
    framework/MLKitImageLabelingCustom, missing required architecture armv7 in file
    /Users/andres/development/VIRIDIAN/bugTests/google_ml_kit_bug_test/ios/Pods/MLKitImageLabelingCustom/Frameworks/MLKitImageLabelingCustom.
    framework/MLKitImageLabelingCustom (2 slices)
    ld: warning: ignoring file
    /Users/andres/development/VIRIDIAN/bugTests/google_ml_kit_bug_test/ios/Pods/MLKitObjectDetectionCommon/Frameworks/MLKitObjectDetectionCom
    mon.framework/MLKitObjectDetectionCommon, missing required architecture armv7 in file
    /Users/andres/development/VIRIDIAN/bugTests/google_ml_kit_bug_test/ios/Pods/MLKitObjectDetectionCommon/Frameworks/MLKitObjectDetectionCom
    mon.framework/MLKitObjectDetectionCommon (2 slices)
    ld: warning: ignoring file
    /Users/andres/development/VIRIDIAN/bugTests/google_ml_kit_bug_test/ios/Pods/MLKitTextRecognition/Frameworks/MLKitTextRecognition.framewor
    k/MLKitTextRecognition, missing required architecture armv7 in file
    /Users/andres/development/VIRIDIAN/bugTests/google_ml_kit_bug_test/ios/Pods/MLKitTextRecognition/Frameworks/MLKitTextRecognition.framewor
    k/MLKitTextRecognition (2 slices)
    ld: warning: ignoring file
    /Users/andres/development/VIRIDIAN/bugTests/google_ml_kit_bug_test/ios/Pods/MLKitVision/Frameworks/MLKitVision.framework/MLKitVision,
    missing required architecture armv7 in file
    /Users/andres/development/VIRIDIAN/bugTests/google_ml_kit_bug_test/ios/Pods/MLKitVision/Frameworks/MLKitVision.framework/MLKitVision (2
    slices)
    ld: warning: ignoring file
    /Users/andres/development/VIRIDIAN/bugTests/google_ml_kit_bug_test/ios/Pods/MLKitCommon/Frameworks/MLKitCommon.framework/MLKitCommon,
    missing required architecture armv7 in file
    /Users/andres/development/VIRIDIAN/bugTests/google_ml_kit_bug_test/ios/Pods/MLKitCommon/Frameworks/MLKitCommon.framework/MLKitCommon (2
    slices)
    ld: warning: ignoring file
    /Users/andres/development/VIRIDIAN/bugTests/google_ml_kit_bug_test/ios/Pods/MLKitVisionKit/Frameworks/MLKitVisionKit.framework/MLKitVisio
    nKit, missing required architecture armv7 in file
    /Users/andres/development/VIRIDIAN/bugTests/google_ml_kit_bug_test/ios/Pods/MLKitVisionKit/Frameworks/MLKitVisionKit.framework/MLKitVisio
    nKit (2 slices)
    ld: warning: object file
    (/Users/andres/development/VIRIDIAN/bugTests/google_ml_kit_bug_test/build/ios/Release-iphoneos/Pods_Runner.framework/Pods_Runner(Pods-Run
    ner-dummy.o)) was built for newer iOS version (10.0) than being linked (9.0)
    ld: warning: object file
    (/Users/andres/development/VIRIDIAN/bugTests/google_ml_kit_bug_test/build/ios/Release-iphoneos/Pods_Runner.framework/Pods_Runner(Pods_Run
    ner_vers.o)) was built for newer iOS version (10.0) than being linked (9.0)
    Undefined symbols for architecture armv7:
      "_OBJC_CLASS_$_MLKTextRecognizer", referenced from:
          objc-class-ref in google_ml_kit(TextRecognizer.o)
      "_OBJC_CLASS_$_MLKImageLabelerOptions", referenced from:
          objc-class-ref in google_ml_kit(ImageLabeler.o)
      "_MLKFaceLandmarkTypeRightCheek", referenced from:
          ___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
      "_OBJC_CLASS_$_MLKCustomImageLabelerOptions", referenced from:
          objc-class-ref in google_ml_kit(ImageLabeler.o)
      "_MLKFaceLandmarkTypeMouthRight", referenced from:
          ___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
      "_MLKFaceLandmarkTypeMouthLeft", referenced from:
          ___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
      "_OBJC_CLASS_$_MLKBarcodeScannerOptions", referenced from:
          objc-class-ref in google_ml_kit(BarcodeScanner.o)
      "_MLKFaceLandmarkTypeLeftEye", referenced from:
          ___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
      "_MLKFaceContourTypeLowerLipBottom", referenced from:
          ___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
      "_MLKFaceLandmarkTypeLeftEar", referenced from:
          ___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
      "_MLKFaceLandmarkTypeLeftCheek", referenced from:
          ___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
      "_MLKFaceLandmarkTypeRightEar", referenced from:
          ___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
      "_MLKFaceContourTypeNoseBottom", referenced from:
          ___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
      "_MLKFaceContourTypeUpperLipTop", referenced from:
          ___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
      "_MLKFaceLandmarkTypeMouthBottom", referenced from:
          ___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
      "_MLKFaceContourTypeUpperLipBottom", referenced from:
          ___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
      "_MLKFaceContourTypeRightEyebrowBottom", referenced from:
          ___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
      "_MLKFaceContourTypeLeftEyebrowTop", referenced from:
          ___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
      "_MLKFaceContourTypeRightEyebrowTop", referenced from:
          ___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
      "_MLKFaceContourTypeRightEye", referenced from:
          ___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
      "_MLKFaceContourTypeNoseBridge", referenced from:
          ___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
      "_OBJC_CLASS_$_MLKFaceDetector", referenced from:
          objc-class-ref in google_ml_kit(FaceDetector.o)
      "_MLKFaceContourTypeLowerLipTop", referenced from:
          ___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
      "_MLKFaceContourTypeLeftCheek", referenced from:
          ___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
      "_MLKFaceContourTypeLeftEye", referenced from:
          ___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
      "_MLKFaceContourTypeFace", referenced from:
          ___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
      "_MLKFaceLandmarkTypeRightEye", referenced from:
          ___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
      "_MLKFaceLandmarkTypeNoseBase", referenced from:
          ___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
      "_MLKFaceContourTypeRightCheek", referenced from:
          ___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
      "_OBJC_CLASS_$_MLKFaceDetectorOptions", referenced from:
          objc-class-ref in google_ml_kit(FaceDetector.o)
      "_MLKFaceContourTypeLeftEyebrowBottom", referenced from:
          ___39-[FaceDetector handleDetection:result:]_block_invoke in google_ml_kit(FaceDetector.o)
      "_OBJC_CLASS_$_MLKBarcodeScanner", referenced from:
          objc-class-ref in google_ml_kit(BarcodeScanner.o)
      "_OBJC_CLASS_$_MLKImageLabeler", referenced from:
          objc-class-ref in google_ml_kit(ImageLabeler.o)
      "_OBJC_CLASS_$_MLKVisionImage", referenced from:
          objc-class-ref in google_ml_kit(BarcodeScanner.o)
          objc-class-ref in google_ml_kit(FaceDetector.o)
          objc-class-ref in google_ml_kit(ImageLabeler.o)
          objc-class-ref in google_ml_kit(TextRecognizer.o)
          objc-class-ref in google_ml_kit(MLKVisionImage+FlutterPlugin.o)
          __OBJC_$_CATEGORY_MLKVisionImage_$_FlutterPlugin in google_ml_kit(MLKVisionImage+FlutterPlugin.o)
    ld: symbol(s) not found for architecture armv7
    clang: error: linker command failed with exit code 1 (use -v to see invocation)
    note: Using new build system
    note: Building targets in parallel
    note: Planning build
    note: Constructing build description
    warning: The iOS deployment target 'IPHONEOS_DEPLOYMENT_TARGET' is set to 8.0, but the range of supported deployment target versions is
    9.0 to 14.4.99. (in target 'GoogleUtilitiesComponents' from project 'Pods')
    warning: The iOS deployment target 'IPHONEOS_DEPLOYMENT_TARGET' is set to 8.0, but the range of supported deployment target versions is
    9.0 to 14.4.99. (in target 'Flutter' from project 'Pods')

Encountered error while building for device.

Thanks in advance!

Custom TFlite model error running on Object Detection

I've been struggling to run a custom TFLite model with the object detection feature. I currently build my models in Google Colab with TFLite Model Maker, and I also tried a model built by following Tanner Gilbert's tutorial.

Expected Behavior

It should work just as it does with the default TFLite model from the example project.

Current Behavior

It shows the error below (see the Log section) and stops functioning until the app is hot reloaded or recompiled with another model.

To Reproduce

  1. Download and extract the example project
  2. Replace the default object_labeler.tflite with my custom model, model.tflite
  3. Compile and run Object Detection
  4. See the error

Log

Launching lib\main.dart on sdk gphone x86 in debug mode...
Running Gradle task 'assembleDebug'...
Formato de parâmetros incorreto -
Picked up _JAVA_OPTIONS: -Xmx4096M
√  Built build\app\outputs\flutter-apk\app-debug.apk.
Installing build\app\outputs\flutter-apk\app.apk...
I/CameraManagerGlobal( 9622): Connecting to camera service
Debug service listening on ws://127.0.0.1:65309/TlaQjXuFyG4=/ws
Syncing files to device sdk gphone x86...
I/Camera  ( 9622): [FPS Range Available] is:[5, 30]
I/Camera  ( 9622): [FPS Range Available] is:[15, 30]
I/Camera  ( 9622): [FPS Range Available] is:[15, 15]
I/Camera  ( 9622): [FPS Range Available] is:[30, 30]
I/Camera  ( 9622): [FPS Range] is:[5, 30]
W/Camera  ( 9622): The selected imageFormatGroup is not supported by Android. Defaulting to yuv420
W/Gralloc4( 9622): allocator 3.x is not supported
D/TransportRuntime.JobInfoScheduler( 9622): Scheduling upload for context TransportContext(cct, VERY_LOW, MSRodHRwczovL2ZpcmViYXNlbG9nZ2luZy5nb29nbGVhcGlzLmNvbS92MGNjL2xvZy9iYXRjaD9mb3JtYXQ9anNvbl9wcm90bzNc) with jobId=-843306226 in 86400000ms(Backend next call timestamp 0). Attempt 1
D/TransportRuntime.SQLiteEventStore( 9622): Storing event with priority=VERY_LOW, name=FIREBASE_ML_SDK for destination cct
D/TransportRuntime.JobInfoScheduler( 9622): Upload for context TransportContext(cct, VERY_LOW, MSRodHRwczovL2ZpcmViYXNlbG9nZ2luZy5nb29nbGVhcGlzLmNvbS92MGNjL2xvZy9iYXRjaD9mb3JtYXQ9anNvbl9wcm90bzNc) is already scheduled. Returning...
D/TransportRuntime.SQLiteEventStore( 9622): Storing event with priority=DEFAULT, name=FIREBASE_ML_SDK for destination cct
D/TransportRuntime.JobInfoScheduler( 9622): Scheduling upload for context TransportContext(cct, DEFAULT, MSRodHRwczovL2ZpcmViYXNlbG9nZ2luZy5nb29nbGVhcGlzLmNvbS92MGNjL2xvZy9iYXRjaD9mb3JtYXQ9anNvbl9wcm90bzNc) with jobId=-848286963 in 30000ms(Backend next call timestamp 1623290503699). Attempt 1
I/tflite  ( 9622): Initialized TensorFlow Lite runtime.
I/native  ( 9622): mobile_ssd_tflite_client.cc:210 Model initialized: input_size: 110592, output_locations_size: 30600, preprocessing mean value: 127.5, preprocessing std value: 127.5, inference type: 2
I/native  ( 9622): mobile_ssd_calculator.cc:113 Succeeded in initializing SSD MobileObjectLocalizerV3_1TfLiteClient
I/native  ( 9622): box_classifier_calculator.cc:125 Initializing classifier 
E/native  ( 9622): calculator_graph.cc:771 INVALID_ARGUMENT: CalculatorGraph::Run() failed in Run: 
E/native  ( 9622): Calculator::Open() for node "BoxClassifierCalculator" failed: #vk Unexpected number of dimensions for output index 0: got 3D, expected either 2D (BxN with B=1) or 4D (BxHxWxN with B=1, W=1, H=1). [type.googleapis.com/mediapipe.StatusList='\n\xf5\x01\x08\x03\x12\xc3\x01\x43\x61lculator::Open() for node \"BoxClassifierCalculator\" failed: #vk Unexpected number of dimensions for output index 0: got 3D, expected either 2D (BxN with B=1) or 4D (BxHxWxN with B=1, W=1, H=1).\x1a+\n$tflite::support::TfLiteSupportStatus\x12\x03\x34\x30\x30']
E/native  ( 9622): pipeline_jni.cc:95 INVALID_ARGUMENT: CalculatorGraph::Run() failed in Run: 
E/native  ( 9622): Calculator::Open() for node "BoxClassifierCalculator" failed: #vk Unexpected number of dimensions for output index 0: got 3D, expected either 2D (BxN with B=1) or 4D (BxHxWxN with B=1, W=1, H=1). [type.googleapis.com/mediapipe.StatusList='\n\xf5\x01\x08\x03\x12\xc3\x01\x43\x61lculator::Open() for node \"BoxClassifierCalculator\" failed: #vk Unexpected number of dimensions for output index 0: got 3D, expected either 2D (BxN with B=1) or 4D (BxHxWxN with B=1, W=1, H=1).\x1a+\n$tflite::support::TfLiteSupportStatus\x12\x03\x34\x30\x30']
E/native  ( 9622): pipeline_jni.cc:243 INVALID_ARGUMENT: Graph has errors: 
E/native  ( 9622): Calculator::Open() for node "BoxClassifierCalculator" failed: #vk Unexpected number of dimensions for output index 0: got 3D, expected either 2D (BxN with B=1) or 4D (BxHxWxN with B=1, W=1, H=1). [type.googleapis.com/mediapipe.StatusList='\n\xf5\x01\x08\x03\x12\xc3\x01\x43\x61lculator::Open() for node \"BoxClassifierCalculator\" failed: #vk Unexpected number of dimensions for output index 0: got 3D, expected either 2D (BxN with B=1) or 4D (BxHxWxN with B=1, W=1, H=1).\x1a+\n$tflite::support::TfLiteSupportStatus\x12\x03\x34\x30\x30']
E/native  ( 9622): pipeline_jni.cc:243 INVALID_ARGUMENT: Graph has errors: 
E/native  ( 9622): Calculator::Open() for node "BoxClassifierCalculator" failed: #vk Unexpected number of dimensions for output index 0: got 3D, expected either 2D (BxN with B=1) or 4D (BxHxWxN with B=1, W=1, H=1). [type.googleapis.com/mediapipe.StatusList='\n\xf5\x01\x08\x03\x12\xc3\x01\x43\x61lculator::Open() for node \"BoxClassifierCalculator\" failed: #vk Unexpected number of dimensions for output index 0: got 3D, expected either 2D (BxN with B=1) or 4D (BxHxWxN with B=1, W=1, H=1).\x1a+\n$tflite::support::TfLiteSupportStatus\x12\x03\x34\x30\x30']
E/MobileVisionBase( 9622): Error preloading model resource
E/MobileVisionBase( 9622): com.google.mlkit.common.MlKitException: Failed to initialize detector. Unexpected number of dimensions for output index 0: got 3D, expected either 2D (BxN with B=1) or 4D (BxHxWxN with B=1, W=1, H=1).
E/MobileVisionBase( 9622): 	at com.google.mlkit.vision.vkp.PipelineManager.start(com.google.mlkit:vision-internal-vkp@@18.1.0:66)
E/MobileVisionBase( 9622): 	at com.google.mlkit.vision.objects.custom.internal.zze.tryLoad(com.google.mlkit:object-detection-custom@@16.3.2:3)
E/MobileVisionBase( 9622): 	at com.google.mlkit.common.sdkinternal.model.CustomModelLoader.load(com.google.mlkit:common@@17.1.1:3)
E/MobileVisionBase( 9622): 	at com.google.mlkit.vision.objects.custom.internal.zzf.load(com.google.mlkit:object-detection-custom@@16.3.2:2)
E/MobileVisionBase( 9622): 	at com.google.mlkit.common.sdkinternal.ModelResource.zza(Unknown Source:18)
E/MobileVisionBase( 9622): 	at com.google.mlkit.common.sdkinternal.zzn.run(Unknown Source:10)
E/MobileVisionBase( 9622): 	at com.google.mlkit.common.sdkinternal.zzp.run(com.google.mlkit:common@@17.1.1:2)
E/MobileVisionBase( 9622): 	at com.google.mlkit.common.sdkinternal.MlKitThreadPool.zze(com.google.mlkit:common@@17.1.1:4)
E/MobileVisionBase( 9622): 	at com.google.mlkit.common.sdkinternal.MlKitThreadPool.zzc(Unknown Source:8)
E/MobileVisionBase( 9622): 	at com.google.mlkit.common.sdkinternal.zzj.run(Unknown Source:2)
E/MobileVisionBase( 9622): 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
E/MobileVisionBase( 9622): 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
E/MobileVisionBase( 9622): 	at com.google.mlkit.common.sdkinternal.MlKitThreadPool.zzd(Unknown Source:10)
E/MobileVisionBase( 9622): 	at com.google.mlkit.common.sdkinternal.zzk.run(Unknown Source:2)
E/MobileVisionBase( 9622): 	at java.lang.Thread.run(Thread.java:923)
D/TransportRuntime.SQLiteEventStore( 9622): Storing event with priority=VERY_LOW, name=FIREBASE_ML_SDK for destination cct
D/TransportRuntime.JobInfoScheduler( 9622): Upload for context TransportContext(cct, VERY_LOW, MSRodHRwczovL2ZpcmViYXNlbG9nZ2luZy5nb29nbGVhcGlzLmNvbS92MGNjL2xvZy9iYXRjaD9mb3JtYXQ9anNvbl9wcm90bzNc) is already scheduled. Returning...
I/native  ( 9622): mobile_ssd_tflite_client.cc:210 Model initialized: input_size: 110592, output_locations_size: 30600, preprocessing mean value: 127.5, preprocessing std value: 127.5, inference type: 2
I/native  ( 9622): mobile_ssd_calculator.cc:113 Succeeded in initializing SSD MobileObjectLocalizerV3_1TfLiteClient
I/native  ( 9622): box_classifier_calculator.cc:125 Initializing classifier 
E/native  ( 9622): calculator_graph.cc:771 INVALID_ARGUMENT: CalculatorGraph::Run() failed in Run: 
E/native  ( 9622): Calculator::Open() for node "BoxClassifierCalculator" failed: #vk Unexpected number of dimensions for output index 0: got 3D, expected either 2D (BxN with B=1) or 4D (BxHxWxN with B=1, W=1, H=1). [type.googleapis.com/mediapipe.StatusList='\n\xf5\x01\x08\x03\x12\xc3\x01\x43\x61lculator::Open() for node \"BoxClassifierCalculator\" failed: #vk Unexpected number of dimensions for output index 0: got 3D, expected either 2D (BxN with B=1) or 4D (BxHxWxN with B=1, W=1, H=1).\x1a+\n$tflite::support::TfLiteSupportStatus\x12\x03\x34\x30\x30']
E/native  ( 9622): pipeline_jni.cc:95 INVALID_ARGUMENT: CalculatorGraph::Run() failed in Run: 
E/native  ( 9622): Calculator::Open() for node "BoxClassifierCalculator" failed: #vk Unexpected number of dimensions for output index 0: got 3D, expected either 2D (BxN with B=1) or 4D (BxHxWxN with B=1, W=1, H=1). [type.googleapis.com/mediapipe.StatusList='\n\xf5\x01\x08\x03\x12\xc3\x01\x43\x61lculator::Open() for node \"BoxClassifierCalculator\" failed: #vk Unexpected number of dimensions for output index 0: got 3D, expected either 2D (BxN with B=1) or 4D (BxHxWxN with B=1, W=1, H=1).\x1a+\n$tflite::support::TfLiteSupportStatus\x12\x03\x34\x30\x30']
E/native  ( 9622): pipeline_jni.cc:243 INVALID_ARGUMENT: Graph has errors: 
E/native  ( 9622): Calculator::Open() for node "BoxClassifierCalculator" failed: #vk Unexpected number of dimensions for output index 0: got 3D, expected either 2D (BxN with B=1) or 4D (BxHxWxN with B=1, W=1, H=1). [type.googleapis.com/mediapipe.StatusList='\n\xf5\x01\x08\x03\x12\xc3\x01\x43\x61lculator::Open() for node \"BoxClassifierCalculator\" failed: #vk Unexpected number of dimensions for output index 0: got 3D, expected either 2D (BxN with B=1) or 4D (BxHxWxN with B=1, W=1, H=1).\x1a+\n$tflite::support::TfLiteSupportStatus\x12\x03\x34\x30\x30']
E/native  ( 9622): pipeline_jni.cc:243 INVALID_ARGUMENT: Graph has errors: 
E/native  ( 9622): Calculator::Open() for node "BoxClassifierCalculator" failed: #vk Unexpected number of dimensions for output index 0: got 3D, expected either 2D (BxN with B=1) or 4D (BxHxWxN with B=1, W=1, H=1). [type.googleapis.com/mediapipe.StatusList='\n\xf5\x01\x08\x03\x12\xc3\x01\x43\x61lculator::Open() for node \"BoxClassifierCalculator\" failed: #vk Unexpected number of dimensions for output index 0: got 3D, expected either 2D (BxN with B=1) or 4D (BxHxWxN with B=1, W=1, H=1).\x1a+\n$tflite::support::TfLiteSupportStatus\x12\x03\x34\x30\x30']
W/System.err( 9622): com.google.mlkit.common.MlKitException: Failed to initialize detector. Unexpected number of dimensions for output index 0: got 3D, expected either 2D (BxN with B=1) or 4D (BxHxWxN with B=1, W=1, H=1).
W/System.err( 9622): 	at com.google.mlkit.vision.vkp.PipelineManager.start(com.google.mlkit:vision-internal-vkp@@18.1.0:66)
W/System.err( 9622): 	at com.google.mlkit.vision.objects.custom.internal.zze.tryLoad(com.google.mlkit:object-detection-custom@@16.3.2:3)
W/System.err( 9622): 	at com.google.mlkit.common.sdkinternal.model.CustomModelLoader.load(com.google.mlkit:common@@17.1.1:3)
W/System.err( 9622): 	at com.google.mlkit.vision.objects.custom.internal.zzf.load(com.google.mlkit:object-detection-custom@@16.3.2:2)
W/System.err( 9622): 	at com.google.mlkit.common.sdkinternal.ModelResource.zza(Unknown Source:18)
W/System.err( 9622): 	at com.google.mlkit.common.sdkinternal.zzn.run(Unknown Source:10)
W/System.err( 9622): 	at com.google.mlkit.common.sdkinternal.zzp.run(com.google.mlkit:common@@17.1.1:2)
W/System.err( 9622): 	at com.google.mlkit.common.sdkinternal.MlKitThreadPool.zze(com.google.mlkit:common@@17.1.1:4)
W/System.err( 9622): 	at com.google.mlkit.common.sdkinternal.MlKitThreadPool.zzc(Unknown Source:8)
W/System.err( 9622): 	at com.google.mlkit.common.sdkinternal.zzj.run(Unknown Source:2)
W/System.err( 9622): 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
W/System.err( 9622): 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
W/System.err( 9622): 	at com.google.mlkit.common.sdkinternal.MlKitThreadPool.zzd(Unknown Source:10)
W/System.err( 9622): 	at com.google.mlkit.common.sdkinternal.zzk.run(Unknown Source:2)
W/System.err( 9622): 	at java.lang.Thread.run(Thread.java:923)
D/TransportRuntime.SQLiteEventStore( 9622): Storing event with priority=VERY_LOW, name=FIREBASE_ML_SDK for destination cct
D/TransportRuntime.JobInfoScheduler( 9622): Upload for context TransportContext(cct, VERY_LOW, MSRodHRwczovL2ZpcmViYXNlbG9nZ2luZy5nb29nbGVhcGlzLmNvbS92MGNjL2xvZy9iYXRjaD9mb3JtYXQ9anNvbl9wcm90bzNc) is already scheduled. Returning...
E/flutter ( 9622): [ERROR:flutter/lib/ui/ui_dart_state.cc(199)] Unhandled Exception: PlatformException(ObjectDetectionError, com.google.mlkit.common.MlKitException: Failed to initialize detector. Unexpected number of dimensions for output index 0: got 3D, expected either 2D (BxN with B=1) or 4D (BxHxWxN with B=1, W=1, H=1)., null, null)
E/flutter ( 9622): #0      StandardMethodCodec.decodeEnvelope (package:flutter/src/services/message_codecs.dart:597:7)
E/flutter ( 9622): #1      MethodChannel._invokeMethod (package:flutter/src/services/platform_channel.dart:158:18)
E/flutter ( 9622): <asynchronous suspension>
E/flutter ( 9622): #2      ObjectDetector.processImage (package:google_ml_kit/src/vision/object_detector.dart:14:20)
E/flutter ( 9622): <asynchronous suspension>
E/flutter ( 9622): #3      _ObjectDetectorView.processImage (package:google_ml_kit_example/VisionDetectorViews/object_detector_view.dart:49:20)
E/flutter ( 9622): <asynchronous suspension>
E/flutter ( 9622): 
W/System  ( 9622): A resource failed to call release. 

File

model.zip

InputImageData is null

When running the example on an iOS device and choosing an image from the gallery, InputImageData.size and InputImageData.imageRotation are always null.

Example Project errors

In the file barcode_detector_painter.dart, every reference to barcode.info! needs to be replaced with barcode.barcodeUnknown!

In the file barcode_scanner_view.dart, the following needs to be replaced:
BarcodeFormat.Default,
BarcodeFormat.Code_128,
BarcodeFormat.Code_39,
BarcodeFormat.Code_93,
BarcodeFormat.Codebar,
BarcodeFormat.EAN_13,
BarcodeFormat.EAN_8,
BarcodeFormat.ITF,
BarcodeFormat.UPC_A,
BarcodeFormat.UPC_E,
BarcodeFormat.QR_Code,
BarcodeFormat.PDF417,
BarcodeFormat.Aztec,
BarcodeFormat.Data_Matrix

with this:

Barcode.FORMAT_Default,
Barcode.FORMAT_Code_128,
Barcode.FORMAT_Code_39,
Barcode.FORMAT_Code_93,
Barcode.FORMAT_Codabar,
Barcode.FORMAT_EAN_13,
Barcode.FORMAT_EAN_8,
Barcode.FORMAT_ITF,
Barcode.FORMAT_UPC_A,
Barcode.FORMAT_UPC_E,
Barcode.FORMAT_QR_Code,
Barcode.FORMAT_PDF417,
Barcode.FORMAT_Aztec,
Barcode.FORMAT_Data_Matrix
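
Taken together, these renamed constants are what gets passed to the barcode scanner factory. A minimal sketch, assuming the factory still accepts a list of format constants as it does in the example app (the formats chosen here are arbitrary):

final barcodeScanner = GoogleMlKit.vision.barcodeScanner([
  Barcode.FORMAT_QR_Code,
  Barcode.FORMAT_EAN_13,
]);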

In the file text_detector_painter.dart, replace textBlock.text with textBlock.blockText, and replace every reference to textBlock.rect with textBlock.blockRect.

Implementation of customModelPath for iOS

Objective:

Real-time image labelling using a custom model. It should run on Android & iOS.

Steps taken:

When running the custom model on Android, everything works perfectly. The model path I use there is just model.tflite and the model is placed at /android/app/src/main/assets/model.tflite

When running on iOS, I use the path model.tflite (as on Android) with the model placed at /ios/Runner/Resources/model.tflite. But I get the following error:

*** Terminating app due to uncaught exception 'MLKInvalidLocalModel', 
reason: 'Local model path (model.tflite) is invalid.'

In the above example, I used the path model.tflite but I have also tried:

  • Resources/model.tflite
  • Runner/Resources/model.tflite
  • ios/Runner/Resources/model.tflite

All of which result in an error like the above.

What works:

I added the following snippet to getCustomLabelerOptions, ignoring the path param passed from Flutter, and the app starts working as expected!

    // Resolve the bundled model by file name instead of using the incoming path.
    NSString *modelPath = [NSBundle.mainBundle pathForResource:@"model"
                                                        ofType:@"tflite"
                                                   inDirectory:@""];

    // Build the local model from the resolved absolute path.
    MLKLocalModel *localModel = [[MLKLocalModel alloc] initWithPath:modelPath];

Please let me know what I should be passing to the customModelPath param as part of the CustomImageLabelerOptions object to get the same result.
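
A possible workaround from the Dart side, assuming the plugin hands customModelPath straight to MLKLocalModel (which expects an absolute file path on iOS): ship the model as a Flutter asset, copy it into a writable directory at runtime, and pass the resulting absolute path. This is only a sketch; the asset name, helper function, and path_provider dependency are assumptions, not part of the plugin.

import 'dart:io';

import 'package:flutter/services.dart' show rootBundle;
import 'package:path_provider/path_provider.dart';

// Copies the bundled TFLite model out of the Flutter assets and returns
// an absolute file path that can be handed to the native side.
Future<String> copyModelToDocuments() async {
  final data = await rootBundle.load('assets/model.tflite'); // hypothetical asset path
  final dir = await getApplicationDocumentsDirectory();
  final file = File('${dir.path}/model.tflite');
  await file.writeAsBytes(
    data.buffer.asUint8List(data.offsetInBytes, data.lengthInBytes),
  );
  return file.path; // pass this absolute path as customModelPath
}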

MissingPluginException No implementation found for method listen on channel

google_ml_kit: ^0.1.0

barcodeScanner = GoogleMlKit.instance.barcodeScanner();
final result = await barcodeScanner.processImage(inputImageOf(image));

On Android everything works fine, but on iOS it does not work and fails with the following error:

MissingPluginException No implementation found for method listen on channel google_ml_kit

Text recognition from bytes crashes on iOS

It appears that text recognition with an InputImage initialized from bytes crashes the iOS app. I have also tested the same image initialized from a file path, which works perfectly fine, so the issue appears to be in how the image bytes are handled.

The following code results in a crash:

imglib.JpegEncoder jpgEncoder = imglib.JpegEncoder();
List<int> bytes = jpgEncoder.encodeImage(image);

// Only size and rotation are provided; no pixel format or plane metadata.
InputImageData inputImageData = InputImageData(
  size: Size(image.width.toDouble(), image.height.toDouble()),
  imageRotation: InputImageRotation.Rotation_0deg,
);
InputImage inputImage = InputImage.fromBytes(
  bytes: bytes,
  inputImageData: inputImageData,
);

RecognisedText result = await _detector.processImage(inputImage);

The following code is OK:

imglib.JpegEncoder jpgEncoder = imglib.JpegEncoder();
List<int> bytes = jpgEncoder.encodeImage(image);

String path = (await getApplicationDocumentsDirectory()).path;
await File('$path/image.jpg').writeAsBytes(bytes);

InputImage inputImage = InputImage.fromFilePath('$path/image.jpg');
RecognisedText result = await _detector.processImage(inputImage);

Here are the logs for the crash:

-[NSNull count]: unrecognized selector sent to instance 0x20c3d1c00
*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[NSNull count]: unrecognized selector sent to instance 0x20c3d1c00'
*** First throw call stack:
(0x1ab1a7754 0x1bfc6e7a8 0x1ab0aac3c 0x1ab1aa2ac 0x1ab1ac5b0 0x1072021b4 0x107201e70 0x107202cf4 0x107202ba4 0x107200fa4 0x1094bad60 0x1091cc07c 0x1094d5370 0x10946f4d4 0x109471cfc 0x1ab12222c 0x1ab121e28 0x1ab121278 0x1ab11b02c 0x1ab11a360 0x1c2758734 0x1adb95584 0x1adb9adf4 0x10462c0e0 0x1aadd6cf8)
libc++abi: terminating with uncaught exception of type NSException
* thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGABRT
    frame #0: 0x00000001d8c23334 libsystem_kernel.dylib`__pthread_kill + 8
libsystem_kernel.dylib`__pthread_kill:
->  0x1d8c23334 <+8>:  b.lo   0x1d8c23354               ; <+40>
    0x1d8c23338 <+12>: pacibsp
    0x1d8c2333c <+16>: stp    x29, x30, [sp, #-0x10]!
    0x1d8c23340 <+20>: mov    x29, sp
Target 0: (Runner) stopped.
Lost connection to device.
Exited (sigterm)

Let me know if you need more information. Thanks!
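
For context, InputImage.fromBytes is intended for raw frame buffers (for example a CameraImage from the camera plugin) rather than JPEG-encoded bytes, and the iOS side also expects the pixel format and plane metadata to be present. Below is a sketch of the pattern the plugin's example app uses for camera frames; the field names assume a plugin version whose InputImageData exposes inputImageFormat and planeData. For already-encoded JPEG bytes, the file-path approach above remains the safer route.

import 'dart:ui';

import 'package:camera/camera.dart';
import 'package:flutter/foundation.dart' show WriteBuffer;
import 'package:google_ml_kit/google_ml_kit.dart';

InputImage inputImageFromCameraImage(
    CameraImage image, InputImageRotation rotation, InputImageFormat format) {
  // Concatenate the bytes of all image planes into a single buffer.
  final WriteBuffer allBytes = WriteBuffer();
  for (final Plane plane in image.planes) {
    allBytes.putUint8List(plane.bytes);
  }
  final bytes = allBytes.done().buffer.asUint8List();

  // Describe each plane so the native side knows the row strides.
  final planeData = image.planes
      .map((Plane plane) => InputImagePlaneMetadata(
            bytesPerRow: plane.bytesPerRow,
            height: plane.height,
            width: plane.width,
          ))
      .toList();

  final inputImageData = InputImageData(
    size: Size(image.width.toDouble(), image.height.toDouble()),
    imageRotation: rotation,
    inputImageFormat: format,
    planeData: planeData,
  );

  return InputImage.fromBytes(bytes: bytes, inputImageData: inputImageData);
}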

crop InputImage

Could you please help me crop an InputImage when it has an imageType of bytes?
I want the textDetector.processImage(inputImage) call to only recognize the part of my camera input that the user sees.
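
One hedged way to do this when you already have a decoded frame is to crop with package:image and hand the result to the detector through a temporary file, so that the well-supported fromFilePath route is used. The helper name and the temp-file detour are assumptions, and the positional copyCrop arguments assume package:image 3.x:

import 'dart:io';

import 'package:google_ml_kit/google_ml_kit.dart';
import 'package:image/image.dart' as imglib;
import 'package:path_provider/path_provider.dart';

// Recognizes text only inside the given rectangle of an already-decoded frame.
Future<RecognisedText> recognizeCroppedRegion(
    imglib.Image frame, int x, int y, int width, int height) async {
  final imglib.Image cropped = imglib.copyCrop(frame, x, y, width, height);

  final String dir = (await getTemporaryDirectory()).path;
  final File file = File('$dir/cropped.jpg')
    ..writeAsBytesSync(imglib.encodeJpg(cropped));

  final inputImage = InputImage.fromFilePath(file.path);
  final textDetector = GoogleMlKit.vision.textDetector();
  try {
    return await textDetector.processImage(inputImage);
  } finally {
    textDetector.close();
  }
}

If the source is raw camera bytes rather than a decoded image, each plane would have to be cropped individually before building the InputImage.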

DigitalInkRecognizer crashes on iOS

Hi,
it seems that the DigitalInkRecognizer doesn't work on iOS: calling GoogleMlKit.vision.languageModelManager().downloadModel(), readText(), and isModelDownloaded() crashes the app. On Android, however, it works just fine.

Log when calling GoogleMlKit.vision.languageModelManager().downloadModel('de-DE'):

*** First throw call stack:
(0x1aff8e5ac 0x1c400842c 0x1b1180c78 0x10521bf7c 0x108be6348 0x108be56a4 0x108be94b4 0x10b8f6a60 0x10b607d7c 0x10b911070 0x10b8ab1d4 0x10b8ad9fc 0x1aff0c050 0x1aff0bc50 0x1aff0b0c4 0x1aff05178 0x1aff044bc 0x1c6989820 0x1b28a8734 0x1b28ade10 0x104fc37e0 0x1afbcbe60)
libc++abi.dylib: terminating with uncaught exception of type NSException
*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[__NSCFConstantString stringByAppendingString:]: nil argument'
terminating with uncaught exception of type NSException
(lldb) 

Log when calling readText():
*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[__NSCFConstantString stringByAppendingString:]: nil argument'
*** First throw call stack:
(0x1aff8e5ac 0x1c400842c 0x1b1180c78 0x1023e3f7c 0x105dae348 0x105dad6a4 0x105db14b4 0x108956a60 0x108667d7c 0x108971070 0x10890b1d4 0x10890d9fc 0x1aff0c050 0x1aff0bc50 0x1aff0b0c4 0x1aff05178 0x1aff044bc 0x1c6989820 0x1b28a8734 0x1b28ade10 0x10218b7e0 0x1afbcbe60)
libc++abi.dylib: terminating with uncaught exception of type NSException
* thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGABRT
    frame #0: 0x00000001dbeb798c libsystem_kernel.dylib`__pthread_kill + 8
libsystem_kernel.dylib`__pthread_kill:
->  0x1dbeb798c <+8>:  b.lo   0x1dbeb79a8               ; <+36>
    0x1dbeb7990 <+12>: stp    x29, x30, [sp, #-0x10]!
    0x1dbeb7994 <+16>: mov    x29, sp
    0x1dbeb7998 <+20>: bl     0x1dbe94714               ; cerror_nocancel
Target 0: (Runner) stopped.
Lost connection to device.

I only get these logs when I target a physical device (iPhone) instead of the iOS Simulator.

Sorry if my issue looks a bit odd; this is the first time I have reported an issue on GitHub.
