rxlabz / speech_recognition

A Flutter plugin to use speech recognition on iOS & Android (Swift/Java)

Home Page: https://pub.dartlang.org/packages/speech_recognition

License: Other

Java 27.57% Ruby 7.67% Swift 28.61% Objective-C 1.86% Dart 34.30%
flutter plugin swift ios javascript android speech-recognition

speech_recognition's Introduction

⚠️ Deprecated


speech_recognition

A Flutter plugin to use speech recognition on iOS 10+ and Android 4.1+.

screenshot

1. Depend on it. Add this to your package's pubspec.yaml file:

dependencies:
  speech_recognition: "^0.3.0"

2. Install it. You can install packages from the command line:

$ flutter packages get

3. Import it. Now, in your Dart code, you can use:

import 'package:speech_recognition/speech_recognition.dart';

Usage

//..
_speech = SpeechRecognition();

// The Flutter app not only calls methods on the host platform,
// it also needs to receive method calls from the host.
_speech.setAvailabilityHandler((bool result) 
  => setState(() => _speechRecognitionAvailable = result));

// handle device current locale detection
_speech.setCurrentLocaleHandler((String locale) =>
 setState(() => _currentLocale = locale));

_speech.setRecognitionStartedHandler(() 
  => setState(() => _isListening = true));

// This handler is called during recognition.
// The iOS API sends intermediate results;
// on my Android device, only the final transcription is received.
_speech.setRecognitionResultHandler((String text) 
  => setState(() => transcription = text));

_speech.setRecognitionCompleteHandler(() 
  => setState(() => _isListening = false));

// 1st launch : speech recognition permission / initialization
_speech
    .activate()
    .then((res) => setState(() => _speechRecognitionAvailable = res));
//..

_speech.listen(locale: _currentLocale).then((result) => print('result: $result'));

// ...

_speech.cancel();

// or

_speech.stop();
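Putting the snippets above together, here is a minimal sketch of a State class wiring up the handlers. The widget and field names (`MyApp`, `_isListening`, `transcription`, ...) are illustrative, not part of the plugin API, and the build method is a placeholder:

```dart
import 'package:flutter/material.dart';
import 'package:speech_recognition/speech_recognition.dart';

// Minimal sketch assembling the README snippets into one State class.
// Field and widget names are illustrative; only the _speech.* calls
// come from the plugin's API as shown above.
class _MyAppState extends State<MyApp> {
  final SpeechRecognition _speech = SpeechRecognition();
  bool _speechRecognitionAvailable = false;
  bool _isListening = false;
  String _currentLocale = 'en_US';
  String transcription = '';

  @override
  void initState() {
    super.initState();
    _speech.setAvailabilityHandler(
        (bool result) => setState(() => _speechRecognitionAvailable = result));
    _speech.setCurrentLocaleHandler(
        (String locale) => setState(() => _currentLocale = locale));
    _speech.setRecognitionStartedHandler(
        () => setState(() => _isListening = true));
    _speech.setRecognitionResultHandler(
        (String text) => setState(() => transcription = text));
    _speech.setRecognitionCompleteHandler(
        () => setState(() => _isListening = false));
    // Requests the speech recognition permission on first launch.
    _speech
        .activate()
        .then((res) => setState(() => _speechRecognitionAvailable = res));
  }

  void start() => _speech
      .listen(locale: _currentLocale)
      .then((result) => print('listen started: $result'));

  @override
  Widget build(BuildContext context) => Text(transcription);
}
```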

Recognition

Permissions

iOS

⚠️ iOS: requires a Swift 4.2 project

In Info.plist, add:

  • Privacy - Microphone Usage Description
  • Privacy - Speech Recognition Usage Description
<key>NSMicrophoneUsageDescription</key>
<string>This application needs to access your microphone</string>
<key>NSSpeechRecognitionUsageDescription</key>
<string>This application needs the speech recognition permission</string>

Android

<uses-permission android:name="android.permission.RECORD_AUDIO" />

Limitation

On iOS, the plugin is configured by default for French, English, Russian, Spanish, and Italian. On Android, without additional language packs installed, it will probably work only with the default device locale.

Troubleshooting

If you get a MissingPluginException, try running `flutter build apk` on Android or `flutter build ios` on iOS.

Getting Started

For help getting started with Flutter, view our online documentation.

For help on editing plugin code, view the documentation.

speech_recognition's People

Contributors

derekbekoe, korkies22, moejay, rxlabz, utkarsh867

speech_recognition's Issues

Launching lib/main.dart on SM G935F in debug mode has an error.

Launching lib/main.dart on SM G935F in debug mode...
Initializing gradle...
Resolving dependencies...
Running 'gradlew assembleDebug'...
Built build/app/outputs/apk/debug/app-debug.apk (31.3MB).
Installing build/app/outputs/apk/app.apk...
I/FlutterActivityDelegate(11364): onResume setting current activity to this
I/flutter (11364): _MyAppState.activateSpeechRecognizer...
I/flutter (11364): _platformCallHandler call speech.onCurrentLocale tr_TR
D/libGLESv2(11364): STS_GLApi : DTS is not allowed for Package : com.yourcompany.speechcapital
Syncing files to device SM G935F...
D/ViewRootImpl@5417505MainActivity: ViewPostImeInputStage processPointer 0
W/System (11364): ClassLoader referenced unknown path: /system/framework/QPerformance.jar
E/BoostFramework(11364): BoostFramework() : Exception_1 = java.lang.ClassNotFoundException: Didn't find class "com.qualcomm.qti.Performance" on path: DexPathList[[],nativeLibraryDirectories=[/system/lib64, /vendor/lib64]]
V/BoostFramework(11364): BoostFramework() : mPerf = null
D/ViewRootImpl@5417505MainActivity: ViewPostImeInputStage processPointer 1
I/flutter (11364): _MyAppState.start => result true
D/SpeechRecognitionPlugin(11364): onError : 9
I/flutter (11364): _platformCallHandler call speech.onSpeechAvailability false
I/flutter (11364): _platformCallHandler call speech.onError 9
I/flutter (11364): Unknowm method speech.onError
V/InputMethodManager(11364): Starting input: tba=android.view.inputmethod.EditorInfo@6c4a144 nm : com.yourcompany.speechcapital ic=null
I/InputMethodManager(11364): [IMM] startInputInner - mService.startInputOrWindowGainedFocus
D/InputTransport(11364): Input channel constructed: fd=101
D/InputTransport(11364): Input channel destroyed: fd=96
D/ViewRootImpl@5417505MainActivity: MSG_WINDOW_FOCUS_CHANGED 0

Flutter Android PIXEL Emulator - _platformCallHandler call speech.onError 2

Hi,

First of all, thanks for the utility. It works perfectly on iOS.
I am having issues with the Android emulator (android-x86). When I start the emulator and then run `flutter run`, I don't see any errors. It loads the speech.dart implementation without any errors, and it does get the call speech.onSpeechAvailability true.

Now when I press Listen, it throws an error:

D/SpeechRecognitionPlugin(10082): onError : 2
I/flutter (10082): _platformCallHandler call speech.onSpeechAvailability false
I/flutter (10082): _platformCallHandler call speech.onError 2

Here is the full trace from `flutter run`:

Launching lib/main.dart on Android SDK built for x86 in debug mode...
registerResGeneratingTask is deprecated, use registerGeneratedFolders(FileCollection)
registerResGeneratingTask is deprecated, use registerGeneratedFolders(FileCollection)
registerResGeneratingTask is deprecated, use registerGeneratedFolders(FileCollection)
Built build/app/outputs/apk/debug/app-debug.apk.
I/FlutterActivityDelegate(10082): onResume setting current activity to this
I/Choreographer(10082): Skipped 56 frames! The application may be doing too much work on its main thread.
D/EGL_emulation(10082): eglMakeCurrent: 0xe2885480: ver 3 0 (tinfo 0xe2883440)
I/OpenGLRenderer(10082): Davey! duration=1230ms; Flags=1, IntendedVsync=2987461107766, Vsync=2988394441062, OldestInputEvent=9223372036854775807, NewestInputEvent=0, HandleInputStart=2988406485133, AnimationStart=2988406770133, PerformTraversalsStart=2988406789133, DrawStart=2988520121133, SyncQueued=2988522026133, SyncStart=2988526756133, IssueDrawCommandsStart=2988526931133, SwapBuffers=2988578172133, FrameCompleted=2988696000133, DequeueBufferDuration=38146000, QueueBufferDuration=10954000,
D/ (10082): HostConnection::get() New Host Connection established 0xe2899940, tid 10109
D/EGL_emulation(10082): eglMakeCurrent: 0xceab0960: ver 3 0 (tinfo 0xe2883330)
I/flutter (10082): _SpeechBotState.activateSpeechRecognizer...
D/SpeechRecognitionPlugin(10082): Current Locale : en_US
I/flutter (10082): _platformCallHandler call speech.onCurrentLocale en_US
I/flutter (10082): Your currentLocale is en_US
I/flutter (10082): _SpeechBotState.start => result true
D/SpeechRecognitionPlugin(10082): onRmsChanged : -2.12
D/SpeechRecognitionPlugin(10082): onRmsChanged : -2.12
D/SpeechRecognitionPlugin(10082): onReadyForSpeech
I/flutter (10082): _platformCallHandler call speech.onSpeechAvailability true
D/SpeechRecognitionPlugin(10082): onRmsChanged : -2.12
D/SpeechRecognitionPlugin(10082): onRmsChanged : -2.0
D/SpeechRecognitionPlugin(10082): onRmsChanged : -2.0
D/SpeechRecognitionPlugin(10082): onRmsChanged : -2.0
D/SpeechRecognitionPlugin(10082): onRmsChanged : -2.0
D/SpeechRecognitionPlugin(10082): onRmsChanged : -2.0
D/SpeechRecognitionPlugin(10082): onRmsChanged : -2.0
D/SpeechRecognitionPlugin(10082): onRmsChanged : -2.12
D/SpeechRecognitionPlugin(10082): onError : 2
I/flutter (10082): _platformCallHandler call speech.onSpeechAvailability false
I/flutter (10082): _platformCallHandler call speech.onError 2
I/flutter (10082): Unknowm method speech.onError
Application finished. (<-- I stopped it)

in AndroidManifest.xml I have included

I have also opened the app's settings in Android settings and granted the app permission to use the camera, speaker, and microphone.

What does speech.onError 2 mean, and how can I fix it in the emulator?
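For reference, the numeric codes in speech.onError appear to come straight from Android's android.speech.SpeechRecognizer error constants, which the plugin's Java side forwards verbatim. A Dart-side lookup table (values copied from the Android SDK documentation):

```dart
// Android SpeechRecognizer error constants (values from the Android SDK
// docs). The plugin seems to forward them unchanged as speech.onError <code>.
const Map<int, String> androidSpeechErrors = {
  1: 'ERROR_NETWORK_TIMEOUT',
  2: 'ERROR_NETWORK',
  3: 'ERROR_AUDIO',
  4: 'ERROR_SERVER',
  5: 'ERROR_CLIENT',
  6: 'ERROR_SPEECH_TIMEOUT',
  7: 'ERROR_NO_MATCH',
  8: 'ERROR_RECOGNIZER_BUSY',
  9: 'ERROR_INSUFFICIENT_PERMISSIONS',
};

void main() {
  // Error 2 is ERROR_NETWORK; emulators frequently lack a working
  // network path to the on-device Google recognizer service.
  print(androidSpeechErrors[2]);
}
```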

Thanks

iOS Build Fail

Launching lib/main.dart on iPhone Xʀ in debug mode...
Running Xcode build...
Xcode build done.                                           13.3s
Failed to build iOS app
Error output from Xcode build:
↳
    ** BUILD FAILED **


Xcode's output:
↳
    /Users/pathfinder/labs/development/flutter/.pub-cache/hosted/pub.dartlang.org/speech_recognition-0.3.0+1/ios/Classes/SwiftSpeechRecognitionPlugin.swift:111:34: error:
    type 'AVAudioSession.Category' (aka 'NSString') has no member 'record'
        try audioSession.setCategory(AVAudioSession.Category.record, mode: .default)
                                     ^~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~
    /Users/pathfinder/labs/development/flutter/.pub-cache/hosted/pub.dartlang.org/speech_recognition-0.3.0+1/ios/Classes/SwiftSpeechRecognitionPlugin.swift:112:30: error:
    type 'AVAudioSession.Mode' (aka 'NSString') has no member 'measurement'
        try audioSession.setMode(AVAudioSession.Mode.measurement)
                                 ^~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~
    /Users/pathfinder/labs/development/flutter/.pub-cache/hosted/pub.dartlang.org/speech_recognition-0.3.0+1/ios/Classes/SwiftSpeechRecognitionPlugin.swift:186:9: error:
    value of type 'AVAudioSession.Category' (aka 'NSString') has no member 'rawValue'
            return input.rawValue
                   ^~~~~ ~~~~~~~~

Could not build the application for the simulator.
Error launching application on iPhone Xʀ.

Add support to dart sdk version above 2.0.0

I am getting this error while running `flutter packages get`:

Running "flutter packages get" in App...
The current Dart SDK version is 2.1.0-dev.0.0.flutter-be6309690f.

Because App depends on speech_recognition >=0.2.0+1 which requires SDK version <2.0.0, version solving failed.

Can you please add support for Dart 2.1?

Stop listening when voice end

Hi, on Android, when listening has started and the user speaks and then stops speaking, the application stops listening to them. On iOS this doesn't work: the application keeps listening the whole time until the user taps the stop button.
Is it possible to stop listening when the user has finished talking?

iOS Build Fail

`Error output from Xcode build:

** BUILD FAILED **

Xcode's output:

=== BUILD TARGET speech_recognition OF PROJECT Pods WITH CONFIGURATION Debug ===
:1:9: note: in file included from :1:
#import "Headers/speech_recognition-umbrella.h"
^
/Users/bogdandinga/Work/hacktm-assets/test_app/ios/Pods/Target Support Files/speech_recognition/speech_recognition-umbrella.h:13:9: error: include of non-modular header inside framework module 'speech_recognition': '/Users/bogdandinga/Work/flutter/flutter/.pub-cache/hosted/pub.dartlang.org/speech_recognition-0.2.0+1/ios/Classes/SpeechRecognitionPlugin.h'
#import "SpeechRecognitionPlugin.h"
^
:0: error: could not build Objective-C module 'speech_recognition'
Could not build the application for the simulator.
Error launching application on iPhone 8.`

Activate, cancel stop lifecycle

First of all, Thanks very much for doing this work and helping others.

I am trying to understand the "optimal" use of the library. In my app, there really is only one screen, and as long as my app is open, there is a button, ready for speech recognition.

When is it best to register callback handlers and then call activate()? On app startup, or when the user presses the UI button for speech recognition?

What is the difference between cancel and stop?

If I call stop or cancel, will I have to call activate() again, or just call listen() again?

thanks for the help.
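A sketch of the lifecycle implied by the README, under two assumptions that should be verified against the plugin source: activate() only needs to run once per app session, and stop() ends a session normally (letting the completion handler fire) while cancel() aborts the in-flight recognition:

```dart
import 'package:speech_recognition/speech_recognition.dart';

// Hypothetical lifecycle sketch; method names are from the README,
// the ordering and semantics are assumptions to verify.
final SpeechRecognition speech = SpeechRecognition();

// 1. Once per app run (e.g. in initState): register handlers, then
//    activate() to request permissions and report availability.
Future<void> initSpeech() async {
  speech.setRecognitionResultHandler((String text) => print(text));
  final available = await speech.activate();
  print('available: $available');
}

// 2. Each time the mic button is pressed: listen() starts a session.
//    No re-activation should be needed between sessions.
void onMicPressed(String locale) =>
    speech.listen(locale: locale).then((result) => print('started: $result'));

// 3. Ending a session: stop() ends listening normally;
//    cancel() aborts and discards the in-flight recognition.
void onStopPressed() => speech.stop();
void onCancelPressed() => speech.cancel();
```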

onError: 5

Screenshot_1

Has anyone had this problem, hitting an unknown error 5? It happens when I try to use speech recognition more than a few times.

The method 'setErrorHandler' isn't defined for the class 'SpeechRecognition'.

Hi, I'm using the latest available package:

speech_recognition: ^0.3.0+1

the flutter doctor output:

Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, v1.5.4-hotfix.2, on Linux, locale en_US.UTF-8)
 
[✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
[✓] Android Studio (version 3.3)
[✓] VS Code (version 1.31.1)
[!] Proxy Configuration
    ! NO_PROXY does not contain 127.0.0.1
[✓] Connected device (1 available)

! Doctor found issues in 1 category.

Trying out the basic example app:

  void activateSpeechRecognizer() {
    print('_MyAppState.activateSpeechRecognizer... ');
    _speech = new SpeechRecognition();
    _speech.setAvailabilityHandler(onSpeechAvailability);
    _speech.setCurrentLocaleHandler(onCurrentLocale);
    _speech.setRecognitionStartedHandler(onRecognitionStarted);
    _speech.setRecognitionResultHandler(onRecognitionResult);
    _speech.setRecognitionCompleteHandler(onRecognitionComplete);
    _speech.setErrorHandler(errorHandler);
    _speech
        .activate()
        .then((res) => setState(() => _speechRecognitionAvailable = res));
}

and getting this message:

The method 'setErrorHandler' isn't defined for the class 'SpeechRecognition'.

There is also a suspicious error from my previous run (`flutter run` output):

I/flutter (19599): Unknowm method speech.onError 

Dart SDK <2.0.0 required while running packages get (pub get failed (1))

D:\Flutter\Sdk\flutter\bin\flutter.bat --no-color packages upgrade
Running "flutter packages upgrade" in music_player...
The current Dart SDK version is 2.1.0-dev.3.1.flutter-760a9690c2.

Because music_player depends on speech_recognition any which requires SDK version >=1.23.0 <2.0.0, version solving failed.
pub upgrade failed (1)

stop when voice ends.

Hi there,

I want to press a listen button and have it listen to me for a few seconds. When I stop speaking, it should end listening and convert the speech to text. How do we do that?

Example: when I say "what is my balance", it stops, converts the speech to text, searches the given list, finds the matching word "balance", and opens the Account or Balance page.

Could you add the onRmsChanged result to this plugin? Thanks

When I use this plugin, I noticed that the Run console displays onRmsChanged results like those below:

D/SpeechRecognitionPlugin(28376): onRmsChanged : 8.799999
D/SpeechRecognitionPlugin(28376): onRmsChanged : 1.6000001
D/SpeechRecognitionPlugin(28376): onRmsChanged : -2.0
D/SpeechRecognitionPlugin(28376): onRmsChanged : 2.8000002
D/SpeechRecognitionPlugin(28376): onRmsChanged : 2.8000002
D/SpeechRecognitionPlugin(28376): onRmsChanged : 6.3999996
D/SpeechRecognitionPlugin(28376): onRmsChanged : 6.3999996
D/SpeechRecognitionPlugin(28376): onRmsChanged : 5.2000003
D/SpeechRecognitionPlugin(28376): onRmsChanged : 4.0
D/SpeechRecognitionPlugin(28376): onRmsChanged : 2.8000002
D/SpeechRecognitionPlugin(28376): onRmsChanged : 7.6000004
D/SpeechRecognitionPlugin(28376): onRmsChanged : 7.6000004

Could you add functions to retrieve these results? The RMS dB values are really useful. Thanks.

Volume decreases after invoking speech recognition on iOS.

Hi team,
I'm using speech_recognition 0.3.0+1 with Flutter. I've tested it with the iOS simulator, and it worked properly.
But when I build and test on a real device (iPhone 7 Plus), there is a problem: the first time I start the app, the volume works as normal, but whenever I invoke speech recognition, the overall volume of the app decreases.
I tried to fix it by following a guide, but it still doesn't work.

Please help me find a solution for this. Thanks, team.

Unable to determine Swift version for the following pods when install speech_recognition

I got this error when trying to integrate speech_recognition into my app and run it on iOS.

Installing speech_recognition (0.3.0)
[!] Unable to determine Swift version for the following pods:

  • speech_recognition does not specify a Swift version and none of the targets (Runner) integrating it have the SWIFT_VERSION attribute set. Please contact the author or set the SWIFT_VERSION attribute in at least one of the targets that integrate this pod.

I thought the error was clear enough, but how would I fix it?
Thank you.
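A commonly suggested workaround for pods that don't declare a Swift version is to set SWIFT_VERSION yourself in the Podfile's post_install hook. This is a sketch of that standard CocoaPods pattern, not a fix confirmed by the plugin author; adjust the version number to your project:

```ruby
# ios/Podfile — hypothetical workaround: force a Swift version
# onto every pod target, including ones that don't declare one.
post_install do |installer|
  installer.pods_project.targets.each do |target|
    target.build_configurations.each do |config|
      config.build_settings['SWIFT_VERSION'] = '4.2'
    end
  end
end
```

Alternatively, the error message itself points at the other option: set the SWIFT_VERSION attribute on the Runner target in Xcode.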

Error on Android Pie

I tried to run the example on Android Pie (API 28), but when I tapped to record I got the following error:

I/flutter (13784): _platformCallHandler call speech.onSpeechAvailability false
I/flutter (13784): _platformCallHandler call speech.onError 2
I/flutter (13784): Unknowm method speech.onError

iOS: not able to use other locales

How can I use another language on iOS with this plugin? I already changed the locale, but speech recognition doesn't seem to pick up the local language.

onRecognitionComplete() on Android gets called twice

Hi,

I am facing a strange issue on a real Android device (Pixel 2) and on the Android emulator too: my
void onRecognitionComplete() => setState(() async { handler gets called twice.

Here's the dump of the log from Android Studio...

09-06 17:09:31.417 18931-18975/? I/flutter: Your currentLocale is en_AU
09-06 17:09:36.721 18931-18975/? I/flutter: _SpeechBotState.start => result true
09-06 17:09:39.595 18931-18975/? I/flutter: stop() isListening = true
09-06 17:09:39.629 18931-18975/? I/flutter: onRecongintionComplete the transcript is hello
inside _api.dioPost()
inside getIdToken
09-06 17:09:39.637 18931-18975/? I/flutter: inside getIdTokenFromUser
09-06 17:09:39.639 18931-18975/? I/flutter: params passed to post are =
{uid: EJDgM5Kd0EO75R7XW9EPRyD6dZR2, text: hello}
09-06 17:09:39.821 18931-18975/? I/flutter: onRecongintionComplete the transcript is hello
09-06 17:09:39.823 18931-18975/? I/flutter: inside getIdTokenFromUser
09-06 17:09:39.825 18931-18975/? I/flutter: params passed to post are =
{uid: EJDgM5Kd0EO75R7XW9EPRyD6dZR2, text: hello}
09-06 17:24:59.118 18931-18975/? I/flutter: {status: success, successFlag: true, data: Greetings!}
09-06 17:24:59.121 18931-18975/? I/flutter: responsePost obtained ....
09-06 17:24:59.122 18931-18975/? I/flutter: {status: success, successFlag: true, data: Hi!}
09-06 17:25:06.597 18931-18975/? I/flutter: _SpeechBotState.start => result true
09-06 17:25:09.309 18931-18975/? I/flutter: onRecongintionComplete the transcript is I have a problem
09-06 17:25:09.318 18931-18975/? I/flutter: inside getIdTokenFromUser
09-06 17:25:09.323 18931-18975/? I/flutter: params passed to post are =
09-06 17:25:09.324 18931-18975/? I/flutter: {uid: EJDgM5Kd0EO75R7XW9EPRyD6dZR2, text: I have a problem}
09-06 17:25:09.528 18931-18975/? I/flutter: onRecongintionComplete the transcript is I have a problem
09-06 17:25:09.545 18931-18975/? I/flutter: inside getIdTokenFromUser
09-06 17:25:09.551 18931-18975/? I/flutter: params passed to post are =
{uid: EJDgM5Kd0EO75R7XW9EPRyD6dZR2, text: I have a problem}

Why does it happen twice on Android while it works fine on iOS? Your help will be very much appreciated.

To assess what is going wrong, I'm sharing the code below.

@override
initState() {
  super.initState();
  initPlatformState();
  checkPermission();
}

@override
dispose() {
  stop();
  _cancelRecognitionHandler();
  super.dispose();
}

new Expanded(
  child: new Container(
    alignment: Alignment.bottomCenter,
    margin: const EdgeInsets.only(bottom: 10.0),
    child: new FloatingActionButton(
      backgroundColor: floatBttnColor,
      child: new Icon(Icons.mic),
      //navigate: () => navigate(''),
      onPressed: _speechRecognitionAvailable && !_isListening
          ? () {
              if (this.mounted) {
                setState(() {
                  floatBttnColor = Colors.purple;
                });
              }
              start();
            }
          : () {
              stop();
              if (this.mounted) {
                setState(() {
                  floatBttnColor = Colors.green;
                });
              }
            },
    ),
  ),
),

and outside the build widget function in the class I have:

void start() => _speech
    .listen(locale: _currentLocale)
    .then((result) => print('_SpeechBotState.start => result $result'));

void cancel() => _speech.cancel().then((result) => setState(() {
      _isListening = result;
      print('_speech.cancel result is $result');
    }));

Future stop() => _speech.stop().then((result) => setState(() async {
      _isListening = result;
      print('stop() isListening = $_isListening');
    }));

void onSpeechAvailability(bool result) =>
    setState(() => _speechRecognitionAvailable = result);

void onCurrentLocale(String locale) => setState(() {
      _currentLocale = locale;
      print('Your currentLocale is $_currentLocale');
    });

void onRecognitionStarted() => setState(() => _isListening = true);

void onRecognitionResult(String text) => setState(() {
      transcription = text;
      //print('your intermediate transcription is $transcription');
    });

void onRecognitionComplete() => setState(() async {
      _isListening = false;
      //await stop();
      print('onRecongintionComplete the transcript is $transcription');
      //Just for testing... once chatbot integrated with backend TTS will be inserting card in that function
      ChatCardData card = new ChatCardData(
          id: id++,
          hour: '${_date.hour}',
          meridian: '${_date.minute}',
          title: transcription,
          isCustomer: true,
          source:
              '${DateName.month[(_date.month) - 1]} ${_date.day}, ${_date.year}',
          text: true,
          labelColor: Colors.green);
      if (this.mounted) {
        setState(() {
          _load = true;
          _list.insert(0, card);
        });
      }
      //Tts.speak(transcription);
      dynamic body = {'uid': UserAuth.userModel.uid, 'text': transcription};
      //send it to chat engine
      try {
        Response responsePost =
            await _api.dioPost(APIPATH.XXX, APIPATH.YYY, body);
        print('responsePost obtained ....');
        print(responsePost.data);
        //print(responsePost.headers);
        //print(responsePost.request);
        print(responsePost.statusCode);
        ChatCardData card = new ChatCardData(
            id: id++,
            hour: '${_date.hour}',
            meridian: '${_date.minute}',
            title: responsePost.data['data'],
            isCustomer: false,
            source:
                '${DateName.month[(_date.month) - 1]} ${_date.day}, ${_date.year}',
            text: true,
            labelColor: Colors.green);
        Tts.speak(responsePost.data['data']);
        if (this.mounted) {
          setState(() {
            _load = false;
            _list.insert(0, card);
          });
        }
      } on DioError catch (e) {
        if (this.mounted) {
          setState(() {
            _load = false;
          });
        }
        // The request was made and the server responded with a status code
        // that falls out of the range of 2xx and is also not 304.
        print('post returned error!');
        print('Error stack = $e');
        if (e.response != null) {
          print('e.response not null');
          print(e.response.data);
          print('statusCode of error = ');
          print(e.response.statusCode);
          //print(e.response.headers);
          //int statusCode = e.response.statusCode;
          final ThemeData theme = Theme.of(context);
          final TextStyle dialogTextStyle = theme.textTheme.subhead
              .copyWith(color: theme.textTheme.caption.color);
          await showDialog(
              barrierDismissible: false,
              context: this.context,
              builder: (BuildContext context) => new AlertDialog(
                      title: new Text('Error'),
                      content: new Text(e.response.data['data'],
                          style: dialogTextStyle),
                      actions: <Widget>[
                        new FlatButton(
                            child: const Text('Dismiss'),
                            onPressed: () {
                              Navigator.pop(context, true);
                            })
                      ]));
        } else {
          // Something happened in setting up or sending the request that triggered an Error
          print('e.response is null');
          print(e.response);
          print(e.message);

          final ThemeData theme = Theme.of(context);
          final TextStyle dialogTextStyle = theme.textTheme.subhead
              .copyWith(color: theme.textTheme.caption.color);
          await showDialog(
              barrierDismissible: false,
              context: this.context,
              builder: (BuildContext context) => new AlertDialog(
                      title: new Text('Error'),
                      content: new Text(e.message, style: dialogTextStyle),
                      actions: <Widget>[
                        new FlatButton(
                            child: const Text('Dismiss'),
                            onPressed: () {
                              Navigator.pop(context, true);
                            })
                      ]));
        }
      }
    });

}
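One defensive pattern worth trying for the double-callback described in this issue (a sketch, not a confirmed fix; the duplicate call appears to come from the platform side) is to guard the completion handler so the expensive work runs at most once per listening session:

```dart
// Hypothetical guard against a completion handler firing twice
// for the same listening session.
bool _handledCompletion = false;

void onRecognitionStarted() {
  _handledCompletion = false; // reset at the start of each session
  setState(() => _isListening = true);
}

void onRecognitionComplete() {
  if (_handledCompletion) return; // ignore the duplicate callback
  _handledCompletion = true;
  setState(() => _isListening = false);
  // ...process the transcription exactly once...
}
```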

onError handler (android)

Hi, we're seeing

I/flutter ( 7919): _platformCallHandler call speech.onSpeechAvailability false
I/flutter ( 7919): _platformCallHandler call speech.onError 7
I/flutter ( 7919): Unknowm method speech.onError

on Android when speech recognition is running and the OS needs to do something (say, a text comes in). I see the onError call in the Java file but I don't see it at the Dart level. Maybe I'm missing something?

Thanks for the great work on this!

iOS Code Error

On building for iOS, the code is throwing an error on line 115 in the file "SwiftSpeechRecognitionPlugin.swift":

guard let inputNode = audioEngine.inputNode else {

** BUILD FAILED **
Xcode's output:

=== BUILD TARGET speech_recognition OF PROJECT Pods WITH CONFIGURATION Debug ===
/<flutter-lib>/.pub-cache/hosted/pub.dartlang.org/speech_recognition-0.2.0+1/ios/Classes/SwiftSpeechRecognitionPlugin.swift:115:11: error: initializer for conditional binding must have Optional type, not 'AVAudioInputNode'
guard let inputNode = audioEngine.inputNode else {
^

The cause seems to be that audioEngine is not an Optional type, and according to the AVAudioEngine documentation neither is its inputNode member; without an Optional value, guard throws an error.

XCode10 required => Xcode Build Fails (AVAudioSession?)

I'm unable to run an iOS build. The following errors happen when running against iOS 12. This is a simulator (perhaps a reason?); I have not tried on a device. Please let me know if you need more info.

Xcode 10.1
Project is Swift (language version 4)

flutter run
Launching lib/main.dart on iPhone XR in debug mode...
Running pod install...                                       1.2s
Starting Xcode build...                                          
Xcode build done.                                            1.6s
Failed to build iOS app
Error output from Xcode build:
↳
    ** BUILD FAILED **


Xcode's output:
↳
    === BUILD TARGET path_provider OF PROJECT Pods WITH CONFIGURATION Debug ===
    /Users/me/Documents/flutter/.pub-cache/hosted/pub.dartlang.org/speech_recognition-0.3.0/ios/Classes/SwiftSpeechRecognitionPlugin.swift:109:34: error: type
    'AVAudioSession.Category' (aka 'NSString') has no member 'record'
        try audioSession.setCategory(AVAudioSession.Category.record, mode: .default)
                                     ^~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~
    /Users/me/Documents/flutter/.pub-cache/hosted/pub.dartlang.org/speech_recognition-0.3.0/ios/Classes/SwiftSpeechRecognitionPlugin.swift:110:30: error: type
    'AVAudioSession.Mode' (aka 'NSString') has no member 'measurement'
        try audioSession.setMode(AVAudioSession.Mode.measurement)
                                 ^~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~
    /Users/me/Documents/flutter/.pub-cache/hosted/pub.dartlang.org/speech_recognition-0.3.0/ios/Classes/SwiftSpeechRecognitionPlugin.swift:188:9: error: value of
    type 'AVAudioSession.Category' (aka 'NSString') has no member 'rawValue'
            return input.rawValue
                   ^~~~~ ~~~~~~~~

Could not build the application for the simulator.
Error launching application on iPhone XR.

 $ flutter doctor
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel beta, v0.11.9, on Mac OS X 10.13.6 17G65, locale en-US)
[✓] Android toolchain - develop for Android devices (Android SDK 28.0.3)
[✓] iOS toolchain - develop for iOS devices (Xcode 10.1)
[✓] Android Studio (version 3.2)
[✓] VS Code (version 1.29.1)
[✓] Connected device (1 available)

• No issues found!

Speech recognition does not run

In my Android Studio emulator the application hangs.
I saw that the emulator has no language installed.
Can anyone help me?

Swift version couldn't be found

- `speech_recognition` does not specify a Swift version and none of the targets (`Runner`) integrating it has the `SWIFT_VERSION` attribute set. Please contact the author or set the `SWIFT_VERSION` attribute in at least one of the targets that integrate this pod.

After listening stops, I am getting "app behavior is wrong"

E/ ( 2000): [Mali]: gles_texture_bind_texture: Rendering feedback loop detected (texture=10), app behavior is wrong
E/ ( 2000): [Mali]: gles_texture_bind_texture: Rendering feedback loop detected (texture=10), app behavior is wrong
E/ ( 2000): [Mali]: gles_texture_bind_texture: Rendering feedback loop detected (texture=8), app behavior is wrong
E/ ( 2000): [Mali]: gles_texture_bind_texture: Rendering feedback loop detected (texture=10), app behavior is wrong

_platformCallHandler call speech.onError 9 issue

I get a speech error when pressing the Listen button: "_platformCallHandler call speech.onError 9".

I/flutter ( 2933): _platformCallHandler call speech.onSpeechAvailability false
I/flutter ( 2933): _platformCallHandler call speech.onError 9
I/flutter ( 2933): Unknowm method speech.onError

IOS onRecognitionComplete() not called at all

This might be an error on my part, but I implemented the code as suggested in the examples; it runs perfectly on Android, but on iOS onRecognitionComplete() is not called.
Great plugin though; it works very well.

Plugin throws an error and does nothing when attempting to start recognition

I am using a version of the plugin up to date with master on this Git repo. I also get the warning that another reporter mentioned regarding the use of deprecated libraries when I build.

I am testing on the Pixel 3 XL - currently this issue prevents any usage of the package.

```
E/MethodChannel#speech_recognition(10569): Failed to handle method call
E/MethodChannel#speech_recognition(10569): java.lang.NullPointerException: Attempt to invoke virtual method 'java.lang.String java.lang.Object.toString()' on a null object reference
E/MethodChannel#speech_recognition(10569):      at bz.rxla.flutter.speechrecognition.SpeechRecognitionPlugin.onMethodCall(SpeechRecognitionPlugin.java:67)
E/MethodChannel#speech_recognition(10569):      at io.flutter.plugin.common.MethodChannel$IncomingMethodCallHandler.onMessage(MethodChannel.java:201)
E/MethodChannel#speech_recognition(10569):      at io.flutter.embedding.engine.dart.DartMessenger.handleMessageFromDart(DartMessenger.java:88)
E/MethodChannel#speech_recognition(10569):      at io.flutter.embedding.engine.FlutterJNI.handlePlatformMessage(FlutterJNI.java:219)
E/MethodChannel#speech_recognition(10569):      at android.os.MessageQueue.nativePollOnce(Native Method)
E/MethodChannel#speech_recognition(10569):      at android.os.MessageQueue.next(MessageQueue.java:326)
E/MethodChannel#speech_recognition(10569):      at android.os.Looper.loop(Looper.java:160)
E/MethodChannel#speech_recognition(10569):      at android.app.ActivityThread.main(ActivityThread.java:6718)
E/MethodChannel#speech_recognition(10569):      at java.lang.reflect.Method.invoke(Native Method)
E/MethodChannel#speech_recognition(10569):      at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:493)
E/MethodChannel#speech_recognition(10569):      at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:858)
E/flutter (10569): [ERROR:flutter/lib/ui/ui_dart_state.cc(148)] Unhandled Exception: PlatformException(error, Attempt to invoke virtual method 'java.lang.String java.lang.Object.toString()' on a null object reference, null)
E/flutter (10569): #0      StandardMethodCodec.decodeEnvelope (package:flutter/src/services/message_codecs.dart:564:7)
E/flutter (10569): #1      MethodChannel.invokeMethod (package:flutter/src/services/platform_channel.dart:302:33)
E/flutter (10569): <asynchronous suspension>
E/flutter (10569): #2      SpeechRecognition.listen (package:speech_recognition/speech_recognition.dart:38:16)
E/flutter (10569): #3      _DocumentCreationPageState._mySteps.<anonymous closure> (package:hepian_mobile/pages/document_creation_page.dart:183:26)
E/flutter (10569): #4      _InkResponseState._handleTap (package:flutter/src/material/ink_well.dart:511:14)
E/flutter (10569): #5      _InkResponseState.build.<anonymous closure> (package:flutter/src/material/ink_well.dart:566:30)
E/flutter (10569): #6      GestureRecognizer.invokeCallback (package:flutter/src/gestures/recognizer.dart:166:24)
E/flutter (10569): #7      TapGestureRecognizer._checkUp (package:flutter/src/gestures/tap.dart:240:9)
E/flutter (10569): #8      TapGestureRecognizer.acceptGesture (package:flutter/src/gestures/tap.dart:211:7)
E/flutter (10569): #9      GestureArenaManager.sweep (package:flutter/src/gestures/arena.dart:156:27)
E/flutter (10569): #10     _WidgetsFlutterBinding&BindingBase&GestureBinding.handleEvent (package:flutter/src/gestures/binding.dart:225:20)
E/flutter (10569): #11     _WidgetsFlutterBinding&BindingBase&GestureBinding.dispatchEvent (package:flutter/src/gestures/binding.dart:199:22)
E/flutter (10569): #12     _WidgetsFlutterBinding&BindingBase&GestureBinding._handlePointerEvent (package:flutter/src/gestures/binding.dart:156:7)
E/flutter (10569): #13     _WidgetsFlutterBinding&BindingBase&GestureBinding._flushPointerEventQueue (package:flutter/src/gestures/binding.dart:102:7)
E/flutter (10569): #14     _WidgetsFlutterBinding&BindingBase&GestureBinding._handlePointerDataPacket (package:flutter/src/gestures/binding.dart:86:7)
E/flutter (10569): #15     _rootRunUnary (dart:async/zone.dart:1136:13)
E/flutter (10569): #16     _CustomZone.runUnary (dart:async/zone.dart:1029:19)
E/flutter (10569): #17     _CustomZone.runUnaryGuarded (dart:async/zone.dart:931:7)
E/flutter (10569): #18     _invoke1 (dart:ui/hooks.dart:233:10)
E/flutter (10569): #19     _dispatchPointerDataPacket (dart:ui/hooks.dart:154:5)
```
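The NullPointerException occurs at SpeechRecognitionPlugin.java:67, which suggests the Android side is calling toString() on a null method-call argument. A possible workaround, not a confirmed fix: always pass an explicit locale to listen() so the argument is never null (the locale string here is an assumption):

```dart
// Hypothetical workaround: pass an explicit locale so the Android
// plugin never receives a null argument to call toString() on.
_speech.activate().then((available) {
  if (available) {
    _speech
        .listen(locale: 'en_US')
        .then((result) => print('result: $result'));
  }
});
```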

onError : 7 issue

An error ("onError : 7") occurs when the cancel button is clicked immediately after the start button is pressed, with nothing spoken in between. (On Android, SpeechRecognizer error 7 is ERROR_NO_MATCH.)

SpeechRecognitionPlugin.java uses or overrides a deprecated API.

When I run the app, I get this note:

```
Note: C:\src\flutter.pub-cache\hosted\pub.dartlang.org\speech_recognition-0.3.0+1\android\src\main\java\bz\rxla\flutter\speechrecognition\SpeechRecognitionPlugin.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
```

continuous speech

Hi,

Is there a way to use continuous speech recognition with this plugin? Say for 10 or 20 minutes, or even more.

Thanks.

https://stackoverflow.com/questions/57853127/speech-recognition-error-on-ios-thread-1-exc-bad-access-code-2-address-0x16f4

I installed this speech recognition package in a Flutter project. It works fine on Android, but on iOS the app gets stuck at the splash screen.

In Xcode I get this error: Thread 1: EXC_BAD_ACCESS (code=2, address=0x16f21ffe0). [Screenshot 2019-09-09 at 13:18:29]

The error happens when I add the speech_recognition package to pubspec.yaml, run packages get and pod install, then run the project. After removing the package it runs again.

I also tried flutter clean and, in Xcode, Product -> Clean Build Folder. Still the same error.

Anyone have an idea?

Edit: I also set up a new project and added the package to its pubspec.yaml file, and I only get this warning:
warning: could not execute support code to read Objective-C class data in the process. This may reduce the quality of type information available.
and the screen is blank white.

Android Speech_recognition not working in China

Steps to reproduce this problem:

  1. While inside China, connect the Android phone to the internet. (iOS not tested)

  2. Run the example app. (CANNOT recognize speech)

  3. Connect to any Hong Kong or USA VPN server (or any other country where Google is not blocked).

  4. Run the example app again. (CAN recognize speech)

  5. Turn off the VPN and run the app again. (CANNOT recognize speech)

Not able to use "de_DE" locale

"en_US" runs as expected. But when I try "de_DE", it falls back to the default (French) locale. Do I have to register "de_DE" somewhere else?

```dart
_speechRecognition.listen(locale: "de_DE").then((result) => setState(() {
      _textController.text = resultText;
      resultText = "";
    }));
```

I am testing on an iPhone 7 with the latest iOS (physical device).

dart(argument_type_not_assignable) on example project

```
lib/main.dart:53:43: Error: The argument type 'void Function()' can't be assigned to the parameter type 'void Function(String)'.
Try changing the type of the parameter, or casting the argument to 'void Function(String)'.
_speech.setRecognitionCompleteHandler(onRecognitionComplete);
```
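Judging from the error message, the complete handler on master now receives the final transcription as a String, so the callback needs a matching signature. A sketch based only on the reported type, not on the plugin source:

```dart
// Give the callback the String parameter the error message asks for.
_speech.setRecognitionCompleteHandler(
    (String text) => setState(() => _isListening = false));
```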

iOS simulator vs iPhone

I was able to implement an app that accepts voice commands. This is an iOS project at this point. I want to keep speech recognition running for extended periods. setRecognitionCompleteHandler seems to get called automatically by iOS every minute or so, so I reinitialize speech recognition from the complete handler. This works well in the iOS simulator. Voice commands are activated by clicking a floating button: I give the voice command, the app does its thing, and it waits for the next voice command. Clicking the floating button again makes it stop listening. Perfect.

On my iPhone 6, I'm able to load and run the app, but it stops working after the first voice command. I click the floating button, give it the command, it does its thing, then nothing. I have to click the floating button again to give another voice command.

Is there some permission or setting I need on the iPhone to get the behavior I see in the iOS simulator? I'm running in debug mode; not sure if that has anything to do with it.

Any help would be appreciated!
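The restart-from-the-complete-handler approach described above can be sketched roughly like this (the _keepListening flag is hypothetical; the handler signature follows the 0.3.0 examples):

```dart
// Restart listening each time the platform ends the session
// (iOS appears to stop recognition after roughly a minute).
_speech.setRecognitionCompleteHandler(() {
  setState(() => _isListening = false);
  if (_keepListening) {
    _speech
        .listen(locale: _currentLocale)
        .then((result) => print('result: $result'));
  }
});
```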

Error on iOS: type 'AVAudioSession.Category' (aka 'NSString') has no member 'record'

On a clean setup I get errors on iOS (the Android application runs normally).
After adding speech_recognition: "^0.3.0" to pubspec.yaml and running on iOS, I get the following error message:

Failed to build iOS app
Error output from Xcode build:

```
** BUILD FAILED **

Xcode's output:

warning: The use of Swift 3 @objc inference in Swift 4 mode is deprecated. Please address deprecated @objc inference warnings, test your code with "Use of deprecated Swift 3 @objc inference" logging enabled, and then disable inference by changing the "Swift 3 @objc Inference" build setting to "Default" for the "Runner" target. (in target 'Runner')
/Users/myuser/flutter/.pub-cache/hosted/pub.dartlang.org/speech_recognition-0.3.0+1/ios/Classes/SwiftSpeechRecognitionPlugin.swift:111:34: error: type 'AVAudioSession.Category' (aka 'NSString') has no member 'record'
    try audioSession.setCategory(AVAudioSession.Category.record, mode: .default)
/Users/myuser/flutter/.pub-cache/hosted/pub.dartlang.org/speech_recognition-0.3.0+1/ios/Classes/SwiftSpeechRecognitionPlugin.swift:112:30: error: type 'AVAudioSession.Mode' (aka 'NSString') has no member 'measurement'
    try audioSession.setMode(AVAudioSession.Mode.measurement)
/Users/myuser/flutter/.pub-cache/hosted/pub.dartlang.org/speech_recognition-0.3.0+1/ios/Classes/SwiftSpeechRecognitionPlugin.swift:113:22: error: 'setActive(_:options:)' has been renamed to 'setActive(_:with:)'
    try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
AVFoundation.AVAudioSession:15:15: note: 'setActive(_:options:)' was introduced in Swift 4.2
    open func setActive(_ active: Bool, options: AVAudioSessionSetActiveOptions = []) throws
/Users/myuser/flutter/.pub-cache/hosted/pub.dartlang.org/speech_recognition-0.3.0+1/ios/Classes/SwiftSpeechRecognitionPlugin.swift:186:9: error: value of type 'AVAudioSession.Category' (aka 'NSString') has no member 'rawValue'
    return input.rawValue
note: Using new build system
note: Planning build
note: Constructing build description
```

Could not build the application for the simulator.
Error launching application on iPhone Xʀ.

Would like to learn more to find a solution.
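These errors usually mean the plugin's Swift source uses the Swift 4.2 AVFoundation API while the project builds against an older Swift/SDK toolchain (where AVAudioSession.Category is still an NSString typealias). Updating Xcode is probably the cleaner fix; as a workaround, one could patch the cached plugin source to use the pre-4.2 string-constant API — a sketch, assuming a Swift 4.0 toolchain:

```swift
import AVFoundation

// Pre-Swift 4.2 equivalents of the failing calls in
// SwiftSpeechRecognitionPlugin.swift (string-constant API).
let audioSession = AVAudioSession.sharedInstance()
try audioSession.setCategory(AVAudioSessionCategoryRecord)
try audioSession.setMode(AVAudioSessionModeMeasurement)
try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
```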

Build Error: can't find speech_recognition-Swift.h

Failed to build iOS app
Error output from Xcode build:

```
** BUILD FAILED **

Xcode's output:

=== BUILD TARGET speech_recognition OF PROJECT Pods WITH CONFIGURATION Debug ===
/Users/abcdefg/Development/flutter/.pub-cache/hosted/pub.dartlang.org/speech_recognition-0.3.0+1/ios/Classes/SpeechRecognitionPlugin.m:2:9: fatal error: 'speech_recognition/speech_recognition-Swift.h' file not found
#import <speech_recognition/speech_recognition-Swift.h>
1 error generated.
```

Could not build the application for the simulator.
Error launching application on iPhone Xʀ.
Exited (sigterm)

Is this an issue with the Flutter plugin or with the Swift/Xcode setup? I tried to find speech_recognition-Swift.h from the root directory, and the find command returned:

```
./Users/abcdefg/Library/Developer/Xcode/DerivedData/Runner-ezxihojhlxwsmcdyavncncoswdha/Build/Intermediates.noindex/Pods.build/Debug-iphonesimulator/speech_recognition.build/Objects-normal/x86_64/speech_recognition-Swift.h
./Users/abcdefg/Library/Developer/Xcode/DerivedData/Runner-ezxihojhlxwsmcdyavncncoswdha/Build/Intermediates.noindex/Pods.build/Debug-iphonesimulator/speech_recognition.build/DerivedSources/speech_recognition-Swift.h
./Users/abcdefg/Library/Developer/Xcode/DerivedData/Runner-ezxihojhlxwsmcdyavncncoswdha/Build/Intermediates.noindex/Pods.build/Debug-iphoneos/speech_recognition.build/Objects-normal/arm64/speech_recognition-Swift.h
./Users/abcdefg/Library/Developer/Xcode/DerivedData/Runner-ezxihojhlxwsmcdyavncncoswdha/Build/Intermediates.noindex/Pods.build/Debug-iphoneos/speech_recognition.build/DerivedSources/speech_recognition-Swift.h
```

I believe I added the correct keys to the Info.plist file. I'd assume that if I got those wrong, it wouldn't affect the build though. Is that a false assumption?

Thanks for any help anyone can provide!
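A common cause of a missing `*-Swift.h` header is a Podfile that doesn't enable frameworks for Swift pods. Whether that applies to this setup is an assumption, but adding `use_frameworks!` to `ios/Podfile` is worth trying:

```ruby
# ios/Podfile (excerpt, hypothetical): enable dynamic frameworks so
# Swift pods such as speech_recognition generate their bridging headers.
target 'Runner' do
  use_frameworks!
  # ... the rest of the Flutter-generated Podfile stays unchanged ...
end
```

After editing the Podfile, re-run `pod install` in the `ios` directory and rebuild.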

speech.onError blocks App from working

```
I/flutter ( 5459): _platformCallHandler call speech.onSpeechAvailability false
I/flutter ( 5459): _platformCallHandler call speech.onError 2
I/flutter ( 5459): Unknowm method speech.onError
```

Android, not able to use other Locales

I've implemented a system in Flutter in which I can choose my language from a menu. The app starts in English by default, but when I try to set it to Spanish or French it doesn't work.

I know the readme.md says that more configuration is needed to make it work. The question is: what do I need to do to make it work in other languages?

[screenshot]

```dart
_speech.listen(locale: widget._language).then((result) {
  print("Listening $result");
  setState(() {
    transcription = result;
  });
});
```

Error 9

I am getting some sort of error ("onError : 9"). [Screenshot: Annotation 2019-05-01 205916]

Can anyone help me? (On Android, SpeechRecognizer error 9 is ERROR_INSUFFICIENT_PERMISSIONS, which points to a missing microphone permission.)

Microphone stays on after stop() and close().

I am building an app that uses both speech recognition and text-to-speech. However, the microphone used for speech recognition stays on even after I stop recognition with stop() or close(), or after it completes on its own.
While the mic is on I cannot play any sounds, so text-to-speech doesn't work after I use speech recognition.

Push new version to pub.dartlang.org

I would like to use the recent major feature updates in a larger project; however, the most recent build available from pub.dartlang.org is from back in November. This prevents us from using features such as error handling, which is a big deal! I would greatly appreciate it if you could update the package listing with a recent build. We really appreciate the work you have done on this library already!

Many thanks,

Ed

Android - error: The argument type '() → void' can't be assigned to the parameter type '(String) → void'.

Hi,
I tried to import the project in Android Studio. The project syncs fine, but then this compiler error is displayed:

```
error: The argument type '() → void' can't be assigned to the parameter type '(String) → void'. (argument_type_not_assignable at [speech_recognition_example] lib\main.dart:53)
```

Compiler message:

```
lib/main.dart:53:43: Error: The argument type 'void Function()' can't be assigned to the parameter type 'void Function(String)'.
Try changing the type of the parameter, or casting the argument to 'void Function(String)'.
_speech.setRecognitionCompleteHandler(onRecognitionComplete);
```
