rxlabz / speech_recognition
A Flutter plugin to use speech recognition on iOS & Android (Swift/Java)
Home Page: https://pub.dartlang.org/packages/speech_recognition
License: Other
Launching lib/main.dart on SM G935F in debug mode...
Initializing gradle...
Resolving dependencies...
Running 'gradlew assembleDebug'...
Built build/app/outputs/apk/debug/app-debug.apk (31.3MB).
Installing build/app/outputs/apk/app.apk...
I/FlutterActivityDelegate(11364): onResume setting current activity to this
I/flutter (11364): _MyAppState.activateSpeechRecognizer...
I/flutter (11364): _platformCallHandler call speech.onCurrentLocale tr_TR
D/libGLESv2(11364): STS_GLApi : DTS is not allowed for Package : com.yourcompany.speechcapital
Syncing files to device SM G935F...
D/ViewRootImpl@5417505MainActivity: ViewPostImeInputStage processPointer 0
W/System (11364): ClassLoader referenced unknown path: /system/framework/QPerformance.jar
E/BoostFramework(11364): BoostFramework() : Exception_1 = java.lang.ClassNotFoundException: Didn't find class "com.qualcomm.qti.Performance" on path: DexPathList[[],nativeLibraryDirectories=[/system/lib64, /vendor/lib64]]
V/BoostFramework(11364): BoostFramework() : mPerf = null
D/ViewRootImpl@5417505MainActivity: ViewPostImeInputStage processPointer 1
I/flutter (11364): _MyAppState.start => result true
D/SpeechRecognitionPlugin(11364): onError : 9
I/flutter (11364): _platformCallHandler call speech.onSpeechAvailability false
I/flutter (11364): _platformCallHandler call speech.onError 9
I/flutter (11364): Unknowm method speech.onError
V/InputMethodManager(11364): Starting input: tba=android.view.inputmethod.EditorInfo@6c4a144 nm : com.yourcompany.speechcapital ic=null
I/InputMethodManager(11364): [IMM] startInputInner - mService.startInputOrWindowGainedFocus
D/InputTransport(11364): Input channel constructed: fd=101
D/InputTransport(11364): Input channel destroyed: fd=96
D/ViewRootImpl@5417505MainActivity: MSG_WINDOW_FOCUS_CHANGED 0
Hi there,
I want to press a listen button and have the app start listening to me for a few seconds. When I stop speaking, it should end listening and translate the speech to text. How do we do that?
Example: when I say "what is my balance", it stops, converts the speech to text, searches a given list, finds the matching word "balance", and opens the Account or Balance page.
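One way to do the keyword routing described above is a minimal sketch like the following; the page-opening functions and the command map are illustrative names, not part of the plugin API:

```dart
// Sketch: route a final transcription to a page by keyword matching.
// `openBalancePage` / `openAccountPage` and the `commands` map are
// illustrative, not part of the speech_recognition plugin.
void openBalancePage() => print('Opening Balance page');
void openAccountPage() => print('Opening Account page');

final commands = <String, void Function()>{
  'balance': openBalancePage,
  'account': openAccountPage,
};

// Call this from the plugin's recognition result/complete handler.
void handleTranscription(String transcription) {
  final words = transcription.toLowerCase().split(RegExp(r'\s+'));
  for (final entry in commands.entries) {
    if (words.contains(entry.key)) {
      entry.value(); // first matching keyword wins
      return;
    }
  }
  print('No command matched: "$transcription"');
}
```

Wiring `handleTranscription` into the complete handler means recognition stops (per the plugin's normal end-of-speech behavior) and the match runs on the final text.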
I am using a version of the plugin up to date with master on this Git repo. I also get the warning that another reporter mentioned regarding the use of deprecated libraries when I build.
I am testing on the Pixel 3 XL - currently this issue prevents any usage of the package.
E/MethodChannel#speech_recognition(10569): Failed to handle method call
E/MethodChannel#speech_recognition(10569): java.lang.NullPointerException: Attempt to invoke virtual method 'java.lang.String java.lang.Object.toString()' on a null object reference
E/MethodChannel#speech_recognition(10569): at bz.rxla.flutter.speechrecognition.SpeechRecognitionPlugin.onMethodCall(SpeechRecognitionPlugin.java:67)
E/MethodChannel#speech_recognition(10569): at io.flutter.plugin.common.MethodChannel$IncomingMethodCallHandler.onMessage(MethodChannel.java:201)
E/MethodChannel#speech_recognition(10569): at io.flutter.embedding.engine.dart.DartMessenger.handleMessageFromDart(DartMessenger.java:88)
E/MethodChannel#speech_recognition(10569): at io.flutter.embedding.engine.FlutterJNI.handlePlatformMessage(FlutterJNI.java:219)
E/MethodChannel#speech_recognition(10569): at android.os.MessageQueue.nativePollOnce(Native Method)
E/MethodChannel#speech_recognition(10569): at android.os.MessageQueue.next(MessageQueue.java:326)
E/MethodChannel#speech_recognition(10569): at android.os.Looper.loop(Looper.java:160)
E/MethodChannel#speech_recognition(10569): at android.app.ActivityThread.main(ActivityThread.java:6718)
E/MethodChannel#speech_recognition(10569): at java.lang.reflect.Method.invoke(Native Method)
E/MethodChannel#speech_recognition(10569): at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:493)
E/MethodChannel#speech_recognition(10569): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:858)
E/flutter (10569): [ERROR:flutter/lib/ui/ui_dart_state.cc(148)] Unhandled Exception: PlatformException(error, Attempt to invoke virtual method 'java.lang.String java.lang.Object.toString()' on a null object reference, null)
E/flutter (10569): #0 StandardMethodCodec.decodeEnvelope (package:flutter/src/services/message_codecs.dart:564:7)
E/flutter (10569): #1 MethodChannel.invokeMethod (package:flutter/src/services/platform_channel.dart:302:33)
E/flutter (10569): <asynchronous suspension>
E/flutter (10569): #2 SpeechRecognition.listen (package:speech_recognition/speech_recognition.dart:38:16)
E/flutter (10569): #3 _DocumentCreationPageState._mySteps.<anonymous closure> (package:hepian_mobile/pages/document_creation_page.dart:183:26)
E/flutter (10569): #4 _InkResponseState._handleTap (package:flutter/src/material/ink_well.dart:511:14)
E/flutter (10569): #5 _InkResponseState.build.<anonymous closure> (package:flutter/src/material/ink_well.dart:566:30)
E/flutter (10569): #6 GestureRecognizer.invokeCallback (package:flutter/src/gestures/recognizer.dart:166:24)
E/flutter (10569): #7 TapGestureRecognizer._checkUp (package:flutter/src/gestures/tap.dart:240:9)
E/flutter (10569): #8 TapGestureRecognizer.acceptGesture (package:flutter/src/gestures/tap.dart:211:7)
E/flutter (10569): #9 GestureArenaManager.sweep (package:flutter/src/gestures/arena.dart:156:27)
E/flutter (10569): #10 _WidgetsFlutterBinding&BindingBase&GestureBinding.handleEvent (package:flutter/src/gestures/binding.dart:225:20)
E/flutter (10569): #11 _WidgetsFlutterBinding&BindingBase&GestureBinding.dispatchEvent (package:flutter/src/gestures/binding.dart:199:22)
E/flutter (10569): #12 _WidgetsFlutterBinding&BindingBase&GestureBinding._handlePointerEvent (package:flutter/src/gestures/binding.dart:156:7)
E/flutter (10569): #13 _WidgetsFlutterBinding&BindingBase&GestureBinding._flushPointerEventQueue (package:flutter/src/gestures/binding.dart:102:7)
E/flutter (10569): #14 _WidgetsFlutterBinding&BindingBase&GestureBinding._handlePointerDataPacket (package:flutter/src/gestures/binding.dart:86:7)
E/flutter (10569): #15 _rootRunUnary (dart:async/zone.dart:1136:13)
E/flutter (10569): #16 _CustomZone.runUnary (dart:async/zone.dart:1029:19)
E/flutter (10569): #17 _CustomZone.runUnaryGuarded (dart:async/zone.dart:931:7)
E/flutter (10569): #18 _invoke1 (dart:ui/hooks.dart:233:10)
E/flutter (10569): #19 _dispatchPointerDataPacket (dart:ui/hooks.dart:154:5)
E/flutter (10569):
Failed to build iOS app
Error output from Xcode build:
↳
** BUILD FAILED **
Xcode's output:
↳
=== BUILD TARGET speech_recognition OF PROJECT Pods WITH CONFIGURATION Debug ===
/Users/abcdefg/Development/flutter/.pub-cache/hosted/pub.dartlang.org/speech_recognition-0.3.0+1/ios/Classes/SpeechRecognitionPlugin.m:2:9: fatal error: 'speech_recognition/speech_recognition-Swift.h' file not found
#import <speech_recognition/speech_recognition-Swift.h>
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 error generated.
Could not build the application for the simulator.
Error launching application on iPhone Xʀ.
Exited (sigterm)
Is this an issue with the Flutter plugin or with my Swift/Xcode setup? I tried to find speech_recognition-Swift.h from the root directory, and the find command returned:
./Users/abcdefg/Library/Developer/Xcode/DerivedData/Runner-ezxihojhlxwsmcdyavncncoswdha/Build/Intermediates.noindex/Pods.build/Debug-iphonesimulator/speech_recognition.build/Objects-normal/x86_64/speech_recognition-Swift.h
./Users/abcdefg/Library/Developer/Xcode/DerivedData/Runner-ezxihojhlxwsmcdyavncncoswdha/Build/Intermediates.noindex/Pods.build/Debug-iphonesimulator/speech_recognition.build/DerivedSources/speech_recognition-Swift.h
./Users/abcdefg/Library/Developer/Xcode/DerivedData/Runner-ezxihojhlxwsmcdyavncncoswdha/Build/Intermediates.noindex/Pods.build/Debug-iphoneos/speech_recognition.build/Objects-normal/arm64/speech_recognition-Swift.h
./Users/abcdefg/Library/Developer/Xcode/DerivedData/Runner-ezxihojhlxwsmcdyavncncoswdha/Build/Intermediates.noindex/Pods.build/Debug-iphoneos/speech_recognition.build/DerivedSources/speech_recognition-Swift.h
I believe I added the correct keys to the Info.plist file. I'd assume that if I got these wrong, it wouldn't affect the build, though. Is that a false assumption?
Thanks for any help anyone can provide!
After I updated Dart to 2.2.0 and Flutter to v1.4.9-hotfix.1, the application quit with the message: "Unfortunately, app has stopped."
Any hint on how to solve this problem would be greatly appreciated.
I/flutter ( 5459): _platformCallHandler call speech.onSpeechAvailability false
I/flutter ( 5459): _platformCallHandler call speech.onError 2
I/flutter ( 5459): Unknowm method speech.onError
When I use this plugin, I noticed that the Run console displays onRmsChanged results like the following:
D/SpeechRecognitionPlugin(28376): onRmsChanged : 8.799999
D/SpeechRecognitionPlugin(28376): onRmsChanged : 1.6000001
D/SpeechRecognitionPlugin(28376): onRmsChanged : -2.0
D/SpeechRecognitionPlugin(28376): onRmsChanged : 2.8000002
D/SpeechRecognitionPlugin(28376): onRmsChanged : 2.8000002
D/SpeechRecognitionPlugin(28376): onRmsChanged : 6.3999996
D/SpeechRecognitionPlugin(28376): onRmsChanged : 6.3999996
D/SpeechRecognitionPlugin(28376): onRmsChanged : 5.2000003
D/SpeechRecognitionPlugin(28376): onRmsChanged : 4.0
D/SpeechRecognitionPlugin(28376): onRmsChanged : 2.8000002
D/SpeechRecognitionPlugin(28376): onRmsChanged : 7.6000004
D/SpeechRecognitionPlugin(28376): onRmsChanged : 7.6000004
Could you add some functions to expose these results? These RMS dB values are really useful! Thanks.
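For reference, a hypothetical Dart-side handler is sketched below. Neither `setRmsChangedHandler` nor a `speech.onRmsChanged` channel method exists in the published plugin; they only illustrate how the values could be surfaced, mirroring the plugin's existing handler pattern:

```dart
// Hypothetical sketch only: `setRmsChangedHandler` and the
// 'speech.onRmsChanged' channel method do NOT exist in speech_recognition
// 0.3.x. This mirrors how the plugin's other handlers are wired.
typedef RmsChangedHandler = void Function(double rmsDb);

class RmsAwareSpeech {
  RmsChangedHandler? _rmsHandler;

  void setRmsChangedHandler(RmsChangedHandler handler) =>
      _rmsHandler = handler;

  // Would be wired into the plugin's _platformCallHandler, next to
  // speech.onSpeechAvailability, speech.onError, etc.
  void dispatch(String method, dynamic arguments) {
    if (method == 'speech.onRmsChanged') {
      _rmsHandler?.call((arguments as num).toDouble());
    }
  }
}
```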
I've implemented a system in Flutter in which I can choose my language from a menu. The program starts in English by default, but when I try to set it to Spanish or French it doesn't work.
I know the readme.md says that more configuration is needed to make it work. The question is: what do I need to do to make it work in other languages?
_speech.listen(locale: widget._language).then((result) => setState(() {
  print("Listening $result");
  transcription = result;
}));
Hi,
I am facing a strange issue on an Android (Pixel 2) real device, and on the Android emulator too: the onRecognitionComplete() callback is called twice.
Here's the dump of the log from Android Studio...
09-06 17:09:31.417 18931-18975/? I/flutter: Your currentLocale is en_AU
09-06 17:09:36.721 18931-18975/? I/flutter: _SpeechBotState.start => result true
09-06 17:09:39.595 18931-18975/? I/flutter: stop() isListening = true
09-06 17:09:39.629 18931-18975/? I/flutter: onRecongintionComplete the transcript is hello
inside _api.dioPost()
inside getIdToken
09-06 17:09:39.637 18931-18975/? I/flutter: inside getIdTokenFromUser
09-06 17:09:39.639 18931-18975/? I/flutter: params passed to post are =
{uid: EJDgM5Kd0EO75R7XW9EPRyD6dZR2, text: hello}
09-06 17:09:39.821 18931-18975/? I/flutter: onRecongintionComplete the transcript is hello
09-06 17:09:39.823 18931-18975/? I/flutter: inside getIdTokenFromUser
09-06 17:09:39.825 18931-18975/? I/flutter: params passed to post are =
{uid: EJDgM5Kd0EO75R7XW9EPRyD6dZR2, text: hello}
09-06 17:24:59.118 18931-18975/? I/flutter: {status: success, successFlag: true, data: Greetings!}
09-06 17:24:59.121 18931-18975/? I/flutter: responsePost obtained ....
09-06 17:24:59.122 18931-18975/? I/flutter: {status: success, successFlag: true, data: Hi!}
09-06 17:25:06.597 18931-18975/? I/flutter: _SpeechBotState.start => result true
09-06 17:25:09.309 18931-18975/? I/flutter: onRecongintionComplete the transcript is I have a problem
09-06 17:25:09.318 18931-18975/? I/flutter: inside getIdTokenFromUser
09-06 17:25:09.323 18931-18975/? I/flutter: params passed to post are =
09-06 17:25:09.324 18931-18975/? I/flutter: {uid: EJDgM5Kd0EO75R7XW9EPRyD6dZR2, text: I have a problem}
09-06 17:25:09.528 18931-18975/? I/flutter: onRecongintionComplete the transcript is I have a problem
09-06 17:25:09.545 18931-18975/? I/flutter: inside getIdTokenFromUser
09-06 17:25:09.551 18931-18975/? I/flutter: params passed to post are =
{uid: EJDgM5Kd0EO75R7XW9EPRyD6dZR2, text: I have a problem}
Why is it happening twice on Android when it works fine on iOS? Your help will be very much appreciated.
To help assess what is going wrong, I'm sharing the code below...
@override
initState() {
super.initState();
initPlatformState();
checkPermission();
}
@override
dispose() {
stop();
_cancelRecognitionHandler();
super.dispose();
}
new Expanded(
child: new Container(
alignment: Alignment.bottomCenter,
margin: const EdgeInsets.only(bottom: 10.0),
child: new FloatingActionButton(
backgroundColor: floatBttnColor,
child: new Icon(Icons.mic),
//navigate: () => navigate(''),
onPressed: _speechRecognitionAvailable && !_isListening
? () {
if (this.mounted) {
setState(() {
floatBttnColor = Colors.purple;
});
}
start();
}
: () {
stop();
if (this.mounted) {
setState(() {
floatBttnColor = Colors.green;
});
}
},
),
),
),
and outside the build widget function in the class I have:
void start() => _speech
.listen(locale: _currentLocale)
.then((result) => print('_SpeechBotState.start => result $result'));
void cancel() => _speech.cancel().then((result) => setState(() {
_isListening = result;
print('_speech.cancel result is $result');
}));
Future stop() => _speech.stop().then((result) => setState(() async {
_isListening = result;
print('stop() isListening = $_isListening');
}));
void onSpeechAvailability(bool result) =>
setState(() => _speechRecognitionAvailable = result);
void onCurrentLocale(String locale) => setState(() {
_currentLocale = locale;
print('Your currentLocale is $_currentLocale');
});
void onRecognitionStarted() => setState(() => _isListening = true);
void onRecognitionResult(String text) => setState(() {
transcription = text;
//print('your intermediate transcription is $transcription');
});
void onRecognitionComplete() => setState(() async {
_isListening = false;
//await stop();
print('onRecongintionComplete the transcript is $transcription');
//Just for testing... once chatbot integrated with backend TTS will be inserting card in that function
ChatCardData card = new ChatCardData(
id: id++,
hour: '${_date.hour}',
meridian: '${_date.minute}',
title: transcription,
isCustomer: true,
source:
'${DateName.month[(_date.month) - 1]} ${_date.day}, ${_date.year}',
text: true,
labelColor: Colors.green);
if (this.mounted) {
setState(() {
_load = true;
_list.insert(0, card);
});
}
//Tts.speak(transcription);
dynamic body = {'uid': UserAuth.userModel.uid, 'text': transcription};
//send it to chat engine
try {
Response responsePost = await _api.dioPost(
APIPATH.XXX, APIPATH.YYY, body);
print('responsePost obtained ....');
print(responsePost.data);
//print(responsePost.headers);
//print(responsePost.request);
print(responsePost.statusCode);
ChatCardData card = new ChatCardData(
id: id++,
hour: '${_date.hour}',
meridian: '${_date.minute}',
title: responsePost.data['data'],
isCustomer: false,
source:
'${DateName.month[(_date.month) - 1]} ${_date.day}, ${_date.year}',
text: true,
labelColor: Colors.green);
Tts.speak(responsePost.data['data']);
if (this.mounted) {
setState(() {
_load = false;
_list.insert(0, card);
});
}
} on DioError catch (e) {
if (this.mounted) {
setState(() {
_load = false;
});
}
// The request was made and the server responded with a status code
// that falls out of the range of 2xx and is also not 304.
print('post returned error!');
print('Error stack = $e');
if (e.response != null) {
print('e.response not null');
print(e.response.data);
print('statusCode of error = ');
print(e.response.statusCode);
//print(e.response.headers);
//int statusCode = e.response.statusCode;
final ThemeData theme = Theme.of(context);
final TextStyle dialogTextStyle = theme.textTheme.subhead
.copyWith(color: theme.textTheme.caption.color);
await showDialog(
barrierDismissible: false,
context: this.context,
builder: (BuildContext context) => new AlertDialog(
title: new Text('Error'),
content: new Text(e.response.data['data'],
style: dialogTextStyle),
actions: [
new FlatButton(
child: const Text('Dismiss'),
onPressed: () {
Navigator.pop(context, true);
})
]));
} else {
// Something happened in setting up or sending the request that triggered an Error
print('e.response is null');
print(e.response);
print(e.message);
final ThemeData theme = Theme.of(context);
final TextStyle dialogTextStyle = theme.textTheme.subhead
.copyWith(color: theme.textTheme.caption.color);
await showDialog(
barrierDismissible: false,
context: this.context,
builder: (BuildContext context) => new AlertDialog(
title: new Text('Error'),
content: new Text(e.message, style: dialogTextStyle),
actions: <Widget>[
new FlatButton(
child: const Text('Dismiss'),
onPressed: () {
Navigator.pop(context, true);
})
]));
}
}
});
}
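One observation on the snippet above (an observation about the quoted code, not a confirmed cause of the double callback): setState expects a synchronous callback, so `setState(() async {...})` in stop() and onRecognitionComplete() hands Flutter an async closure it never awaits. A sketch of stop() with the await moved outside setState:

```dart
// Sketch: await the platform call first, then update state synchronously.
// Uses the same _speech / _isListening fields as the code above.
Future<void> stop() async {
  final bool result = await _speech.stop();
  if (mounted) {
    setState(() => _isListening = result);
  }
  print('stop() isListening = $_isListening');
}
```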
D:\Flutter\Sdk\flutter\bin\flutter.bat --no-color packages upgrade
Running "flutter packages upgrade" in music_player...
The current Dart SDK version is 2.1.0-dev.3.1.flutter-760a9690c2.
Because music_player depends on speech_recognition any which requires SDK version >=1.23.0 <2.0.0, version solving failed.
pub upgrade failed (1)
I'm opening this issue because it still happens without any solution.
Does anyone have any idea about this?
I even followed the setup guide:
https://flutter.dev/docs/development/packages-and-plugins/androidx-compatibility
Thanks, team.
I am getting this error while running flutter packages get:
Running "flutter packages get" in App...
The current Dart SDK version is 2.1.0-dev.0.0.flutter-be6309690f.
Because App depends on speech_recognition >=0.2.0+1 which requires SDK version <2.0.0, version solving failed.
Can you please add support for Dart 2.1?
"en_US" works as expected, but when I try using "de_DE" it falls back to the default French locale. Do I have to add "de_DE" somewhere else?
_speechRecognition.listen(locale: "de_DE").then((result) => setState(() { _textController.text = resultText; resultText = ""; }));
I am testing on iPhone 7 with latest iOS (physical device).
I got this error when trying to integrate speech_recognition into my app and running it on iOS.
Installing speech_recognition (0.3.0)
[!] Unable to determine Swift version for the following pods:
- speech_recognition does not specify a Swift version and none of the targets (Runner) integrating it have the SWIFT_VERSION attribute set. Please contact the author or set the SWIFT_VERSION attribute in at least one of the targets that integrate this pod.
I thought the error was very clear, but how would I fix it?
Thank you.
E/ ( 2000): [Mali]: gles_texture_bind_texture: Rendering feedback loop detected (texture=10), app behavior is wrong
E/ ( 2000): [Mali]: gles_texture_bind_texture: Rendering feedback loop detected (texture=10), app behavior is wrong
E/ ( 2000): [Mali]: gles_texture_bind_texture: Rendering feedback loop detected (texture=8), app behavior is wrong
E/ ( 2000): [Mali]: gles_texture_bind_texture: Rendering feedback loop detected (texture=10), app behavior is wrong
On a clean setup I get errors on iOS (the Android application runs normally).
After adding speech_recognition: ^0.3.0 to pubspec.yaml and running on iOS, I get the following error message:
Failed to build iOS app
Error output from Xcode build:
↳
** BUILD FAILED **
Xcode's output:
↳
warning: The use of Swift 3 @objc inference in Swift 4 mode is deprecated. Please address deprecated @objc inference warnings, test your code with “Use of deprecated Swift 3 @objc inference” logging enabled, and then disable inference by changing the "Swift 3 @objc Inference" build setting to "Default" for the "Runner" target. (in target 'Runner')
/Users/myuser/flutter/.pub-cache/hosted/pub.dartlang.org/speech_recognition-0.3.0+1/ios/Classes/SwiftSpeechRecognitionPlugin.swift:111:34: error: type 'AVAudioSession.Category' (aka 'NSString') has no member 'record'
try audioSession.setCategory(AVAudioSession.Category.record, mode: .default)
^~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~
/Users/myuser/flutter/.pub-cache/hosted/pub.dartlang.org/speech_recognition-0.3.0+1/ios/Classes/SwiftSpeechRecognitionPlugin.swift:112:30: error: type 'AVAudioSession.Mode' (aka 'NSString') has no member 'measurement'
try audioSession.setMode(AVAudioSession.Mode.measurement)
^~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~
/Users/myuser/flutter/.pub-cache/hosted/pub.dartlang.org/speech_recognition-0.3.0+1/ios/Classes/SwiftSpeechRecognitionPlugin.swift:113:22: error: 'setActive(_:options:)' has been renamed to 'setActive(_:with:)'
try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
^~~~~~~~~ ~~~~~~~
setActive with
AVFoundation.AVAudioSession:15:15: note: 'setActive(_:options:)' was introduced in Swift 4.2
open func setActive(_ active: Bool, options: AVAudioSessionSetActiveOptions = []) throws
^
/Users/myuser/flutter/.pub-cache/hosted/pub.dartlang.org/speech_recognition-0.3.0+1/ios/Classes/SwiftSpeechRecognitionPlugin.swift:186:9: error: value of type 'AVAudioSession.Category' (aka 'NSString') has no member 'rawValue'
return input.rawValue
^~~~~ ~~~~~~~~
note: Using new build system
note: Planning build
note: Constructing build description
Could not build the application for the simulator.
Error launching application on iPhone Xʀ.
I would like to learn more to find a solution.
An error occurs, namely "onError : 7", when the cancel button is clicked immediately after the start button is pressed (such that nothing has been spoken in between).
This might be an error on my part, but I implemented the code as suggested in the examples, and it runs perfectly on Android; on iOS, however, onRecognitionComplete() is not called.
Great plugin though, it works very well.
1. While inside China, connect the Android phone to the internet. (iOS not tested.)
2. Run the example app. (CANNOT recognize speech.)
3. Connect to any Hong Kong or USA VPN server (or any other country where Google is not blocked).
4. Run the example app again. (CAN recognize speech.)
5. Turn off the VPN and run the app again. (CANNOT recognize speech.)
Error output from Xcode build:
↳
** BUILD FAILED **
Xcode's output:
↳
=== BUILD TARGET speech_recognition OF PROJECT Pods WITH CONFIGURATION Debug ===
<module-includes>:1:9: note: in file included from <module-includes>:1:
#import "Headers/speech_recognition-umbrella.h"
^
/Users/bogdandinga/Work/hacktm-assets/test_app/ios/Pods/Target Support Files/speech_recognition/speech_recognition-umbrella.h:13:9: error: include of non-modular header inside framework module 'speech_recognition': '/Users/bogdandinga/Work/flutter/flutter/.pub-cache/hosted/pub.dartlang.org/speech_recognition-0.2.0+1/ios/Classes/SpeechRecognitionPlugin.h'
#import "SpeechRecognitionPlugin.h"
^
:0: error: could not build Objective-C module 'speech_recognition'
Could not build the application for the simulator.
Error launching application on iPhone 8.
Hi,
is there a way to use continuous speech recognition with this plugin?
Let's say for 10 or 20 minutes, or even more.
Thanks.
Hi team.
I'm using speech_recognition 0.3.0+1 with Flutter. I've tested with the iOS simulator, and it worked properly.
But when I build and test on a real device (iPhone 7 Plus), there is a problem: the first time I start the app the volume works as normal, but whenever I invoke speech recognition, the overall volume of the app decreases.
I tried to fix it by following a guide, but it still doesn't work.
Please help me find a solution for this. Thanks, team.
flutter/.pub-cache/hosted/pub.dartlang.org/speech_recognition-0.3.0+1/android/src/main/java/bz/rxla/flutter/speechrecognition/SpeechRecognitionPlugin.java uses or overrides a deprecated API.
Hi, I'm using the latest available package:
speech_recognition: ^0.3.0+1
the flutter doctor output:
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, v1.5.4-hotfix.2, on Linux, locale en_US.UTF-8)
[✓] Android toolchain - develop for Android devices (Android SDK version 28.0.3)
[✓] Android Studio (version 3.3)
[✓] VS Code (version 1.31.1)
[!] Proxy Configuration
! NO_PROXY does not contain 127.0.0.1
[✓] Connected device (1 available)
! Doctor found issues in 1 category.
Trying to check the basic example app:
void activateSpeechRecognizer() {
print('_MyAppState.activateSpeechRecognizer... ');
_speech = new SpeechRecognition();
_speech.setAvailabilityHandler(onSpeechAvailability);
_speech.setCurrentLocaleHandler(onCurrentLocale);
_speech.setRecognitionStartedHandler(onRecognitionStarted);
_speech.setRecognitionResultHandler(onRecognitionResult);
_speech.setRecognitionCompleteHandler(onRecognitionComplete);
_speech.setErrorHandler(errorHandler);
_speech
.activate()
.then((res) => setState(() => _speechRecognitionAvailable = res));
}
and getting this message:
The method 'setErrorHandler' isn't defined for the class 'SpeechRecognition'.
There is also a suspicious error from my previous run (flutter run output):
I/flutter (19599): Unknowm method speech.onError
lib/main.dart:53:43: Error: The argument type 'void Function()' can't be assigned to the parameter type 'void Function(String)'.
Try changing the type of the parameter, or casting the argument to 'void Function(String)'.
_speech.setRecognitionCompleteHandler(onRecognitionComplete);
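The message suggests that the complete handler in this plugin version passes the final transcription as a String, so (a sketch, assuming the 0.3.x API) the fix is to change the callback's signature to accept it:

```dart
// Sketch of the signature fix for the error above: the handler must take
// the final transcription as a String, matching the
// 'void Function(String)' type the compiler asks for.
String transcription = '';
bool _isListening = false;

void onRecognitionComplete(String transcript) {
  transcription = transcript;
  _isListening = false;
}

// _speech.setRecognitionCompleteHandler(onRecognitionComplete); // now type-checks
```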
When I run the app, I get a note:
Note: C:\src\flutter\.pub-cache\hosted\pub.dartlang.org\speech_recognition-0.3.0+1\android\src\main\java\bz\rxla\flutter\speechrecognition\SpeechRecognitionPlugin.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
I'm unable to run an iOS build. The following errors happen when running against iOS 12. This is a simulator (perhaps a reason?); I have not tried on a device. Please let me know if you need more info.
Xcode 10.1
Project is Swift (language version 4)
flutter run
Launching lib/main.dart on iPhone XR in debug mode...
Running pod install... 1.2s
Starting Xcode build...
Xcode build done. 1.6s
Failed to build iOS app
Error output from Xcode build:
↳
** BUILD FAILED **
Xcode's output:
↳
=== BUILD TARGET path_provider OF PROJECT Pods WITH CONFIGURATION Debug ===
/Users/me/Documents/flutter/.pub-cache/hosted/pub.dartlang.org/speech_recognition-0.3.0/ios/Classes/SwiftSpeechRecognitionPlugin.swift:109:34: error: type
'AVAudioSession.Category' (aka 'NSString') has no member 'record'
try audioSession.setCategory(AVAudioSession.Category.record, mode: .default)
^~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~
/Users/me/Documents/flutter/.pub-cache/hosted/pub.dartlang.org/speech_recognition-0.3.0/ios/Classes/SwiftSpeechRecognitionPlugin.swift:110:30: error: type
'AVAudioSession.Mode' (aka 'NSString') has no member 'measurement'
try audioSession.setMode(AVAudioSession.Mode.measurement)
^~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~
/Users/me/Documents/flutter/.pub-cache/hosted/pub.dartlang.org/speech_recognition-0.3.0/ios/Classes/SwiftSpeechRecognitionPlugin.swift:188:9: error: value of
type 'AVAudioSession.Category' (aka 'NSString') has no member 'rawValue'
return input.rawValue
^~~~~ ~~~~~~~~
Could not build the application for the simulator.
Error launching application on iPhone XR.
$ flutter doctor
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel beta, v0.11.9, on Mac OS X 10.13.6 17G65, locale en-US)
[✓] Android toolchain - develop for Android devices (Android SDK 28.0.3)
[✓] iOS toolchain - develop for iOS devices (Xcode 10.1)
[✓] Android Studio (version 3.2)
[✓] VS Code (version 1.29.1)
[✓] Connected device (1 available)
• No issues found!
I installed this speech recognition package in a Flutter project. It works very well on Android, but on iOS it gets stuck at the splash screen.
In Xcode I get this error: Thread 1: EXC_BAD_ACCESS (code=2, address=0x16f21ffe0).
The error happens when I add the speech_recognition package to pubspec.yaml, run packages get and pod install, then run the project. After removing the package it runs again.
I also tried to clean, including Xcode's Product -> Clean Build Folder. Still the same error.
Anyone have an idea?
Edit: I also set up a new project and added the package to the pubspec.yaml file, and I only get this warning:
warning: could not execute support code to read Objective-C class data in the process. This may reduce the quality of type information available.
and the screen is blank white.
On building for iOS, the code is throwing an error on line 115 in the file "SwiftSpeechRecognitionPlugin.swift":
guard let inputNode = audioEngine.inputNode else {
** BUILD FAILED **
Xcode's output:
↳
=== BUILD TARGET speech_recognition OF PROJECT Pods WITH CONFIGURATION Debug ===
/<flutter-lib>/.pub-cache/hosted/pub.dartlang.org/speech_recognition-0.2.0+1/ios/Classes/SwiftSpeechRecognitionPlugin.swift:115:11: error: initializer for conditional binding must have Optional type, not 'AVAudioInputNode'
guard let inputNode = audioEngine.inputNode else {
^
The cause seems to be that audioEngine is not an Optional type, and according to the AVAudioEngine documentation neither is the inputNode member; without an Optional value, guard let produces a compile error.
First of all, Thanks very much for doing this work and helping others.
I am trying to understand the "optimal" use of the library. In my app, there really is only one screen, and as long as my app is open, there is a button, ready for speech recognition.
When is it best to register callback handlers and then call activate()? On app startup, or when the user presses the UI button for speech recognition?
What is the difference between cancel and stop?
If I call stop or cancel, do I have to call activate() again, or can I just call listen() again?
thanks for the help.
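Based on the plugin API used elsewhere in this thread, one common pattern (a sketch, not an official recommendation) is to register the handlers and call activate() once at startup, then drive listen()/stop() from the button. Mirroring Android's SpeechRecognizer semantics, stop() typically delivers the final result while cancel() discards it; neither should require calling activate() again before the next listen():

```dart
// Sketch: activate once at startup, listen/stop from the mic button.
// Uses the SpeechRecognition API as shown in the plugin's example app.
import 'package:speech_recognition/speech_recognition.dart';

final SpeechRecognition _speech = SpeechRecognition();
bool _available = false;
bool _isListening = false;
String transcription = '';

void initSpeech() {
  _speech.setAvailabilityHandler((bool ok) => _available = ok);
  _speech.setRecognitionStartedHandler(() => _isListening = true);
  _speech.setRecognitionResultHandler((String text) => transcription = text);
  // activate() handles permissions and reports availability; one call at
  // startup is enough for the lifetime of the app.
  _speech.activate().then((res) => _available = res);
}

void onMicPressed() {
  if (_available && !_isListening) {
    _speech.listen(locale: 'en_US'); // no need to re-activate()
  } else if (_isListening) {
    _speech.stop(); // delivers the final result; cancel() would discard it
  }
}
```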
Hi, we're seeing
I/flutter ( 7919): _platformCallHandler call speech.onSpeechAvailability false
I/flutter ( 7919): _platformCallHandler call speech.onError 7
I/flutter ( 7919): Unknowm method speech.onError
on Android when speech recognition is running and the OS needs to do something (say, a text message comes in). I see the onError call in the Java file, but I don't see it at the Dart level. Maybe I'm missing something?
Thanks for the great work on this!
I was able to implement an app that accepts voice commands. This is an iOS project at this point. I want to keep speech recognition running for extended periods. The handler set via setRecognitionCompleteHandler seems to be called automatically by iOS every minute or so, so I reinitialize speech recognition from the complete handler. This works well in the iOS simulator. Voice commands are activated by clicking a floating button; I give the voice command, the app does its thing, and it waits for the next voice command. Clicking the floating button again causes it to stop listening. Perfect.
On my iPhone 6, I'm able to load and run the app; however, it stops working after the first voice command. I click the floating button, give it the command, it does its thing, then nothing. I have to click the floating button again to give another voice command.
Is there some permission or setting I need on the iPhone to achieve the behavior I'm getting with the iOS simulator? I'm running in debug mode. Not sure if that has anything to do with it.
Any help would be appreciated!
Launching lib/main.dart on iPhone Xʀ in debug mode...
Running Xcode build...
Xcode build done. 13.3s
Failed to build iOS app
Error output from Xcode build:
↳
** BUILD FAILED **
Xcode's output:
↳
/Users/pathfinder/labs/development/flutter/.pub-cache/hosted/pub.dartlang.org/speech_recognition-0.3.0+1/ios/Classes/SwiftSpeechRecognitionPlugin.swift:111:34: error:
type 'AVAudioSession.Category' (aka 'NSString') has no member 'record'
try audioSession.setCategory(AVAudioSession.Category.record, mode: .default)
^~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~
/Users/pathfinder/labs/development/flutter/.pub-cache/hosted/pub.dartlang.org/speech_recognition-0.3.0+1/ios/Classes/SwiftSpeechRecognitionPlugin.swift:112:30: error:
type 'AVAudioSession.Mode' (aka 'NSString') has no member 'measurement'
try audioSession.setMode(AVAudioSession.Mode.measurement)
^~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~
/Users/pathfinder/labs/development/flutter/.pub-cache/hosted/pub.dartlang.org/speech_recognition-0.3.0+1/ios/Classes/SwiftSpeechRecognitionPlugin.swift:186:9: error:
value of type 'AVAudioSession.Category' (aka 'NSString') has no member 'rawValue'
return input.rawValue
^~~~~ ~~~~~~~~
Could not build the application for the simulator.
Error launching application on iPhone Xʀ.
Speech error when pressing the Listen button: "_platformCallHandler call speech.onError 9".
I/flutter ( 2933): _platformCallHandler call speech.onSpeechAvailability false
I/flutter ( 2933): _platformCallHandler call speech.onError 9
I/flutter ( 2933): Unknowm method speech.onError
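The numeric codes in these logs come straight from Android's `SpeechRecognizer` error constants: 9 is `ERROR_INSUFFICIENT_PERMISSIONS` (typically a missing `RECORD_AUDIO` runtime permission), and 2 is `ERROR_NETWORK`. A small lookup sketch; the constant values are copied from `android.speech.SpeechRecognizer` so the snippet compiles without the Android SDK on the classpath:

```java
// Maps SpeechRecognizer error codes to their constant names.
// Values mirror android.speech.SpeechRecognizer (API docs),
// declared as literals here so this runs outside Android.
class SpeechErrorNames {
    public static String name(int code) {
        switch (code) {
            case 1:  return "ERROR_NETWORK_TIMEOUT";
            case 2:  return "ERROR_NETWORK";
            case 3:  return "ERROR_AUDIO";
            case 4:  return "ERROR_SERVER";
            case 5:  return "ERROR_CLIENT";
            case 6:  return "ERROR_SPEECH_TIMEOUT";
            case 7:  return "ERROR_NO_MATCH";
            case 8:  return "ERROR_RECOGNIZER_BUSY";
            case 9:  return "ERROR_INSUFFICIENT_PERMISSIONS";
            default: return "UNKNOWN(" + code + ")";
        }
    }
}
```

For error 9, declaring the permission in AndroidManifest.xml is not enough on Android 6.0+; the app must also request `RECORD_AUDIO` at runtime.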
Hi,
I tried to import the project into Android Studio. The project sync worked fine, but afterwards the following compiler error was displayed:
error: The argument type '() → void' can't be assigned to the parameter type '(String) → void'. (argument_type_not_assignable at [speech_recognition_example] lib\main.dart:53)
Compiler message:
lib/main.dart:53:43: Error: The argument type 'void Function()' can't be assigned to the parameter type 'void Function(String)'.
Try changing the type of the parameter, or casting the argument to 'void Function(String)'.
_speech.setRecognitionCompleteHandler(onRecognitionComplete);
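The error says `setRecognitionCompleteHandler` expects a callback that receives the final transcript (`void Function(String)`), while `onRecognitionComplete` was declared with no parameters. The same mismatch, and its fix, expressed with Java functional interfaces for illustration (class and field names are hypothetical):

```java
import java.util.function.Consumer;

// Demonstrates the handler-signature mismatch from the Dart error:
// the slot expects a one-argument callback (void Function(String)),
// so a zero-argument callback cannot be assigned to it.
class HandlerSignatureDemo {
    private static Consumer<String> completeHandler;
    public static String lastResult = null;

    public static void setRecognitionCompleteHandler(Consumer<String> h) {
        completeHandler = h;
    }

    public static void main(String[] args) {
        // Compile error, analogous to the Dart message:
        // setRecognitionCompleteHandler(() -> {});   // () -> void
        // Fix: accept the String parameter, even if it goes unused.
        setRecognitionCompleteHandler(text -> lastResult = text);
        completeHandler.accept("hello");
    }
}
```

In the Dart example the fix is the same: change `onRecognitionComplete()` to `onRecognitionComplete(String result)`.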
I am building an app that uses both speech recognition and text-to-speech. However, I have run into a problem where the microphone used for speech recognition stays on even after I stop recognition with stop() or close(), or after it completes on its own.
While the mic stays on I cannot play any sounds, so text-to-speech doesn't work after I use speech recognition.
How can I use another language on iOS with this plugin? I already changed the locale, but speech recognition doesn't seem to pick up the new language.
Hi, on Android, once listening has started and the user stops speaking, the application stops listening automatically. On iOS that doesn't happen: the application keeps listening until the user taps the stop button.
Is it possible to stop listening once the user has finished talking?
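Android's SpeechRecognizer end-points speech on its own, but iOS's SFSpeechRecognizer does not; a common workaround is a silence timer that is reset on every partial result and triggers a stop once no result has arrived for a while. A platform-agnostic sketch of that debounce logic (the class name and the 1.5 s threshold are assumptions, not part of the plugin):

```java
// Silence-based end-pointing: reset on every partial result,
// and report "stop" once the quiet gap exceeds the threshold.
class SilenceDetector {
    private final long thresholdMs;
    private long lastResultAtMs;

    public SilenceDetector(long thresholdMs, long nowMs) {
        this.thresholdMs = thresholdMs;
        this.lastResultAtMs = nowMs;
    }

    // Call whenever a partial recognition result arrives.
    public void onPartialResult(long nowMs) {
        lastResultAtMs = nowMs;
    }

    // Poll periodically; when true, call the plugin's stop().
    public boolean shouldStop(long nowMs) {
        return nowMs - lastResultAtMs >= thresholdMs;
    }
}
```

On the Dart side the equivalent would be restarting a `Timer` inside the recognition-result handler and calling `stop()` when it fires.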
I would like to use the recent major feature updates in a larger project; however, the most recent build available from pub.dartlang.org dates from November. This prevents us from using features such as error handling, which is a big deal! I would greatly appreciate it if you could update the package listing with a recent build. We really appreciate the work you have done on this library already!
Many thanks,
Ed
- `speech_recognition` does not specify a Swift version and none of the targets (`Runner`) integrating it has the `SWIFT_VERSION` attribute set. Please contact the author or set the `SWIFT_VERSION` attribute in at least one of the targets that integrate this pod.
Hi,
First of all, thanks for the utility. It works perfectly on iOS.
I am having issues with the Android emulator (android-x86). When I start the emulator and then run flutter run, I don't see any errors. It loads the speech.dart implementation cleanly and receives the speech.onSpeechAvailability true call.
Now when I press Listen, it throws this error:
D/SpeechRecognitionPlugin(10082): onError : 2
I/flutter (10082): _platformCallHandler call speech.onSpeechAvailability false
I/flutter (10082): _platformCallHandler call speech.onError 2
Here is the full trace from flutter run
Launching lib/main.dart on Android SDK built for x86 in debug mode...
registerResGeneratingTask is deprecated, use registerGeneratedFolders(FileCollection)
registerResGeneratingTask is deprecated, use registerGeneratedFolders(FileCollection)
registerResGeneratingTask is deprecated, use registerGeneratedFolders(FileCollection)
Built build/app/outputs/apk/debug/app-debug.apk.
I/FlutterActivityDelegate(10082): onResume setting current activity to this
I/Choreographer(10082): Skipped 56 frames! The application may be doing too much work on its main thread.
D/EGL_emulation(10082): eglMakeCurrent: 0xe2885480: ver 3 0 (tinfo 0xe2883440)
I/OpenGLRenderer(10082): Davey! duration=1230ms; Flags=1, IntendedVsync=2987461107766, Vsync=2988394441062, OldestInputEvent=9223372036854775807, NewestInputEvent=0, HandleInputStart=2988406485133, AnimationStart=2988406770133, PerformTraversalsStart=2988406789133, DrawStart=2988520121133, SyncQueued=2988522026133, SyncStart=2988526756133, IssueDrawCommandsStart=2988526931133, SwapBuffers=2988578172133, FrameCompleted=2988696000133, DequeueBufferDuration=38146000, QueueBufferDuration=10954000,
D/ (10082): HostConnection::get() New Host Connection established 0xe2899940, tid 10109
D/EGL_emulation(10082): eglMakeCurrent: 0xceab0960: ver 3 0 (tinfo 0xe2883330)
I/flutter (10082): _SpeechBotState.activateSpeechRecognizer...
D/SpeechRecognitionPlugin(10082): Current Locale : en_US
I/flutter (10082): _platformCallHandler call speech.onCurrentLocale en_US
I/flutter (10082): Your currentLocale is en_US
I/flutter (10082): _SpeechBotState.start => result true
D/SpeechRecognitionPlugin(10082): onRmsChanged : -2.12
D/SpeechRecognitionPlugin(10082): onRmsChanged : -2.12
D/SpeechRecognitionPlugin(10082): onReadyForSpeech
I/flutter (10082): _platformCallHandler call speech.onSpeechAvailability true
D/SpeechRecognitionPlugin(10082): onRmsChanged : -2.12
D/SpeechRecognitionPlugin(10082): onRmsChanged : -2.0
D/SpeechRecognitionPlugin(10082): onRmsChanged : -2.0
D/SpeechRecognitionPlugin(10082): onRmsChanged : -2.0
D/SpeechRecognitionPlugin(10082): onRmsChanged : -2.0
D/SpeechRecognitionPlugin(10082): onRmsChanged : -2.0
D/SpeechRecognitionPlugin(10082): onRmsChanged : -2.0
D/SpeechRecognitionPlugin(10082): onRmsChanged : -2.12
D/SpeechRecognitionPlugin(10082): onError : 2
I/flutter (10082): _platformCallHandler call speech.onSpeechAvailability false
I/flutter (10082): _platformCallHandler call speech.onError 2
I/flutter (10082): Unknowm method speech.onError
Application finished. (<-- I stopped it)
In AndroidManifest.xml I have included:
I have also opened the app's settings in Android and granted it permission to use the camera, speaker, and microphone.
What does speech.onError 2 mean, and how can I fix it in the emulator?
Thanks
I tried to run the example on Android Pie (API 28), but when I tapped record I got the following error:
I/flutter (13784): _platformCallHandler call speech.onSpeechAvailability false
I/flutter (13784): _platformCallHandler call speech.onError 2
I/flutter (13784): Unknowm method speech.onError
In my Android Studio emulator the application hangs.
I saw that the emulator has no language installed.
Can anyone help?