jamesbrill / react-speech-recognition
💬 Speech recognition for your React app
Home Page: https://webspeechrecognition.com/
License: MIT License
Build fails in Next.js due to "regeneratorRuntime is not defined" from RecognitionManager.js. This is because the Babel config that bundles the package isn't set up to eliminate regeneratorRuntime, which doesn't work in server-side rendering (SSR, which is what Next.js specializes in).
I don't know the proper solution, but my hacky workaround was to copy the code from src into my repo and let the Next.js Babel config 'next/babel' do the compilation without error. I tried babel/babel#8829 (comment) but it did not solve the problem.
Support for class properties is disabled in the library
How to disable sound effects when recognition starts or stops?
Firstly, thanks for the promising work!
In my case, the basic import (import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition'
) fails the TypeScript check with code ts(7016):
Could not find a declaration file for module 'react-speech-recognition'. '.../my-proj/node_modules/react-speech-recognition/lib/index.js' implicitly has an 'any' type.
Try `npm install @types/react-speech-recognition` if it exists or add a new declaration (.d.ts) file containing `declare module 'react-speech-recognition';`
And npm install @types/react-speech-recognition
throws:
Not Found - GET https://registry.npmjs.org/@types%2freact-speech-recognition - Not found
So, I had to work around it by adding this to my app.d.ts file:
declare module 'react-speech-recognition';
So, do you have any plans for adding TypeScript support? If needed, I'm happy to contribute to the best of my ability.
https://l243n4n1wm.codesandbox.io/
Here is an example setup. I tried to transcribe on a Start button click; it repeats the words in the transcript on mobile Chrome, while working fine on desktop Chrome.
Please let me know if this can be fixed, or if there is a workaround to get it working the correct way.
For example, with the code above, clicking Start and saying "mobile" prints the transcript 2-3 times below in Chrome on Android devices.
I think that is because there is no "window" in server-side rendering.
I use React Starter Kit.
SpeechRecognition.js:60 SpeechRecognitionInner
[adviser-web]/[react-speech-recognition]/lib/SpeechRecognition.js:60:36
SpeechRecognition.js:213 SpeechRecognition
[adviser-web]/[react-speech-recognition]/lib/SpeechRecognition.js:213:12
ChatMessageList.js:616 Object../src/components/ChatMessageList/ChatMessageList.js
G:/xampp/htdocs/adviser-web/src/components/ChatMessageList/ChatMessageList.js:616:1
bootstrap 37ce11a8a8fe752fa6d7:643 __webpack_require__
G:/xampp/htdocs/adviser-web/webpack/bootstrap 37ce11a8a8fe752fa6d7:643:1
bootstrap 37ce11a8a8fe752fa6d7:47 fn
G:/xampp/htdocs/adviser-web/webpack/bootstrap 37ce11a8a8fe752fa6d7:47:1
chat.js:2615 Object../src/routes/chat/Chat.js
G:/xampp/htdocs/adviser-web/build/chunks/chat.js:2615:87
bootstrap 37ce11a8a8fe752fa6d7:643 __webpack_require__
G:/xampp/htdocs/adviser-web/webpack/bootstrap 37ce11a8a8fe752fa6d7:643:1
bootstrap 37ce11a8a8fe752fa6d7:47 fn
G:/xampp/htdocs/adviser-web/webpack/bootstrap 37ce11a8a8fe752fa6d7:47:1
chat.js:3131 Object../src/routes/chat/index.js
G:/xampp/htdocs/adviser-web/build/chunks/chat.js:3131:64
bootstrap 37ce11a8a8fe752fa6d7:643 __webpack_require__
G:/xampp/htdocs/adviser-web/webpack/bootstrap 37ce11a8a8fe752fa6d7:643:1
If I wrap SpeechRecognitionContainer in a top-level component, the app is always listening, even when I don't want it to -- if I talk, the transcript prop gets updated with what I am saying. I want it to start recognizing speech when I click the recording button, not all the time. (The red recording icon always shows in the tab bar; I only want it to show when I click the recording button.)
I tried using this.props.recognition.abort() and this.props.recognition.stop() to try and stop the recognition but nothing works.
Is it possible to choose when I want to start and stop speech recognition?
In addition, the speech recognition does not work when the SpeechRecognitionContainer and the component it wraps get unmounted and then remounted.
Any suggestions?
Hello,
I'm trying to use this with a TTS library and I'm trying to stop the mic from listening, then use my TTS library to say something, and then start listening again so that my speech recognition does not pick up any of the TTS speech. However, when I do this, like this:
SpeechRecognition.stopListening();
console.log('Anything');
SpeechRecognition.startListening({ continuous: true });
My SpeechRecognition doesn't start listening again. Any idea why this is happening? Thanks so much!
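One thing worth checking here, as a hedged sketch rather than a confirmed fix: in v3 of the library, stopListening and startListening appear to return promises, so the restart should wait for the stop to finish before kicking off TTS and then listening again. The speak() helper below is hypothetical, standing in for whatever TTS call is being used.

```javascript
// Hedged sketch: wait for the mic to actually stop, speak via TTS, then resume.
// Assumes stopListening/startListening are async (true for react-speech-recognition v3+,
// worth verifying against the installed version).
async function speakThenResumeListening(SpeechRecognition, speak) {
  await SpeechRecognition.stopListening();              // mic fully off first
  await speak('Anything');                              // TTS runs while not listening
  await SpeechRecognition.startListening({ continuous: true });
}
```

Chaining the calls synchronously, as in the snippet above, may restart listening before the previous session has actually ended, which would explain the observed behaviour.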
I want to clear the transcripts once certain commands have been executed.
Hi, how do I use the transcript from "const { transcript, resetTranscript } = useSpeechRecognition();" like local state? I have some text fields that use useSpeechRecognition with "transcript" as their value, but when one text field's value changes, every component containing a text field changes too.
Install react-speech-recognition
and use React @latest v16.13.1
It seems like speech recognition is not working on Chrome on iOS devices.
Not sure I'm doing something wrong but it seems like the fallback is triggered as if the browser did not support speech recognition at all.
useEffect(() => {
  if (!SpeechRecognition.browserSupportsSpeechRecognition()) {
    return () => {
      SpeechRecognition.stopListening();
      resetTranscript();
    };
  }
  if (microPhoneOn) {
    SpeechRecognition.startListening({ continuous: true, language: "en" });
  } else {
    SpeechRecognition.stopListening();
  }
  if (props.onTranscriptReceived) {
    props.onTranscriptReceived(messages);
  }
  return () => {
    SpeechRecognition.stopListening();
    resetTranscript();
  };
}, [microPhoneOn, prevMessage]);
I wonder whether this speech recognition function works in Electron.
Hello,
I'm developing a project using speech recognition, and it is already online on an AWS server. However, I have a problem when accessing the page on mobile: I want to use speech recognition with continuous set to true, but I read in the documentation that this is not possible due to a bug. So I would like to know if you have any method to keep continuous set to true without the microphone turning off all the time.
I am trying to use this to build a hobby AI app where you can ask the app about "cooldowns" for the game League of Legends.
Basically, the structure of the commands would be like this:
"(What is) Lux * Cooldown"
Where * can be Q, W, E or R.
How could I write this command so that the voice recognition passes the letter the user said to the callback?
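A minimal sketch of how such a wildcard command could look, assuming the library's `*` splat passes the matched text to the callback and `(...)` marks an optional phrase (both per the v3 docs, worth double-checking). `handleAbility` is a hypothetical helper that normalizes the captured letter:

```javascript
// Hypothetical helper: normalize the captured ability letter to Q/W/E/R,
// returning null for anything else.
function handleAbility(ability) {
  const key = ability.trim().toUpperCase();
  return ['Q', 'W', 'E', 'R'].includes(key) ? key : null;
}

// Command config as it might be passed to useSpeechRecognition({ commands }).
// '(what is)' is an optional prefix; '*' captures the spoken ability.
const commands = [
  {
    command: '(what is) Lux * cooldown',
    callback: (ability) => console.log('ability:', handleAbility(ability))
  }
];
```

With this shape, saying "what is Lux Q cooldown" should hand "Q" (via the splat) to the callback.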
Hi,
I see that it works quite well on Chrome. I was wondering whether it can be implemented in an Android app instead of being used through a browser, because I couldn't find a better speech library than this one, but I can't get it to work within my app.
Hi,
I've added this in my code:
const propTypes = {
  // Props injected by SpeechRecognition
  transcript: PropTypes.string,
  resetTranscript: PropTypes.func,
  startListening: PropTypes.func,
  browserSupportsSpeechRecognition: PropTypes.bool
}
I want to create a button that enables speech recognition on click. I tried using the 'startListening' function, but it's not working for me: I'm getting the error "cannot read property 'setState' of undefined" at startListening. Could you help me with that? I'm new to React, so maybe it's just my mistake...
I would like to transcribe the voice in multiple languages simultaneously, so I can also detect the language this way.
That's why I am wondering why recognitionManager and SpeechRecognition are global variables: if they were stored as state of the hook, it would be possible to create as many instances as I need.
So the questions are:
I will be happy to try a PR, but if you know a reason why it has to be this way, it would just be a waste of time.
Hi,
First of all, great work on this React plugin. I would like to have an option to choose the language from a dropdown and have the transcription follow that selection. Is there any way to do that?
Thanks in advance
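One way this could be wired up, as a sketch: restart listening with the library's `language` option whenever the dropdown changes. The option name follows the v3 docs and should be verified against the installed version; `recognizer` stands in for the SpeechRecognition object imported from 'react-speech-recognition'.

```javascript
// Sketch of a dropdown handler: stop the current session, then restart
// listening in the newly selected language.
function switchLanguage(recognizer, language) {
  recognizer.stopListening();
  recognizer.startListening({ continuous: true, language });
  return language; // convenient for storing in component state
}

// In a component this might be driven by e.g.
// <select onChange={(e) => switchLanguage(SpeechRecognition, e.target.value)}>
```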
It would be pretty useful to be able to get to the final transcript before the command callback is executed. Right now you can access the {command} as a param within the callback but that only gives the matched command. So for example, I say "my favorite foods are pizza and beer" and the command is set up as "my favorite foods are * and *". I could log it that way but I'd rather have the full text of what was spoken. So...I am trying to log the final transcript first, then the response second, like this:
log[0] = "my favorite foods are pizza and beer"
log[1] = "Yes, pizza and beer are my favorites too"
But if I log the response from the callback and then have a useEffect() that logs the final transcript, it becomes:
log[0] = "Yes, pizza and beer are my favorites too"
log[1] = "my favorite foods are pizza and beer"
So the decision was made to move the site to nextjs to improve response time, and we've encountered the following error when trying to load a component that uses your package:
Server Error
ReferenceError: regeneratorRuntime is not defined
This error happened while generating the page. Any console logs will be displayed in the terminal window.
Call Stack
<unknown>
file:///G:/Repositories/Biomarkerz/biosurveyfs/client/node_modules/react-speech-recognition/lib/RecognitionManager.js (168:61)
<unknown>
file:///G:/Repositories/Biomarkerz/biosurveyfs/client/node_modules/react-speech-recognition/lib/RecognitionManager.js (239:6)
Object.<anonymous>
file:///G:/Repositories/Biomarkerz/biosurveyfs/client/node_modules/react-speech-recognition/lib/RecognitionManager.js (334:2)
I'm hoping there is a simple fix for this. This package is fantastic, and I loathe the thought of changing the implementation.
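A workaround often suggested for this class of error is to load the regenerator runtime before the library is imported. This is a sketch based on common Babel/Next.js practice, not a fix confirmed by the maintainer, and it assumes the `regenerator-runtime` npm package is installed:

```javascript
// Load the regenerator runtime first so the transpiled async helpers in
// react-speech-recognition/lib can find the global regeneratorRuntime,
// including during Next.js server-side rendering.
import 'regenerator-runtime/runtime';
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition';
```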
When adding multiple components that use react-speech-recognition, it seems like starting and stopping the listening can cause the error in the title. Looks like it is a race as it happens more frequently the more replicas of the component I add. With only 2 it happens more rarely for me, with 3 it happens almost all the time.
Reproduced on Windows 10 Chrome Version 85.0.4183.83 (Official Build) (64-bit)
How do I stop re-rendering or processing the transcript once the configured wake word is found?
Here is the code snippet I am using.
class Dictaphone extends Component {
  constructor(props) {
    super(props);
    this.state = {
      isWakeWordFound: false
    };
  }

  componentWillMount() {
    const { recognition } = this.props;
    recognition.lang = 'en-US';
    recognition.interimResults = true;
    recognition.maxAlternatives = 3;
  }

  componentDidMount() {
    // this.props.startListening();
  }

  checkForWakeword = (transcript) => {
    if (!this.state.isWakeWordFound) {
      for (const ww in wakeWords) {
        if (transcript.trim().toLowerCase().includes(wakeWords[ww])) {
          console.log('Wake word found');
          this.setState({
            isWakeWordFound: true
          });
          // return true;
        }
      }
    }
    // return false;
  };

  render() {
    const {
      transcript,
      finalTranscript,
      startListening,
      abortListening,
      resetTranscript,
      browserSupportsSpeechRecognition
    } = this.props;
    if (!browserSupportsSpeechRecognition) {
      return null;
    }
    console.log('Transcript::::' + transcript);
    return (
      <div>
        {!this.state.isWakeWordFound ? this.checkForWakeword(transcript) : null}
        {this.state.isWakeWordFound ? abortListening() : null}
        <Button onClick={resetTranscript}>Reset</Button>
        <span>{transcript}</span>
      </div>
    );
  }
}
The end goal is to abort listening whenever a hotword is found. How do I achieve this?
Right now, even though abortListening is called once, the transcript in the render props means it constantly tries to update the state, throwing the error below in the console.
Hi!
I created a little vocal app with the following commands:
const [name, setName] = useState('');
const commands = [
  {
    command: 'my name is *',
    callback: (name) => {
      setName(name);
    },
  },
  {
    command: 'tell my name',
    callback: () => {
      console.log(name);
    },
  },
];
When I say "tell my name", the name in the console is undefined.
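This looks like a stale-closure problem: the commands array is built once with the initial `name`, so the "tell my name" callback keeps reading the old value. One hedged way around it, a sketch rather than the library's prescribed pattern, is to read the latest value through a ref-like container instead of the captured variable:

```javascript
// The commands capture `nameRef`, whose .current always holds the latest
// value, instead of capturing the `name` string itself (which would be
// frozen at the time the array was created).
function buildCommands(nameRef) {
  return [
    {
      command: 'my name is *',
      callback: (spokenName) => { nameRef.current = spokenName; }
    },
    {
      command: 'tell my name',
      callback: () => console.log(nameRef.current)
    }
  ];
}

// In a component: const nameRef = useRef('');
// useSpeechRecognition({ commands: buildCommands(nameRef) });
```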
Hey James, hey guys,
I'm currently working on my bachelor thesis, developing an e-learning app which should enable users to simulate oral exams. As part of it, I'd like to make use of some properties like SpeechGrammar, SpeechRecognitionAlternative, etc.
Is there a way to address the complete Web Speech API in React, or is what you built a limited component/framework?
I hope this is worth opening an issue,
best regards,
Andreas
P.S.: I've already implemented your component successfully in my React project. Everything is working fine; I just need a hint on how to use the rest.
Hi all, I'm using react-speech-recognition like so in my React project:
import React, { Component } from "react";
import SpeechRecognition from "react-speech-recognition";
import { Button } from 'react-bootstrap';

class Dictaphone extends Component {
  componentDidUpdate = (prevProps) => {
    console.log(prevProps, this.props.cursor);
    if (this.props.readAlongHighlightState === true) {
      let { transcript } = prevProps;
      if (this.props.cursor !== '' && this.props.cursor !== undefined) {
        var cursor = this.props.cursor;
        //console.log("cursor parentNode ", cursor.anchorNode.parentNode)
        //console.log("just cursor", cursor)
        //console.log("just inner html", cursor.anchorNode.parentNode.textContent)
        //console.log("cursor innerhtml", cursor.anchorNode.innerhtml)
        if (transcript.includes(" ")) {
          transcript = transcript.split(" ").pop();
          console.log(transcript);
        }
        console.log("cursor anchor node textcontent:", cursor.anchorNode.textContent);
        if (transcript === cursor.anchorNode.textContent.toString().toLowerCase()) {
          cursor.anchorNode.parentNode.style.backgroundColor = 'yellow';
          cursor = cursor.anchorNode.parentNode.nextElementSibling;
          this.props.updatecursor(cursor); // highlight the span matching the intent
        } else {
          console.log("no cursor");
        }
      }
    }
  };

  render() {
    const { transcript, resetTranscript, browserSupportsSpeechRecognition } = this.props;
    if (!browserSupportsSpeechRecognition) {
      return null;
    }
    if (transcript === "notes" || transcript === "cards" || transcript === "home" || transcript === "settings" || transcript === "read document") {
      this.libraryOfIntents(transcript, resetTranscript);
    }
    return (
      <div>
        <span id="transcriptSpan" className="transcriptspan"> {transcript} </span>
        <Button variant="outline-dark" onClick={resetTranscript}>Reset</Button>
      </div>
    );
  }
}
For some reason, all of a sudden, my browser is not recognizing any speech from my microphone. I have used different microphones and even different computers. I have tried npm update, and I have tried an older saved version of my app where I know the component works; however, that version of the app now has the same issue.
I'm wondering if there are new updates in chrome that might be causing a bug in the current version of react-speech-recognition?
Not sure if this is a problem with this project or the underlying Web API. When using continuous mode it keeps listening as expected, but the transcript text is never cleared, even after issuing stopListening().
I also tried calling resetTranscript() after calling stopListening() but that seems to turn on listening again for some reason.
For example, when I say "Hello how are you", I would want to delete the word "you" so that I only have "Hello how are". If I then speak again and say "I am good", my new transcript should be "Hello how are I am good".
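As far as I can tell, the library's transcript itself is read-only, so this kind of editing would live in app state: keep your own snapshot, edit it, and append new speech (resetting the library transcript between utterances). A sketch with two hypothetical helpers:

```javascript
// Remove the last word from a transcript snapshot kept in app state.
function dropLastWord(text) {
  return text.trim().split(/\s+/).slice(0, -1).join(' ');
}

// Append newly spoken text to the edited snapshot.
function append(base, extra) {
  return [base, extra].filter(Boolean).join(' ');
}

// dropLastWord('Hello how are you')            → 'Hello how are'
// append('Hello how are', 'I am good')         → 'Hello how are I am good'
```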
Hi, I am new to React and trying to use react-speech-recognition. I am trying to start and stop listening to the microphone by clicking buttons. However, it seems that it's still listening after I click the 'stop' button, and the transcription keeps updating.
Attaching the code here. Any advice will be helpful!
import React, { PropTypes, Component } from 'react'
import SpeechRecognition from 'react-speech-recognition'

const propTypes = {
  // Props injected by SpeechRecognition
  transcript: PropTypes.string,
  finalTranscript: PropTypes.string,
  startListening: PropTypes.func,
  abortListening: PropTypes.func,
  resetTranscript: PropTypes.func,
  browserSupportsSpeechRecognition: PropTypes.bool
}

const options = {
  autoStart: false
}

class Dictaphone extends Component {
  render() {
    const {
      transcript,
      finalTranscript,
      startListening,
      abortListening,
      resetTranscript,
      browserSupportsSpeechRecognition
    } = this.props
    if (!browserSupportsSpeechRecognition) {
      return null
    }
    return (
      <div>
        <button onClick={startListening}>Start</button>
        <button onClick={abortListening}>Stop</button>
        <button onClick={resetTranscript}>Reset</button>
        <p>{finalTranscript}</p>
      </div>
    )
  }
}

Dictaphone.propTypes = propTypes

export default SpeechRecognition(options)(Dictaphone)
(Sorry for the bad formatting!)
The graceful exit does not work on Safari, as it tries to do computation with the 'recognition' variable when it's null; Safari is not a supported browser.
Hence, a possible fix is adding the if statement (commented out below).
I've also added a pull request #17.
Will we get any callbacks when invoking async functions such as startListening, stopListening, and abortListening?
If so, how can we handle the callbacks?
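Since these functions return promises (true for v3, as far as I can tell; worth verifying against the installed version), ordinary promise handling can serve as the callback mechanism. A sketch with a hypothetical wrapper:

```javascript
// Wrap any of the async controls with success/failure hooks.
async function withCallbacks(action, onDone, onFailed) {
  try {
    await action();
    onDone();
  } catch (err) {
    onFailed(err);
  }
}

// e.g. withCallbacks(
//   () => SpeechRecognition.startListening({ continuous: true }),
//   () => console.log('listening started'),
//   (err) => console.error('failed to start', err)
// );
```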
Summary
I want to keep track of the raw interim transcript data accessible from recognition.onresult. As such, I wrote the following code:
const recognition = SpeechRecognition.getRecognition();
recognition.onresult = (e) => {
  // run custom code here...
}
interimTranscript does not fit my use case because I am looking to go beyond just a string; I want all the event data has to offer.
Observed behavior
When I try reading transcript, it is empty.
Expected behavior
I should be able to run code onresult and still be able to access the transcript through react-speech-recognition.
Hypothesis
Overriding recognition.onresult screws up the internals of react-speech-recognition, resulting in the transcript being empty.
Next steps
Can someone validate or invalidate my hypothesis? Have I included sufficient information in this report, or should I give more details?
If there is any work that needs to be done here (and I may be of assistance), I am available to make a PR.
Everything is working perfectly on my laptop, however, nothing seems to happen when I try to run the app on mobile (React App on a mobile browser). Is there mobile support / does the API automatically request microphone access like it does for desktop?
I see a similar problem popped up in Issue #22 , is version 3 any more successful in getting it to work on android browsers?
For instance, if there is no further voice input within 2 seconds, the recognition starts to check whether it is a valid command; if not, a special callback for non-valid commands is called.
Within this callback, the user may be able to resetTranscript or display some text feedback.
I've used this hook in a form where a textarea value gets updated with the spoken text. Works perfectly fine when the textarea is empty. However, if I have a default value set to the textarea, it gets reset on listening, which I'm guessing is the normal behaviour. But I don't want that. For eg. if I have "Default Notes Value" as the default value of the textarea and I start listening, the new content should append to the already present one and not replace it.
Here's a working example - https://codesandbox.io/s/adoring-wiles-q4tnf
I did try setting the default value state as the transcript on mount, but it says transcript is not a function:
useEffect(() => {
  notes && transcript(notes)
}, [])
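Note that `transcript` is a string, not a setter, which is why the snippet above throws. One sketch that avoids the reset: keep the default notes in state and derive the textarea value from notes plus transcript, committing the transcript into notes (and resetting it) when listening stops. `combine` is a hypothetical helper:

```javascript
// Derive what the textarea should display: the saved notes plus whatever
// has been spoken since the last commit.
function combine(notes, transcript) {
  return transcript ? `${notes} ${transcript}`.trim() : notes;
}

// In the component:
//   <textarea value={combine(notes, transcript)} />
// and when listening stops:
//   setNotes(combine(notes, transcript)); resetTranscript();
```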
This is in the readme and will throw an error:
import React, { PropTypes, Component } from 'react'
const propTypes = {
  // Props injected by SpeechRecognition
  transcript: PropTypes.string,
  resetTranscript: PropTypes.func,
  browserSupportsSpeechRecognition: PropTypes.bool
};
the error:
TypeError: Cannot read property 'string' of undefined
Note: React.PropTypes is deprecated as of React v15.5. Please use the prop-types library instead.
The readme should be updated to use:
import PropTypes from 'prop-types'
Also prop-types should now be a peer dependency if this is required.
I know the API lists a way to add an event listener to the onaudioend property of SpeechRecognition, but I've had no luck. Basically, I'm trying to get a function to fire whenever the speech recognition does its automatic stop (when listening isn't continuous), so that it can update a useState flag that tracks the listening status and shows the right button (on/off).
Do you have an example to show me?
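I can't confirm a documented hook for this, but one sketch (fragile, since it reaches into the underlying Web Speech object exposed by `SpeechRecognition.getRecognition()`): attach to `onend` while chaining any handler the library itself installed, so its internals keep working.

```javascript
// Attach an end handler to the raw recognition object, preserving any
// handler already installed (e.g. by the library) by calling it first.
function addOnEnd(recognition, handler) {
  const previous = recognition.onend;
  recognition.onend = (event) => {
    if (previous) previous.call(recognition, event);
    handler(event);
  };
}

// e.g. addOnEnd(SpeechRecognition.getRecognition(), () => setMicOn(false));
```

A less invasive alternative, if available in the installed version, would be to watch the `listening` flag returned by useSpeechRecognition instead.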
This is working for React but not for Next.js, which shows the error "ReferenceError: regeneratorRuntime is not defined".
It works on desktop, but on mobile I get the onend callback right after startListening.
I manually added an onerror callback, and it says a network error happened. But the network is on. Same problem on different devices.
SpeechRecognitionErrorEvent
  isTrusted: true
  error: "network"
  message: ""
  type: "error"
  target: SpeechRecognition {grammars: SpeechGrammarList, lang: "ru-RU", continuous: false, interimResults: true, maxAlternatives: 1, …}
  currentTarget: SpeechRecognition {…}
  eventPhase: 0
  bubbles: false
  cancelable: false
  defaultPrevented: false
  composed: false
  timeStamp: 15449.699999997392
  srcElement: SpeechRecognition {…}
  returnValue: true
  cancelBubble: false
  path: []
  __proto__: SpeechRecognitionErrorEvent
So speech recognition does support, for instance, Hebrew; however, when writing a command in Hebrew, the callback isn't invoked. According to the transcript it was spoken perfectly, but it did not activate the command callback that was set up. Also, resetTranscript does not work when called by handleReset (or perhaps handleReset is just not called), only via the spoken 'clear' command.
const { resetTranscript } = useSpeechRecognition()

const commands = [
  {
    command: 'clear',
    callback: ({ resetTranscript }) => resetTranscript()
  },
  {
    command: ['Everything is working', 'Nothing is working', 'Just the audio works', 'Just the video works', 'שלום'],
    callback: (command, spokenPhrase) => {
      sendAns(spokenPhrase)
      handleReset();
    },
    isFuzzyMatch: true,
    fuzzyMatchingThreshold: 0.2
  },
]

const { transcript } = useSpeechRecognition({ commands })

const handleReset = useCallback(() => {
  SpeechRecognition.stopListening()
  SpeechRecognition.startListening({
    continuous: true,
    language: 'he'
  })
}, []);

if (!SpeechRecognition.browserSupportsSpeechRecognition()) {
  return null
} else {
  SpeechRecognition.startListening({
    continuous: true,
    language: 'he'
  });
}
For reference, the sendAns command sends the spoken phrase back to the parent.
What is the best way to do this, do I have to define multiple commands with the same callback, or can a command be an array of possible strings which will call the same callback?
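Judging from the snippets elsewhere on this page (e.g. the fuzzy-match example above), a command can itself be an array of phrases sharing one callback, so multiple command objects shouldn't be needed. A sketch mirroring that pattern, with hypothetical phrases:

```javascript
// One callback serving several trigger phrases; with isFuzzyMatch, the
// matched phrase and the spoken phrase are passed to the callback, per the
// fuzzy-match example earlier on this page.
const commands = [
  {
    command: ['turn on the lights', 'lights on', 'illuminate'],
    callback: (command, spokenPhrase) => console.log('matched:', command, spokenPhrase),
    isFuzzyMatch: true,
    fuzzyMatchingThreshold: 0.2
  }
];
```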
Hello!
I am a newbie programmer, and I don't understand how to change recognition.lang as stated in the documentation. Can you clarify further, please? Thank you.
So, if you may recall, I was the one who requested the bestMatchOnly feature. I found a bug, though, when working with an array that included the following:
{
  command: ['item two', 'item four', 'item five', 'item six'],
  callback: (command) => videoCommandCallback(command),
  isFuzzyMatch: true,
  fuzzyMatchingThreshold: 0.6,
  bestMatchOnly: true
}
Whenever either "item four" or "item five" was spoken, the command that triggered the callback (aka the bestMatch) was in fact "item two".
Server Error
ReferenceError: regeneratorRuntime is not defined, for example at /node_modules/react-speech-recognition/lib/RecognitionManager.js:168:61
How do you mount SpeechRecognition onto a React class component? Ideally, I would like a method like onTranscriptionChanged: (text: string) => void in my component.
Hi, I've been using this for about 8 months now and it's worked great, but all of a sudden it won't work for me.
This happens in my main app, as well as in a simple app created with just the getting-started code.
It's not throwing an error or anything; the app compiles fine but then just does nothing in the browser.
Please advise. I know this is kind of ambiguous but I'm not sure what other info I can provide. Would appreciate any info greatly :)
Hi,
I don't see any mention of this requiring a network connection, however when I attempt to use it offline it does not respond. (No error thrown, it just never updates the Transcript).
Is it possible to use it offline?
Any help would be very much appreciated as I'm somewhere I cannot access the internet but still would like to use the app I'm making.
Just like 'Hi Siri' on iOS.