uberi / speech_recognition

Speech recognition module for Python, supporting several engines and APIs, online and offline.

Home Page: https://pypi.python.org/pypi/SpeechRecognition/

License: BSD 3-Clause "New" or "Revised" License

Languages: Python 99.61%, Shell 0.39%
Topics: python, audio, speech-recognition, speech-to-text

speech_recognition's Introduction

SpeechRecognition

(Badges: latest version, development status, supported Python versions, license, continuous integration test results)

Library for performing speech recognition, with support for several engines and APIs, online and offline.

UPDATE 2022-02-09: Hey everyone! This project started as a tech demo, but these days it needs more time than I have to keep up with all the PRs and issues. Therefore, I'd like to put out an open invite for collaborators - just reach out at [email protected] if you're interested!

Speech recognition engine/API support (each corresponds to a recognize_* method documented below):

  • CMU Sphinx (works offline)
  • Google Speech Recognition
  • Google Cloud Speech API
  • Wit.ai
  • Microsoft Bing Voice Recognition
  • Houndify
  • IBM Speech to Text
  • Vosk (works offline)
  • Whisper (works offline)
  • Whisper API

Quickstart: pip install SpeechRecognition. See the "Installing" section for more details.

To quickly try it out, run python -m speech_recognition after installing.
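For example, here is a minimal sketch of transcribing one phrase from the default microphone, assuming PyAudio is installed (recognize_google falls back to a default API key intended only for testing):

import speech_recognition as sr

r = sr.Recognizer()
with sr.Microphone() as source:  # requires PyAudio
    print("Say something!")
    audio = r.listen(source)
try:
    print("You said: " + r.recognize_google(audio))  # default test key; supply your own for production
except sr.UnknownValueError:
    print("Could not understand audio")
except sr.RequestError as e:
    print("Request failed; {0}".format(e))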

Project links:

Library Reference

The library reference documents every publicly accessible object in the library. This document is also included under reference/library-reference.rst.

See Notes on using PocketSphinx for information about installing languages, compiling PocketSphinx, and building language packs from online resources. This document is also included under reference/pocketsphinx.rst.

To use Vosk, you also have to install Vosk models. Models are available for download; place them in the models folder of your project, e.g. "your-project-folder/models/your-vosk-model".

Examples

See the examples/ directory in the repository root for usage examples.

Installing

First, make sure you have all the requirements listed in the "Requirements" section.

The easiest way to install this is using pip install SpeechRecognition.

Otherwise, download the source distribution from PyPI, and extract the archive.

In the folder, run python setup.py install.

Requirements

To use all of the functionality of the library, you should have:

  • Python 3.8+ (required)
  • PyAudio 0.2.11+ (required only if you need to use microphone input, Microphone)
  • PocketSphinx (required only if you need to use the Sphinx recognizer, recognizer_instance.recognize_sphinx)
  • Google API Client Library for Python (required only if you need to use the Google Cloud Speech API, recognizer_instance.recognize_google_cloud)
  • FLAC encoder (required only if the system is not x86-based Windows/Linux/OS X)
  • Vosk (required only if you need to use the Vosk recognizer, recognizer_instance.recognize_vosk)
  • Whisper (required only if you need to use Whisper, recognizer_instance.recognize_whisper)
  • openai (required only if you need to use the Whisper API, recognizer_instance.recognize_whisper_api)

The following requirements are optional, but can improve or extend functionality in some situations.

The following sections go over the details of each requirement.

Python

The first software requirement is Python 3.8+. This is required to use the library.

PyAudio (for microphone users)

PyAudio is required if and only if you want to use microphone input (Microphone). PyAudio version 0.2.11+ is required, as earlier versions have known memory management bugs when recording from microphones in certain situations.

If not installed, everything in the library will still work, except attempting to instantiate a Microphone object will raise an AttributeError.

The installation instructions on the PyAudio website are quite good - for convenience, they are summarized below:

  • On Windows, install PyAudio using Pip: execute pip install pyaudio in a terminal.
  • On Debian-derived Linux distributions (like Ubuntu and Mint), install PyAudio using APT: execute sudo apt-get install python-pyaudio python3-pyaudio in a terminal.
    • If the version in the repositories is too old, install the latest release using Pip: execute sudo apt-get install portaudio19-dev python-all-dev python3-all-dev && sudo pip install pyaudio (replace pip with pip3 if using Python 3).
  • On OS X, install PortAudio using Homebrew: brew install portaudio. Then, install PyAudio using Pip: pip install pyaudio.
  • On other POSIX-based systems, install the portaudio19-dev and python-all-dev (or python3-all-dev if using Python 3) packages (or their closest equivalents) using a package manager of your choice, and then install PyAudio using Pip: pip install pyaudio (replace pip with pip3 if using Python 3).

PyAudio wheel packages for common 64-bit Python versions on Windows and Linux are included for convenience, under the third-party/ directory in the repository root. To install, simply run pip install wheel followed by pip install ./third-party/WHEEL_FILENAME (replace pip with pip3 if using Python 3) in the repository root directory.

PocketSphinx-Python (for Sphinx users)

PocketSphinx-Python is required if and only if you want to use the Sphinx recognizer (recognizer_instance.recognize_sphinx).

PocketSphinx-Python wheel packages for 64-bit Python 3.4 and 3.5 on Windows are included for convenience, under the third-party/ directory. To install, simply run pip install wheel followed by pip install ./third-party/WHEEL_FILENAME (replace pip with pip3 if using Python 3) in the SpeechRecognition folder.

On Linux and other POSIX systems (such as OS X), follow the instructions under "Building PocketSphinx-Python from source" in Notes on using PocketSphinx for installation instructions.

Note that the versions available in most package repositories are outdated and will not work with the bundled language data. Using the bundled wheel packages or building from source is recommended.

See Notes on using PocketSphinx for information about installing languages, compiling PocketSphinx, and building language packs from online resources. This document is also included under reference/pocketsphinx.rst.
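For reference, a minimal sketch of offline recognition with Sphinx, assuming PocketSphinx-Python is installed; "english.wav" is a hypothetical mono WAV file:

import speech_recognition as sr

r = sr.Recognizer()
with sr.AudioFile("english.wav") as source:  # hypothetical audio file
    audio = r.record(source)  # read the entire file
try:
    print("Sphinx thinks you said: " + r.recognize_sphinx(audio))
except sr.UnknownValueError:
    print("Sphinx could not understand audio")
except sr.RequestError as e:
    print("Sphinx error; {0}".format(e))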

Vosk (for Vosk users)

Vosk API is required if and only if you want to use the Vosk recognizer (recognizer_instance.recognize_vosk).

You can install it with python3 -m pip install vosk.

You also have to install Vosk models:

Models are available for download. You have to place them in the models folder of your project, e.g. "your-project-folder/models/your-vosk-model".
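A minimal sketch, assuming the vosk package is installed and a model has been unpacked into models/ as described above ("english.wav" is a hypothetical file name):

import speech_recognition as sr

r = sr.Recognizer()
with sr.AudioFile("english.wav") as source:  # hypothetical audio file
    audio = r.record(source)
print(r.recognize_vosk(audio))  # returns a JSON string containing the transcription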

Google Cloud Speech Library for Python (for Google Cloud Speech API users)

Google Cloud Speech library for Python is required if and only if you want to use the Google Cloud Speech API (recognizer_instance.recognize_google_cloud).

If not installed, everything in the library will still work, except calling recognizer_instance.recognize_google_cloud will raise a RequestError.

According to the official installation instructions, the recommended way to install this is using Pip: execute pip install google-cloud-speech (replace pip with pip3 if using Python 3).
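A minimal sketch, assuming google-cloud-speech is installed; GOOGLE_CLOUD_SPEECH_CREDENTIALS is a hypothetical variable holding the contents of a service account JSON key:

import speech_recognition as sr

r = sr.Recognizer()
with sr.AudioFile("english.wav") as source:  # hypothetical audio file
    audio = r.record(source)
# GOOGLE_CLOUD_SPEECH_CREDENTIALS is a hypothetical placeholder for your JSON key contents
print(r.recognize_google_cloud(audio, credentials_json=GOOGLE_CLOUD_SPEECH_CREDENTIALS))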

FLAC (for some systems)

A FLAC encoder is required to encode the audio data to send to the API. If using Windows (x86 or x86-64), OS X (Intel Macs only, OS X 10.6 or higher), or Linux (x86 or x86-64), this is already bundled with this library - you do not need to install anything.

Otherwise, ensure that you have the flac command line tool, which is often available through the system package manager. For example, this would usually be sudo apt-get install flac on Debian-derivatives, or brew install flac on OS X with Homebrew.

Whisper (for Whisper users)

Whisper is required if and only if you want to use Whisper (recognizer_instance.recognize_whisper).

You can install it with python3 -m pip install SpeechRecognition[whisper-local].
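A minimal sketch of local Whisper recognition, assuming the whisper-local extra is installed ("base" is one of the standard Whisper model sizes; "english.wav" is a hypothetical file):

import speech_recognition as sr

r = sr.Recognizer()
with sr.AudioFile("english.wav") as source:  # hypothetical audio file
    audio = r.record(source)
print(r.recognize_whisper(audio, model="base", language="english"))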

Whisper API (for Whisper API users)

The openai library is required if and only if you want to use the Whisper API (recognizer_instance.recognize_whisper_api).

If not installed, everything in the library will still work, except calling recognizer_instance.recognize_whisper_api will raise a RequestError.

You can install it with python3 -m pip install SpeechRecognition[whisper-api].
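A minimal sketch, assuming the whisper-api extra is installed and you have an OpenAI API key (placeholder shown):

import speech_recognition as sr

r = sr.Recognizer()
with sr.AudioFile("english.wav") as source:  # hypothetical audio file
    audio = r.record(source)
print(r.recognize_whisper_api(audio, api_key="OPENAI_API_KEY_GOES_HERE"))  # placeholder key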

Troubleshooting

The recognizer tries to recognize speech even when I'm not speaking, or after I'm done speaking.

Try increasing the recognizer_instance.energy_threshold property. This controls how loud audio must be before the recognizer considers it speech: higher values make it less sensitive, which is useful if you are in a loud room.

This value depends entirely on your microphone or audio data. There is no one-size-fits-all value, but good values typically range from 50 to 4000.

Also, check on your microphone volume settings. If it is too sensitive, the microphone may be picking up a lot of ambient noise. If it is too insensitive, the microphone may be rejecting speech as just noise.
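For example, to make the recognizer less sensitive in a noisy room (4000 is an illustrative value; tune it to your microphone and environment):

import speech_recognition as sr

r = sr.Recognizer()
r.energy_threshold = 4000  # illustrative value for a loud environment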

The recognizer can't recognize speech right after it starts listening for the first time.

The recognizer_instance.energy_threshold property is probably set to a value that is too high to start off with, and is then adjusted lower automatically by dynamic energy threshold adjustment. Before it reaches a good level, the energy threshold is so high that speech is just considered ambient noise.

The solution is to decrease this threshold, or call recognizer_instance.adjust_for_ambient_noise beforehand, which will set the threshold to a good value automatically.
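A minimal sketch, assuming a working microphone:

import speech_recognition as sr

r = sr.Recognizer()
with sr.Microphone() as source:
    r.adjust_for_ambient_noise(source)  # calibrates r.energy_threshold from ambient noise
    audio = r.listen(source)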

The recognizer doesn't understand my particular language/dialect.

Try setting the recognition language to your language/dialect. To do this, see the documentation for recognizer_instance.recognize_sphinx, recognizer_instance.recognize_google, recognizer_instance.recognize_wit, recognizer_instance.recognize_bing, recognizer_instance.recognize_api, recognizer_instance.recognize_houndify, and recognizer_instance.recognize_ibm.

For example, if your language/dialect is British English, it is better to use "en-GB" as the language rather than "en-US".
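For example (a sketch reusing r and audio from the earlier examples):

text = r.recognize_google(audio, language="en-GB")  # IETF tag for British English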

The recognizer hangs on recognizer_instance.listen; specifically, when it's calling Microphone.MicrophoneStream.read.

This usually happens when you're using a Raspberry Pi board, which doesn't have audio input capabilities by itself. This causes the default microphone used by PyAudio to simply block when we try to read it. If you happen to be using a Raspberry Pi, you'll need a USB sound card (or USB microphone).

Once you do this, change all instances of Microphone() to Microphone(device_index=MICROPHONE_INDEX), where MICROPHONE_INDEX is the hardware-specific index of the microphone.

To figure out what the value of MICROPHONE_INDEX should be, run the following code:

import speech_recognition as sr
for index, name in enumerate(sr.Microphone.list_microphone_names()):
    print("Microphone with name \"{1}\" found for `Microphone(device_index={0})`".format(index, name))

This will print out something like the following:

Microphone with name "HDA Intel HDMI: 0 (hw:0,3)" found for `Microphone(device_index=0)`
Microphone with name "HDA Intel HDMI: 1 (hw:0,7)" found for `Microphone(device_index=1)`
Microphone with name "HDA Intel HDMI: 2 (hw:0,8)" found for `Microphone(device_index=2)`
Microphone with name "Blue Snowball: USB Audio (hw:1,0)" found for `Microphone(device_index=3)`
Microphone with name "hdmi" found for `Microphone(device_index=4)`
Microphone with name "pulse" found for `Microphone(device_index=5)`
Microphone with name "default" found for `Microphone(device_index=6)`

Now, to use the Snowball microphone, you would change Microphone() to Microphone(device_index=3).

Calling Microphone() gives the error IOError: No Default Input Device Available.

As the error says, the program doesn't know which microphone to use.

To proceed, either use Microphone(device_index=MICROPHONE_INDEX, ...) instead of Microphone(...), or set a default microphone in your OS. You can obtain possible values of MICROPHONE_INDEX using the code in the troubleshooting entry right above this one.

The program doesn't run when compiled with PyInstaller.

As of PyInstaller version 3.0, SpeechRecognition is supported out of the box. If you're getting weird issues when compiling your program using PyInstaller, simply update PyInstaller.

You can easily do this by running pip install --upgrade pyinstaller.

On Ubuntu/Debian, I get annoying output in the terminal saying things like "bt_audio_service_open: [...] Connection refused" and various others.

The "bt_audio_service_open" error means that you have a Bluetooth audio device, but as a physical device is not currently connected, we can't actually use it - if you're not using a Bluetooth microphone, then this can be safely ignored. If you are, and audio isn't working, then double check to make sure your microphone is actually connected. There does not seem to be a simple way to disable these messages.

For errors of the form "ALSA lib [...] Unknown PCM", see this StackOverflow answer. Basically, to get rid of an error of the form "Unknown PCM cards.pcm.rear", simply comment out pcm.rear cards.pcm.rear in /usr/share/alsa/alsa.conf, ~/.asoundrc, and /etc/asound.conf.

For "jack server is not running or cannot be started" or "connect(2) call to /dev/shm/jack-1000/default/jack_0 failed (err=No such file or directory)" or "attempt to connect to server failed", these are caused by ALSA trying to connect to JACK, and can be safely ignored. I'm not aware of any simple way to turn those messages off at this time, besides entirely disabling printing while starting the microphone.

On OS X, I get a ChildProcessError saying that it couldn't find the system FLAC converter, even though it's installed.

Installing FLAC for OS X directly from the source code will not work, since it doesn't correctly add the executables to the search path.

Installing FLAC using Homebrew ensures that the search path is correctly updated. First, ensure you have Homebrew, then run brew install flac to install the necessary files.

Developing

To hack on this library, first make sure you have all the requirements listed in the "Requirements" section.

  • Most of the library code lives in speech_recognition/__init__.py.
  • Examples live under the examples/ directory, and the demo script lives in speech_recognition/__main__.py.
  • The FLAC encoder binaries are in the speech_recognition/ directory.
  • Documentation can be found in the reference/ directory.
  • Third-party libraries, utilities, and reference material are in the third-party/ directory.

To install/reinstall the library locally, run python -m pip install -e .[dev] in the project root directory.

Before a release, the version number is bumped in README.rst and speech_recognition/__init__.py. Version tags are then created using git config gpg.program gpg2 && git config user.signingkey DB45F6C431DE7C2DCD99FF7904882258A4063489 && git tag -s VERSION_GOES_HERE -m "Version VERSION_GOES_HERE".

Releases are done by running make-release.sh VERSION_GOES_HERE to build the Python source packages, sign them, and upload them to PyPI.

Testing

To run all the tests:

python -m unittest discover --verbose

To run static analysis:

python -m flake8 --ignore=E501,E701,W503 speech_recognition tests examples setup.py

To ensure RST is well-formed:

python -m rstcheck README.rst reference/*.rst

Testing is also done automatically by GitHub Actions, upon every push.

FLAC Executables

The included flac-win32 executable is the official FLAC 1.3.2 32-bit Windows binary.

The included flac-linux-x86 and flac-linux-x86_64 executables are built from the FLAC 1.3.2 source code with Manylinux to ensure that it's compatible with a wide variety of distributions.

The built FLAC executables should be bit-for-bit reproducible. To rebuild them, run the following inside the project directory on a Debian-like system:

# download and extract the FLAC source code
cd third-party
sudo apt-get install --yes docker.io

# build FLAC inside the Manylinux i686 Docker image
tar xf flac-1.3.2.tar.xz
sudo docker run --tty --interactive --rm --volume "$(pwd):/root" quay.io/pypa/manylinux1_i686:latest bash
    cd /root/flac-1.3.2
    ./configure LDFLAGS=-static # compiler flags to make a static build
    make
exit
cp flac-1.3.2/src/flac/flac ../speech_recognition/flac-linux-x86 && sudo rm -rf flac-1.3.2/

# build FLAC inside the Manylinux x86_64 Docker image
tar xf flac-1.3.2.tar.xz
sudo docker run --tty --interactive --rm --volume "$(pwd):/root" quay.io/pypa/manylinux1_x86_64:latest bash
    cd /root/flac-1.3.2
    ./configure LDFLAGS=-static # compiler flags to make a static build
    make
exit
cp flac-1.3.2/src/flac/flac ../speech_recognition/flac-linux-x86_64 && sudo rm -r flac-1.3.2/

The included flac-mac executable is extracted from xACT 2.39, which is a frontend for FLAC 1.3.2 that conveniently includes binaries for all of its encoders. Specifically, it is a copy of xACT 2.39/xACT.app/Contents/Resources/flac in xACT2.39.zip.

Authors

Uberi <[email protected]> (Anthony Zhang)
bobsayshilol
arvindch <[email protected]> (Arvind Chembarpu)
kevinismith <[email protected]> (Kevin Smith)
haas85
DelightRun <[email protected]>
maverickagm
kamushadenes <[email protected]> (Kamus Hadenes)
sbraden <[email protected]> (Sarah Braden)
tb0hdan (Bohdan Turkynewych)
Thynix <[email protected]> (Steve Dougherty)
beeedy <[email protected]> (Broderick Carlin)

Please report bugs and suggestions at the issue tracker!

How to cite this library (APA style):

Zhang, A. (2017). Speech Recognition (Version 3.8) [Software]. Available from https://github.com/Uberi/speech_recognition#readme.

How to cite this library (Chicago style):

Zhang, Anthony. 2017. Speech Recognition (version 3.8).

Also check out the Python Baidu Yuyin API, which is based on an older version of this project, and adds support for Baidu Yuyin. Note that Baidu Yuyin is only available inside China.

License

Copyright 2014-2017 Anthony Zhang (Uberi). The source code for this library is available online at GitHub.

SpeechRecognition is made available under the 3-clause BSD license. See LICENSE.txt in the project's root directory for more information.

For convenience, all the official distributions of SpeechRecognition already include a copy of the necessary copyright notices and licenses. In your project, you can simply say that licensing information for SpeechRecognition can be found within the SpeechRecognition README, and make sure SpeechRecognition is visible to users if they wish to see it.

SpeechRecognition distributes source code, binaries, and language files from CMU Sphinx. These files are BSD-licensed and redistributable as long as copyright notices are correctly retained. See speech_recognition/pocketsphinx-data/*/LICENSE*.txt and third-party/LICENSE-Sphinx.txt for license details for individual parts.

SpeechRecognition distributes source code and binaries from PyAudio. These files are MIT-licensed and redistributable as long as copyright notices are correctly retained. See third-party/LICENSE-PyAudio.txt for license details.

SpeechRecognition distributes binaries from FLAC - speech_recognition/flac-win32.exe, speech_recognition/flac-linux-x86, and speech_recognition/flac-mac. These files are GPLv2-licensed and redistributable, as long as the terms of the GPL are satisfied. The FLAC binaries are an aggregate of separate programs, so these GPL restrictions do not apply to the library or your programs that use the library, only to FLAC itself. See LICENSE-FLAC.txt for license details.


speech_recognition's Issues

Error

Hello!
First, sorry for my English.

I have an error when I'm executing the script:

Say something!
Traceback (most recent call last):
  File "D:\DEV\PYTHON\eclipse\plugins\org.python.pydev_3.7.1.201409021729\pysrc\pydevd.py", line 2090, in <module>
    debugger.run(setup['file'], None, None)
  File "D:\DEV\PYTHON\eclipse\plugins\org.python.pydev_3.7.1.201409021729\pysrc\pydevd.py", line 1547, in run
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "D:\DEV\PYTHON\workspaceSpeechRecognition\TestSpeechRecognition\Main.py", line 270, in <module>
    audio = r.listen(source)
  File "D:\DEV\PYTHON\workspaceSpeechRecognition\TestSpeechRecognition\Main.py", line 159, in listen
    pause_buffer_count = math.ceil(self.pause_threshold / seconds_per_buffer) # number of buffers of quiet audio before the phrase is complete
ZeroDivisionError: float division by zero

I tried my mic in Audacity; it works.

When I debug the listen method:

 seconds_per_buffer = source.CHUNK / source.RATE
# source.CHUNK = 1024
# source.RATE = 16000

So seconds_per_buffer = 0 (in Python 2, dividing two integers truncates toward zero).

Is that normal?

Thanks

works with google's v2 (requires key)?

Google's speech API recently moved from v1 to v2, now requiring a key and developer registration. Does speech_recognition work with v2? From the code, it looks like v1 only.

IOError: [Errno Input overflowed] -9981

I got this error after the recognition worked OK once.

Source code:

import speech_recognition as sr

r = sr.Recognizer()
m = sr.Microphone()
m.RATE = 44100
m.CHUNK = 512

print("A moment of silence, please...")
with m as source:
    r.adjust_for_ambient_noise(source)
    print("Set minimum energy threshold to {}".format(r.energy_threshold))
    while True:
        print("Say something!")
        audio = r.listen(source)
        print("Got it! Now to recognize it...")
        try:
            print("You said " + r.recognize(audio))
        except LookupError:
            print("Oops! Didn't catch that")

Error message:

A moment of silence, please...
ALSA lib pcm_dmix.c:957:(snd_pcm_dmix_open) The dmix plugin supports only playback stream
Cannot connect to server socket err = No such file or directory
Cannot connect to server request channel
jack server is not running or cannot be started
Set minimum energy threshold to 350.037988116
Say something!
Got it! Now to recognize it...
You said hello testing testing
Say something!
Traceback (most recent call last):
  File "listen.py", line 14, in <module>
    audio = r.listen(source)
  File "/usr/local/lib/python2.7/dist-packages/speech_recognition/__init__.py", line 265, in listen
    buffer = source.stream.read(source.CHUNK)
  File "/usr/lib/pymodules/python2.7/pyaudio.py", line 564, in read
    return pa.read_stream(self._stream, num_frames)
IOError: [Errno Input overflowed] -9981

Error

Traceback (most recent call last):
  File "C:\Users\Jose\Desktop\audio.py", line 5, in <module>
    audio = r.record(source)
  File "C:\Python34\lib\site-packages\speech_recognition\__init__.py", line 139, in record
    return AudioData(source.RATE, self.samples_to_flac(source, frame_data))
  File "C:\Python34\lib\site-packages\speech_recognition\__init__.py", line 119, in samples_to_flac
    process = subprocess.Popen("\"%s\" --stdout --totally-silent --best -" % flac_converter, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
  File "C:\Python34\lib\subprocess.py", line 848, in __init__
    restore_signals, start_new_session)
  File "C:\Python34\lib\subprocess.py", line 1104, in _execute_child
    startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified

Why am I getting this error?

Use native Linux flac instead of custom binary

You should consider using the native flac executable available on most Linux distros (installable on Ubuntu with sudo apt-get install flac) instead of using that custom flac-linux-386 binary. It doesn't work on Linux anyways, because you're not setting your permissions correctly. I get "permission denied" when I tried running your script, and had to patch it to use normal flac.

Recognized Words

First of all, I love this module you've created. It really makes speech recognition under python incredibly simple and I thank you for all your hard work.
However, I am having a hard time getting this library to understand words that it doesn't want to. For instance "Codsworth" is not a word but a name. This is not recognized by the speech API and is causing me a great deal of trouble. Any help? Thanks

Works a couple of times then dies

Hi there,

This is a very cool library, thanks for putting it together.

Two things. When I ran the following command:

python -m speech_recognition

it didn't work until I changed str to value on line 21 of speech_recognition's __main__.

Once I did that, it works once, and sometimes a couple of times, but I always get the following:

You said DS
Say something!
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/speech_recognition/__main__.py", line 28, in <module>
    print("Uh oh! Couldn't request results from Google Speech Recognition service")
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/speech_recognition/__init__.py", line 81, in __exit__
    self.stream.stop_stream()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyaudio.py", line 524, in stop_stream
    pa.stop_stream(self._stream)
IOError: Stream not open

I'm not very sure what the problem is.

Just in case, here is the main script:

import speech_recognition as sr



r = sr.Recognizer()
m = sr.Microphone()

try:
    print("A moment of silence, please...")
    with m as source:
        r.adjust_for_ambient_noise(source)
        print("Set minimum energy threshold to {}".format(r.energy_threshold))
        while True:
            print("Say something!")
            audio = r.listen(source)
            print("Got it! Now to recognize it...")
            try:
                # recognize speech using Google Speech Recognition
                value = r.recognize_google(audio,key="AIzaSyB55R1xXklGJOGbYh_UpQ31MRWv_5LlG5c")


                # we need some special handling here to correctly print unicode characters to standard output
                if value is bytes: # this version of Python uses bytes for strings (Python 2)
                    print(u"You said {}".format(value).encode("utf-8"))
                else: # this version of Python uses unicode for strings (Python 3+)
                    print("You said {}".format(value))
            except sr.UnknownValueError:
                print("Oops! Didn't catch that")
            except sr.RequestError:
                print("Uh oh! Couldn't request results from Google Speech Recognition service")
except KeyboardInterrupt:
    pass

Doesn't work when speech sample is too long

Hi,

I'm trying to use it to recognize some long speech samples.

The first one is about 45s, but it only returns the first few words no matter how large the pause_threshold is. The second one is over 1 min and can't be recognized at all, only returning the exception message "request failed, ensure that key is correct and quota is not maxed out".

Is there a solution for this, or is the only way to segment my speech samples?

Thank you very much!

Audio must be mono

When running, I am getting the error below:

  File "/usr/lib/python2.6/site-packages/SpeechRecognition-1.1.4-py2.6.egg/speech_recognition/__init__.py", line 74, in __enter__
    assert self.CHANNELS == 1 # audio must be mono

I confirmed that the input audio is mono.

Latest code is failing for me

When I back out the latest changes, it works fine, but with the latest change it fails with the following error:

  File "/usr/local/lib/python2.7/dist-packages/SpeechRecognition-1.2.3-py2.7.egg/speech_recognition/__init__.py", line 137, in samples_to_flac
    with wave.open(wav_file, "wb") as wav_writer:
AttributeError: Wave_write instance has no attribute '__exit__'

Attribute Microphone

Hi, I am new to this topic.
So, I have installed SpeechRecognition without problems, but when I tried to test the example I got the message "AttributeError: 'module' object has no attribute 'Microphone'".
I am using Python 3.4, and SpeechRecognition was installed using the files from this page. Besides, I have updated using pip, but I still get the same message.
I don't know if SpeechRecognition only works in a specific version of Python, or if it is necessary to use other libraries or an extra microphone, because I am using the internal microphone of my laptop.
Thanks

Listen function doesn't detect the stop of the phrase

I am using the 4th example code. It works sometimes, but sometimes it doesn't seem to be able to detect the end of the phrase and keeps listening even when I am no longer speaking. After a long while, maybe 30 seconds or more of silence, it finally stops and sends the audio to Google. And sometimes it doesn't come back with anything, and there is no error message.

Attempt to connect to ALSA server failed : ZeroDivisionError

Hi,

I'm using the python2 version on archlinux with Python 2.7.8. I have pyAudio and flac installed.

When executing this:

import speech_recognition as sr
r = sr.Recognizer()
with sr.Microphone() as source:                # use the default microphone as the audio source
    audio = r.listen(source)                   # listen for the first phrase and extract it into audio data

try:
    print("You said " + r.recognize(audio))    # recognize speech using Google Speech Recognition
except LookupError:                            # speech is unintelligible
    print("Could not understand audio")

I get this:

ALSA lib pcm_dsnoop.c:618:(snd_pcm_dsnoop_open) unable to open slave
ALSA lib pcm_dmix.c:1022:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm_dmix.c:1022:(snd_pcm_dmix_open) unable to open slave
connect(2) call to /dev/shm/jack-1000/default/jack_0 failed (err=No such file or directory)
attempt to connect to server failed
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "/usr/lib/python2.7/site-packages/speech_recognition/__init__.py", line 159, in listen
    pause_buffer_count = math.ceil(self.pause_threshold / seconds_per_buffer) # number of buffers of quiet audio before the phrase is complete
ZeroDivisionError: float division by zero

Hope I didn't forget something specific.

Different results on different machines

I'm not sure if this counts as an "error" as such; however, I feel it is worth noting.
I used the exact same .wav file on two separate machines, one Windows, one Linux, and used the exact same Python script to access the API.

The Windows machine gave me the result "Don", whereas the Linux machine gave me "Dance" ("Don" not even being in the list of possible transcriptions).

Is there any reason for this, and is one OS better than the other?

Thanks,
Alexon

No error - no output

Hello,
I am developing voice recognition in virtual reality. I am using Vizard (32-bit) and Python 2.7.
I was able to install all the libraries needed, but still, when I run the code, I get no output.

(Screenshots of the code, installed packages, and output were attached to the original issue.)

Impossible to launch

Hello :D
I've downloaded your code and tested it.
The first time, I made a mistake and asked it to print audio inside with r.Microphone() as audio: ...; it printed something like <object ... at 0x...> on the screen.
I understood my error and put in the correct code (r.recognize(audio)) and printed the result, but nothing appeared. The program I've made is built to exit after printing the recognized text, but it did not exit...
What's the problem?
My code:

from pygame.locals import *
import speech_recognition as sr
import urllib
import pygame
import pickle
import re
import os

class SpeechRecogntion:
    def __init__(self):
        self.recognizer = sr.Recognizer("en-GB")

    def recognize(self):
        with sr.Microphone() as source:
            audio = self.recognizer.listen(source)
        try:
            return self.recognizer.recognize(audio)
        except LookupError:
            return ""

class InnovativeProposal:
    def __init__(self):
        pass

class Interface:
    def __init__(self):
        self.name = "Janiswö"
        self.version = "0.0.1-a"
        self.thinking_array = InnovativeProposal()
        self.recognizer = SpeechRecogntion()
        self.save_file = "test."
        self.location = os.path.dirname(os.path.abspath(__file__)) + "\\"
        self.i = ""
        self.o = ""
        self.load()

    def start(self):
        print("Starting " + self.name + " v" + self.version + " ...")
        print(self.recognizer.recognize())
        self.save()

    def load(self):
        if os.path.exists(self.location + self.save_file):
            with open(self.location + self.save_file, "rb") as saved_rb:
                self.thinking_array = pickle.Unpickler(saved_rb).load()

    def save(self):
        with open(self.location + self.save_file, "wb") as saved_wb:
            pickle.Pickler(saved_wb).dump("")

pip installation is not working

I tried pip installation on Xubuntu 14.04 and got this:

File "<string>", line 17, in <module>
      File "/tmp/pip_build_meruem/speechrecognition/setup.py", line 4, in <module>
        import speech_recognition
      File "speech_recognition/__init__.py", line 9, in <module>
        import json, urllib.request
    ImportError: No module named request
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):

  File "<string>", line 17, in <module>

  File "/tmp/pip_build_meruem/speechrecognition/setup.py", line 4, in <module>

    import speech_recognition

  File "speech_recognition/__init__.py", line 9, in <module>

    import json, urllib.request

ImportError: No module named request

So I downloaded the .zip and tried python setup.py install, but I got this:

Traceback (most recent call last):
  File "setup.py", line 4, in <module>
    import speech_recognition
  File "/home/meruem/Descargas/SpeechRecognition-1.0.4/speech_recognition/__init__.py", line 9, in <module>
    import json, urllib.request
ImportError: No module named request

because python means python2.7, but urllib.request is a Python 3 package. So I tried to run setup.py as a Python 3 script with python3 setup.py install, but it said I don't have permissions, so I added sudo and got this:

Traceback (most recent call last):
  File "setup.py", line 32, in <module>
    "Topic :: Software Development :: Libraries :: Python Modules",
  File "/usr/lib/python3.4/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/usr/lib/python3.4/distutils/dist.py", line 955, in run_commands
    self.run_command(cmd)
  File "/usr/lib/python3.4/distutils/dist.py", line 974, in run_command
    cmd_obj.run()
  File "/usr/lib/python3/dist-packages/setuptools/command/install.py", line 73, in run
    self.do_egg_install()
  File "/usr/lib/python3/dist-packages/setuptools/command/install.py", line 88, in do_egg_install
    self.run_command('bdist_egg')
  File "/usr/lib/python3.4/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/usr/lib/python3.4/distutils/dist.py", line 974, in run_command
    cmd_obj.run()
  File "/usr/lib/python3/dist-packages/setuptools/command/bdist_egg.py", line 231, in run
    os.path.join(archive_root,'EGG-INFO'), self.zip_safe()
  File "/usr/lib/python3/dist-packages/setuptools/command/bdist_egg.py", line 270, in zip_safe
    return analyze_egg(self.bdist_dir, self.stubs)
  File "/usr/lib/python3/dist-packages/setuptools/command/bdist_egg.py", line 383, in analyze_egg
    safe = scan_module(egg_dir, base, name, stubs) and safe
  File "/usr/lib/python3/dist-packages/setuptools/command/bdist_egg.py", line 414, in scan_module
    code = marshal.load(f); f.close()
ValueError: bad marshal data (unknown type code)

recognizer.listen() will not stop unless I manually disable my microphone

Hi again,

Thank you for fixing the division by zero error. Now I have another issue.
It occurs when using this sample code:

# NOTE: this requires PyAudio because it uses the Microphone class
import speech_recognition as sr
r = sr.Recognizer()
with sr.Microphone() as source:                # use the default microphone as the audio source
    audio = r.listen(source)                   # listen for the first phrase and extract it into audio data

try:
    print("You said " + r.recognize(audio))    # recognize speech using Google Speech Recognition
except LookupError:                            # speech is unintelligible
    print("Could not understand audio")

As you know, when listen() is called, it starts waiting for me to speak. After I'm done talking, even though the room is almost completely silent, it simply doesn't move on. At that point the only thing I can do to force it to continue is to manually disable my microphone in my control panel. Then it goes to the try/except part of the code.

I've tried tweaking silent_threshold and energy_threshold but I can't get it to work.

Again, I'm not sure what information I should provide. Feel free to ask whatever's necessary. I'm running Archlinux with Gnome, PyAudio is installed.

listen/listen_in_background don't work when WavFile is being recorded and used as an AudioSource

Hi I'm trying to invoke listen_in_background passing a WavFile which is being recorded by another process using sox rec command.
The problem is that only the first sentence is recognized, and the callback function keeps being called for that same sentence over and over again.
Have you guys tried that before? Is there a way to work around this?

See below my test snippet:

import os
import speech_recognition as sr

def callback(recognizer, audio):                          # this is called from the background thread
    try:
        print("You said " + recognizer.recognize(audio))  # received audio data, now need to recognize it
    except LookupError:
        print("Oops! Didn't catch that")

r = sr.Recognizer()
# the file below is being recorded via sox
# rec -r 8000 -c 1 record_voice.wav
wf = os.path.abspath("record_voice.wav")
audio = sr.WavFile(wf)
r.listen_in_background(audio, callback)

import time
while True: time.sleep(0.1)

Just checked and the same happens if I use listen:

while True:
    with sr.WavFile(wf) as source:
        audio = r.listen(source)
    print("Got it! Now to recognize it...")
    try:
        print("You said " + r.recognize(audio))
    except LookupError:
        print("Oops! Didn't catch that")

wit.ai backend

Hi. I've patched your module to be able to use the wit.ai engine as a backend instead of google. If you're interested in integrating this into your project the patch is here:
https://gist.github.com/maverickagm/56ee25f830ac4440cc70

To switch to the wit.ai backend you must specify stt_engine as 'wit' and provide a valid key:
r = sr.Recognizer()
r.stt_engine = 'wit'
r.key = 'my_wit_api_key_here'

In the recognize_wit() function, the show_all variable now toggles whether just the recognized text or the entire JSON object (including entities) is returned.

Thanks.

Recognize audio in real time

Hey Uberi,
I love the work you've done with this and just want to say keep up the good work!

I do have one tiny problem though. I'm trying to get this to recognize speech in real time (or as close as possible) and I'm having a bit of trouble. The idea is to have it print each word as I say it instead of waiting for the designated amount of silence. The "offset" parameter does help with this, but like your documentation says, this can lead to wildly inaccurate results.

I was wondering if you had any advice on how to make this work better or if you were working on it.
Thanks!

Google Speech API v2 access control

I would like to use Google Speech API v2 to generate ASR (automatic speech recognition) results for .wav files for experimental use.

I made some changes to your project and applied for my own API key, but I can only access the API about 500 times a day.

Do you know how to make 5000 or more requests (accesses to the API)? Or how does Google block us?

Thanks very much !!!

Error when trying to initialize

I am using speech_recognition on a raspberry pi running a version of linux, and I get this error when I try and run my program

ALSA lib pcm_dmix.c:957:(snd_pcm_dmix_open) The dmix plugin supports only playback stream
ALSA lib pcm_dmix.c:957:(snd_pcm_dmix_open) The dmix plugin supports only playback stream
Expression 'parameters->channelCount <= maxChans' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1438
Expression 'ValidateParameters( inputParameters, hostApi, StreamDirection_In )' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 2742
Traceback (most recent call last):
  File "AI.py", line 62, in <module>
    with sr.Microphone(0) as source:
  File "/usr/local/lib/python2.7/dist-packages/SpeechRecognition-1.4.0-py2.7.egg/speech_recognition/__init__.py", line 61, in __enter__
    input = True, # stream is an input stream
  File "/usr/local/lib/python2.7/dist-packages/pyaudio.py", line 747, in open
    stream = Stream(self, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/pyaudio.py", line 442, in __init__
    self._stream = pa.open(**arguments)
IOError: [Errno Invalid number of channels] -9998

This error appears at this line of code:

with sr.Microphone(0) as source:
    print("Got passed")
    audio = r.listen(source)

The same program works fine when I am running it on OS X. I believe it is caused by the PyAudio channel it is using, as I got the same error 'IOError: [Errno Invalid number of channels] -9998' using record.py (a test file from the PyAudio folder), but when I changed its channel to 0 it worked.

Any help would be greatly appreciated, thanks.

Add model to recognize_ibm

Add a model argument to def recognize_ibm(), as in the code below, to allow selecting the Wideband or Narrowband model, defaulting to the Narrowband model. Note the bandmodel and model variables.

I'm requesting this because while using speech_recognition I was getting an HTTPError exception telling me "request failed, ensure that username and password are correct" when I knew the username and password were correct. The actual cause was that I was using a narrowband wav while speech_recognition was trying to upload it as wideband. According to IBM's documentation, submitting a wideband audio source with the narrowband model should work fine, but not the other way around.

As per IBM's Speech to Text API documentation:
"The service automatically adjusts the incoming sampling rate to match the model. In theory, therefore, you can send 44 KHz audio with the narrowband model. Note, however, that the service does not accept audio sampled at a lower rate than the intrinsic sample rate of the model."

    def recognize_ibm(self, audio_data, username, password, bandmodel, language = "en-US", show_all = False):
        """
        Performs speech recognition on ``audio_data`` (an ``AudioData`` instance), using the IBM Speech to Text API.

        The IBM Speech to Text username and password are specified by ``username`` and ``password``, respectively. Unfortunately, these are not available without an account. IBM has published instructions for obtaining these credentials in the `IBM Watson Developer Cloud documentation <https://www.ibm.com/smarterplanet/us/en/ibmwatson/developercloud/doc/getting_started/gs-credentials.shtml>`__.

        The band model is determined by ``bandmodel``, supported models are ``"WideBand"`` 16Khz sampling rate or ``"Narrowband"`` 8Khz sampling rate, defaulting to Narrowband. IBMs Speech to Text API  automatically adjusts the incoming sampling rate to match the model, however does not accept audio sampled at a lower rate than the intrinsic sample rate of the model.

        The recognition language is determined by ``language``, an IETF language tag with a dialect like ``"en-US"`` or ``"es-ES"``, defaulting to US English. At the moment, this supports the tags ``"en-US"``, ``"es-ES"``, and ``"ja-JP"``.

        Returns the most likely transcription if ``show_all`` is false (the default). Otherwise, returns the `raw API response <http://www.ibm.com/smarterplanet/us/en/ibmwatson/developercloud/speech-to-text/api/v1/#recognize>`__ as a JSON dictionary.

        Raises a ``speech_recognition.UnknownValueError`` exception if the speech is unintelligible. Raises a ``speech_recognition.RequestError`` exception if the key isn't valid, or there is no internet connection.
        """
        assert isinstance(audio_data, AudioData), "Data must be audio data"
        assert isinstance(username, str), "`username` must be a string"
        assert isinstance(password, str), "`password` must be a string"
        assert bandmodel in ["Wideband", "Narrowband"], "`model` must be a valid model"
        assert language in ["en-US", "es-ES", "ja-JP"], "`language` must be a valid language."

        flac_data = audio_data.get_flac_data()
        if bandmodel is not None:
            model = "{0}_{1}".format(language, bandmodel)
        else:
            model = "{0}_Narrowband".format(language)
        url = "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize?continuous=true&model={0}".format(model)
        request = Request(url, data = flac_data, headers = {"Content-Type": "audio/x-flac"})
        if hasattr("", "encode"):
            authorization_value = base64.standard_b64encode("{0}:{1}".format(username, password).encode("utf-8")).decode("utf-8")
        else:
            authorization_value = base64.standard_b64encode("{0}:{1}".format(username, password))
        request.add_header("Authorization", "Basic {0}".format(authorization_value))
        try:
            response = urlopen(request)
        except HTTPError:
            raise RequestError("request failed, ensure that username and password are correct")
        except URLError:
            raise RequestError("no internet connection available to transfer audio data")
        response_text = response.read().decode("utf-8")
        result = json.loads(response_text)

        if show_all: return result

        if "results" not in result or len(result["results"]) < 1 or "alternatives" not in result["results"][0]:
            raise UnknownValueError()
        for entry in result["results"][0]["alternatives"]:
            if "transcript" in entry: return entry["transcript"]

        # no transcriptions available
        raise UnknownValueError()

Confidence always "1"

Hello all,

I've been using this function for a while with good results, but since last Tuesday I'm always getting "1" in the confidence score, no matter if the text is well recognized or not.

Do you guys know why this could be happening?

Any help would be appreciated!!

Regards

possible to do recognize_google() using .flac files directly?

Very nice project! Thanks for your work on this.

Perhaps there's an easy way to do the following, sorry if I missed it. In my application (I contribute to PsychoPy), I have speech snippets saved on disk, and they are often already in .flac format to save disk space. Is there a way to pass those files directly to google using SR, or does the SR API structure mean that my application has to convert them to .wav, pass the new file name to SR, and SR then converts from .wav back to .flac for sending to google?

example wav_transcribe.py doesn't work

When I try to run python wav_transcribe.py with my abc.wav (of course, this file is defined in the script), it says:

Google Speech Recognition thinks you said what are Dinobots

I have a Google Cloud account and a valid API key (type: host) (however, I could not find a Speech Recognition API in its cloud platform). When I use this key in the script, what I get is:

Could not request results from Google Speech Recognition service

So there are 2 questions:

  • How to make example work?
  • where to get API key? no matter free or pay.

How to pause recording ?

Hello, is there any way to pause recording for a certain amount of time?
Update: I might have found a way by setting the audio input to zero and using time.sleep, but a built-in feature would be great.

Google Speech API

Does this library always work? I've seen that the Google Speech API can be requested only 50 times a day, so can this be used as much as wanted even though it is using the Google Speech API? Also, do I need to use my own API key, or can I use the default one?

Thanks

Can't get it to work

Hello! I tried SpeechRecognition 1.1.4 with 64-bit Python 3.4.0 and 64-bit PyAudio 0.2.8 for Python 3.4, but it does not work on my computer. I'm running a 64-bit Windows 8.1 OS.

With the default test code:

import speech_recognition as sr
r = sr.Recognizer()
with sr.Microphone() as source: # use the default microphone as the audio source
    audio = r.listen(source) # listen for the first phrase and extract it into audio data

try:
    print("You said " + r.recognize(audio)) # recognize speech using Google Speech Recognition
except LookupError: # speech is unintelligible
    print("Could not understand audio")

nothing happens (the microphone seems not to be detected)

So I tried to replace:

with sr.Microphone() as source:

by:

with sr.Microphone(2) as source:

and I get this error:

Traceback (most recent call last):
  File "C:/Users/yul/Desktop/test.py", line 3, in <module>
    with sr.Microphone(2) as source: # use the default microphone as the audio source
  File "C:\Python34\lib\site-packages\speechrecognition-1.1.4-py3.4.egg\speech_recognition\__init__.py", line 47, in __enter__
    input = True, # stream is an input stream
  File "C:\Python34\lib\site-packages\pyaudio.py", line 747, in open
    stream = Stream(self, *args, **kwargs)
  File "C:\Python34\lib\site-packages\pyaudio.py", line 442, in __init__
    self._stream = pa.open(**arguments)
OSError: [Errno Invalid number of channels] -9998

Do I have to configure something in pyaudio to make this work? Thanks very much for your answer!!

Does not stop recording in speech recognition python

I was trying the example code for the first time

import speech_recognition as sr
r = sr.Recognizer(language="en-US")
r.pause_threshold = 0.6
with sr.Microphone() as source:
    audio = r.adjust_for_ambient_noise(source)
    print "Speak Now"
    audio = r.listen(source, timeout=1) # listen for the first phrase and extract it into audio data

try:
    print("You said " + r.recognize(audio)) # recognize speech using Google Speech Recognition
except LookupError: # speech is unintelligible
    print("Could not understand audio")
except IndexError:
    print("No internet")
except KeyError:
    print("quota maxed out")

But I am getting this:

ALSA lib pcm_dsnoop.c:618:(snd_pcm_dsnoop_open) unable to open slave
ALSA lib pcm_dmix.c:1022:(snd_pcm_dmix_open) unable to open slave
bt_audio_service_open: connect() failed: Connection refused (111)
bt_audio_service_open: connect() failed: Connection refused (111)
bt_audio_service_open: connect() failed: Connection refused (111)
bt_audio_service_open: connect() failed: Connection refused (111)
ALSA lib dlmisc.c:252:(snd1_dlobj_cache_get) Cannot open shared library /usr/lib/x86_64-linux-gnu/alsa-lib/libasound_module_pcm_equal.so
ALSA lib dlmisc.c:252:(snd1_dlobj_cache_get) Cannot open shared library /usr/lib/x86_64-linux-gnu/alsa-lib/libasound_module_pcm_equal.so
ALSA lib pcm_dmix.c:1022:(snd_pcm_dmix_open) unable to open slave
Cannot connect to server socket err = No such file or directory
Cannot connect to server request channel
jack server is not running or cannot be started
Speak now
and it is then stuck there.

What should I do?

Python SpeechRecognition 1.1.3 with usb microphone

I'm using the Python Speech Recognition library to recognize speech input from the microphone.

This works fine with my default microphone.
This is the code I'm using. According to what I understood of the documentation

Creates a new Microphone instance, which represents a physical
microphone on the computer. Subclass of AudioSource.

If device_index is unspecified or None, the default microphone is used
as the audio source. Otherwise, device_index should be the index of
the device to use for audio input. https://pypi.python.org/pypi/SpeechRecognition/

The problem is that when I try to get the device index with pyaudio.get_device_count() - 1, I'm getting this error:

AttributeError: 'module' object has no attribute 'get_device_count'

So I'm not sure how to configure the microphone to use a usb microphone

import pyaudio
import speech_recognition as sr

index = pyaudio.get_device_count() - 1
print index

r = sr.Recognizer()

with sr.Microphone(index) as source: 
    audio = r.listen(source) 

try:
    print("You said " + r.recognize(audio))   
except LookupError:                           
    print("Could not understand audio")

Flac linux permission denied ubuntu

Tonight my code was just hanging when it was supposed to record. Then I tried python -m speech_recognition. It was running, but instead of decoding, it suddenly started showing a permission denied error today for /usr/local/lib/python2.7/dist-packages/speech_recognition/flac-linux-i386:

Bug in __init__.py

There is a bug in __init__.py on line 43.
Instead of pyaudio.get_device_count() there should be pyaudio.PyAudio().get_device_count().
The exception will happen only if device_index is not None.

Issues with PulseAudio (Microphones)

I am on Arch Linux with python 3.4 and pyaudio 0.2.8-1 and have this issue:
ALSA lib pcm.c:2267:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2267:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_lfe
ALSA lib pcm.c:2267:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
ALSA lib pcm_route.c:867:(find_matching_chmap) Found no matching channel map
ALSA lib pcm_route.c:867:(find_matching_chmap) Found no matching channel map
connect(2) call to /dev/shm/jack-1000/default/jack_0 failed (err=No such file or directory)
attempt to connect to server failed

Is it possible to specify PulseAudio instead of ALSA?

UnicodeEncodeError: 'ascii' codec can't encode character

Hey there,

I get this exception

UnicodeEncodeError: 'ascii' codec can't encode character u'\xf6' in position 56: ordinal not in range(128)

when using your example code (in Python 2.7.10) and setting the language to German. The character should be 'ö'.

I did some research and using

UTF8Writer = codecs.getwriter('utf8')
sys.stdout = UTF8Writer(sys.stdout)

at the very beginning of the example code solved the problem.

Maybe you could/should add this to the speech recognition code itself, if you're giving people the choice to use any language they like :)

about api keys and internet connection

Hello, thank you for the bugfix. I could install speech_recognition by using the repository and running setup.py, but pip installation is still not working well.

Do I need an internet connection to use the library? I tested offline and it said that the "key was not working"; I tested online and it worked properly. Is it possible to recognize a language different from English?

I created my own key as described on the Chromium developers website (new project, Speech API enabled, new ID, new browser-type key), but how can I replace the default key with my own key, or what can I do to repair the old one? Do I need an internet connection to run this library?

Dynamic energy_threshold

I found that a static energy_threshold wasn't working well for me. I changed it to dynamically adjust to the ambient noise level.
self.energy_threshold = self.energy_threshold * .9 + (energy * 1.05) * .1

This slowly trends toward an energy threshold of 1.05 times the ambient noise level. It works pretty well for me. I'm happy to provide my code if you think it would be useful / accepted.
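For reference, a hedged sketch of the proposed update as a standalone function (the names here are illustrative, not from the library):

def update_threshold(threshold, energy, damping=0.9, target_ratio=1.05):
    # exponential moving average: each chunk's energy nudges the
    # threshold toward 1.05x the ambient energy level
    return threshold * damping + (energy * target_ratio) * (1 - damping)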

How can I set a duration for the listen_in_background method

It would be nice to set a duration for the listen_in_background method. If it listens to a long-winded speaker who doesn't stop talking for minutes, it waits until there is no audio before recognizing the audio. It would be nice to process this in smaller chunks. For example, start a new record process for 10 seconds, then send that to recognize while another record process starts, and so on...

WAV Results are way off

I'm trying to use speech_recognition 3.1.2 with Python 3.4, but I've been having trouble the entire time.

Initially, when trying to use just the example WAV recognizer, I was getting TypeError: 'str' does not support the buffer interface, so I combed through the source and made the following change:

    def read(self, size = -1):
        buffer = self.wav_reader.readframes(self.wav_reader.getnframes() if size == -1 else size)
        if type(buffer) is str:
            buffer = buffer.encode(encoding="utf-8", errors="strict")
            print(buffer)
        if self.wav_reader.getnchannels() != 1: # stereo audio
            try:
                buffer = audioop.tomono(buffer, self.wav_reader.getsampwidth(), 1, 1) # convert stereo audio data to mono
            except Exception as e:
                print(e)
        return buffer

from:

    def read(self, size = -1):
        buffer = self.wav_reader.readframes(self.wav_reader.getnframes() if size == -1 else size)
        if self.wav_reader.getnchannels() != 1: # stereo audio
            buffer = audioop.tomono(buffer, self.wav_reader.getsampwidth(), 1, 1) # convert stereo audio data to mono
        return buffer

While it doesn't throw an error now, the transcription quality is terrible. I can run python -m speech_recognition with great accuracy, so I'm not sure what is happening. I upped the energy_threshold to 4000 to make sure it wasn't an ambient noise issue. I even used 2 different recognition services (IBM and Google Speech Recognition). Also, for some reason the last 2 buffers are empty strings, which I then have to convert to bytes objects:

b''
b''

Any advice would be awesome!

Invalid Sample Rate

Is there any way to specify the sample rate that should be used, for recording via the microphone.

I'm using a Bluetooth headset and I have no problem recording via arecord at 8000; however, when I attempt to utilize SpeechRecognition-1.4.0, an exception is generated in speech_recognition/__init__.py (line 61) / pyaudio.py (line 396) stating "IOError: [Errno Invalid sample rate] -9997". I believe that you are defaulting to a rate of 16000 and I would like to change it to 8000.

Edit: I'll need to hone in more on the root cause. For grins, I directly edited the __init__.py source, changing the sample rate, and while it no longer throws an error, it also does not record audio from the Bluetooth headset (even though arecord records audio just fine).

Python 3.5 stuff

  • New PyAudio Windows binaries.
  • Make sure everything is fully compatible with 3.5.

Not recognizing in case of nouns

Hello, if I say my name or something like "sudo" or "apt-get", it says it could not understand the audio. Is there any option to resolve this?

Regards,
chaitanya.

Chunked Requests

Would be nice to have support for multipart requests to allow for longer audio.
