ianblenke / whisperlive

This project forked from collabora/whisperlive

License: MIT License

whisper-live

A nearly-live implementation of OpenAI's Whisper.

This project is a real-time transcription application that uses the OpenAI Whisper model to convert speech input into text output. It can be used to transcribe both live audio input from a microphone and pre-recorded audio files.

Unlike traditional speech recognition systems that rely on continuous audio streaming, we use voice activity detection (VAD) to detect the presence of speech and only send the audio data to Whisper when speech is detected. This reduces the amount of data sent to the Whisper model and improves the accuracy of the transcription output.
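The VAD gating described above can be sketched with a toy energy threshold. This is a conceptual illustration only: WhisperLive actually uses the Silero VAD model, and the function names below are hypothetical.

```python
import math

def is_speech(chunk, threshold=0.01):
    """Toy VAD: treat a chunk as speech if its RMS energy exceeds a threshold.
    `chunk` is a list of float samples in [-1.0, 1.0]."""
    rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
    return rms > threshold

def gate_audio(chunks, threshold=0.01):
    """Yield only the chunks that look like speech; silent chunks are
    dropped instead of being sent to the transcription model."""
    for chunk in chunks:
        if is_speech(chunk, threshold):
            yield chunk

# One silent chunk and one 440 Hz tone chunk (0.1 s at 16 kHz):
silence = [0.0] * 1600
tone = [0.5 * math.sin(2 * math.pi * 440 * i / 16000) for i in range(1600)]
kept = list(gate_audio([silence, tone]))
print(len(kept))  # 1: only the energetic chunk would be forwarded
```

A real VAD model makes a per-chunk classification rather than a raw energy comparison, but the data flow is the same: chunks classified as silence never reach the Whisper model.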

Installation

  • Install PyAudio and ffmpeg

    ```bash
    bash setup.sh
    ```

  • Install whisper-live from pip

    ```bash
    pip install whisper-live
    ```

Getting Started

  • Run the server

    ```python
    from whisper_live.server import TranscriptionServer

    server = TranscriptionServer()
    server.run("0.0.0.0", 9090)
    ```
  • On the client side

    • To transcribe an audio file:

      ```python
      from whisper_live.client import TranscriptionClient

      client = TranscriptionClient("localhost", 9090, is_multilingual=True, lang="hi", translate=True)
      client(audio_file_path)
      ```

    This connects to the server running on localhost at port 9090 and transcribes the given audio file using the Whisper model. is_multilingual=True enables the multilingual feature, allowing transcription in languages other than English. The lang option specifies the language for transcription, in this case Hindi ("hi"). Set translate=True to translate from the source language into English, or translate=False to transcribe in the source language.

    • To transcribe from the microphone:

      ```python
      from whisper_live.client import TranscriptionClient

      client = TranscriptionClient(host, port, is_multilingual=True, lang="hi", translate=True)
      client()
      ```

    This command captures audio from the microphone and sends it to the server for transcription. It uses the same options as the previous command, enabling the multilingual feature and specifying the target language and task.

Transcribe audio from browser

  • Run the server

    ```python
    from whisper_live.server import TranscriptionServer

    server = TranscriptionServer()
    server.run("0.0.0.0", 9090)
    ```

This starts the WebSocket server on port 9090, which the browser extensions below connect to.
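A quick way to confirm the server is up before pointing a browser extension at it is a plain TCP reachability check. This stdlib-only sketch uses a helper name of our own; it only verifies that something is listening on the port, not the WhisperLive message protocol itself.

```python
import socket

def server_listening(host="localhost", port=9090, timeout=2.0):
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if server_listening():
    print("WebSocket server is reachable on port 9090")
else:
    print("Nothing is listening on port 9090 yet")
```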

Chrome Extension

Firefox Extension

Whisper Live Server in Docker

  • GPU

    ```bash
    docker build . -t whisper-live -f docker/Dockerfile.gpu
    docker run -it --gpus all -p 9090:9090 whisper-live:latest
    ```

  • CPU

    ```bash
    docker build . -t whisper-live -f docker/Dockerfile.cpu
    docker run -it -p 9090:9090 whisper-live:latest
    ```

Note: By default, the "small" model size is used. To build a Docker image for a different model size, change the model size in server.py and then rebuild the image.

Future Work

  • Add translation to other languages on top of transcription.
  • TensorRT backend for Whisper.

Contact

We are available to help you with both Open Source and proprietary AI projects. You can reach us via the Collabora website or [email protected] and [email protected].

Citations

```bibtex
@article{Whisper,
  title = {Robust Speech Recognition via Large-Scale Weak Supervision},
  url = {https://arxiv.org/abs/2212.04356},
  author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
  publisher = {arXiv},
  year = {2022},
}

@misc{SileroVAD,
  author = {Silero Team},
  title = {Silero VAD: pre-trained enterprise-grade Voice Activity Detector (VAD), Number Detector and Language Classifier},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/snakers4/silero-vad}},
  commit = {insert_some_commit_here},
  email = {hello@silero.ai}
}
```
