
WhisperLive

A nearly-live implementation of OpenAI's Whisper.

This project is a real-time transcription application that uses the OpenAI Whisper model to convert speech input into text output. It can be used to transcribe both live audio input from microphone and pre-recorded audio files.

Installation

  • Install PyAudio and ffmpeg
 bash scripts/setup.sh
  • Install whisper-live from pip
 pip install whisper-live
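
To confirm the installation, you can try importing the client class used later in this README (a minimal sketch; it assumes only that the whisper_live package installed by pip exposes TranscriptionClient, as shown in the client example below):

# Sanity check: import the client class installed by `pip install whisper-live`.
from whisper_live.client import TranscriptionClient
print("whisper-live import OK:", TranscriptionClient.__name__)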

Setting up NVIDIA/TensorRT-LLM for TensorRT backend

Getting Started

The server supports two backends: faster_whisper and tensorrt. If you are running the tensorrt backend, follow the TensorRT_whisper README.

Running the Server

  • Faster Whisper backend
python3 run_server.py --port 9090 \
                      --backend faster_whisper

# running with a custom model
python3 run_server.py --port 9090 \
                      --backend faster_whisper \
                      -fw "/path/to/custom/faster/whisper/model"
  • TensorRT backend. Currently, we recommend using only the Docker setup for TensorRT. Follow the TensorRT_whisper README, which works as expected. Make sure to build your TensorRT engines before running the server with the TensorRT backend.
# Run English only model
python3 run_server.py -p 9090 \
                      -b tensorrt \
                      -trt /home/TensorRT-LLM/examples/whisper/whisper_small_en

# Run Multilingual model
python3 run_server.py -p 9090 \
                      -b tensorrt \
                      -trt /home/TensorRT-LLM/examples/whisper/whisper_small \
                      -m

Running the Client

  • Initializing the client:
from whisper_live.client import TranscriptionClient
client = TranscriptionClient(
  "localhost",
  9090,
  lang="en",
  translate=False,
  model="small",
  use_vad=False,
)

It connects to the server running on localhost at port 9090. If you use a multilingual model, the language of the transcription will be detected automatically. You can also use the lang option to specify the target language for the transcription, in this case English ("en"). Set translate to True to translate from the source language into English, or to False to transcribe in the source language.
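
For example, a client set up to translate French speech into English might look like this (a minimal sketch reusing only the constructor arguments shown above; the lang value "fr" is an illustrative choice):

from whisper_live.client import TranscriptionClient

# Same constructor as above, but with translate=True so the server
# returns English translations instead of source-language text.
client = TranscriptionClient(
  "localhost",
  9090,
  lang="fr",       # source language (illustrative value)
  translate=True,  # translate into English
  model="small",
  use_vad=False,
)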

  • Transcribe an audio file:
client("tests/jfk.wav")
  • To transcribe from microphone:
client()
  • To transcribe from a HLS stream:
client(hls_url="http://as-hls-ww-live.akamaized.net/pool_904/live/ww/bbc_1xtra/bbc_1xtra.isml/bbc_1xtra-audio%3d96000.norewind.m3u8") 
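
Putting these together, you can also enable voice activity detection for microphone use via the use_vad flag from the constructor above (a minimal sketch under the same assumptions as the client example):

from whisper_live.client import TranscriptionClient

# Enable VAD so silent stretches of microphone audio are filtered out
# (use_vad is the constructor flag shown earlier).
client = TranscriptionClient(
  "localhost",
  9090,
  lang="en",
  translate=False,
  model="small",
  use_vad=True,
)
client()  # start transcribing from the microphone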

Browser Extensions

Whisper Live Server in Docker

  • GPU

    • Faster-Whisper
    docker run -it --gpus all -p 9090:9090 ghcr.io/collabora/whisperlive-gpu:latest
    • TensorRT. Follow the TensorRT_whisper README to set up Docker and use the TensorRT backend. We provide a pre-built Docker image with TensorRT-LLM built and ready to use.
  • CPU
    docker run -it -p 9090:9090 ghcr.io/collabora/whisperlive-cpu:latest

Note: By default we use the "small" model size. To build a Docker image for a different model size, change the size in server.py and then rebuild the image.

Future Work

  • Add translation to other languages on top of transcription.
  • TensorRT backend for Whisper.

Contact

We are available to help you with both Open Source and proprietary AI projects. You can reach us via the Collabora website or [email protected] and [email protected].

Citations

@article{Whisper,
  title = {Robust Speech Recognition via Large-Scale Weak Supervision},
  url = {https://arxiv.org/abs/2212.04356},
  author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
  publisher = {arXiv},
  year = {2022},
}
@misc{SileroVAD,
  author = {Silero Team},
  title = {Silero VAD: pre-trained enterprise-grade Voice Activity Detector (VAD), Number Detector and Language Classifier},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/snakers4/silero-vad}},
  email = {hello@silero.ai}
}
