algo-boys / swr2-asr

Automatic speech recognition model for the Spoken Word Recognition seminar (SWR2) Tübingen

Python 78.82% Shell 0.73% Jupyter Notebook 18.83% Dockerfile 1.61%
automatic-speech-recognition ctc-decode deepspeech german-language tuebingen

swr2-asr's Introduction

SWR2-ASR

Automatic speech recognition model for the seminar "Spoken Word Recognition 2 (SWR2)" by Konstantin Sering in the summer term 2023.

Authors: Silja Kasper, Marvin Borner, Philipp Merkel, Valentin Schmidt

Dataset

We use the German Multilingual LibriSpeech dataset (mls_german_opus). If the dataset is not found at the specified path, it will be downloaded automatically.

If you want to train this model on custom data, this code expects a folder structure like this:

<dataset_path>
  ├── <language>
  │  ├── train
  │  │  ├── transcripts.txt
  │  │  └── audio
  │  │     └── <speakerid>
  │  │        └── <bookid>
  │  │           └── <speakerid>_<bookid>_<chapterid>.opus/.flac
  │  ├── dev
  │  │  ├── transcripts.txt
  │  │  └── audio
  │  │     └── <speakerid>
  │  │        └── <bookid>
  │  │           └── <speakerid>_<bookid>_<chapterid>.opus/.flac
  │  └── test
  │     ├── transcripts.txt
  │     └── audio
  │        └── <speakerid>
  │           └── <bookid>
  │              └── <speakerid>_<bookid>_<chapterid>.opus/.flac
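
For reference, each transcripts.txt maps an utterance ID to its transcription, one tab-separated pair per line (the MLS convention). A minimal parsing sketch; the helper name is ours, not the repo's:

def read_transcripts(path):
    """Parse an MLS-style transcripts.txt into {utterance_id: transcript}."""
    transcripts = {}
    with open(path, encoding="utf-8") as file:
        for line in file:
            utterance_id, text = line.rstrip("\n").split("\t", maxsplit=1)
            transcripts[utterance_id] = text
    return transcripts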

Installation

The preferred method of installation is using poetry. After installing poetry, run

poetry install

to install all dependencies. poetry also enables you to run our scripts using

poetry run SCRIPT_NAME

Alternatively, you can use the provided requirements.txt file to install the dependencies using pip or conda.
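
For example, with pip:

pip install -r requirements.txt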

Usage

Tokenizer

We include a pre-trained character-level tokenizer for the German language in the data/tokenizers directory.

If the tokenizer path specified in the config.yaml file does not exist or is None (~), a new tokenizer will be trained on the training data.
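
Conceptually, a character-level tokenizer just maps characters to integer IDs and back. A minimal sketch, with illustrative special tokens (not the repo's exact implementation):

class CharTokenizer:
    def __init__(self, corpus):
        # Index 0 is reserved for the CTC blank token (illustrative choice).
        chars = sorted({char for text in corpus for char in text})
        self.itos = ["<blank>", "<unk>"] + chars
        self.stoi = {char: i for i, char in enumerate(self.itos)}

    def encode(self, text):
        unk = self.stoi["<unk>"]
        return [self.stoi.get(char, unk) for char in text]

    def decode(self, ids):
        return "".join(self.itos[i] for i in ids if i > 1)  # skip blank/unk

# tokenizer = CharTokenizer(["hallo welt"])
# tokenizer.decode(tokenizer.encode("hallo"))  # -> "hallo"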

Decoder

There are two options for the decoder:

  • greedy
  • beam search with language model

The language model is a KenLM model supplied with the Multilingual LibriSpeech dataset. If you want to use a different KenLM language model, you can specify its path in the config.yaml file.
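
A sketch of how the two decoders can be built on top of torchaudio's CTC decoder utilities; all file paths are placeholders, and the repo's actual wiring may differ:

import torch
from torchaudio.models.decoder import ctc_decoder

# Greedy: take the best token per frame, collapse repeats, drop blanks.
def greedy_decode(emissions, tokenizer, blank_id=0):
    # emissions: (frames, num_tokens) for a single utterance
    indices = torch.argmax(emissions, dim=-1)    # (frames,)
    indices = torch.unique_consecutive(indices)  # collapse repeated tokens
    return tokenizer.decode([i for i in indices.tolist() if i != blank_id])

# Beam search with a KenLM language model.
beam_search_decoder = ctc_decoder(
    lexicon="path/to/lexicon.txt",  # maps words to token spellings
    tokens="path/to/tokens.txt",    # one token per line, matching the model output
    lm="path/to/kenlm.arpa",        # KenLM model, e.g. the one shipped with MLS
    beam_size=50,
    lm_weight=2.0,
    word_score=0.0,
)
# hypotheses = beam_search_decoder(batched_emissions.cpu())  # (batch, frames, tokens), float32
# best = hypotheses[0][0]
# transcript = " ".join(best.words)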

Training the model

All hyperparameters can be configured in the config.yaml file. The main sections are:

  • model
  • training
  • dataset
  • tokenizer
  • checkpoints
  • inference

Train using the provided train script:

poetry run train \
--config_path="PATH_TO_CONFIG_FILE"

You can also find our model, which was trained for 67 epochs on mls_german_opus, here.

Inference

The config.yaml also includes a section for inference. To run inference on a single audio file, run:

poetry run recognize \
--config_path="PATH_TO_CONFIG_FILE" \
--file_path="PATH_TO_AUDIO_FILE" \
--target_path="PATH_TO_TARGET_FILE"

The target path is optional. If it is not specified, the recognized text is printed to the console; otherwise, a word error rate (WER) against the target transcript is computed.
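
The WER here is the usual word-level edit distance normalized by the reference length. A self-contained sketch (the repo may well use a library implementation instead):

def word_error_rate(reference, hypothesis):
    """Levenshtein distance over words, divided by the reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edits to turn the first i reference words into the first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(substitution, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

# word_error_rate("das ist ein test", "das ist test")  # -> 0.25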

swr2-asr's People

Contributors

jojobarthold2, marvinborner, pherkel, silja78


swr2-asr's Issues

Fixing performance of the model

I'll post things here that I don't understand. For example: why do we remove the UNK token completely? Don't we break the entire alignment every time, since a word is then missing?

Improve Performance

Have not looked into it, but I think the audioloader is currently the main performance bottleneck.
E.g. during evaluation we only need the utterances, but the audioloader always loads the entire dataset, including the audio files, and converts them to tensors.

  • Check where unnecessary data is loaded with the audioloader
  • Potentially replace the audioloader with a more lightweight, custom implementation (see the sketch below)
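
A sketch of the lightweight direction, using a hypothetical class name: keep only (path, transcript) pairs in memory and decode audio lazily on access:

import torchaudio
from torch.utils.data import Dataset

class LazyMLSDataset(Dataset):
    """Holds only (audio_path, transcript) pairs; decodes audio on access."""

    def __init__(self, samples):
        self.samples = samples  # list of (audio_path, transcript) tuples

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, index):
        audio_path, transcript = self.samples[index]
        waveform, sample_rate = torchaudio.load(audio_path)  # decoded only here
        return waveform, sample_rate, transcript

An evaluation pass that needs only the utterances can then read the samples list directly without ever touching the audio files.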

Not online

Valentin hates us and online, please don't do it!!

Refactor

  • Layer and model definition
  • Make tokenizer selectable
  • Training loop
  • Inference

Reduce dependencies in GitHub CI

Since we only do static code analysis, installing all dependencies can be skipped.

Only black and pylint should be installed via pip.

Limited supervision support

Add support for the reduced size/limited supervision dataset

  • Dataset flag
  • Dataset creates test and valid set from limited supervision
  • Create smaller zip with only limited supervision

Checklist Model

  • Meaningful variable names
  • PEP8 compliant (Pylint value of 9.0 or higher)
  • No more than one logical step per line
  • Where do you get the data from?
  • Deterministic train, validation, test split
  • Where to put the data?
  • Is the code reproducible (in a loose sense)? Can I run it without extra knowledge?
  • Docstrings that describe what is/should be achieved and how to use? (Not how it is implemented.)
  • Minimal comments; the code should be self-explanatory
  • Split into train, validation, and test set
  • Training loop (2 points)
  • Storing weights and continuing from checkpoint
  • Code for plotting of loss reduction over epochs
  • Code for evaluation measure (figure / table)

Decide on rough structure

Let's decide on a rough structure for our project (e.g. end-to-end? Transformer or LSTM? etc.) so we can focus better.
We probably can't choose optimally yet anyway.

Research and archive: @PeterIsbrandt

CTCDecoder with LM

LibriSpeech provides language models trained with KenLM.
PyTorch has a tutorial on how to use them, but I haven't understood it yet.
The Deep Speech 2 paper shows that an LM can already improve the WER considerably, so we should probably do that too.

Todo:

  • load the language model provided by LibriSpeech
  • implement a beam search decoder with that language model using the PyTorch class
  • implement a greedy decoder with the abstract PyTorch class
  • adjust the test loop to be able to choose between decoders
  • add the CTCDecoder with LM support to inference
  • run a decoder hyperparameter grid search to find the best values for beam_size, beam_threshold, lm_weight, word_score, sil_score (see the sketch below)
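
A sketch of such a grid search; the parameter grids and the evaluation helper are placeholders:

from itertools import product

from torchaudio.models.decoder import ctc_decoder

beam_sizes = [25, 50, 100]
lm_weights = [1.0, 2.0, 3.0]
word_scores = [-1.0, 0.0, 1.0]

best_wer, best_params = float("inf"), None
for beam_size, lm_weight, word_score in product(beam_sizes, lm_weights, word_scores):
    decoder = ctc_decoder(
        lexicon="path/to/lexicon.txt",
        tokens="path/to/tokens.txt",
        lm="path/to/kenlm.arpa",
        beam_size=beam_size,
        lm_weight=lm_weight,
        word_score=word_score,
    )
    wer = evaluate_on_dev_set(decoder)  # placeholder: average WER over the dev set
    if wer < best_wer:
        best_wer, best_params = wer, (beam_size, lm_weight, word_score)

print(best_wer, best_params)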
