
wer_are_we's Issues

TIMIT update

Hi, we have recently made some improvements on TIMIT:

Average PER 15.58% (15.08% min.) on the core test set, using fMLLR features and a 4x1024 LSTM (to be presented at TSD 2018 next week): http://arxiv.org/abs/1806.07974

Further, we boosted the result with an NN ensemble and a regularization post-layer, to be presented at SPECOM 2018 (18-22 September). Average PER 14.84% (14.69% min.): https://arxiv.org/abs/1806.07186

In addition, we share ready-to-try Python scripts here:
https://github.com/OrcusCZ/NNAcousticModeling

To be fair, we also found a nice result of average PER 14.9% by Ravanelli, using fMLLR and an M-reluGRU-based NN: https://arxiv.org/abs/1710.00641
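For readers less familiar with the metric, here is a minimal sketch of how PER (phone error rate) is computed and how "average / min" numbers aggregate over several training runs. The phone sequences and per-run scores below are invented for illustration, not taken from the papers:

```python
# Minimal PER sketch: edit distance over phone sequences divided by
# reference length. All values below are invented for illustration.

def levenshtein(ref, hyp):
    """Edit distance between two token sequences (row-based DP)."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1,         # deletion
                                   d[j - 1] + 1,     # insertion
                                   prev + (r != h))  # substitution
    return d[-1]

def per(ref, hyp):
    """Phone error rate: edit distance / reference length."""
    return levenshtein(ref, hyp) / len(ref)

ref = "sil dh ax k ae t sil".split()
hyp = "sil dh ax k ah t sil".split()
print(f"PER: {per(ref, hyp):.2%}")  # 14.29% (1 substitution over 7 phones)

# TIMIT papers typically train several times with different seeds and
# report both the average and the minimum PER over the runs:
runs = [0.1562, 0.1508, 0.1584, 0.1578]  # hypothetical per-run PERs
print(f"avg {sum(runs) / len(runs):.2%}, min {min(runs):.2%}")
```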

Thanks, Jan

Scaling this effort up

What should be included in wer_are_we?

When I started this repo, I put in only numbers that I trusted: a (very) few of them I reproduced myself, or I knew they were reproducible. Now it seems (from the past year's issues and pull requests) that people want it to be exhaustive. In this regard, there are two broad families of solutions:

  1. extensive editorial work, meaning vetting the results somehow, leveraging any of:
    1. reproduced work
    2. cited work
    3. network of trust ("editorial board")
    4. $your_suggestion
  2. be exhaustive but give metrics/indicators for readers to form their own judgement.

I don't want to be the gatekeeper for the community, but I do care about trustworthy numbers and reproducible science. Classic concerns are validating on the test set, having a language model that includes the test set, and plain human error. Still, even slightly bogus numbers (plainly wrong ones are still banished) can be interesting; they should just be taken with a grain of salt. Otherwise, I subscribe to "trust, but verify", a.k.a. optimism in the face of uncertainty. Thus, I am leaning towards 2., adding a "trustability" column that gives (when there is one) an argument for each number. It could be a link to a GitHub repo, a paper reproducing the result (e.g. OpenSeq2Seq for DeepSpeech 2 and wav2letter), a high citation count, or a noteworthy citation. What do you think?
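Concretely, the extra column might look like this; the rows are hypothetical placeholders, not real entries, and the exact column names are up for discussion:

```markdown
| WER  | Paper                | Published | Notes    | Trustability                   |
| ---- | -------------------- | --------- | -------- | ------------------------------ |
| 5.3% | Hypothetical Paper A | 2018      | CNN + LM | reproduced (OpenSeq2Seq, link) |
| 6.1% | Hypothetical Paper B | 2017      | seq2seq  | unverified; widely cited       |
```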

I am also going to include baselines from Kaldi if there is no good argument against it in #31.

Do you want to help?

#28 raises the question of my (lack of) responsiveness lately. If you're interested in helping maintain the repo and you agree with the above, feel free to submit PRs. A good PR contains not just the number(s) and the paper title, but also a note explaining what is special or specific about the paper's approach. It's even better if your PR includes a note slightly longer than the "Notes" column, one that shows you understood the paper.

I will also consider adding a few trusted maintainers with push access.

Let me know in the comments if you have suggestions on how to scale this up while keeping readers informed about the trustworthiness of the results we list.

Request for note next to pytorch-kaldi TIMIT results

Hi there,

While the top TIMIT scores of 13.8% and 14.9% are reproducible, they use a non-standard evaluation in which silence phones are removed from both the reference and hypothesis transcripts (https://github.com/mravanelli/pytorch-kaldi/blob/6234b86df5ea65fe61091519d27358177b04a198/kaldi_decoding_scripts/local/score.sh). The result is a non-negligible decrease in PER. For reference, when Kaldi went back to including silences in its evaluation, these were its results: kaldi-asr/kaldi@bdd752b.
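To make the effect concrete, here is a toy sketch (invented phone sequences and a plain edit-distance scorer, not the actual Kaldi scoring pipeline): dropping silence phones from both sides removes silence-related errors from the error count, which here lowers the PER.

```python
# Toy illustration of silence-filtered scoring. The sequences are
# invented; this is not the actual Kaldi/pytorch-kaldi pipeline.

def levenshtein(ref, hyp):
    """Edit distance between two token sequences (row-based DP)."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[-1]

def per(ref, hyp):
    return levenshtein(ref, hyp) / len(ref)

ref = "sil dh ax k ae t sil".split()
hyp = "dh ax k ah t sil".split()  # one missed silence + one substitution

def strip_sil(seq):
    return [p for p in seq if p != "sil"]

print(f"standard scoring:         {per(ref, hyp):.2%}")                        # 28.57%
print(f"silence-filtered scoring: {per(strip_sil(ref), strip_sil(hyp)):.2%}")  # 20.00%
```

Whether silences should count is a scoring convention, which is why a note next to these numbers seems warranted.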

Best,
Sean

Do you need a maintainer?

Hi, I noticed there are several issues and PRs with updated results.

Is this repo being maintained? I'd be happy to help as a maintainer or take over the repo if you need help :-)

"WER are we" for other languages than English?

Thanks for creating this repository. Are WER comparisons planned for other languages as well? They could be organized in a separate .md file for each language.

We have been hosting a free, public test set as well as a Kaldi training recipe for German for a couple of years now: https://github.com/uhh-lt/kaldi-tuda-de

Several papers have started to use tuda-test as a benchmark, so it could be a candidate for a German WER benchmark.

GramCTC results

| 7.3% | ??.?% | [Gram-CTC: Automatic Unit Selection and Target Decomposition for Sequence Labelling](https://arxiv.org/pdf/1703.00096.pdf) | March 2017 | RNN + CTC + Gram CTC acoustic model trained on SWB+Fisher+CH, N-gram |

Links to datasets

Hi, I would find it useful to also have links to the datasets' papers (or the datasets' webpage when there's no paper). Thanks!
