
libri-light's Introduction

Libri-Light: A Benchmark for ASR with Limited or No Supervision

You can track papers that use Libri-Light and their relative performance on Papers With Code: [test-clean] [test-other]

Description

This repository contains code and models associated with the Libri-Light dataset, which can be downloaded and prepared here. More information about dataset creation and baselines can be found in this arXiv Paper. Contained here is code for data preparation, pretrained models, and evaluation resources:

data_preparation/         # code to download the data; VAD and SNR code; json generation; stats; audio segmentation
eval/                     # ABX, PER, WER (evaluation metrics on LibriSpeech dev-clean, dev-other, test-clean, test-other)
baselines/                # code, pretrained wav2letter models, baselines, and examples

To get started, first clone the repository:

git clone https://github.com/facebookresearch/libri-light

The environment is easiest to set up with Anaconda. Requirements can be installed by running:

conda env create -f environment.yml && conda activate libri-light

If you don't have conda you can get it here.

Goals and structure

Libri-Light offers 60k+ hours of unlabelled speech, a small training set for limited supervision (10h, 1h or 10 minutes of labelled speech), and a common set of metrics to evaluate three settings:

  1. the unsupervised/zero-resource setting. Here, models are trained only on unlabelled speech and attempt to construct 'good' speech representations. They are evaluated with the ABX metric.
  2. the semi-supervised setting. Here, models are trained with the limited-supervision dataset and exploit the unlabelled data in various ways (as pretraining, to get pseudo-labels, etc.). The models are evaluated using either PER or WER.
  3. the distant supervision setting. Here, models can use additional unaligned text to build a decoder. These models are evaluated using WER.
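Both PER and WER reduce to the same computation: the Levenshtein (edit) distance between a reference and a hypothesis sequence, normalized by the reference length, computed over phones or words respectively. A minimal illustrative sketch (not the implementation shipped in eval/):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            cur = min(dp[j] + 1,         # deletion
                      dp[j - 1] + 1,     # insertion
                      prev + (r != h))   # substitution (free if tokens match)
            prev, dp[j] = dp[j], cur
    return dp[-1]

def error_rate(ref, hyp):
    """PER if the tokens are phones, WER if they are words."""
    return edit_distance(ref, hyp) / len(ref)

# One substituted word out of three reference words:
print(error_rate("the cat sat".split(), "the cat sit".split()))  # 0.3333333333333333
```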

Documentation

Documentation for downloading Libri-Light or preparing the source files from scratch can be found in data_preparation.

The eval directory contains ABX, PER and WER evaluations on pretrained CPC models.

The baselines directory contains pretrained wav2letter baseline models and information about reproduction.

Citing

@INPROCEEDINGS{librilight,
  author={J. {Kahn} and M. {Rivière} and W. {Zheng} and E. {Kharitonov} and Q. {Xu} and P. E. {Mazaré} and J. {Karadayi} and V. {Liptchinsky} and R. {Collobert} and C. {Fuegen} and T. {Likhomanenko} and G. {Synnaeve} and A. {Joulin} and A. {Mohamed} and E. {Dupoux}},
  booktitle={ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, 
  title={Libri-Light: A Benchmark for ASR with Limited or No Supervision}, 
  year={2020},
  pages={7669-7673},
  note = {\url{https://github.com/facebookresearch/libri-light}},
}

License

The Libri-Light code is released under the MIT license. See LICENSE for additional details.


libri-light's Issues

Incorrect number of utterances for the 10min and 1h subsets

Hello,

I recently downloaded this dataset, and noticed that the 10min and 1h subsets are of equal size (in number of utterances).
Both amount to 1,571 lines of phonetic transcriptions.

Fetching the corresponding audio files results in two sets that are each 05:29:37 (HH:MM:SS) long.
I'm guessing this is a mistake? :)
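For anyone wanting to check this locally, the comparison boils down to formatting total durations and measuring how many utterance ids the two subsets share. A small self-contained sketch (the id lists passed to overlap are placeholders for whatever your transcription files contain):

```python
def hms(seconds):
    """Format a duration in seconds as HH:MM:SS."""
    s = int(round(seconds))
    return f"{s // 3600:02d}:{s % 3600 // 60:02d}:{s % 60:02d}"

def overlap(ids_a, ids_b):
    """Fraction of subset A's utterance ids that also appear in subset B."""
    a, b = set(ids_a), set(ids_b)
    return len(a & b) / len(a)

# The duration reported above:
print(hms(5 * 3600 + 29 * 60 + 37))  # 05:29:37
```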

ABX_src is not compiling correctly

I tried to compile ABX_src inplace as explained in the doc, but I get:

(libri-light) milde@ltgpu2:/raid/milde/libri-light/eval/ABX_src$ python setup.py build_ext --inplace
running build_ext
building 'ABX_src.dtw' extension
gcc -pthread -B /srv/home/milde/anaconda3/envs/libri-light/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/srv/home/milde/anaconda3/envs/libri-light/include/python3.7m -c dtw.c -o build/temp.linux-x86_64-3.7/dtw.o
dtw.c:611:10: fatal error: numpy/arrayobject.h: No such file or directory
 #include "numpy/arrayobject.h"
          ^~~~~~~~~~~~~~~~~~~~~
compilation terminated.
error: command 'gcc' failed with exit status 1

When I manually change:

setup(
    include_dirs=[numpy.get_include()],
    ext_modules=cythonize("dtw.pyx")
)

in setup.py, it works and finds the numpy headers correctly.

checksums for medium and large.tar?

Hi,

For the larger downloads, I get connection aborts from your server with wget very frequently, almost every 2 hours. It looks like this:

2020-01-16 22:34:56 (10.9 MB/s) - Connection closed at byte 811303997429. Retrying.

Wget is retrying, but I fear it has happened so often now that something is corrupted. Could you please release checksums (md5/sha or something like it)? Thank you, and also many thanks for releasing this exciting dataset!
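Until official checksums are published, it at least helps to record a digest of what you downloaded so retries can be compared against each other. A generic sketch using Python's hashlib, streaming in chunks so multi-hundred-GB tars never need to fit in memory (the file name is a placeholder):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file in 1 MiB chunks and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (hypothetical file name):
# print(sha256_of("large.tar"))
```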

Wrong version of torchaudio

Hi, when running build_all_stats.py, I get the following error: TypeError: 'AudioMetaData' object is not subscriptable. Torchaudio changed this object, so you should either add a try/except to handle the difference or pin an older version of torchaudio in your environment.yml. Here is code that fixes the issue without using an older torchaudio version:
Add the function below to libri-light/data_preparation/metadata_completion/utils.py and use it wherever torchaudio.info() is called.

import torchaudio

def get_audio_size(path_audio_data):
    """Return the duration of an audio file, in hours."""
    try:
        # Old torchaudio API: info() returns a tuple of metadata objects.
        info = torchaudio.info(path_audio_data)[0]
        length = info.length
        rate = info.rate
    except TypeError:
        # New torchaudio API: info() returns a single AudioMetaData object.
        info = torchaudio.info(str(path_audio_data))
        length = info.num_frames
        rate = info.sample_rate
    return length / (rate * 3600.)

Making Supervised Large Datasets for English / German / Spanish

Hi,

I have not found any contact information in the press release or in the paper (please correct me if I am wrong), so I decided to open an issue here to reach out.

My name is Alexander, I am the main author of Open STT and of recent articles in The Gradient.

TL;DR: we have collected 30k hours of annotated speech in Russian with close to zero investment in manual annotation, and we are doing the same in English / German / Spanish. My personal goal is to collect 10-20k hours in English and 10k each in German and Spanish. We chose these languages (apart from English, of course) because they are popular, we speak them (at least I can read them), and their phonetics are simple and similar to Russian.

On the Russian data we have built production-grade models and have even deployed some high-load services into production (if you speak Russian, please follow these links: http://silero.ai/, https://mobile-demo.silero.ai/, https://habr.com/ru/post/494006/).

I wonder if FAIR (please correct me if FAIR and facebookresearch are not the same entity) would be interested in any win-win collaboration, or in sponsoring our efforts to fully open-source our models and datasets.

Libri-Light offers 60k+ hours of unlabelled speech, a small training set for limited supervision (10h, 1h or 10 minutes of labelled speech), and a common set of metrics to evaluate three settings:

You can build almost fully supervised datasets from Librivox (granted, there will be some noise in the data, of course). I wonder why you did not do or share this. It is such a low-hanging fruit!

Best,
Alexander

What is the 4th subset?

Hello,

thank you very much for your contribution!

What is the 4th subset (unlab_duplicate.tar, 4,500 hours, 274 GB)?
Does it contain potentially duplicated books, but produced by unseen speakers?

Missing data in prepared datasets

Hello!

I am trying to re-download the source files and then convert them to a different sampling rate, but I can't find the metadata about how the source books were cut anywhere.

Is metadata published somewhere?

Finetuning using 10mins labelled data

For finetuning using the 10 min labeled data from LibriSpeech: since Libri-Light includes six versions of the 10 min labeled subset, may I know which version was used for the results reported in the paper?

Cython dependency missing

Hi,
I followed the instructions, yet Cython is missing from the dependencies when running setup.py in ABX_src.
Should it be added to environment.yml?
Neil

ABX_src setup issues

I'm trying to use the eval module.
When I run:
python setup.py build_ext --inplace

I get the following error:

Compiling dtw.pyx because it depends on /home/ubuntu/anaconda3/envs/libri-light/lib/python3.10/site-packages/Cython/Includes/libc/string.pxd.
[1/1] Cythonizing dtw.pyx
/home/ubuntu/anaconda3/envs/libri-light/lib/python3.10/site-packages/Cython/Compiler/Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /mnt/efs/data/libri-light/eval/ABX_src/dtw.pyx
tree = Parsing.p_module(s, pxd, full_module_name)
running build_ext
building 'ABX_src.dtw' extension
gcc -pthread -B /home/ubuntu/anaconda3/envs/libri-light/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/ubuntu/anaconda3/envs/libri-light/include -fPIC -O2 -isystem /home/ubuntu/anaconda3/envs/libri-light/include -fPIC -I/home/ubuntu/anaconda3/envs/libri-light/lib/python3.10/site-packages/numpy/core/include -I/home/ubuntu/anaconda3/envs/libri-light/include/python3.10 -c dtw.c -o build/temp.linux-x86_64-3.10/dtw.o
In file included from /home/ubuntu/anaconda3/envs/libri-light/lib/python3.10/site-packages/numpy/core/include/numpy/ndarraytypes.h:1969:0,
from /home/ubuntu/anaconda3/envs/libri-light/lib/python3.10/site-packages/numpy/core/include/numpy/ndarrayobject.h:12,
from /home/ubuntu/anaconda3/envs/libri-light/lib/python3.10/site-packages/numpy/core/include/numpy/arrayobject.h:4,
from dtw.c:711:
/home/ubuntu/anaconda3/envs/libri-light/lib/python3.10/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:17:2: warning: #warning "Using deprecated NumPy API, disable it with " "#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
#warning "Using deprecated NumPy API, disable it with "
^~~~~~~
gcc -pthread -B /home/ubuntu/anaconda3/envs/libri-light/compiler_compat -shared -Wl,-rpath,/home/ubuntu/anaconda3/envs/libri-light/lib -Wl,-rpath-link,/home/ubuntu/anaconda3/envs/libri-light/lib -L/home/ubuntu/anaconda3/envs/libri-light/lib -Wl,-rpath,/home/ubuntu/anaconda3/envs/libri-light/lib -Wl,-rpath-link,/home/ubuntu/anaconda3/envs/libri-light/lib -L/home/ubuntu/anaconda3/envs/libri-light/lib build/temp.linux-x86_64-3.10/dtw.o -o build/lib.linux-x86_64-3.10/ABX_src/dtw.cpython-310-x86_64-linux-gnu.so
copying build/lib.linux-x86_64-3.10/ABX_src/dtw.cpython-310-x86_64-linux-gnu.so -> ABX_src
error: could not create 'ABX_src/dtw.cpython-310-x86_64-linux-gnu.so': No such file or directory

Not sure why this is happening. Any help will be highly appreciated. Thanks in advance.

ABX eval script $FEATURE_SIZE

In the eval README it says "$FEATURE_SIZE is the duration (in s) of one feature of the model (for a 10ms frame rate, this would be 0.01)."

When I try use 0.01 as feature size I get:

usage: eval_ABX.py [-h] [--path_checkpoint PATH_CHECKPOINT]
[--file_extension {.pt,.npy,.wav,.flac,.mp3}]
[--feature_size FEATURE_SIZE] [--cuda]
[--mode {all,within,across}]
[--distance_mode {euclidian,cosine}]
[--max_size_group MAX_SIZE_GROUP]
[--max_x_across MAX_X_ACROSS] [--out OUT]
path_data path_item_file
eval_ABX.py: error: argument --feature_size: invalid int value: '0.01'

Oddly enough the default is also 0.01, but the type is int in the parse argument:

parser.add_argument('--feature_size', type=int, default=0.01,
                    help="Size (in s) of one feature")

I assume it should be type=float? Or does feature_size mean something else?
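The suggested fix can be checked in isolation: with type=float, '0.01' parses cleanly. (With the original type=int, the default 0.01 still "worked" because argparse only applies type to values supplied on the command line.) A minimal sketch:

```python
import argparse

parser = argparse.ArgumentParser()
# type=float (instead of the original type=int) accepts fractional durations.
parser.add_argument('--feature_size', type=float, default=0.01,
                    help="Size (in s) of one feature")

args = parser.parse_args(['--feature_size', '0.01'])
print(args.feature_size)  # 0.01
```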
