philipperemy / tensorflow-ctc-speech-recognition

Application of Connectionist Temporal Classification (CTC) for Speech Recognition (Tensorflow 1.0 but compatible with 2.0).

License: Apache License 2.0

Python 100.00%
ctc ctc-loss tensorflow-1-0 tensorflow speech-recognition speech-to-text speech-analysis deep-learning machine-learning tutorial

tensorflow-ctc-speech-recognition's Introduction

Tensorflow CTC Speech Recognition

  • Compatible with TensorFlow 2.x through the v1 compatibility module (tf.compat.v1).
  • Application of Connectionist Temporal Classification (CTC) for Speech Recognition (Tensorflow 1.0)
  • On the VCTK Corpus (same corpus as the one used by WaveNet).

How to get started?

git clone https://github.com/philipperemy/tensorflow-ctc-speech-recognition.git ctc-speech
cd ctc-speech
pip3 install -r requirements.txt # inside a virtualenv

# Option 1: download the full VCTK Corpus (~10GB!) from:
# http://homepages.inf.ed.ac.uk/jyamagis/page3/page58/page58.html
wget http://homepages.inf.ed.ac.uk/jyamagis/release/VCTK-Corpus.tar.gz

# Option 2 (smaller, ~65MB): use this archive that contains all the utterances of speaker p225.
wget https://www.dropbox.com/s/xecprghgwbbuk3m/vctk-pc225.tar.gz
tar xvzf vctk-pc225.tar.gz && rm -rf vctk-pc225.tar.gz

python generate_audio_cache.py --audio_dir vctk-p225
python3 ctc_tensorflow_example.py # run the experiment defined in the section "First experiment".

You can also download only the relevant files here https://www.dropbox.com/s/xecprghgwbbuk3m/vctk-pc225.tar.gz?dl=1 (~69MB). Thanks to @Burak Bayramli.
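Note: the scripts read the corpus location and sample rate from conf.json. Based on the defaults echoed in the issue logs further down, it looks roughly like this (an assumption; check the file in the repo):

    {
      "AUDIO": {
        "SAMPLE_RATE": 8000,
        "VCTK_CORPUS_PATH": "/tmp/VCTK-Corpus/"
      }
    }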

Requirements

  • dill: improved version of pickle
  • librosa: library for interacting with audio WAV files
  • namedtupled: converts dictionaries to named tuples
  • numpy: scientific computing library
  • python_speech_features: extracts relevant features from raw audio data
  • tensorflow: machine learning library
  • progressbar2: progress bar

First experiment

The code to reproduce this experiment is no longer in the latest commit.

git checkout ba6c10fba2383cd4933d47896f95d30248458161

Setup

Speech Recognition is a very difficult topic. In this first experiment, we consider:

  • A very small subset of the VCTK Corpus composed of only one speaker: p225.
  • Only 5 sentences of this speaker, denoted as: 001, 002, 003, 004 and 005.

The network is defined as:

  • One LSTM layer (rnn.LSTMCell) with 100 units, followed by a softmax.
  • Batch size of 1.
  • Momentum optimizer with a learning rate of 0.005 and a momentum of 0.9.
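For reference, here is a minimal sketch of this setup in TensorFlow 1.x-style code (through tf.compat.v1). It is an illustration, not the repository's exact code; num_features and num_classes are assumed values.

    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()

    num_features = 13   # assumption: MFCC-like features
    num_classes = 28    # assumption: 26 letters + space + CTC blank
    num_hidden = 100

    # inputs: [batch, time, features]; targets: sparse labels; seq_len: frames per utterance.
    inputs = tf.placeholder(tf.float32, [None, None, num_features])
    targets = tf.sparse_placeholder(tf.int32)
    seq_len = tf.placeholder(tf.int32, [None])

    # One LSTM layer followed by a time-distributed softmax projection.
    cell = tf.nn.rnn_cell.LSTMCell(num_hidden)
    outputs, _ = tf.nn.dynamic_rnn(cell, inputs, seq_len, dtype=tf.float32)
    logits = tf.layers.dense(outputs, num_classes)
    logits = tf.transpose(logits, [1, 0, 2])  # ctc_loss expects time-major logits

    loss = tf.reduce_mean(tf.nn.ctc_loss(targets, logits, seq_len))
    train_op = tf.train.MomentumOptimizer(0.005, 0.9).minimize(loss)

    # Greedy decoding and label error rate (LER) for monitoring.
    decoded, _ = tf.nn.ctc_greedy_decoder(logits, seq_len)
    ler = tf.reduce_mean(tf.edit_distance(tf.cast(decoded[0], tf.int32), targets))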

The validation set is obtained by randomly truncating the beginning of each audio file (between 0 and 125 ms at most). We make sure that we never cut while the speaker is speaking. Using 5 unseen sentences would be more realistic; however, it would be almost impossible for the network to pick them up, since a training set of only 5 sentences is far too small to cover all the possible phonemes of the English language. By randomly truncating the leading silences, we make sure that the network does not simply memorize a dumb mapping from each audio file to its text.
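A minimal sketch of this truncation (an illustration; the repository's actual implementation may differ):

    import numpy as np

    def random_truncate_start(audio, sample_rate, max_ms=125):
        # Drop a random slice (0 to max_ms) from the start of the signal,
        # assuming the speaker has not started speaking in that window.
        max_offset = int(sample_rate * max_ms / 1000)
        offset = np.random.randint(0, max_offset + 1)
        return audio[offset:]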

Results

Most of the time, the network can guess the correct sentence. Sometimes it misses a bit, but the results are still encouraging.

Example 1

Original training: diving is no part of football
Decoded training: diving is no part of football
Original validation: theres still a bit to go
Decoded validation: thers still a bl to go
Epoch 3074/10000, train_cost = 0.032, train_ler = 0.000, val_cost = 9.131, val_ler = 0.125, time = 1.648

Example 2

Original training: three hours later the man was free
Decoded training: three hours later the man was free
Original val: and they were being paid 
Decoded val: nand they ere being paid  
Epoch 3104/10000, train_cost = 0.075, train_ler = 0.000, val_cost = 2.945, val_ler = 0.077, time = 1.042

Example 3

Original training: theres still a bit to go
Decoded training: theres still a bit to go
Original val: three hours later the man was free
Decoded val: three hors late th man wasfree
Epoch 3108/10000, train_cost = 0.032, train_ler = 0.000, val_cost = 12.532, val_ler = 0.118, time = 0.859

CTC Loss

(Figure: CTC loss over training, log scale.)

CTC Loss is the raw loss defined in the paper by Alex Graves.
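For a label sequence l and softmax outputs y, the loss from Graves et al. (2006) sums the probabilities of all frame-level alignments π that collapse to l under the mapping B (which removes blanks and repeated labels):

    p(\mathbf{l} \mid \mathbf{x}) = \sum_{\pi \in \mathcal{B}^{-1}(\mathbf{l})} \prod_{t=1}^{T} y^{t}_{\pi_t},
    \qquad
    \mathcal{L}_{\mathrm{CTC}} = -\ln p(\mathbf{l} \mid \mathbf{x})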

LER Loss

LER (Label Error Rate) measures the inaccuracy between the predicted and the ground truth texts.
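Concretely, LER is usually the edit (Levenshtein) distance between the decoded and ground-truth strings, normalized by the ground-truth length. A minimal sketch (an illustration, not necessarily the repository's implementation):

    def levenshtein(a, b):
        # Classic dynamic-programming edit distance.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                # deletion
                               cur[j - 1] + 1,             # insertion
                               prev[j - 1] + (ca != cb)))  # substitution
            prev = cur
        return prev[-1]

    def label_error_rate(decoded, truth):
        return levenshtein(decoded, truth) / max(len(truth), 1)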

Clearly, the network learns very well on just 5 sentences! It's far from perfect, but quite encouraging for a first try.

Second experiment

  • LSTM with 256 cells.
  • Only one speaker: p225.
  • 15 shortest utterances used as testing set.
  • Rest used as training set.
  • Can now define a batch size different from 1.

Epoch 2723/3000, train_cost = 1.108, train_ler = 0.000, val_cost = 59.116, val_ler = 0.467, time = 2.559
- Original (training) : but the commission is on a collision course with the government 
- Decoded  (training) : but the commission is on a collision course with the government 
- Original (training) : this action reflects a slump in bookings 
- Decoded  (training) : this action reflects a slump in bookings 
- Original (training) : they had to learn to work from the consumer back 
- Decoded  (training) : they had to learn to work from the consumer back 
- Original (training) : it depends on the internal discussions in the ministry of defence 
- Decoded  (training) : it depends on the internal discussions in the ministry of defence 
- Original (training) : irvine said his company was intent on supporting the scottish dairy industry 
- Decoded  (training) : irvine said his company was intent on supporting the scottish dairy industry 
- Original (training) : the pain was almost too much to bear 
- Decoded  (training) : the pain was almost too much to bear 
- Original (training) : this is a very common type of bow one showing mainly red and yellow with little or no green or blue 
- Decoded  (training) : this is a very common type of bow one showing mainly red and yellow with little or no green or blue 
- Original (training) : in fact he should never have been in the field 
- Decoded  (training) : in fact he should never have been in the field 
- Original (training) : saddam is not the only example of evil in our world 
- Decoded  (training) : saddam is nat the only example of evil in our world 
- Original (training) : so did she meet him  
- Decoded  (training) : so did she meet him  
- Original (validation) : it is a court case 
- Decoded  (validation) : it is a cot ase    

(Figure: LER on the training and validation sets.)

This experiment is interesting. The network seems to generalize a bit to unseen audio files (from the same speaker). I didn't expect the network to perform this well on such a small dataset. However, let's keep in mind that the generalization power is still quite poor here: the model is also overfitting.

Special Thanks

tensorflow-ctc-speech-recognition's People

Contributors

hbk0932254399, philipperemy


tensorflow-ctc-speech-recognition's Issues

[Query] Minimum number of training examples for overfitting

I am new to deep learning and end-to-end speech recognition. I am looking to create a model which can predict 3 words: "red", "green", or "blue". I want the model to overfit these 3 training examples. Do you think that's possible with CTC training on such a small dataset, or do I need a huge dataset to learn essential features first and then overfit with these 3 additional examples?

Thanks

What is INDEX?

What is INDEX here, and how can we use it for other languages?
Which portions of the code need to be changed?
Please give me instructions.

Modify for many classes (many characters)

I want to train with many characters, without using ord(). How can I map characters (e.g., with a dict)?
charmap_en1 = {'t': 20, 'v': 22, 'u': 21, 'z': 26, 'y': 25, 'f': 6, 'p': 16, 'x': 24, 'h': 8, 'o': 15, 'k': 11, 'q': 17, 'w': 23, 'i': 9, 'm': 13, 'l': 12, 'c': 3, 's': 19, 'a': 1, 'b': 2, 'g': 7, 'e': 5, 'j': 10, 'r': 18, 'n': 14, 'd': 4}
or charmap = {'ử': 84, 'í': 32, 'ỷ': 89, 'ặ': 57, 'ầ': 49, 'ọ': 68, 'm': 12, 'đ': 41, 'á': 25, 'ĩ': 42, 'ằ': 54, 'ẹ': 58, 's': 18, 'ễ': 64, 'b': 3, 'g': 7, 'ă': 40, 'ã': 27, 'ấ': 48, 'ể': 63, 'x': 22, 'c': 4, 'ẵ': 56, 'ợ': 79, 't': 19, 'y': 23, 'ỡ': 78, 'ờ': 76, 'v': 21, 'à': 24, 'r': 17, 'é': 29, 'ỗ': 73, 'a': 2, 'ụ': 80, 'n': 13, 'ở': 77, 'ẩ': 50, 'q': 16, 'â': 26, 'ữ': 85, 'ớ': 75, 'ổ': 72, 'ỉ': 66, 'ỏ': 69, 'ò': 33, 'è': 28, 'h': 8, 'ơ': 44, 'd': 5, 'o': 14, 'ệ': 65, 'e': 6, 'ô': 35, 'k': 10, 'p': 15, 'i': 9, 'ế': 61, 'ị': 67, 'ê': 30, 'ỹ': 90, 'ý': 39, 'ì': 31, 'ộ': 74, 'ỳ': 87, 'ề': 62, 'l': 11, 'ồ': 71, 'ắ': 53, 'ừ': 83, 'ỵ': 88, 'ả': 47, 'õ': 36, 'ó': 34, 'ạ': 46, 'ù': 37, 'ẻ': 59, 'ú': 38, 'ũ': 43, 'ư': 45, 'ủ': 81, 'ẫ': 51, 'ứ': 82, 'ẳ': 55, 'ậ': 52, 'ố': 70, 'ự': 86, 'u': 20, 'ẽ': 60}
But in decode_batch(), how do I change the code to replace the 'blank' and 'space' characters using the map instead of ord()?
def decode_batch(d, original, phase='training'):
    aligned_original_string = ''
    aligned_decoded_string = ''
    for jj in range(batch_size)[0:2]:  # just for visualisation purposes. we display only 2.
        values = d.values[np.where(d.indices[:, 0] == jj)[0]]
        print('d:', d.values)
        str_decoded = ''.join([chr(x) for x in np.asarray(values) + FIRST_INDEX])
        # Replacing blank label with nothing
        str_decoded = str_decoded.replace(chr(ord('z') + 1), '')
        # Replacing space label with a space
        str_decoded = str_decoded.replace(chr(ord('a') - 1), ' ')
        ...
thanks
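One possible approach (a hedged sketch under assumptions, not code from this repository): invert the charmap and decode with a dictionary lookup instead of ord() arithmetic. Here label 0 is assumed to encode the space and the last index the CTC blank, mirroring the chr(ord('a') - 1) / chr(ord('z') + 1) convention above.

    def make_decoder(charmap):
        # Hypothetical helper: inverse map from integer labels back to characters.
        inv = {v: k for k, v in charmap.items()}
        blank = max(charmap.values()) + 1  # assumption: blank is the last class
        inv[blank] = ''                    # drop CTC blanks
        inv[0] = ' '                       # assumption: 0 encodes the space
        def decode(labels):
            return ''.join(inv.get(int(x), '') for x in labels)
        return decode

    # Usage: decode = make_decoder(charmap_en1); text = decode(values)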

What does the number of features do?

Thanks philipperemy for this work,

  1. What does the number of features do? I could not understand that.

  2. I want to increase the number of layers to something like 4 or 5, but there is a dimension issue. Is there any other adjustment I should make besides increasing the number of layers?

Thanks

Trained model

I have a very old computer; if I try to train the neural network on it, it will take forever. Can you give me a link to a trained model of this project so I don't have to train it?

batch generation

Curious as to why you didn't use a generator for batch creation and instead capped the number of iterations per epoch at 10. I updated the code to this:

import numpy as np

def batch_generator(items, batch_size=16):
    items = np.array(items)
    idx = 0
    while True:
        if idx+batch_size > len(items):
            np.random.shuffle(items)
            idx = 0

        items_batch = dict(items[idx:idx+batch_size])
        
        x_batch = []
        y_batch = []
        seq_len_batch = []
        original_batch = []
        for k,v in items_batch.items():
            target_text = v['target']
            audio_buffer = v['audio']
            x, y, seq_len, original = convert_inputs_to_ctc_format(audio_buffer, sample_rate, target_text, num_features)
            
            x_batch.append(x)
            y_batch.append(y)
            seq_len_batch.append(seq_len)
            original_batch.append(original)

        y_batch = sparse_tuple_from(y_batch)
        seq_len_batch = np.array(seq_len_batch)[:, 0]
        for i, pad in enumerate(np.max(seq_len_batch) - seq_len_batch):
            x_batch[i] = np.pad(x_batch[i], ((0, 0), (0, pad), (0, 0)), mode='constant', constant_values=0)

        x_batch = np.concatenate(x_batch, axis=0)
        
        idx += batch_size
        
        yield x_batch, y_batch, seq_len_batch, original_batch

Then we can use it like this

data = list(audio.cache.items()) # could shuffle here as well
split = int(len(data)*0.8)

train_data = data[:split]
valid_data = data[split:]

num_batches_per_epoch = len(train_data) // batch_size

train_gen = batch_generator(train_data, batch_size)
valid_gen = batch_generator(valid_data, batch_size)

#...training code....

for batch_num in range(num_batches_per_epoch):
    train_inputs, train_targets, train_seq_len, original = next(train_gen)
    feed = {inputs: train_inputs,
            targets: train_targets,
            seq_len: train_seq_len,
            keep_prob: 0.8}

This will be sure to cycle through the entire dataset while also shuffling it each cycle. It also allows the batch generator to be agnostic, which in my opinion is good. Just thought I would leave this here unsolicited.
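For reference, sparse_tuple_from (used above) typically looks like the helper from the original TensorFlow CTC example; a sketch, which may differ from this repository's version:

    import numpy as np

    def sparse_tuple_from(sequences, dtype=np.int32):
        # Build the (indices, values, dense_shape) triple expected by
        # tf.sparse_placeholder from variable-length label sequences.
        indices, values = [], []
        for n, seq in enumerate(sequences):
            indices.extend(zip([n] * len(seq), range(len(seq))))
            values.extend(seq)
        indices = np.asarray(indices, dtype=np.int64)
        values = np.asarray(values, dtype=dtype)
        shape = np.asarray([len(sequences), indices.max(0)[1] + 1], dtype=np.int64)
        return indices, values, shape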

Also, this is working just fine on tensorflow==1.9.0.

Thanks for building out this architecture!

all possible alignments

As described in the Alex Graves CTC paper, we sum over the probabilities of all possible alignments using dynamic programming, which gives us the probability of a given transcription. In the code and in the TensorFlow documentation, it is not explicitly mentioned where this part happens. Could you help me and tell me in which of the following functions this DP computation takes place:

  1. tf.nn.ctc_loss
  2. tf.nn.ctc_beam_search_decoder
  3. tf.nn.ctc_greedy_decoder

AssertionError

I downloaded both the smaller dataset and the whole dataset, but there's a problem while running generate_audio_cache.py. Did I put the dataset in the wrong folder? I only changed the data path in conf.json.

Here are the paths where I saved the code and the dataset:
C:\Users\OAQ\ctc-speech
C:\tmp\VCTK-Corpus

C:\Users\OAQ\ctc-speech>python generate_audio_cache.py
{'AUDIO': {'SAMPLE_RATE': 8000, 'VCTK_CORPUS_PATH': '/tmp/VCTK-Corpus/'}}
Initializing AudioReader()
audio_dir = /tmp/VCTK-Corpus/
sample_rate = 8000
speakers_sub_list = ['p225']
Nothing found at /tmp/tensorflow-ctc-speech-recognition/. Generating all the caches now.
Found 44257 files in total in /tmp/VCTK-Corpus/.
0 files correspond to the speaker list ['p225'].
Traceback (most recent call last):
  File "generate_audio_cache.py", line 18, in <module>
    generate()
  File "generate_audio_cache.py", line 14, in generate
    speakers_sub_list=speakers_sub_list)
  File "C:\Users\OAQ\ctc-speech\audio_reader.py", line 65, in __init__
    assert len(files) != 0
AssertionError

Generate your cache please

I keep getting this message and I'm not sure what to do.

There is no path '/Volumes/Transcend/VCTK-Corpus/'. Should I create it and put the Dropbox file in it?

{'AUDIO': {'SAMPLE_RATE': 8000,
'VCTK_CORPUS_PATH': '/Volumes/Transcend/VCTK-Corpus/'}}
Initializing AudioReader()
audio_dir = /Volumes/Transcend/VCTK-Corpus/
sample_rate = 8000
speakers_sub_list = ['p225']
Nothing found at /tmp/tensorflow-ctc-speech-recognition/. Generating all the caches now.
Traceback (most recent call last):
  File "generate_audio_cache.py", line 18, in <module>
    generate()
  File "generate_audio_cache.py", line 14, in generate
    speakers_sub_list=speakers_sub_list)
  File "/Users/XXXX/Desktop/file/ctc-speech/audio_reader.py", line 59, in __init__
    assert len(files) != 0, 'Generate your cache please.'
AssertionError: Generate your cache please.
