Comments (6)

albertz avatar albertz commented on May 24, 2024

from returnn-experiments.

akshatdewan avatar akshatdewan commented on May 24, 2024

The test sequences, like the training sequences, are mostly between 1 and 15 seconds long (see histogram).
[histogram of sequence lengths]
I do not have any statistics on the problematic segments, though. However, I have a feeling that it occurs for long segments. Do you think a solution could be to just reduce the length of the test sequences?

albertz avatar albertz commented on May 24, 2024

So, to make it clearer again: you have some sequences which are longer (e.g. 14 secs), and they contain some speech (about 6 secs) which you don't have in your reference transcription?
Noisy/incorrect training transcriptions can definitely be problematic for stable training.
Also, I observed that the attention model initially learns something like a linear alignment, and to detect silence. So any sequences that are far from this pattern will be hard.
Some sort of curriculum learning, excluding such bad sequences at the beginning of training (the first 20 epochs or so), will probably help to some extent.
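
The curriculum idea above can be sketched as a simple epoch-dependent filter. This is not RETURNN's built-in mechanism; it is a minimal illustration assuming a dataset of hypothetical (duration, payload) pairs, where long (potentially misaligned) sequences are withheld during a warm-up phase:

```python
# Hypothetical sketch of length-based curriculum filtering (not an actual
# RETURNN API): during the first `warmup_epochs` epochs, skip sequences
# longer than a threshold, so the model can first learn the roughly
# linear alignment pattern on short, well-behaved sequences.

def curriculum_filter(dataset, epoch, max_len_early=10.0, warmup_epochs=20):
    """Return the subset of (duration_sec, payload) pairs used in this epoch.

    During the first `warmup_epochs` epochs, only sequences up to
    `max_len_early` seconds are kept; afterwards everything is used.
    """
    if epoch > warmup_epochs:
        return list(dataset)
    return [(dur, x) for (dur, x) in dataset if dur <= max_len_early]


data = [(3.2, "a"), (14.9, "b"), (7.5, "c")]
print(curriculum_filter(data, epoch=5))   # → [(3.2, 'a'), (7.5, 'c')]
print(curriculum_filter(data, epoch=30))  # → full dataset, all 3 sequences
```

A segment-level clean/noisy classifier (as discussed below) could replace the duration test here; the filter structure stays the same.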

akshatdewan avatar akshatdewan commented on May 24, 2024

Sorry, I guess I was not clear: for some longer test segments (e.g. 14.9 sec), the predicted sequence is missing a lot of text (5-6 seconds' worth) compared to the reference.

"Also, I observed that the attention model initially learns something like a linear alignment, and to detect silence. So any sequences that are far from this pattern will be hard."
I thought that input and output sequences in S2T are always linearly aligned. Am I missing something here?

"Some sort of curriculum learning, excluding such bad sequences at the beginning of training (the first 20 epochs or so), will probably help to some extent."
I currently do not have a mechanism to classify training segments as clean or noisy, but I plan to work on it.

albertz avatar albertz commented on May 24, 2024

I think you mean monotonic. That is of course true for speech recognition (but not, e.g., for translation).
But linear is not true. Linear means that, e.g., the attention for the first output label should be at t=0, for the last output label it should be at t=T, and for every other output label it should focus on the linearly interpolated position in between.
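
The linear-alignment prior described above can be written out concretely. A small sketch under the stated formulation (the function name is illustrative): for N output labels over input frames 0..T, label i attends near the linearly interpolated frame i / (N - 1) * T.

```python
# Expected attention frame for each output label under a linear
# alignment prior: label 0 at frame 0, the last label at the last
# frame, and all others at linearly interpolated positions.

def linear_alignment_positions(num_labels, num_frames):
    """Return the linearly interpolated attention frame per output label."""
    if num_labels == 1:
        return [0.0]
    last_frame = num_frames - 1
    return [i / (num_labels - 1) * last_frame for i in range(num_labels)]

print(linear_alignment_positions(5, 101))  # → [0.0, 25.0, 50.0, 75.0, 100.0]
```

Long stretches of silence or untranscribed speech shift the true alignment away from this straight line, which is why such sequences are hard early in training.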

akshatdewan avatar akshatdewan commented on May 24, 2024

Thanks! I think I understand it better now. I will run some more experiments to see what I can do. Perhaps if I limit the non-speech in my test segments, the "linearity" condition might be satisfied and coverage might improve.
