Comments (5)
It seems a very related question was also asked in #42.
I have not tried unidirectional LSTMs in the encoder yet, so I don't know. You probably should play around with all the available hyperparameters, e.g.:
- Learning rate (initial at warmup, and highest learning rate after warmup)
- Learning rate warmup length (num epochs)
- Pretraining, e.g. the starting number of layers (try 2), the initial time reduction (try increasing it, e.g. 6, 8, 16, or even 32), making the pretraining longer (more repetitions), etc.
- No or less SpecAugment in the beginning.
- Higher batch size in the beginning. Or gradient accumulation in the beginning.
- Curriculum learning, i.e. the `epoch_wise_filter` option.
- ...
Let this smallest network with highest time reduction, high batch size, less/no SpecAugment etc train like that for as long as needed, before increasing anything. This small network should first get some half-way good score. Only when you see that, at that point the pretraining can increase the depth and other things, but only carefully and slowly (such that the network does not totally break again).
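To illustrate how such a schedule could be staged, here is a minimal, hypothetical Python sketch. The function name, stage boundaries, and thresholds are all my own for illustration; they are not taken from the returnn-experiments configs:

```python
# Hypothetical staging of the advice above: start with the smallest
# network (few layers, high time reduction, no SpecAugment, large
# batches) and only grow it once it trains stably.
# All names and numbers here are illustrative, not RETURNN API.

def pretrain_stage(epoch):
    """Map a 1-based epoch to a coarse pretraining stage."""
    if epoch <= 10:
        # Smallest network: 2 layers, strong time reduction,
        # no SpecAugment, effectively larger batches.
        return {"num_layers": 2, "time_reduction": 16,
                "spec_augment": False, "batch_multiplier": 2}
    elif epoch <= 20:
        # Grow carefully: more layers, relaxed time reduction.
        return {"num_layers": 4, "time_reduction": 8,
                "spec_augment": False, "batch_multiplier": 2}
    else:
        # Full network, SpecAugment enabled.
        return {"num_layers": 6, "time_reduction": 6,
                "spec_augment": True, "batch_multiplier": 1}

print(pretrain_stage(5)["num_layers"])     # 2
print(pretrain_stage(25)["spec_augment"])  # True
```

The key design point is the one from the comment above: each transition should be small enough that the partially trained network does not break when the architecture changes.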
from returnn-experiments.
Thanks @albertz for suggesting these.
I trained the following models up to the pretraining epochs (45), with these loss observations:

1. Base model `asr_2018_attention`, with the hyperparameters below:
   ```
   pretrain = {"repetitions": 5, "construction_algo": custom_construction_algo}
   learning_rate = 0.0008
   learning_rates = list(numpy.linspace(0.0003, learning_rate, num=15))  # warmup
   ```
2. Uni LSTM, size 1024, with all hyperparameters the same as 1 except:
   `pretrain = {"repetitions": 7, "construction_algo": custom_construction_algo}`
3. Uni LSTM, size 1024, with all hyperparameters the same as 1 except:
   `learning_rate = 0.0005`
4. Uni LSTM, size 1024, with all hyperparameters the same as 1 except 10 warmup steps:
   `learning_rates = list(numpy.linspace(0.0003, learning_rate, num=10))  # warmup`
5. Uni LSTM, size 1024, with all hyperparameters the same as 1 except 20 warmup steps:
   `learning_rates = list(numpy.linspace(0.0003, learning_rate, num=20))  # warmup`

   In this case the loss becomes NaN after 10 epochs.

All the models use global attention.
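For reference, the `learning_rates` lines above just build a list of per-epoch learning rates that ramp linearly from 0.0003 up to the peak rate; a longer warmup (`num=20`) takes smaller steps toward the same peak. A small self-contained sketch of that pattern:

```python
import numpy

learning_rate = 0.0008  # peak rate after warmup, as in the configs above
# Linear warmup over N epochs, same pattern as in the configs:
warmup_10 = list(numpy.linspace(0.0003, learning_rate, num=10))
warmup_20 = list(numpy.linspace(0.0003, learning_rate, num=20))

print(len(warmup_10))            # 10
print(round(warmup_10[0], 4))    # 0.0003
print(round(warmup_10[-1], 4))   # 0.0008
# The longer warmup uses a smaller per-epoch increment:
print((warmup_20[1] - warmup_20[0]) < (warmup_10[1] - warmup_10[0]))  # True
```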
It seems that decreasing the learning rate helps. Also, increasing the LSTM cell size to 1536 helps a bit, but not much.
What other combinations do you suggest trying next?
All the things I already wrote (here), but basically everything else as well.
Lowering the learning rate to `lr = 0.0005` and `lr_init = 0.0002` worked for me. Thanks!