Comments (8)
Closing this for now. I will try running this for more epochs sometime later. If the issue persists, I will reopen.
Thanks for your help @glample, appreciate it.
from unsupervisedmt.
Hi,
When you say you are getting very different results when training, do you mean compared to when you run the code on GPU? Did you compare a CPU and a GPU experiment? Can you provide the train.log as well? I'm not sure what is happening here; performance should be the same on CPU and GPU, so I'm curious to see the training losses.
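For a meaningful CPU-vs-GPU loss comparison, both runs need to start from identical weights. A minimal PyTorch sketch of that idea (the model and names here are hypothetical, not the project's actual code):

```python
import torch

def make_model(device, seed=42):
    # Seed before constructing the model so CPU and GPU runs
    # start from the same initial weights.
    torch.manual_seed(seed)
    return torch.nn.Linear(8, 2).to(device)

cpu_model = make_model("cpu")
gpu_device = "cuda" if torch.cuda.is_available() else "cpu"
gpu_model = make_model(gpu_device)

# With the same seed, the initial parameters match, so early training
# losses should track closely between the two devices.
for p_cpu, p_gpu in zip(cpu_model.parameters(), gpu_model.parameters()):
    assert torch.allclose(p_cpu, p_gpu.cpu())
```

If the losses diverge immediately despite identical seeds and data, that points to a code-path difference between the CPU and GPU branches rather than normal numerical noise.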
I could not find the train.log. I noticed my dump folder was not being created correctly; I'm guessing it is because of Windows/Linux directory-structure differences.
Let me fix this and get back to you.
Thanks!
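On the Windows/Linux path issue: building the dump directory with os.path.join instead of hard-coded "/" separators sidesteps the difference entirely. A generic sketch (the function and folder names are illustrative, not the project's actual code):

```python
import os

def dump_path(root, exp_name, exp_id):
    # os.path.join picks the right separator ("\\" on Windows, "/" on Linux),
    # so the dump folder is created correctly on both systems.
    path = os.path.join(root, exp_name, exp_id)
    os.makedirs(path, exist_ok=True)
    return path

log_dir = dump_path("dumped", "test_exp", "run1")
```

The train.log would then land under log_dir regardless of the operating system.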
@glample I'm assuming the GPU run is fine, as no changes were required there. Here is my train log for the CPU run. It's only for a couple of steps, but it does give the same error as before.
train.log
On GPU it gives me the correct output, like this:
INFO - 10/11/18 02:23:10 - 0:03:24 - Creating new training otf,fr iterator ...
INFO - 10/11/18 02:23:17 - 0:03:32 - Creating new training otf,en iterator ...
INFO - 10/11/18 02:29:41 - 0:09:56 - 50 - 11.87 sent/s - 302.00 words/s - XE-en-en: 9.0458 || XE-fr-fr: 9.3363 || XE-fr-en-fr: 8.8043 || XE-en-fr-en: 9.5765 || ENC-L2-en: 4.4486 || ENC-L2-fr: 4.4885 - LR enc=1.0000e-04,dec=1.0000e-04 - Sentences generation time: 128.76s (23.88%)
INFO - 10/11/18 02:34:47 - 0:15:01 - 100 - 20.95 sent/s - 537.00 words/s - XE-en-en: 6.7902 || XE-fr-fr: 6.8785 || XE-fr-en-fr: 6.5884 || XE-en-fr-en: 7.1309 || ENC-L2-en: 4.1948 || ENC-L2-fr: 4.1622 - LR enc=1.0000e-04,dec=1.0000e-04 - Sentences generation time: 27.85s (9.12%)
INFO - 10/11/18 02:39:52 - 0:20:06 - 150 - 20.99 sent/s - 576.00 words/s - XE-en-en: 6.3104 || XE-fr-fr: 6.1536 || XE-fr-en-fr: 6.1312 || XE-en-fr-en: 6.5265 || ENC-L2-en: 4.2309 || ENC-L2-fr: 4.1930 - LR enc=1.0000e-04,dec=1.0000e-04 - Sentences generation time: 25.22s (8.27%)
INFO - 10/11/18 02:44:44 - 0:24:59 - 200 - 21.90 sent/s - 585.00 words/s - XE-en-en: 6.0423 || XE-fr-fr: 5.9185 || XE-fr-en-fr: 5.9556 || XE-en-fr-en: 6.4293 || ENC-L2-en: 4.2429 || ENC-L2-fr: 4.2192 - LR enc=1.0000e-04,dec=1.0000e-04 - Sentences generation time: 15.95s (5.46%)
How do you know this is the correct output? For CPU it looks like you trained for 6 epochs and still get -1 BLEU. How much do you get if you train for 6 epochs on GPU?
@glample
Here is the train.log for the run on GPU (not complete, just 8 epochs). BLEU scores show 0.0, but it doesn't give the warning message like the CPU run does. Is this expected?
FAIR_train_GPU.log
Hey,
You are using --epoch_size 500, why? With this, the model only trains on a few batches per epoch. The default value is --epoch_size 500000.
Maybe you can try something in between, like --epoch_size 100000, since CPU will be very slow, and check that you get the same performance at the end of one GPU epoch and one CPU epoch.
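To make the arithmetic concrete: since epoch_size counts sentences per epoch, the number of optimizer updates per epoch is roughly epoch_size divided by the batch size. A back-of-the-envelope sketch (assuming a batch size of 32, which matches the tensor shapes visible in the logs above):

```python
def updates_per_epoch(epoch_size, batch_size=32):
    # Number of optimizer steps the model takes in one "epoch".
    return epoch_size // batch_size

# --epoch_size 500 gives only ~15 updates per epoch, far too few to learn
# anything; the default 500000 gives ~15625 updates per epoch.
print(updates_per_epoch(500))     # 15
print(updates_per_epoch(500000))  # 15625
```

This is why a tiny epoch_size is fine as a smoke test but cannot produce a meaningful BLEU score.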
@glample 500, just because I wanted to see whether my CPU-level changes to all the files were correct and whether it would even start to run. I'm guessing all the changes were fine and the model runs correctly; it just needs to run for more epochs to give a good BLEU score.