Comments (7)
Hi,

Thank you for appreciating our work. Our training hyperparameters are detailed in the `trainer.sh` script. However, the default `training_batch_size` hyperparameter assumes you are running the script with 8 GPUs. You would need to set it according to your training setup so that the `total_effective_batch_size` equals 256. If you can provide additional details, like your previous experimental setup and the arguments passed to `trainer.sh`, it'll be easier to figure out the problem.
Thanks for your explanation. I used `bash trainer.sh --ngpus 1 --training_type m2o --pivot_lang english` to train the model. My training settings basically follow the ones in `trainer.sh`, except that I set `lr_scheduler_type` to "constant_with_warmup". When I first ran the script with the default `learning_rate` (1.0), I quickly found that the training loss curve fluctuated a lot, so I set `learning_rate` to 5e-6. The other settings are the same as those provided in `trainer.sh`. Since I only use a single GPU to train the model, the `total_effective_batch_size` ends up being 32.
Your experimental setup turns out to be quite different from ours. Firstly, we recommend training with `total_effective_batch_size` >= 256 for a steady loss curve. Since you are training with a single GPU, and assuming you can't increase `PER_DEVICE_TRAIN_BATCH_SIZE` beyond 8 without running out of memory, set `gradient_accumulation_steps` to 32 to reach the target `total_effective_batch_size`. Secondly, please stick to the default `learning_rate` of 1 with the `lr_scheduler_type` of "transformer" to reproduce our results. Please let us know if this resolves your issues.
Thanks a lot! I will try to retrain the model following your instructions.
Sorry to disturb you again, but I just found a hyperparameter named `per_lang_batch_size` in `trainer.sh`. This parameter seems to influence the sampling stage. Should I change it to fit my previous settings?
The `per_lang_batch_size`, along with `total_effective_batch_size`, dictates how many language pairs we sample from in a batch. In our case, we use `total_effective_batch_size=256` with `per_lang_batch_size=32`, which means we sample mini-batches from 8 language pairs in a batch. So, for your previous settings, you'd have to set `per_lang_batch_size=4` to mimic our settings for a `total_effective_batch_size` of 32.

However, since the batch size is quite small, you may not get the results mentioned in our paper. Let us know how it turns out. Please feel free to comment if you have any questions regarding the paper/experiment settings.
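As a rough illustration of the sampling scheme described above, each batch is assembled from equal-sized mini-batches drawn from several language pairs. The batching logic below is a toy sketch under that assumption, not the repo's actual sampler:

```python
import random

# Toy sketch of the per-language batching described above; the logic is an
# assumption for illustration, not the actual CrossSum sampler.
def sample_batch(examples_by_lang_pair: dict,
                 total_effective_batch_size: int,
                 per_lang_batch_size: int):
    # Number of language pairs contributing to one batch.
    n_pairs = total_effective_batch_size // per_lang_batch_size
    chosen_pairs = random.sample(list(examples_by_lang_pair), n_pairs)
    batch = []
    for pair in chosen_pairs:
        batch.extend(random.sample(examples_by_lang_pair[pair],
                                   per_lang_batch_size))
    return batch

# Maintainers' setting: 256 // 32 = 8 language pairs per batch.
# The suggested single-GPU setting: 32 // 4 = 8 language pairs per batch,
# preserving the same per-batch language mix at a smaller scale.
```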
Thanks! I successfully reproduced your results by following your instructions. Really nice work!