
Comments (7)

Youggls commented on June 12, 2024

> Your experimental setup turns out to be quite different from ours. Firstly, we recommend training with total_effective_batch_size >= 256 for a steady loss curve. Since you are training on a single GPU, and assuming you can't increase PER_DEVICE_TRAIN_BATCH_SIZE beyond 8 without running out of memory, set gradient_accumulation_steps to 32 to reach the target total_effective_batch_size. Secondly, please stick to the default learning_rate of 1 with the lr_scheduler_type of "transformer" to reproduce our results. Please let us know if this resolves your issues.

Thanks a lot! I will try retraining the model following your instructions.

from crosssum.

Youggls commented on June 12, 2024

> The per_lang_batch_size, along with total_effective_batch_size, dictates how many language pairs we sample from in a batch. In our case, we use total_effective_batch_size=256 with per_lang_batch_size=32, which means we sample mini-batches from 8 language pairs in each batch. So, for your previous settings, you'd have to set per_lang_batch_size=4 to mimic our setup at a total_effective_batch_size of 32.
>
> However, since the batch size is quite small, you may not get the results mentioned in our paper. Let us know how it turns out. Please feel free to comment if you have any questions regarding the paper/experiment settings.

Thanks! I successfully reproduced your results by following your instructions. Really nice work!

from crosssum.

abhik1505040 commented on June 12, 2024

Hi,
Thank you for appreciating our work. Our training hyperparameters are detailed in the trainer.sh script. However, the default training_batch_size hyperparameter assumes you are running the script with 8 GPUs; you would need to set it according to your own setup so that the total_effective_batch_size equals 256. If you can provide additional details such as your experimental setup, the arguments passed to trainer.sh, etc., it will be easier to figure out the problem.

from crosssum.

Youggls commented on June 12, 2024

> Hi, thank you for appreciating our work. Our training hyperparameters are detailed in the trainer.sh script. However, the default training_batch_size hyperparameter assumes you are running the script with 8 GPUs; you would need to set it according to your own setup so that the total_effective_batch_size equals 256. If you can provide additional details such as your experimental setup, the arguments passed to trainer.sh, etc., it will be easier to figure out the problem.

Thanks for your explanation. I used `bash trainer.sh --ngpus 1 --training_type m2o --pivot_lang english` to train the model. My training settings basically follow the ones in trainer.sh, except that I set lr_scheduler_type to "constant_with_warmup". When I first ran trainer.sh with the default learning_rate (1.0), I quickly found that the training loss curve fluctuated a lot, so I changed learning_rate to 5e-6. All other settings are the same as those provided in trainer.sh. Since I only use a single GPU to train the model, the total_effective_batch_size ends up being 32.

from crosssum.
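
For context, the "transformer" lr_scheduler_type mentioned above is presumably the inverse-square-root schedule from the original Transformer paper, in which the configured learning_rate acts as a scale factor on a step- and model-size-dependent term rather than as a raw step size. A minimal sketch under that assumption (the function name, d_model, and warmup_steps values below are illustrative, not taken from the repo's code):

```python
# Sketch of the inverse-square-root ("Noam") schedule, which the
# "transformer" lr_scheduler_type presumably implements (an assumption;
# check the repo's training code). learning_rate acts as a multiplier.
def transformer_lr(step: int, d_model: int = 768, warmup_steps: int = 4000,
                   scale: float = 1.0) -> float:
    """Effective learning rate at a given optimizer step (1-indexed)."""
    step = max(step, 1)
    return scale * d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# With scale=1.0, the peak LR at step == warmup_steps is roughly
# d_model**-0.5 * warmup_steps**-0.5 ~= 5.7e-4 for d_model=768.
print(transformer_lr(4000))  # ~0.00057
```

Under this reading, learning_rate=1.0 corresponds to a much smaller effective step size than the nominal value suggests, which is why it can be recommended alongside the "transformer" scheduler.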

abhik1505040 commented on June 12, 2024

Your experimental setup turns out to be quite different from ours. Firstly, we recommend training with total_effective_batch_size >= 256 for a steady loss curve. Since you are training on a single GPU, and assuming you can't increase PER_DEVICE_TRAIN_BATCH_SIZE beyond 8 without running out of memory, set gradient_accumulation_steps to 32 to reach the target total_effective_batch_size. Secondly, please stick to the default learning_rate of 1 with the lr_scheduler_type of "transformer" to reproduce our results. Please let us know if this resolves your issues.

from crosssum.
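
To make the batch-size arithmetic above concrete, here is a minimal sketch; the variable names are illustrative rather than the actual flags in trainer.sh:

```python
# Effective batch size = GPUs x per-device batch x gradient accumulation steps.
def total_effective_batch_size(n_gpus: int,
                               per_device_train_batch_size: int,
                               gradient_accumulation_steps: int) -> int:
    return n_gpus * per_device_train_batch_size * gradient_accumulation_steps

# Single GPU with the per-device batch capped at 8: accumulate 32 steps to hit 256.
assert total_effective_batch_size(1, 8, 32) == 256
# The earlier single-GPU run only reached 32 (e.g. 8 x 4 with no extra accumulation),
# which explains the noisy loss curve.
assert total_effective_batch_size(1, 8, 4) == 32
```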

Youggls commented on June 12, 2024

Sorry to bother you again, but I just found a hyperparameter named per_lang_batch_size in trainer.sh. This parameter seems to influence the sampling stage. Should I change it to match my previous settings?

from crosssum.

abhik1505040 commented on June 12, 2024

The per_lang_batch_size, along with total_effective_batch_size, dictates how many language pairs we sample from in a batch. In our case, we use total_effective_batch_size=256 with per_lang_batch_size=32, which means we sample mini-batches from 8 language pairs in each batch. So, for your previous settings, you'd have to set per_lang_batch_size=4 to mimic our setup at a total_effective_batch_size of 32.

However, since the batch size is quite small, you may not get the results mentioned in our paper. Let us know how it turns out. Please feel free to comment if you have any questions regarding the paper/experiment settings.

from crosssum.
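
For illustration, a minimal sketch of the batching scheme described above, assuming each effective batch is assembled from total_effective_batch_size / per_lang_batch_size language pairs; the function names and data layout are hypothetical, not the repo's actual code:

```python
import random

# Number of language pairs contributing mini-batches to one effective batch.
def language_pairs_per_batch(total_effective_batch_size: int,
                             per_lang_batch_size: int) -> int:
    assert total_effective_batch_size % per_lang_batch_size == 0
    return total_effective_batch_size // per_lang_batch_size

# Paper setting: 256 / 32 -> 8 language pairs per batch.
assert language_pairs_per_batch(256, 32) == 8
# Keeping the same ratio at an effective batch size of 32 -> per_lang_batch_size=4.
assert language_pairs_per_batch(32, 4) == 8

def sample_batch(datasets: dict, total: int = 32, per_lang: int = 4):
    """Draw total // per_lang language pairs, per_lang examples from each.

    `datasets` maps a language-pair name to a list of examples; sampling
    weights and data loading are omitted in this sketch.
    """
    pairs = random.sample(list(datasets), total // per_lang)
    return [(pair, random.sample(datasets[pair], per_lang)) for pair in pairs]
```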
