
RecAdam

Introduction

We provide the RecAdam (Recall Adam) optimizer to facilitate fine-tuning of deep pretrained language models (e.g., BERT, ALBERT) with less forgetting.

For a detailed description and experimental results, please refer to our paper: Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting.
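
At a high level, RecAdam shifts the objective during fine-tuning: a quadratic penalty first pulls the parameters back toward their pretrained values (recall), and an annealing coefficient gradually hands control over to the target-task loss (learn). Below is a minimal sketch of that blended objective under this reading; the function name recadam_objective and the flat parameter lists are our illustration, not the repository's API:

import torch

def recadam_objective(task_loss, params, pretrain_params, lam, gamma=5000.0):
    # Objective shifting: lam near 0 mostly recalls the pretrained weights
    # through the quadratic penalty; lam near 1 mostly learns the new task.
    # gamma is the coefficient of the quadratic penalty (5,000 in this README).
    penalty = sum(((p - p0) ** 2).sum() for p, p0 in zip(params, pretrain_params))
    return lam * task_loss + (1.0 - lam) * gamma * penalty

The annealing schedule for lam is the sigmoid function described in the sections below.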

Environment

python >= 3.6
pytorch >= 1.0.0
transformers >= 2.5.1

Files

RecAdam.py: the implementation of the RecAdam optimizer
run_glue_with_RecAdam.py: the script for fine-tuning on the GLUE tasks with RecAdam

Run GLUE tasks

The GLUE data can be downloaded by running this script and unpacked to some directory $GLUE_DIR.

With ALBERT-xxlarge model

For ALBERT-xxlarge, we use the same hyperparameters as the ALBERT paper, except for the maximum sequence length, which we set to 128 rather than 512.

As for the RecAdam hyperparameters, we choose the sigmoid annealing function, set the coefficient of the quadratic penalty to 5,000, and select the best k in {0.05, 0.1, 0.2, 0.5, 1} and the best t_0 in {100, 250, 500} for small tasks and {250, 500, 1,000} for large tasks.
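
The sigmoid annealing function determines how fast the objective shifts from recalling the pretrained model to learning the target task: t_0 is the step at which the coefficient reaches 0.5, and k controls the sharpness of the transition. A small sketch of this schedule (our illustration of the formula, not the repository's code):

import math

def sigmoid_anneal(step, k, t0):
    # lambda(t) = 1 / (1 + exp(-k * (t - t0))): rises from ~0 to ~1,
    # crossing 0.5 at step t0, with steepness set by k.
    return 1.0 / (1.0 + math.exp(-k * (step - t0)))

With k=0.1 and t_0=1000 (the values used in the example scripts below), the coefficient stays close to 0 for the first few hundred steps (almost pure recall), reaches 0.5 at step 1,000, and is close to 1 by step 2,000 (almost pure task learning).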

Here is an example script to get started:

export GLUE_DIR=/path/to/glue
export TASK_NAME=CoLA

python run_glue_with_RecAdam.py \
  --model_type albert \
  --model_name_or_path /path/to/model \
  --log_path /path/to/log \
  --task_name $TASK_NAME \
  --do_train \
  --do_eval \
  --do_lower_case \
  --data_dir $GLUE_DIR/$TASK_NAME \
  --max_seq_length 128 \
  --per_gpu_train_batch_size 16 \
  --learning_rate 1e-5 \
  --warmup_steps 320 \
  --max_steps 5336 \
  --output_dir /path/to/output/$TASK_NAME/ \
  --evaluate_during_training \
  --train_logging_steps 25 \
  --eval_logging_steps 100 \
  --albert_dropout 0.0 \
  --optimizer RecAdam \
  --recadam_anneal_fun sigmoid \
  --recadam_anneal_t0 1000 \
  --recadam_anneal_k 0.1 \
  --recadam_pretrain_cof 5000.0 \
  --logging_Euclid_dist 
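
The --logging_Euclid_dist flag logs the Euclidean distance between the fine-tuned and the pretrained parameters during training, i.e., how far the model has drifted from the weights that the quadratic penalty constrains. A sketch of that quantity (our illustration):

import torch

def euclid_dist(params, pretrain_params):
    # Euclidean distance between current and pretrained weights:
    # sqrt(sum_i ||theta_i - theta_i*||^2), a rough proxy for forgetting.
    sq = sum(((p - p0) ** 2).sum() for p, p0 in zip(params, pretrain_params))
    return torch.sqrt(sq)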

With BERT-base model

For BERT-base, we use the same hyperparameters as the BERT paper. We set the learning rate to 2e-5, and find that the model has not converged on each GLUE task after 3 epochs of fine-tuning. To ensure the convergence of vanilla fine-tuning, we increase the training steps for each task (61,360 on MNLI; 56,855 on QQP; 33,890 on QNLI; 21,050 on SST; 13,400 on CoLA; 9,000 on STS; 11,500 on MRPC; 7,800 on RTE), and achieve better baseline scores on the dev set of the GLUE benchmark.

As for the RecAdam hyperparameters, we choose the sigmoid annealing function, set the coefficient of the quadratic penalty to 5,000, and select the best k and t_0 in {0.05, 0.1, 0.2, 0.5, 1} and {250, 500, 1,000} respectively.

Here is an example script to get started:

export GLUE_DIR=/path/to/glue
export TASK_NAME=STS-B

python run_glue_with_RecAdam.py \
  --model_type bert \
  --model_name_or_path /path/to/model \
  --log_path /path/to/log \
  --task_name $TASK_NAME \
  --do_train \
  --do_eval \
  --do_lower_case \
  --data_dir $GLUE_DIR/$TASK_NAME \
  --max_seq_length 128 \
  --per_gpu_train_batch_size 32 \
  --learning_rate 2e-5 \
  --max_steps 9000 \
  --output_dir /path/to/output/$TASK_NAME/ \
  --evaluate_during_training \
  --train_logging_steps 50 \
  --eval_logging_steps 180 \
  --optimizer RecAdam \
  --recadam_anneal_fun sigmoid \
  --recadam_anneal_t0 1000 \
  --recadam_anneal_k 0.1 \
  --recadam_pretrain_cof 5000.0 \
  --logging_Euclid_dist 

Citation

If you find RecAdam useful, please cite our paper:

@article{recadam,
  title={Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting},
  author={Chen, Sanyuan and Hou, Yutai and Cui, Yiming and Che, Wanxiang and Liu, Ting and Yu, Xiangzhan},
  journal={arXiv preprint arXiv:2004.12651},
  year={2020}
}
