In this work, we apply two transformer models to the EuroParl Danish-English dataset for a translation task. This repository contains the code for the two models: RoBERTa-base-model and transformer-base-model.
You can install the package via:
pip install git+https://github.com/kev-zhao/life-after-bert
Or (recommended), you can download the source code and install the package in editable mode in each model directory:
git clone https://github.com/kev-zhao/life-after-bert
cd life-after-bert
pip install -e .
- First, you need to create tokenizers for both models; run the following in the transformer-base-model directory:
python cli/create_tokenizer.py --vocab_size 32000 --save_dir da_en_output_dir --source_lang da --target_lang en
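The `create_tokenizer.py` script presumably trains a subword tokenizer capped at the given `--vocab_size`. As a rough conceptual illustration only (not the repo's implementation, which most likely learns subword merges rather than whole words), a toy frequency-based vocabulary builder looks like:

```python
from collections import Counter

def build_vocab(corpus, vocab_size, specials=("<pad>", "<unk>", "<bos>", "<eos>")):
    """Toy word-level vocabulary builder: keep the most frequent tokens,
    after reserving the first slots for special tokens.
    (Real subword tokenizers such as BPE learn merge rules instead.)"""
    counts = Counter(tok for sent in corpus for tok in sent.split())
    vocab = list(specials)
    for tok, _ in counts.most_common(vocab_size - len(specials)):
        vocab.append(tok)
    return {tok: i for i, tok in enumerate(vocab)}

# Tiny hypothetical Danish corpus, just for illustration.
corpus = ["jeg elsker dig", "jeg ser dig", "vi ser dem"]
vocab = build_vocab(corpus, vocab_size=8)
```

The real script builds a shared 32,000-entry vocabulary from the parallel Danish-English data, which is why both `--source_lang` and `--target_lang` are passed.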
- Once the tokenizers are created, run the following to train your models (the example below uses the WMT14 English-German dataset; for Danish-English, pass the same `--source_lang`, `--target_lang`, and `--output_dir` used when creating the tokenizers):
python cli/train.py --dataset_name stas/wmt14-en-de-pre-processed --dataset_config ende --source_lang en --target_lang de --output_dir en_de_output_dir --batch_size 32 --num_warmup_steps 5000 --learning_rate 3e-4 --num_train_epochs 1 --eval_every 5000
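The `--num_warmup_steps 5000` and `--learning_rate 3e-4` flags suggest a warmup-based learning-rate schedule, which is standard for Transformer training. A minimal sketch of linear warmup followed by linear decay (an assumption for illustration; the repo may use a different decay shape, and `total_steps` here is hypothetical):

```python
def lr_at_step(step, peak_lr=3e-4, warmup_steps=5000, total_steps=100000):
    """Linear warmup from 0 to peak_lr over warmup_steps,
    then linear decay back to 0 at total_steps."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    # After warmup: decay linearly toward 0 at total_steps.
    remaining = max(0, total_steps - step)
    return peak_lr * remaining / (total_steps - warmup_steps)
```

Warmup avoids large, unstable updates early in training, when the randomly initialized Transformer produces high-variance gradients.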
- To use the pre-trained RoBERTa model, first create the target-language tokenizer by running the following in the roberta-base-danish directory:
python cli/create_target_tokenizer.py --vocab_size 32000 --save_dir en_output_dir --target_lang en
- Once the target language's tokenizer is created, run the following to train your model:
python cli/train_final.py --source_lang da --target_lang en --output_dir en_output_dir --batch_size 32 --num_warmup_steps 5000 --learning_rate 3e-4 --num_train_epochs 1 --eval_every 5000
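At inference time, a trained encoder-decoder translates by generating target tokens one at a time. A toy greedy-decoding loop, with a stand-in scoring function in place of the real decoder (both the token IDs and `fake_scores` are illustrative, not the repo's model):

```python
def greedy_decode(next_token_scores, bos=1, eos=2, max_len=10):
    """Repeatedly pick the highest-scoring next token until EOS or max_len.
    next_token_scores(prefix) -> {token_id: score} stands in for a real
    decoder forward pass conditioned on the generated prefix."""
    prefix = [bos]
    for _ in range(max_len):
        scores = next_token_scores(prefix)
        best = max(scores, key=scores.get)
        prefix.append(best)
        if best == eos:
            break
    return prefix

# Stand-in "model": deterministically emit tokens 5, 7, then EOS (2).
def fake_scores(prefix):
    plan = {1: 5, 5: 7, 7: 2}
    nxt = plan[prefix[-1]]
    return {nxt: 1.0, 0: 0.0}

print(greedy_decode(fake_scores))  # [1, 5, 7, 2]
```

Real systems typically replace greedy search with beam search for better translation quality, but the control flow is the same.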
You can find the Danish-English translator app publicly available at: https://huggingface.co/spaces/ftakelait/da_en_translation
You can find the Danish-English machine translation report publicly available on Overleaf at: https://www.overleaf.com/read/jfhtbffxmksg