Blog post comparing Facebook's M2M-100 to mT5 on a low-resource language pair (English–Yoruba): https://medium.com/@abdessalemboukil/comparing-facebooks-m2m-to-mt5-in-low-resources-translation-english-yoruba-ef56624d2b75
Thank you for the M2M-100 guide. I was following it, but when I got to !fairseq-train data_bin, it seems I am using TransformerEncoderBase while you are using TransformerEncoder. What should I do to use the plain TransformerEncoder (without the Base)?
I was following your guide step by step, so I am a little confused about this difference. I believe this is why I am stuck at "Preparing to load checkpoint".
When I halt the kernel and read the output, the traceback shows a size mismatch error.
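One way to narrow this down (a sketch, not from the guide: the checkpoint filename is a placeholder, and the version comparison is an assumption) is to check the installed fairseq version and the checkpoint's embedding shapes. TransformerEncoderBase only appears in newer fairseq releases, so seeing it where the guide shows TransformerEncoder usually just means the installed fairseq is newer than the one the guide was written with.

```python
# Sketch: check the installed fairseq version and inspect the checkpoint's
# embedding shapes. Assumes a standard fairseq checkpoint file; the filename
# "418M_last_checkpoint.pt" is a placeholder for whatever the guide downloads.
import torch
import fairseq

print("fairseq version:", fairseq.__version__)
# TransformerEncoderBase was introduced in newer fairseq releases; if the guide
# shows TransformerEncoder, its author was likely on an older version, and
# reinstalling that version (pip install fairseq==<guide's version>) may help.

ckpt = torch.load("418M_last_checkpoint.pt", map_location="cpu")
for name, tensor in ckpt["model"].items():
    if "embed_tokens" in name:
        # These vocabulary dimensions must match the dictionaries in data_bin;
        # a mismatch here is a common cause of "size mismatch" when loading.
        print(name, tuple(tensor.shape))
```

If the printed shapes differ from the sizes quoted in the error message, the mismatch is likely between the checkpoint and the data_bin dictionaries rather than the encoder class itself.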
According to the docs, a prefix token needs to be set. However, in the simpletransformers documentation, the task prefixes supported by the T5 model do not include anything that looks like translation.
In mt5_test.ipynb, you set the prefix to ''. Is this the correct way to fine-tune mT5 for translation? Is there any reference?
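For context, simpletransformers' T5Model takes a training DataFrame with prefix, input_text, and target_text columns, and mT5 (unlike T5) was not pretrained with task prefixes, which is why an empty prefix is a common choice when fine-tuning it for translation. A minimal sketch under those assumptions (the sentence pairs and hyperparameters below are illustrative, not taken from mt5_test.ipynb):

```python
# Sketch: fine-tuning mT5 for translation with simpletransformers, using an
# empty prefix. Example pairs and settings are illustrative only.
import pandas as pd
from simpletransformers.t5 import T5Model, T5Args

train_df = pd.DataFrame(
    {
        "prefix": ["", ""],                        # empty prefix for mT5
        "input_text": ["How are you?", "Good morning"],
        "target_text": ["Bawo ni?", "E kaaro"],    # illustrative Yoruba targets
    }
)

model_args = T5Args()
model_args.num_train_epochs = 3
model_args.overwrite_output_dir = True

model = T5Model("mt5", "google/mt5-small", args=model_args)
model.train_model(train_df)
```

Whether a prefix helps at all for mT5 is an empirical question; since mT5 never saw task prefixes during pretraining, '' is a defensible default.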