Comments (10)
Hi! There are two probable reasons for this issue:
- To fine-tune the RoBERTa-large model, a larger batch size may help you get a higher F1 score.
- We trained with 4 NVIDIA GeForce RTX 3090 GPUs, so hyper-parameter tuning may be necessary to reproduce the result in your configuration.
Since our model and the default hyper-parameter settings are friendly to BERT-base and RoBERTa-base fine-tuning, it is more efficient to reproduce the results on the X-base models. We suggest you try this way first.
Thx!
from docunet.
When I was running CDR, I also encountered this problem: the performance was very low, around 62. I did not change any code, only some paths, so I do not know why this happened. Have you solved this problem?
from docunet.
Hi buddy, there are many possible reasons for this situation. I suggest you re-check the following steps; if you have any problems, feel free to contact us:
- Do you use the correct pre-trained language model? For CDR, the model is SciBERT-base.
- Do you use the correct hyper-parameters? We set different hyper-parameters (learning rate, batch size) for different datasets, so I think it is necessary to tune the hyper-parameters on the dev set. Besides, if you are using a different GPU, the batch size may differ, which will definitely influence the results. A larger batch size may help you obtain a better F1 score.
I hope those tips can help you reproduce the results.
Thx!
from docunet.
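The dev-set hyper-parameter tuning suggested above can be sketched as a small grid search. `train_and_eval` below is a hypothetical stand-in for one full fine-tuning run that returns dev F1; in practice it would invoke the repo's training script with the given learning rate and batch size. The placeholder formula inside it exists only to make the loop executable.

```python
from itertools import product

def train_and_eval(lr, batch_size):
    # Hypothetical stub: replace with a real fine-tuning run that
    # returns the dev-set F1 for this (lr, batch_size) pair.
    # The toy formula below is illustrative only.
    return 0.60 + 0.01 * batch_size / 8 - abs(lr - 2e-5) * 1000

# Try each combination and keep the configuration with the best dev F1.
best = max(
    ((lr, bs, train_and_eval(lr, bs))
     for lr, bs in product([1e-5, 2e-5, 3e-5], [4, 8, 16])),
    key=lambda t: t[2],
)
print(f"best lr={best[0]}, batch_size={best[1]}, dev F1={best[2]:.3f}")
```

The final model should then be retrained with the winning configuration and evaluated once on the test set.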
First of all, thank you very much for such a timely reply! Yes, for CDR I did use SciBERT, but I only have one GPU, so I changed the batch size to 2; the other hyper-parameters are unchanged, and I ran run_cdr.sh. Can the batch size really affect the result that much? Looking forward to your advice!
from docunet.
I think the major reason may be the batch size. You can watch the loss to check whether the model converges; training for more steps may result in better performance. Besides, you can use fp16 to fit a larger batch size.
from docunet.
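On a single small GPU, gradient accumulation is a common way to simulate the larger effective batch discussed above. Below is a minimal sketch with a toy linear model standing in for the real relation-extraction model; none of these names come from the repo's actual training loop. On a real GPU, the forward/backward passes could additionally be wrapped with `torch.cuda.amp` for the fp16 suggestion.

```python
import torch

torch.manual_seed(0)
model = torch.nn.Linear(8, 2)          # toy stand-in for the real model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

micro_batch, accum_steps = 4, 4        # effective batch = 4 * 4 = 16
data = torch.randn(micro_batch * accum_steps, 8)
labels = torch.randint(0, 2, (micro_batch * accum_steps,))
loss_fn = torch.nn.CrossEntropyLoss()

# Sum gradients over several micro-batches, then take one optimizer step.
optimizer.zero_grad()
for step in range(accum_steps):
    x = data[step * micro_batch:(step + 1) * micro_batch]
    y = labels[step * micro_batch:(step + 1) * micro_batch]
    # Divide by accum_steps so the summed gradients match the
    # average gradient of one full batch of size 16.
    loss = loss_fn(model(x), y) / accum_steps
    loss.backward()
optimizer.step()
```

The scaled losses make the accumulated gradient equivalent to the full-batch gradient, so the optimizer behaves as if the larger batch had fit in memory.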
Thanks for the advice, but what I don't understand is how the batch size could matter so much. Would it affect the score by 10+ points? Have you tried a similar experiment?
from docunet.
Maybe deep learning is simply a hyperparameter-sensitive methodology; we don't want this to happen either. We will try to conduct an analysis of the batch size in the future.
from docunet.
Hello, I have the same question! I use 1 GPU with batch_size=4 and get F1 = 0.64, which is even lower than ATLOP with the same hyper-parameters.
So, have you achieved the reported scores?
from docunet.
Hello, do you use the default experimental setting? Some other researchers have already reproduced this performance and even obtained much better results with hyperparameter tuning (such as #13 (comment)). Maybe one of the following accounts for the gap:
- Do you use SciBERT-base as the pre-trained language model?
- For the CDR dataset, we use one NVIDIA V100 16GB GPU.
- We evaluate our model with Ign F1 and F1; do you use the right evaluation metric?
If you have any questions, feel free to contact us.
from docunet.
Thanks for your work and reply!
I just reproduced the result by replacing train.data with train_filter.data!
from docunet.
Related Issues (20)
- No train_bio.py HOT 7
- ERROR: No matching distribution found for transformers==3.0.4 HOT 5
- Has the provided shell script been modified for Windows? (Resolved) HOT 4
- Question about the prediction matrix HOT 3
- Did someone try run on multi-gpu? Got an error in the multi-gpu setting. HOT 1
- The BC5CDR dataset result HOT 2
- Question about the context-based strategy
- ModuleNotFoundError: No module named 'overrides' HOT 4
- TypeError: ElementWiseMatrixAttention.forward: `matrix_1` is not present. HOT 1
- More recent trained DocRED Weights? HOT 4
- Why do results differ across runs even with a fixed random seed? How can results be made reproducible with the same seed? HOT 3
- How to download the dataset HOT 1
- Inconsistent results HOT 3
- I am unable to obtain the results you presented. HOT 2
- the result of roberta-large HOT 3
- Question about Roberta-large results HOT 4
- A question about the segmented regions HOT 2
- Some questions about the loss function HOT 3
- I keep failing to reproduce the results reported in the paper; what should I do? HOT 2
- Dataset HOT 1