
Comments (10)

TimelordRi commented on July 21, 2024

Hi! There are two probable reasons for this issue:

  1. When fine-tuning the RoBERTa-large model, a larger batch size may help you reach a higher F1 score.
  2. We trained with 4 NVIDIA GeForce RTX 3090 GPUs, so hyper-parameter tuning may be necessary to reproduce the results in your configuration.

Since our model and the default hyper-parameter settings are friendly to BERT-base and RoBERTa-base fine-tuning, it is more efficient to reproduce the results with the base-size models first. We suggest you try that.
Thx!
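
The suggestions above boil down to getting a larger effective batch size. If you cannot fit one on your hardware, gradient accumulation is a common workaround; below is a minimal, generic PyTorch sketch with a toy model and toy data standing in for the actual DocuNet training loop (it is not the repo's code).

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins for the real model, data and optimizer (assumptions for illustration only).
model = nn.Linear(16, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
loss_fn = nn.CrossEntropyLoss()
loader = DataLoader(TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,))),
                    batch_size=2)                 # small per-step batch that fits in memory

accumulation_steps = 8                            # effective batch size = 2 * 8 = 16
model.train()
optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    loss = loss_fn(model(x), y)
    (loss / accumulation_steps).backward()        # scale so accumulated grads match one big batch
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```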


MingYangi commented on July 21, 2024

When I was running CDR, I also encountered this problem: the performance was very low, around 62 F1. However, I did not change any code, only some paths, so I do not know why this happened. Have you solved this problem?


zxlzr commented on July 21, 2024

Hi buddy, there are many possible reasons for this situation. I suggest you re-check the following steps; if you still have problems, feel free to contact us:

  1. Do you use the correct pre-trained language model? For CDR, the model is SciBERT-base.

  2. Do you use the correct hyper-parameters? We set different hyper-parameters (learning rate, batch size) for different datasets, so it is necessary to tune them on the dev set. Besides, if you are using a different GPU, the batch size may differ, which will definitely influence the results; a larger batch size may help you obtain a better F1 score. A minimal tuning sketch is shown right after this list.
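
Here is the tuning sketch mentioned in point 2: a minimal grid search over learning rate and batch size that keeps the configuration with the best dev F1. The `train_and_eval` wrapper is hypothetical, not a function in this repo; replace its body with a call into the actual training and evaluation code.

```python
from itertools import product

def train_and_eval(learning_rate: float, batch_size: int) -> float:
    """Hypothetical wrapper: train with these hyper-parameters and return the dev-set F1.
    Replace the body with a call into the repo's training and evaluation code."""
    return 0.0  # placeholder value

# Candidate grids are illustrative, not the paper's settings.
best_f1, best_cfg = -1.0, None
for lr, bs in product([2e-5, 3e-5, 5e-5], [2, 4, 8]):
    dev_f1 = train_and_eval(learning_rate=lr, batch_size=bs)
    print(f"lr={lr}, batch_size={bs}: dev F1={dev_f1:.4f}")
    if dev_f1 > best_f1:
        best_f1, best_cfg = dev_f1, (lr, bs)
print("best configuration:", best_cfg, "dev F1:", best_f1)
```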

I hope those tips can help you reproduce the results.

Thx!


MingYangi commented on July 21, 2024

First of all, thank you very much for your prompt reply. Thank you again! Yes, for CDR I did use SciBERT, but I only have one GPU, so I changed the batch size to 2; the other hyper-parameters are unchanged and I ran run_cdr.sh. Can the batch size really affect the results that much? Looking forward to your advice!


zxlzr commented on July 21, 2024

I think the major reason may be the batch size. You can watch the loss to check whether the model converges or not; training for more steps may yield better performance. Besides, you can use fp16 to fit a larger batch size.
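
To illustrate the fp16 suggestion, here is a minimal mixed-precision loop using torch.cuda.amp that also prints the mean loss per epoch so you can watch for convergence. It is a generic PyTorch pattern with toy stand-in model and data, not the repo's actual training code.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(16, 2).to(device)               # toy stand-in for the real model
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
loss_fn = nn.CrossEntropyLoss()
loader = DataLoader(TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,))),
                    batch_size=8)

scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))
for epoch in range(3):
    running = 0.0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        with torch.cuda.amp.autocast(enabled=(device == "cuda")):
            loss = loss_fn(model(x), y)           # fp16 forward pass cuts memory use
        scaler.scale(loss).backward()             # loss scaling avoids fp16 gradient underflow
        scaler.step(optimizer)
        scaler.update()
        running += loss.item()
    print(f"epoch {epoch}: mean loss {running / len(loader):.4f}")  # watch this for convergence
```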


MingYangi commented on July 21, 2024

> I think the major reason may be the batch size. You can watch the loss to check whether the model converges or not; training for more steps may yield better performance. Besides, you can use fp16 to fit a larger batch size.

Thanks for the advice, but what I don't understand is why the batch size would affect the results so much. Would it change the score by 10%+? Have you tried a similar experiment?


zxlzr commented on July 21, 2024

> I think the major reason may be the batch size. You can watch the loss to check whether the model converges or not; training for more steps may yield better performance. Besides, you can use fp16 to fit a larger batch size.

> Thanks for the advice, but what I don't understand is why the batch size would affect the results so much. Would it change the score by 10%+? Have you tried a similar experiment?

Maybe deep learning is just a hyper-parameter-sensitive methodology; we don't want this to happen either. We will try to conduct an analysis of batch size in the future.


ZhangYi0621 commented on July 21, 2024

> When I was running CDR, I also encountered this problem: the performance was very low, around 62 F1. However, I did not change any code, only some paths, so I do not know why this happened. Have you solved this problem?

Hello, I have the same question as you! I use 1 GPU with batch_size=4 and get F1 = 0.64, which is even lower than ATLOP with the same hyper-parameters.
So, have you achieved the best scores?


zxlzr commented on July 21, 2024

> When I was running CDR, I also encountered this problem: the performance was very low, around 62 F1. However, I did not change any code, only some paths, so I do not know why this happened. Have you solved this problem?

> Hello, I have the same question as you! I use 1 GPU with batch_size=4 and get F1 = 0.64, which is even lower than ATLOP with the same hyper-parameters. So, have you achieved the best scores?

Hello, do you use the default experimental setting? Some other researchers have already reproduced this performance and even obtained much better results with hyper-parameter tuning (such as #13 (comment)). Maybe one of the following situations accounts for the difference.

  1. Do you use SciBERT-base as the pre-trained language model?

  2. For the CDR dataset, we used one NVIDIA V100 16GB GPU and evaluated our model with Ign F1 and F1; do you use the right evaluation metrics? (A minimal metric sketch follows below.)
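
On the metric point, the sketch below computes micro F1 over (doc, head, relation, tail) predictions, plus a simplified Ign F1 variant that drops facts already present in the training annotations. This is only an illustration of the convention; the repo's official evaluation script is authoritative and differs in details.

```python
def micro_f1(preds: set, golds: set) -> float:
    """Micro F1 over sets of (doc_id, head, relation, tail) tuples."""
    tp = len(preds & golds)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(preds), tp / len(golds)
    return 2 * precision * recall / (precision + recall)

def ign_f1(preds: set, golds: set, train_facts: set) -> float:
    """Simplified Ign F1: ignore facts that already appear in the training annotations."""
    return micro_f1(preds - train_facts, golds - train_facts)

# Toy usage with made-up triples (CID = chemical-induced disease).
gold = {("d1", "chem1", "CID", "dis1"), ("d1", "chem2", "CID", "dis2")}
pred = {("d1", "chem1", "CID", "dis1"), ("d1", "chem3", "CID", "dis2")}
print(micro_f1(pred, gold))                                   # 0.5
print(ign_f1(pred, gold, {("d1", "chem1", "CID", "dis1")}))   # 0.0
```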

If you have any questions, feel free to contact us.


ZhangYi0621 commented on July 21, 2024

> When I was running CDR, I also encountered this problem: the performance was very low, around 62 F1. However, I did not change any code, only some paths, so I do not know why this happened. Have you solved this problem?

> Hello, I have the same question as you! I use 1 GPU with batch_size=4 and get F1 = 0.64, which is even lower than ATLOP with the same hyper-parameters. So, have you achieved the best scores?

> Hello, do you use the default experimental setting? Some other researchers have already reproduced this performance and even obtained much better results with hyper-parameter tuning (such as #13 (comment)). Maybe one of the following situations accounts for the difference.

> Do you use SciBERT-base as the pre-trained language model? For the CDR dataset, we used one NVIDIA V100 16GB GPU and evaluated our model with Ign F1 and F1; do you use the right evaluation metrics?

> If you have any questions, feel free to contact us.

Thanks for your work and reply!
I just reproduced the result by replacing train.data with train_filter.data!

