
Comments (5)

JeanKaddour commented on June 30, 2024

Thanks for your comment and interest in our work!
So that I can understand how best to help, could you please specify what remains unclear after reading our paper?


Sapium59 commented on June 30, 2024

In my case, I was looking at Sophia's behavior on the two downstream tasks (GLUE & SuperGLUE) in Table 1 of your paper. It seems the baseline (with its learning rate fully decayed? I am not sure) outperforms Sophia on GLUE when trained for the same time budget, while the two methods tie on SuperGLUE. I suppose this observation leads to the conclusion that Sophia does not accelerate training; in fact, it slows training down. However, this rests on my hypothesis that the validation loss of this training procedure tracks downstream-task performance, i.e., the baseline's better performance should imply a smaller loss value than Sophia's. This data-interpretation problem is probably where I need your help most :)
A similar case is Figure 13 in Appendix A.6 (really quite similar!). I suppose you found $\rho = 0.01$ to be the best hyperparameter in practice, and it still cannot reach the baseline level.
I hope my (maybe silly) questions don't bother you too much. Thanks again.


JeanKaddour commented on June 30, 2024

Thanks for your response. So your question is whether Sophia's pre-training loss is also worse than the baseline's?
You can find these results in Figure 5. We report the training loss because each example is used only once throughout training; hence, there is no need for a separate validation set.
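For concreteness, a minimal sketch of such a single-pass setup (the names `model`, `make_stream`, and `loss_fn` are illustrative placeholders, not code from the notrainnogain repository):

```python
import torch

def train_single_pass(model, make_stream, loss_fn, lr=1e-4):
    # Single-epoch loop: every batch is drawn from the stream exactly once,
    # so the logged training loss is computed on never-before-seen data and
    # doubles as a validation metric.
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    for step, (x, y) in enumerate(make_stream()):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
        if step % 100 == 0:
            print(f"step {step}: train loss {loss.item():.4f}")
```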


Sapium59 commented on June 30, 2024

Thank you. The loss curves in Figure 5 do show Sophia doing worse than the baseline for most budgets. That helps a lot.
Meanwhile, the Sophia paper reached the opposite conclusion (a 2x speed-up over AdamW in the number of steps, total compute, and wall-clock time, as you say). The main difference seems to be the time measurement, since you introduced RST. May I regard your work as a crucial amendment to the Sophia paper and its conclusion, revealing that the claimed acceleration does not necessarily happen in practice?
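To make the measurement question concrete, here is a hedged sketch of comparing optimizers under an equal wall-clock budget rather than an equal step budget (all names are illustrative; this is not the paper's actual timing harness):

```python
import time
import torch

def loss_after_budget(model_fn, opt_fn, make_stream, loss_fn, budget_s=3600.0):
    # Fixed wall-clock budget: an optimizer with cheaper steps (e.g. AdamW)
    # simply takes more of them than one with costlier updates (e.g. a
    # Sophia-style second-order method), so a speed-up measured in steps
    # need not survive a time-based comparison.
    model = model_fn()
    opt = opt_fn(model.parameters())
    start, last_loss = time.monotonic(), float("nan")
    for x, y in make_stream():
        if time.monotonic() - start > budget_s:
            break
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
        last_loss = loss.item()
    return last_loss
```

Under the same budget, comparing `loss_after_budget` across the two optimizers answers the wall-clock question rather than the per-step one.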


JeanKaddour commented on June 30, 2024

Sorry for the late response. Yes, indeed, your interpretation makes sense!

