
How about giga (closed, 29 comments)

ut-austin-rpl commented on September 24, 2024
How

from giga.

Comments (29)

Steve-Tod commented on September 24, 2024

Yeah, I think so.

wuzeww commented on September 24, 2024

But the result of GIGA-Aff is higher than in the paper.

wuzeww commented on September 24, 2024

I am currently working on my graduation project based on your paper, so I am eager to know the specific number of epochs you trained for. You can also reply to me by email: [email protected].
Thanks very much!

Steve-Tod commented on September 24, 2024

But the result of GIGA-Aff is higher than in the paper.

That's possible: different devices and different random seeds can give different results. I suggest running more tests with more random seeds. BTW, how much higher?
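The multi-seed evaluation discussed here can be sketched as below. This is a minimal sketch: `evaluate_grasping` is a hypothetical stand-in for the repo's actual evaluation script, not a real GIGA function.

```python
import random
import statistics

def evaluate_grasping(seed: int) -> float:
    # Hypothetical stand-in for GIGA's evaluation script; it returns a
    # fake success rate that depends deterministically on the seed.
    rng = random.Random(seed)  # fix the scene generator's RNG
    return 0.85 + rng.uniform(-0.03, 0.03)

seeds = [0, 1, 2, 3, 4]  # the seeds used in this thread
rates = [evaluate_grasping(s) for s in seeds]
print(f"mean={statistics.mean(rates):.3f}, stdev={statistics.stdev(rates):.3f}")
```

Reporting the standard deviation alongside the mean makes it clearer whether a 5-6 point gap falls outside normal seed-to-seed variance.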

wuzeww commented on September 24, 2024

5 or 6 percentage points

Steve-Tod commented on September 24, 2024

Is that the average result from multiple random seeds?

wuzeww commented on September 24, 2024

Yes, the random seeds are [0, 1, 2, 3, 4], and the number of epochs is 20. I tested twice, and the results were both higher than in the paper.

Steve-Tod commented on September 24, 2024

Hmmm, how about the result of GIGA? Is it better than GIGA-Aff?

wuzeww commented on September 24, 2024

Steve-Tod commented on September 24, 2024

Hmmm, that's weird. Have you checked the loss curves and made sure they both converged?

wuzeww commented on September 24, 2024

This is the loss graph of GIGA-Aff trained for 20 epochs:

Screenshot 2021-10-28 09-59-19

This is for 10 epochs:

Screenshot 2021-10-28 09-59-26

wuzeww commented on September 24, 2024

Hi,

I trained GIGA-Aff for 10 epochs, and the result is 5 percentage points higher than in the paper. I wonder if there is a problem with the code.

Steve-Tod commented on September 24, 2024

How about the training figure of GIGA? Have you trained GIGA?

wuzeww commented on September 24, 2024

This is the loss curve of GIGA:

Screenshot 2021-10-30 20-51-43

Steve-Tod commented on September 24, 2024

So GIGA trained with the same number of epochs performs worse than GIGA-Aff?

wuzeww commented on September 24, 2024

Steve-Tod commented on September 24, 2024

Hmmm, that's weird. What scenario are you using? Packed or pile?

wuzeww commented on September 24, 2024

Steve-Tod commented on September 24, 2024

Not sure about that. GIGA should perform better than GIGA-Aff, especially in packed scenarios.

wuzeww commented on September 24, 2024

Sorry for the late reply.

It's weird. I retrained GIGA-Aff, and the result was lower than before, even lower than in the paper.

Steve-Tod commented on September 24, 2024

Did you retrain with the same settings?

wuzeww commented on September 24, 2024

Steve-Tod commented on September 24, 2024

The absolute value may vary because the test scenes can be different, but the relative performance between GIGA and GIGA-Aff should stay the same. However, you said GIGA is worse than GIGA-Aff previously, which is very weird. Did you train GIGA and GIGA-Aff on the same computer?

wuzeww commented on September 24, 2024

The absolute value may vary because the test scenes can be different, but the relative performance between GIGA and GIGA-Aff should stay the same. However, you said GIGA is worse than GIGA-Aff previously, which is very weird. Did you train GIGA and GIGA-Aff on the same computer?

Yes, I trained them on the same computer. Even when training the same model on the same computer, the resulting loss curves are different.


Steve-Tod commented on September 24, 2024

The loss curves can differ, and training on different computers is OK. The important thing is testing on the same computer, so that after fixing the random seed, the generated scenes are identical. (I should have asked whether you tested them on the same computer; it was a typo.)
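Fixing the random seed so the generated test scenes match across runs can be sketched as follows. This is illustrative, not from the GIGA codebase: the stdlib `random` module stands in for the scene generator's RNG, and a real setup would also seed numpy and torch.

```python
import random

def seed_everything(seed: int) -> None:
    # Illustrative helper. In the real setup you would also call
    # np.random.seed(seed) and torch.manual_seed(seed) here.
    random.seed(seed)

seed_everything(0)
scene_a = [random.random() for _ in range(3)]  # e.g. object poses in one scene
seed_everything(0)
scene_b = [random.random() for _ in range(3)]
print(scene_a == scene_b)  # True: the same seed reproduces the same scene
```

With seeding done this way on a single test machine, differences between GIGA and GIGA-Aff reflect the models rather than the sampled scenes.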

wuzeww commented on September 24, 2024

Steve-Tod commented on September 24, 2024

Not sure why this happens. I'll look back into this and re-train it myself later.

wuzeww commented on September 24, 2024
