Comments (29)
Yeah, I think so.
from giga.
But the result of GIGA-Aff is higher than the paper.
I am currently working on a graduation project based on your paper, so I am eager to know the specific number of epochs you trained for. You can also reply to me by email: [email protected].
Extremely grateful!
> But the result of GIGA-Aff is higher than the paper.
That's possible; different devices and different random seeds can give different results. I suggest running more tests with additional random seeds. BTW, how much higher?
5 or 6 percentage points
Is that the average result from multiple random seeds?
Yes, the random seeds are [0, 1, 2, 3, 4], and the number of epochs is 20. I tested twice; both results were higher than the paper's.
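The averaging over seeds described above can be scripted; a minimal sketch (the per-seed success rates below are made up for illustration, not from the GIGA evaluation):

```python
import statistics

# Hypothetical per-seed grasp success rates (fractions), e.g. as
# reported by one evaluation run per random seed 0..4.
results = {0: 0.86, 1: 0.88, 2: 0.84, 3: 0.87, 4: 0.85}

mean = statistics.mean(results.values())
stdev = statistics.stdev(results.values())
print(f"success rate over seeds {sorted(results)}: "
      f"{mean:.3f} +/- {stdev:.3f}")
```

Reporting the mean together with the standard deviation makes it easier to tell whether a 5-point gap to the paper is outside normal seed-to-seed variance.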
Hmmm, how about the result of GIGA? Is it better than GIGA-Aff?
Hmmm, that's weird. Have you checked the loss curve and made sure they both converged?
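Besides eyeballing the plot, convergence can be checked numerically by comparing the mean loss over the last few epochs against the window just before it; a sketch with made-up loss values (not GIGA's actual training losses):

```python
def has_converged(losses, window=3, tol=0.01):
    """Heuristic: the run has converged if the mean loss of the last
    `window` epochs differs from the previous `window` epochs by less
    than `tol` (relative change)."""
    if len(losses) < 2 * window:
        return False
    recent = sum(losses[-window:]) / window
    earlier = sum(losses[-2 * window:-window]) / window
    return abs(earlier - recent) / max(earlier, 1e-12) < tol

# Made-up per-epoch losses for illustration only.
still_dropping = [1.0, 0.7, 0.5, 0.38, 0.30, 0.24]
flat = [1.0, 0.5, 0.3, 0.201, 0.200, 0.200, 0.200, 0.200, 0.200]
print(has_converged(still_dropping))  # False: loss is still falling
print(has_converged(flat))            # True: curve has flattened out
```

If GIGA's curve still fails this kind of check after 20 epochs while GIGA-Aff's passes, the comparison between the two models is not apples-to-apples.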
This is the loss curve of GIGA-Aff trained for 20 epochs.
This is the one for 10 epochs.
Hi,
I trained GIGA-Aff for 10 epochs; the result is 5 percentage points higher than the paper's. I wonder if there is a problem with the code.
How about the training curve of GIGA? Have you trained GIGA?
So GIGA trained with the same number of epochs performs worse than GIGA-Aff?
Hmmm, that's weird. What scenario are you using? Packed or pile?
Not sure about that. GIGA should perform better than GIGA-Aff, especially in packed scenarios.
Sorry for the late reply.
It's weird. I retrained GIGA-Aff, and the result was lower than before, even lower than the paper's.
Did you retrain with the same settings?
The absolute value may vary because the test scenes can be different, but the relative performance between GIGA and GIGA-Aff should stay the same. However, you previously said GIGA was worse than GIGA-Aff, which is very weird. Did you train GIGA and GIGA-Aff on the same computer?
> The absolute value may vary because the test scenes can be different, but the relative performance between GIGA and GIGA-Aff should stay the same. However, you said GIGA is worse than GIGA-Aff previously, which is very weird. Did you train GIGA and GIGA-Aff on the same computer?
Yes, I trained them on the same computer before. Even when training the same model on the same computer, the loss curves I get are different.
The loss curves can differ, and training on different computers is OK. The important thing is testing on the same computer, so that after fixing the random seed the generated test scenes will be the same. (I should have asked whether you *test* them on the same computer; it was a typo.)
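The point about fixed seeds can be illustrated with a toy stand-in for scene generation (this is not GIGA's actual scene-generation code, just a generic sketch of why seeding makes the test set identical across runs):

```python
import random

def generate_scene(seed, num_objects=4):
    """Toy stand-in for simulated scene generation: with a fixed seed,
    the sampled object poses (x, y, yaw) are identical on every run."""
    rng = random.Random(seed)  # local RNG, independent of global state
    return [(rng.uniform(0.0, 0.3), rng.uniform(0.0, 0.3),
             rng.uniform(0.0, 6.28))
            for _ in range(num_objects)]

# Same seed on the same machine -> identical test scenes, so GIGA and
# GIGA-Aff are evaluated on exactly the same inputs.
assert generate_scene(0) == generate_scene(0)
assert generate_scene(0) != generate_scene(1)
```

This is why the absolute numbers can move when the test seeds change, while a comparison between two models evaluated under the same seeds remains meaningful.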
Not sure why this happens. I'll look back into this and retrain it myself later.