Comments (10)
I have a question: is there a big difference between the experimental results of running the configurations under the final_configs and GOOD_configs folders? The result I got after running
!goodtg --config_path /content/GOOD/configs/GOOD_configs/GOODMotif/basis/concept/GSAT.yaml
is:
Train ACCURACY: 0.9221
Train Loss: 0.3211
ID Validation ACCURACY: 0.9204
ID Validation Loss: 0.3302
ID Test ACCURACY: 0.9111
ID Test Loss: 0.3575
OOD Validation ACCURACY: 0.6562
OOD Validation Loss: 0.8695
OOD Test ACCURACY: 0.5193
OOD Test Loss: 1.1497
INFO: Loading best Out-of-Domain Checkpoint 19...
INFO: Checkpoint 19:
Train ACCURACY: 0.9237
Train Loss: 0.3330
ID Validation ACCURACY: 0.9189
ID Validation Loss: 0.3439
ID Test ACCURACY: 0.9130
ID Test Loss: 0.3669
OOD Validation ACCURACY: 0.6957
OOD Validation Loss: 0.7038
OOD Test ACCURACY: 0.5605
OOD Test Loss: 0.9081
INFO: ChartInfo 0.9111 0.5193 0.9130 0.5605 0.6957
which is different from the leaderboard result: 75.30(1.57).
from good.
Yes, they are quite different. GOOD_configs stores the basic hyperparameters, while final_configs stores those hyperparameters after sweeping, i.e., the best hyperparameters.
!goodtg --config_path /content/GOOD/configs/final_configs/GOODMotif/basis/concept/GSAT.yaml
got the problem:

ERROR: 06/08/2023 12:27:43 PM - utils.py - line 87 :
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/munch/__init__.py", line 103, in __getattr__
    return object.__getattribute__(self, k)
AttributeError: 'Munch' object has no attribute 'clean_save'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/munch/__init__.py", line 106, in __getattr__
    return self[k]
KeyError: 'clean_save'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/bin/goodtg", line 33, in <module>
    sys.exit(load_entry_point('graph-ood', 'console_scripts', 'goodtg')())
  File "/content/GOOD/GOOD/kernel/main.py", line 69, in goodtg
    main()
  File "/content/GOOD/GOOD/kernel/main.py", line 60, in main
    pipeline.load_task()
  File "/content/GOOD/GOOD/kernel/pipelines/basic_pipeline.py", line 231, in load_task
    self.train()
  File "/content/GOOD/GOOD/kernel/pipelines/basic_pipeline.py", line 155, in train
    self.save_epoch(epoch, epoch_train_stat, id_val_stat, id_test_stat, val_stat, test_stat, self.config)
  File "/content/GOOD/GOOD/kernel/pipelines/basic_pipeline.py", line 412, in save_epoch
    if config.clean_save:
  File "/usr/local/lib/python3.10/dist-packages/munch/__init__.py", line 108, in __getattr__
    raise AttributeError(k)
AttributeError: clean_save
Hi czstudio,
I cannot reproduce this error. Can you share your package list? Also, you may try adding clean_save: False to final_configs/base.yaml. For example:
pipeline: Pipeline
clean_save: False
Please let me know if you have any questions. 😄
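If editing the config file is inconvenient, the crash could also be avoided at the call site. The snippet below is a minimal stand-alone sketch, not GOOD's actual code: it uses a SimpleNamespace as a stand-in for the Munch-based config to show why `config.clean_save` raises, and how reading the key with a `getattr` default sidesteps it.

```python
from types import SimpleNamespace

# Stand-in for the Munch-based config object: attribute access on a
# missing key raises AttributeError, matching the traceback above.
config = SimpleNamespace(pipeline="Pipeline")  # note: no clean_save key

# Direct access, as in `if config.clean_save:`, raises AttributeError here.
# A defensive read with a default avoids the crash:
clean_save = getattr(config, "clean_save", False)
print(clean_save)  # False
```

The config-file fix is still the cleaner route, since it keeps the pipeline code unchanged.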
I have a new question. Does the second “OOD Test ACCURACY” value correspond to the result on the leaderboard?
INFO: Training end.
INFO: Loading best In-Domain Checkpoint 195...
INFO: Checkpoint 195:
Train ACCURACY: 0.9259
Train Loss: 0.3450
ID Validation ACCURACY: 0.9243
ID Validation Loss: 0.3643
ID Test ACCURACY: 0.9250
ID Test Loss: 0.3630
OOD Validation ACCURACY: 0.7837
OOD Validation Loss: 0.6359
OOD Test ACCURACY: 0.6660
OOD Test Loss: 0.9501
INFO: Loading best Out-of-Domain Checkpoint 42...
INFO: Checkpoint 42:
Train ACCURACY: 0.8967
Train Loss: 0.4123
ID Validation ACCURACY: 0.8930
ID Validation Loss: 0.4263
ID Test ACCURACY: 0.8927
ID Test Loss: 0.4120
OOD Validation ACCURACY: 0.9300
OOD Validation Loss: 0.3504
OOD Test ACCURACY: 0.5800 (this one)
OOD Test Loss: 1.2065
True.
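For reference, that value can also be read programmatically from the training log. The sketch below is illustrative, not GOOD's official parsing API: it takes the OOD Test accuracy printed under the best Out-of-Domain checkpoint, using an abridged excerpt of the output above.

```python
import re

# Abridged log excerpt from the run above (illustrative only).
log = """\
INFO: Loading best Out-of-Domain Checkpoint 42...
INFO: Checkpoint 42:
OOD Validation ACCURACY: 0.9300
OOD Test ACCURACY: 0.5800
"""

# Keep only the section after the OOD-checkpoint header, then pull the
# OOD Test accuracy that is reported there.
ood_section = log.split("best Out-of-Domain Checkpoint")[1]
match = re.search(r"OOD Test ACCURACY:\s*([0-9.]+)", ood_section)
ood_test_acc = float(match.group(1))
print(ood_test_acc)  # 0.58
```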
I ran the final config on my own computer, and the results are quite different from those on the leaderboard:
goodtg --config_path GOOD_configs/GOODMotif/basis/covariate/CIGAv2.yaml --allow_devices 0
got:
INFO: Loading best In-Domain Checkpoint 107...
INFO: Checkpoint 107:
-----------------------------------
Train ACCURACY: 0.9273
Train Loss: 0.3611
ID Validation ACCURACY: 0.9263
ID Validation Loss: 0.3800
ID Test ACCURACY: 0.9260
ID Test Loss: 0.3633
OOD Validation ACCURACY: 0.5527
OOD Validation Loss: 3.6722
OOD Test ACCURACY: 0.4727
OOD Test Loss: 2.3150
INFO: Loading best Out-of-Domain Checkpoint 2...
INFO: Checkpoint 2:
-----------------------------------
Train ACCURACY: 0.6152
Train Loss: 1.1106
ID Validation ACCURACY: 0.6090
ID Validation Loss: 1.1530
ID Test ACCURACY: 0.6160
ID Test Loss: 1.0900
OOD Validation ACCURACY: 0.9307
OOD Validation Loss: 0.4753
OOD Test ACCURACY: 0.3487
OOD Test Loss: 9.0112
INFO: ChartInfo 0.9260 0.4727 0.6160 0.3487 0.9307
But the leaderboard result is 67.15(8.19), and for GSAT I got: OOD Test ACCURACY: 0.4910.
CIGA's leaderboard results are shared by its authors; there are no final configs for CIGA in GOOD. They may have chosen the causal subgraph size ratio directly instead of sweeping it.
For GSAT, after running 10 rounds (or even just 3), the results should be similar. How many rounds have you run?
I have run 200 rounds.
Sorry about the confusion. What I meant was 10 runs with different random seeds.
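Leaderboard entries such as 75.30(1.57) report the mean and standard deviation of OOD test accuracy over those runs. A minimal sketch of that aggregation follows; the accuracy values are made-up placeholders (not real GOOD results), and the use of the sample standard deviation is an assumption about how the leaderboard computes it.

```python
import statistics

# Placeholder OOD test accuracies from several seeded runs (NOT real results).
accs = [0.74, 0.77, 0.75, 0.76]

mean = statistics.mean(accs) * 100
std = statistics.stdev(accs) * 100  # sample std dev; assumed convention
print(f"{mean:.2f}({std:.2f})")  # prints 75.50(1.29)
```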
Since there is no further update, I'm closing this issue. Please don't hesitate to reopen it if needed.