cuai / non-homophily-benchmarks
[WWW 2021 GLB] New Benchmarks for Learning on Non-Homophilous Graphs
License: MIT License
Hello,
Thank you so much for putting these datasets together for public access. Very interesting and well-written paper as well!
I have encountered some issues when trying to reproduce the results on the pokec and snap-patents datasets. For the simplest GCN model, my results on these two datasets are ~62% and ~41% respectively, whereas the accuracies reported in the paper are ~75% and ~45%. In both cases, I used hidden_dim = 32 and searched over lr = [0.1, 0.01, 0.001]. May I ask what hyperparameters I should use to reach the accuracy reported in the paper? Also, after how many epochs did your training converge?
In Appendix B1 of the paper, it says that the best results were also searched over hidden_dim = [4, 8, 16, 32]. However, my training accuracy is similar to my validation/test accuracy, so I am not sure reducing hidden_dim will help. Also, since these two datasets are large, running the hyperparameter search again would be expensive. Could you please kindly share the exact hyperparameters you used?
By the way, my results are on the first fixed split. My other guess is that the 5 fixed splits differ substantially from one another, so the averaged result could be high if the other splits produce higher accuracies. If that were the case, though, the variance across the 5 splits would seem too high. It would be great if you could also confirm whether the accuracies of the 5 splits should be similar.
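For reference, the sweep described above (three learning rates crossed with four hidden dimensions) could be sketched roughly as follows. Note that `train_and_eval` is a hypothetical placeholder for training the GCN once and returning validation accuracy; it is not a function from this repository.

```python
# Hedged sketch of the hyperparameter sweep described in the question.
# `train_and_eval` is a hypothetical callback (not part of this repo)
# that trains the model once and returns a validation accuracy.
from itertools import product

LRS = [0.1, 0.01, 0.001]
HIDDEN_DIMS = [4, 8, 16, 32]

def sweep(train_and_eval):
    """Return (best_val_acc, best_config) over the 3 x 4 grid."""
    best_acc, best_cfg = float('-inf'), None
    for lr, hidden in product(LRS, HIDDEN_DIMS):
        acc = train_and_eval(lr=lr, hidden_channels=hidden)
        if acc > best_acc:
            best_acc, best_cfg = acc, {'lr': lr, 'hidden_channels': hidden}
    return best_acc, best_cfg
```

With 12 configurations and large graphs like snap-patents, each grid point is a full training run, which is why the search is expensive to repeat.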
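To make the variance point concrete, here is a small sketch with made-up accuracies (one weak split, four strong ones); the values are placeholders purely to illustrate how a single low split shifts the mean only slightly while inflating the spread across splits.

```python
# Illustration with placeholder (made-up) accuracies: one weak split
# pulls the mean down a little but dominates the standard deviation.
import statistics

split_accs = [0.62, 0.76, 0.77, 0.75, 0.78]  # hypothetical values

mean_acc = statistics.mean(split_accs)   # 0.736
std_acc = statistics.stdev(split_accs)   # ~0.066, driven by the first split
```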
I really appreciate your help.
Dear contributors,
Thanks for your contribution to the community working on heterophilic graphs. However, when I try to use the yelp-chi data downloaded from Google Drive, I cannot load it correctly using scipy.io.loadmat. I have followed every step of your code and instructions, so could you please give some advice, try it yourselves again, or update the dataset?
p.s. the error info looks like:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
~/etc/miniconda3/lib/python3.8/site-packages/scipy/io/matlab/mio.py in loadmat(file_name, mdict, appendmat, **kwargs)
223 variable_names = kwargs.pop('variable_names', None)
224 with _open_file_context(file_name, appendmat) as f:
--> 225 MR, _ = mat_reader_factory(f, **kwargs)
226 matfile_dict = MR.get_variables(variable_names)
227
~/etc/miniconda3/lib/python3.8/site-packages/scipy/io/matlab/mio.py in mat_reader_factory(file_name, appendmat, **kwargs)
72 """
73 byte_stream, file_opened = _open_file(file_name, appendmat)
---> 74 mjv, mnv = get_matfile_version(byte_stream)
75 if mjv == 0:
76 return MatFile4Reader(byte_stream, **kwargs), file_opened
~/etc/miniconda3/lib/python3.8/site-packages/scipy/io/matlab/miobase.py in get_matfile_version(fileobj)
229 if maj_val in (1, 2):
...
--> 231 raise ValueError('Unknown mat file type, version %s, %s' % ret)
232
233
ValueError: Unknown mat file type, version 32, 99
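One common cause of this "Unknown mat file type" error is that the downloaded file is not actually a .mat file; for example, Google Drive sometimes serves an HTML confirmation page instead of the large file itself. A quick sanity check, under the assumption that the download should be a MATLAB v5+ file (whose text header begins with "MATLAB"), is to inspect the first bytes; the path here is hypothetical.

```python
# Quick sanity check: a MATLAB v5+ .mat file starts with a text header
# beginning with b"MATLAB"; an HTML error/confirmation page from
# Google Drive starts with b"<" instead. The path passed in is up to you.
def looks_like_matfile(path):
    with open(path, 'rb') as f:
        return f.read(6) == b'MATLAB'
```

If this returns False for the downloaded file, re-downloading it with a tool that handles Google Drive's large-file confirmation step should fix the loadmat error.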
Many thanks, and I look forward to your response.
Sun
Hi,
Thank you for your contribution about the datasets!
I tried to run your predefined experiments directly but got the following error when running bash experiments/mixhop_exp.sh snap-patents:
Namespace(dataset='snap-patents', sub_dataset='', hidden_channels=8, dropout=0.5, lr=0.01, method='mixhop', epochs=500, cpu=False, weight_decay=0.001, display_step=25, hops=2, num_layers=2, runs=5, cached=False, gat_heads=8, lp_alpha=0.1, gpr_alpha=0.1, directed=True, jk_type='max', rocauc=False, num_mlp_layers=1, print_prop=False, train_prop=0.5, valid_prop=0.25, rand_split=False, no_bn=False)
Traceback (most recent call last):
  File "/home/niepert-adm/Downloads/Non-Homophily-Benchmarks/main.py", line 32, in <module>
    dataset = load_nc_dataset(args.dataset, args.sub_dataset)
  File "/home/niepert-adm/Downloads/Non-Homophily-Benchmarks/dataset.py", line 102, in load_nc_dataset
    dataset = load_snap_patents_mat()
  File "/home/niepert-adm/Downloads/Non-Homophily-Benchmarks/dataset.py", line 256, in load_snap_patents_mat
    fulldata = scipy.io.loadmat(f'{DATAPATH}snap_patents.mat')
  File "/home/niepert-adm/miniconda3/envs/non-hom/lib/python3.9/site-packages/scipy/io/matlab/_mio.py", line 225, in loadmat
    MR, _ = mat_reader_factory(f, **kwargs)
  File "/home/niepert-adm/miniconda3/envs/non-hom/lib/python3.9/site-packages/scipy/io/matlab/_mio.py", line 74, in mat_reader_factory
    mjv, mnv = _get_matfile_version(byte_stream)
  File "/home/niepert-adm/miniconda3/envs/non-hom/lib/python3.9/site-packages/scipy/io/matlab/_miobase.py", line 251, in _get_matfile_version
    raise ValueError('Unknown mat file type, version %s, %s' % ret)
ValueError: Unknown mat file type, version 32, 99
Meanwhile, bash experiments/mixhop_exp.sh twitch-e DE works!
Best,
Min
Hi, thank you for your great work!
I have a question about the Twitch dataset (same as the issue here).
You wrote: "Vertex features are extracted based on the games played and liked, location, and streaming habits."
However, you didn't mention how the node feature vectors are constructed from those features. Could you please let us know?
Thanks,