cripac-dig / gca
[WWW 2021] Source code for "Graph Contrastive Learning with Adaptive Augmentation"
License: MIT License
Dear authors,
I am a new student at UCAS. I want to understand and study the baseline experiment code; could you please offer this part of the code?
What is the purpose of testing and recording accuracy every 100 epochs during training? Isn't pre-training supposed to be unsupervised? According to the DGI paper and its code, DGI only runs gradient descent until the loss stops decreasing (early stopping), and then training ends.
I don't think the InfoNCE loss value is necessarily inversely proportional to the linear-evaluation result, but controlling the training of graph contrastive learning by the linear-evaluation result amounts to fitting the dataset. So when should the training of graph contrastive learning end? And how can different graph contrastive learning methods be compared fairly?
Also, the final accuracy in the code implementation is computed on a single random split of the dataset; it does not average accuracy over multiple splits (or multiple runs of logistic regression, as DGI does).
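To illustrate the kind of protocol being asked for, here is a hedged sketch (my own, not from the repo) of averaging linear-evaluation accuracy over several random splits; `z` and `y` stand for the frozen-encoder embeddings and node labels, and the function name is hypothetical.

```python
# Sketch: average linear-evaluation accuracy over several random splits,
# instead of reporting a single split. Names here are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def evaluate_mean_accuracy(z, y, n_splits=20, test_size=0.9, seed=0):
    """Fit a logistic-regression probe on n_splits random splits and
    return the mean and std of the test accuracy."""
    accs = []
    for i in range(n_splits):
        z_tr, z_te, y_tr, y_te = train_test_split(
            z, y, test_size=test_size, random_state=seed + i)
        clf = LogisticRegression(max_iter=1000).fit(z_tr, y_tr)
        accs.append(clf.score(z_te, y_te))
    return float(np.mean(accs)), float(np.std(accs))

# Toy usage with random "embeddings"; in practice z comes from the
# frozen GCA encoder after pre-training.
rng = np.random.default_rng(0)
z = rng.normal(size=(200, 16))
y = (z[:, 0] > 0).astype(int)  # linearly separable toy labels
mean_acc, std_acc = evaluate_mean_accuracy(z, y)
```

Reporting mean ± std over splits also makes comparisons between methods less sensitive to one lucky split.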
The drop weights are computed by the following code in pGRACE/functional.py:
```python
def pr_drop_weights(edge_index, aggr: str = 'sink', k: int = 10):
    # PageRank-based edge centrality (compute_pr is defined in pGRACE/functional.py)
    pv = compute_pr(edge_index, k=k)
    pv_row = pv[edge_index[0]].to(torch.float32)
    pv_col = pv[edge_index[1]].to(torch.float32)
    s_row = torch.log(pv_row)
    s_col = torch.log(pv_col)
    if aggr == 'sink':
        s = s_col
    elif aggr == 'source':
        s = s_row
    elif aggr == 'mean':
        s = (s_col + s_row) * 0.5
    else:
        s = s_col
    weights = (s.max() - s) / (s.max() - s.mean())
    return weights
```
However, while debugging, I found that some elements of the array `pv` can be 0 (when training on WikiCS). After `torch.log()` is applied, the minimum values of `s_row` and `s_col` become `-inf`, which causes all entries of `weights` to become 0. At that point the weights are meaningless.
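One possible guard (my own suggestion, not something in the repo) is to clamp the PageRank values away from zero before taking the log, so that `torch.log` never produces `-inf`; the helper name and epsilon are illustrative.

```python
# Sketch of a guard against log(0): clamp pv to a small epsilon first.
import torch

def safe_log_scores(pv: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Return log(pv) with pv clamped to at least eps, avoiding -inf."""
    return torch.log(pv.clamp(min=eps))

pv = torch.tensor([0.0, 0.25, 0.75])  # a zero entry, as observed on WikiCS
s = safe_log_scores(pv)
# all entries of s are finite, so the downstream normalization stays well-defined
```

This keeps isolated or dangling nodes from collapsing the whole weight vector, at the cost of assigning them the maximum centrality-based drop score.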
Sorry to bother you, but I am confused: why is `edge_weights` divided by `edge_weights.mean()`?
```python
def drop_edge_weighted(edge_index, edge_weights, p: float, threshold: float = 1.):
    edge_weights = edge_weights / edge_weights.mean() * p
    edge_weights = edge_weights.where(edge_weights < threshold, torch.ones_like(edge_weights) * threshold)
    sel_mask = torch.bernoulli(1. - edge_weights).to(torch.bool)
    return edge_index[:, sel_mask]
```
Hi, thanks for reading this!
When I run WikiCS with the provided hyperparameters and code (nothing changed), I cannot reproduce the results reported in the paper (78%+); I only achieve around 32% accuracy.
Do you have any hints? Could this be caused by the hyperparameter settings?
Thanks!
I wonder whether the loss function in the paper was first proposed by you? Thank you very much!
Thanks for your awesome work! Could you please provide a requirements.txt or environment.yml for convenient installation?