cripac-dig / gca

[WWW 2021] Source code for "Graph Contrastive Learning with Adaptive Augmentation"

License: MIT License

Language: Python 100.00%

Topics: contrastive-learning, deep-learning, graph-contrastive-learning, graph-representation-learning, pytorch, self-supervised-learning

gca's People

Contributors: linyxus, opilgrim, sxkdz

gca's Issues

Relevant loss function

Was the loss function in the paper first proposed by you? Thank you very much!

Question about code

Sorry to bother you, but I am confused: why is edge_weights divided by edge_weights.mean()?

```python
import torch

def drop_edge_weighted(edge_index, edge_weights, p: float, threshold: float = 1.):
    # Rescale so the mean weight equals p: p becomes the average drop probability.
    edge_weights = edge_weights / edge_weights.mean() * p
    # Cap each per-edge drop probability at `threshold`.
    edge_weights = edge_weights.where(edge_weights < threshold, torch.ones_like(edge_weights) * threshold)
    # Keep each edge with probability 1 - edge_weights.
    sel_mask = torch.bernoulli(1. - edge_weights).to(torch.bool)
    return edge_index[:, sel_mask]
```
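
Not an official answer, but from reading the code: dividing by edge_weights.mean() rescales the weights so that their average equals p, which makes p the expected per-edge drop probability before the threshold is applied. A minimal sketch with made-up weights:

```python
import torch

# Hypothetical weights for four edges.
edge_weights = torch.tensor([0.5, 1.0, 1.5, 2.0])
p = 0.3

rescaled = edge_weights / edge_weights.mean() * p
print(rescaled)         # tensor([0.1200, 0.2400, 0.3600, 0.4800])
print(rescaled.mean())  # tensor(0.3000) -- the mean drop probability is exactly p
```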

Questions about the evaluation.

What is the purpose of testing and recording accuracy every 100 rounds during training? Isn't pre-training supposed to be an unsupervised process? According to the DGI paper and code, DGI only runs gradient descent during training until the loss stops decreasing (early stopping), and then training ends.

I don't think the InfoNCE loss value is necessarily inversely proportional to the linear-evaluation result, but steering the training of graph contrastive learning by the linear-evaluation result amounts to fitting the dataset. So when should the training of graph contrastive learning end? And how can different graph contrastive learning methods be compared fairly?

Also, the final accuracy computation in the code evaluates only one random split of the dataset; it does not average accuracy over multiple splits (or multiple runs of logistic regression, as DGI does).
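
For the last point, here is a sketch of such a multi-split protocol in the spirit of DGI; evaluate_multi_split is a hypothetical helper, not part of this repo:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def evaluate_multi_split(embeddings, labels, n_splits=20, train_size=0.1):
    """Average linear-probe accuracy over several random splits
    (hypothetical helper in the spirit of the DGI protocol)."""
    accs = []
    for seed in range(n_splits):
        X_tr, X_te, y_tr, y_te = train_test_split(
            embeddings, labels, train_size=train_size,
            random_state=seed, stratify=labels)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        accs.append(accuracy_score(y_te, clf.predict(X_te)))
    return float(np.mean(accs)), float(np.std(accs))
```

Fixing the set of split seeds would also make comparisons across methods reproducible.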

Environment setup

Thanks for your awesome work! Could you please provide a requirements.txt or environment.yml for convenient installation?
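
Pending an official file, a hypothetical requirements.txt sketch; the packages are inferred from the code's imports (torch, torch_geometric, torch_scatter, sklearn), and versions are deliberately left unpinned because the tested versions are not documented here:

```
# Hypothetical requirements.txt sketch, inferred from the imports in the code;
# versions unpinned on purpose.
torch
torch-geometric
torch-scatter
scikit-learn
numpy
```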

PageRank centrality computed by this code may be meaningless

The drop weights are computed by the following code in pGRACE/functional.py:

```python
def pr_drop_weights(edge_index, aggr: str = 'sink', k: int = 10):
    # PageRank value per node, approximated with k iterations.
    pv = compute_pr(edge_index, k=k)
    pv_row = pv[edge_index[0]].to(torch.float32)  # PageRank of source nodes
    pv_col = pv[edge_index[1]].to(torch.float32)  # PageRank of target nodes
    s_row = torch.log(pv_row)
    s_col = torch.log(pv_col)
    if aggr == 'sink':
        s = s_col
    elif aggr == 'source':
        s = s_row
    elif aggr == 'mean':
        s = (s_col + s_row) * 0.5
    else:
        s = s_col
    # Normalize so that less-central edges get larger drop weights.
    weights = (s.max() - s) / (s.max() - s.mean())

    return weights
```

However, while debugging, I found that some elements of the array pv can be 0 (when training on WikiCS). After torch.log() is applied, the minimum of s_row and s_col becomes -inf, which results in all of the weights becoming 0. In that case, the weights are meaningless.
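
One possible workaround, as a sketch (my own suggestion, not the authors' fix), is to clamp the PageRank values away from zero before taking the log:

```python
# eps is a hypothetical floor; any small positive constant avoids log(0) = -inf.
eps = 1e-8
s_row = torch.log(pv_row.clamp(min=eps))
s_col = torch.log(pv_col.clamp(min=eps))
```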

Cannot reproduce the results of WikiCS dataset

Hi, thanks for reading this!

When I run the provided hyperparameters and code (nothing changed), I cannot reproduce the results shown in the paper, i.e., 78%+; I can only achieve about 32% accuracy.

Do you have any hints on this? Could it be caused by the hyperparameter settings?

Thanks!

Baseline experiment

Dear authors,
I am a new student at UCAS. I would like to understand and study the baseline experiment code; could you please share that part of the code?
