
Optimal Link Prediction

This page is a companion for the following paper on optimal link prediction:

Amir Ghasemian, Homa Hosseinmardi, Aram Galstyan, Edoardo M. Airoldi, and Aaron Clauset,
Stacking Models for Nearly Optimal Link Prediction in Complex Networks, PNAS USA 117(38), 23393-23400 (2020).


Here, we provide (i) a reference set of networks as a benchmark for link prediction (Fig. S1 of the paper), (ii) the code needed to generate 42 topological features for each network (Table S1 of the paper), and (iii) a stacking method that combines these topological features for link prediction.


Fig. S1 of the paper


Table S1 of the paper

The most common approach to predicting missing links constructs a score function from network statistics of each unconnected node pair. We studied 42 such topological predictors in this paper, including predictors based on node degrees, common neighbors, random walks, and node and edge centralities, among others (see SI Appendix, Table S1). Also commonly used for link prediction are models of large-scale network structure, and measures of the proximity of an unconnected pair after embedding a network's nodes into a latent space. In this work, we additionally studied 11 model-based methods (Table S2 of the paper) and 150 embedding-based predictors, derived from two popular graph-embedding algorithms and six notions of distance or similarity in the latent space. In total, we considered 203 features of node pairs.


Table S2 of the paper
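
For illustration, here is a minimal sketch, written with networkx, of two classic topological score functions of this kind, common neighbors and the Jaccard coefficient. This is an illustrative example, not the package's implementation:

import networkx as nx

# Illustrative sketch (not the package's implementation) of two classic
# topological score functions for an unconnected node pair (u, v).
def common_neighbors_score(G, u, v):
    # number of shared neighbors; a higher count suggests a likelier link
    return len(set(G[u]) & set(G[v]))

def jaccard_score(G, u, v):
    # shared neighbors normalized by the size of the combined neighborhood
    union = set(G[u]) | set(G[v])
    return len(set(G[u]) & set(G[v])) / len(union) if union else 0.0

G = nx.karate_club_graph()  # small example graph
print(common_neighbors_score(G, 0, 33), jaccard_score(G, 0, 33))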

Across domains, predictor importances cluster in interesting ways, such that some individual predictors and some families of predictors perform better on specific domains. For instance, examining the 10 most-important predictors by domain (29 unique predictors; Fig. 1 of the paper), we find that topological methods, such as those based on common neighbors or localized random walks, perform well on social networks but less well on networks from other domains. In contrast, model-based methods perform relatively well across domains, but often perform less well on social networks than do topological measures and some embedding-based methods. Together, these results indicate that predictor methods exhibit a broad diversity of errors, which tends to correlate somewhat with scientific domain.


Fig. 1 of the paper

This performance heterogeneity highlights the practical relevance to link prediction of the general No Free Lunch theorem, which proves that, averaged over all possible inputs, every machine learning method has the same performance, and hence accuracy must be assessed on a per-dataset basis. The observed diversity of errors indicates that none of the 203 individual predictors is a universally best method for the subset of all inputs that are realistic. However, that diversity also implies that a nearly optimal link prediction method for realistic inputs could be constructed by combining individual methods so that the best individual method is applied to each given input. Such a meta-learning algorithm cannot circumvent the No Free Lunch theorem, but it can achieve optimal performance on realistic inputs by effectively redistributing its worse-than-average performance onto unrealistic inputs, which are unlikely to be encountered in practice.

On this page, we also provide one of the stacking methods from our paper so that it is accessible to all researchers in the field. In the Python module provided here, we construct the 42 topological features and combine them using a standard random forest as the supervised learning algorithm; a minimal sketch of this stacking step appears below. We do not include the model-based or embedding-based features, or the corresponding stacked models, because the implementations we used of those techniques are not ours to reshare; we refer interested readers to the sources cited in the paper for details.
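
To make the idea concrete, here is a minimal sketch of the stacking step using scikit-learn's random forest. The feature matrix X and labels y below are hypothetical stand-ins, not the package's actual pipeline:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative sketch of the stacking step, assuming X is an (n_pairs, 42)
# array of topological features for candidate node pairs and y marks which
# pairs are true held-out edges; both arrays here are hypothetical.
rng = np.random.default_rng(0)
X = rng.random((1000, 42))                   # hypothetical feature matrix
y = rng.integers(0, 2, size=1000)            # hypothetical 0/1 labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:800], y[:800])                    # fit on one split of the pairs
scores = clf.predict_proba(X[800:])[:, 1]    # predicted link scores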

Download the package:

Download Pickle Format.

This package contains the corpus of 550 real-world networks, a slightly expanded version of the CommunityFitNet corpus, drawn from many scientific domains in the Index of Complex Networks (ICON). The corpus spans a variety of sizes and structures, with 23% social, 23% economic, 32% biological, 12% technological, 3% information, and 7% transportation graphs (Fig. S1 of the paper). More information regarding the partitions obtained by 16 state-of-the-art community detection algorithms on these networks is provided in CommunityFitNet.

Download the code:

Topol. Stacking Method.

Instructions for using the package and running the code:

To load the data:

import pickle

# load the corpus of networks
with open('./Benchmark/OLP_updated.pickle', 'rb') as infile:
    df = pickle.load(infile)

# column 'edges_id' of dataframe df holds the edge list for each network
df_edgelists = df['edges_id']

# extract the edge list for the first network
edges_orig = df_edgelists.iloc[0]  # a numpy array of edges of the original graph

To run the topological feature stacking model on one of the networks in the real dataset:

import OLP as olp 
# run topological stacking model
olp.topol_stacking(edges_orig) 

To run a demo:

import OLP as olp
olp.demo()

How to cite this work:

If you use this code or data in your research, please cite it as follows:

@article{ghasemian2020stacking,
  title = {Stacking models for nearly optimal link prediction in complex networks},
  author = {Ghasemian, Amir and Hosseinmardi, Homa and Galstyan, Aram and Airoldi, Edoardo M and Clauset, Aaron},
  journal = {Proceedings of the National Academy of Sciences},
  volume = {117},
  number = {38},
  pages = {23393--23400},
  year = {2020},
  publisher = {National Academy of Sciences},
}


optimallinkprediction's Issues

SVD truncation

Hi @Aghasemian

Just spotted a potential bug in the code:

https://github.com/Aghasemian/OptimalLinkPrediction/blob/ce6b1934bab396b8e6283c3afab343ab2aea830f/Code/OLP.py#L429C5-L434

Here, the singular values with large magnitude are assumed to be positive, though in the singular value decomposition, the sign is not uniquely determined. Consequently, it could happen that the principal component is truncated when it is associated with negative singular values.

I think the right implementation is to take the magnitudes with np.abs(sig) before truncation.
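
A hypothetical illustration of the suggested fix (not the actual code in OLP.py), assuming the decomposition can yield negative values, as with an eigendecomposition of a symmetric matrix:

import numpy as np

# Hypothetical illustration of the suggested fix (not the code in OLP.py):
# when the spectral values can be negative, rank components by magnitude
# before truncating, rather than by signed value.
A = np.random.randn(6, 6)
A = (A + A.T) / 2                      # symmetric matrix, real eigenvalues
sig, vecs = np.linalg.eigh(A)          # 'sig' may contain negative entries

k = 3
keep = np.argsort(np.abs(sig))[::-1][:k]                    # largest |sig|
A_k = vecs[:, keep] @ np.diag(sig[keep]) @ vecs[:, keep].T  # rank-k approx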

Questions about the train/test split

Is this a correct interpretation of how this works:

By randomly removing edges we're simulating creating a 'past' state of the graph. In the paper you do so twice, so we end up with three 'snapshots' of the graph: G_train < G_ho < G_orig

Then, in the training phase we're trying to predict the 'missing' edges E_ho \ E_train with every non-edge of G_ho as negatives, and in the test phase we're trying to predict the 'missing' edges E_orig \ E_ho with every non-edge of G_orig as negatives.

If that's all correct, why is it necessary to create two snapshots? Why can't we just randomly remove edges once to create: G_past < G_orig, positives are E_orig \ E_past and negatives are non-edges of G_orig, and then just do a random 80-20 split for training/test? In other words, why do training and test have to be separate graphs?

And, if we did want to create two snapshots, why does E_train have to be a strict subset of E_ho? Couldn't we have the random edge removal for G_train and G_ho be independent?

Thanks for any help. I'm trying out something similar to your paper, so thanks for this repo and the paper.
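
For concreteness, here is a minimal sketch of the two-snapshot construction described in this question, under the interpretation given above. This is an assumed reading, not the repository's exact code:

import numpy as np

# A minimal sketch of the two-snapshot construction: edges are removed at
# random twice, so that E_train is a subset of E_ho, which is a subset of
# E_orig. Assumed interpretation, not the repository's exact code.
rng = np.random.default_rng(0)

def remove_edges(edges, frac, rng):
    # keep a random (1 - frac) fraction of the rows of an edge array
    n_keep = int(round((1 - frac) * len(edges)))
    idx = rng.choice(len(edges), size=n_keep, replace=False)
    return edges[idx]

# toy edge list standing in for the original graph's edges
edges_orig = np.array([(i, j) for i in range(20) for j in range(i + 1, 20)
                       if rng.random() < 0.3])
edges_ho = remove_edges(edges_orig, 0.2, rng)     # holdout snapshot G_ho
edges_train = remove_edges(edges_ho, 0.2, rng)    # training snapshot G_train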
