
pretrain-gnns's Introduction

Strategies for Pre-training Graph Neural Networks

This is a PyTorch implementation of the following paper:

Weihua Hu*, Bowen Liu*, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay Pande, Jure Leskovec. Strategies for Pre-training Graph Neural Networks. ICLR 2020. arXiv OpenReview

If you make use of the code/experiment in your work, please cite our paper (Bibtex below).

@inproceedings{
hu2020pretraining,
title={Strategies for Pre-training Graph Neural Networks},
author={Hu, Weihua and Liu, Bowen and Gomes, Joseph and Zitnik, Marinka and Liang, Percy and Pande, Vijay and Leskovec, Jure},
booktitle={International Conference on Learning Representations},
year={2020},
url={https://openreview.net/forum?id=HJlWWJSFDH},
}

Installation

We used the following Python packages for core development. We tested on Python 3.7.

pytorch                   1.0.1
torch-cluster             1.2.4              
torch-geometric           1.0.3
torch-scatter             1.1.2 
torch-sparse              0.2.4
torch-spline-conv         1.0.6
rdkit                     2019.03.1.0
tqdm                      4.31.1
tensorboardx              1.6

Dataset download

All the necessary data files can be downloaded from the following links.

For the chemistry dataset, download from chem data (2.5GB), unzip it, and put it under chem/. For the biology dataset, download from bio data (2GB), unzip it, and put it under bio/.

Pre-training and fine-tuning

In each directory, we have three kinds of files used to train GNNs.

1. Self-supervised pre-training

python pretrain_contextpred.py --output_model_file OUTPUT_MODEL_PATH
python pretrain_masking.py --output_model_file OUTPUT_MODEL_PATH
python pretrain_edgepred.py --output_model_file OUTPUT_MODEL_PATH
python pretrain_deepgraphinfomax.py --output_model_file OUTPUT_MODEL_PATH

This will save the resulting pre-trained model to OUTPUT_MODEL_PATH.

2. Supervised pre-training

python pretrain_supervised.py --output_model_file OUTPUT_MODEL_PATH --input_model_file INPUT_MODEL_PATH

This will load the pre-trained model in INPUT_MODEL_PATH, further pre-train it using supervised pre-training, and then save the resulting pre-trained model to OUTPUT_MODEL_PATH.

3. Fine-tuning

python finetune.py --model_file INPUT_MODEL_PATH --dataset DOWNSTREAM_DATASET --filename OUTPUT_FILE_PATH

This will fine-tune the pre-trained model specified in INPUT_MODEL_PATH on the dataset DOWNSTREAM_DATASET. The fine-tuning results will be saved to OUTPUT_FILE_PATH.

Saved pre-trained models

We release pre-trained models in model_gin/ and model_architecture/ for both biology (bio/) and chemistry (chem/) applications. Feel free to take the models and use them in your applications!

Reproducing results in the paper

Our results in the paper can be reproduced by running sh finetune_tune.sh SEED DEVICE, where SEED is a random seed ranging from 0 to 9, and DEVICE specifies the GPU ID to run the script. This script will finetune our saved pre-trained models on each downstream dataset.

pretrain-gnns's People

Contributors

monk1337, weihua916



pretrain-gnns's Issues

Questions about the Null_value

Hi,
Thanks for this impressive work! I just want to know the details of how you deal with datasets that contain null values when they are split for fine-tuning, such as MUV and ToxCast. It seems that you fill the NULL values with 0 in your finetune.py, but filling NULL with 0 would introduce new labels (0) into the dataset; should they instead be filled with some value other than 0 and 1? Correct me if I am wrong.
Thanks again!
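For context, the chem datasets in this repo appear to encode labels as +1/-1, with 0 reserved for missing values, so the 0s act as a mask rather than a third class; the eval code quoted in a later issue filters on y_true**2 > 0 for the same reason. A minimal sketch of that masking pattern, with illustrative tensor names rather than the repo's exact code:

import torch
import torch.nn as nn

# y holds the raw labels: +1 (positive), -1 (negative), 0 (missing / NULL)
y = torch.tensor([[ 1., -1.,  0.],
                  [ 0.,  1., -1.]])
pred = torch.randn(2, 3)          # model logits, one column per task

is_valid = y**2 > 0               # True only where a real label exists
target = (y + 1) / 2              # map {-1, +1} -> {0, 1}; 0-entries get masked anyway

criterion = nn.BCEWithLogitsLoss(reduction="none")
loss_mat = criterion(pred, target)
loss = loss_mat[is_valid].mean()  # NULL entries never contribute to the loss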

Question about more datasets and more baselines

Why don't you report experimental results on the regression tasks in the MoleculeNet benchmark? Additionally, there are pre-training techniques tailored for molecular data, such as mol2vec and seq2seq fingerprint, but I don't see a comparison against those models.

Question about the results

Hi,

Thanks a lot for this amazing article. I just wanted to ask about the results you got on the regression tasks (lipo, freesolv, esol). Do you have those? I've tried it on my side and the results don't look that impressive. However, I might be doing something wrong.

Thanks in advance

Questions about the roc_auc value

Hi,
Thanks for this impressive work! I just want to know the details of how you get the roc_auc value. Does the following code calculate the AUC value on the test set?
[screenshot of the evaluation code]

Any help would be appreciated.

Other Dependencies

The dependencies in the README.md should also include tqdm and tensorboardX.

Code for processing raw data

Hi,

I am interested in customizing the input data for my application. Could you also share the code that generates geometric_data_processed.pt from raw/ in the bio dataset?

Thanks a lot!!

Question about OGB submission

Hi there,

I was just wondering whether you used the pre-training strategies proposed in the paper for your submissions to OGB, e.g. when training the GIN+virtual node models for the molpcba and molhiv datasets, as described here.

Thanks!

Edge attributes for BIO dataset

In the paper, you describe the BIO dataset as having 7 edge attributes corresponding to different interactions (coexpression, neighborhood, etc.). However, when I stepped through the code, I noticed that edge_attr in geometric_data_processed.pt has 9 features. I was wondering about the discrepancy.

EDIT: Additionally, I'm unsure why the Embedding for node features has dimension 2 x 300. Why 2?
Also, I now understand that the 7th dimension (counting from 0) is the "self loops" attribute, but I'm still unclear on the 8th (last) dimension.

Shape mismatch error when switching the model to GAT

Hi, I got a shape mismatch error at the line "x_j += edge_attr" in the message function of the GATConv class when I tried to switch to the GAT model. It seems that the reshaping "x = self.weight_linear(x).view(-1, self.heads, self.emb_dim)" in the forward function messes up the shape of "x_j".
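One way to reconcile the shapes, assuming the edge embedding emits heads * emb_dim features per edge (an assumption, since the exact shapes are not shown above), is to give edge_attr the same per-head view before the addition:

# inside GATConv.message, assuming these shapes after the .view in forward:
#   x_j:       [num_edges, heads, emb_dim]
#   edge_attr: [num_edges, heads * emb_dim]  (output of the edge embedding)
edge_attr = edge_attr.view(-1, self.heads, self.emb_dim)
x_j = x_j + edge_attr  # both sides are now [num_edges, heads, emb_dim]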

Question about the environment

Hello,

I am trying to build the environment according to your installation guidance. May I know which CUDA version you used?

dataset format

Hello,

It seems that in the molecular datasets the node/edge features are not one-hot encoded; instead, a natural number encodes each category. As the paper's appendix mentions, these are categorical features. I was wondering why one-hot encoding is not used?

Thanks a lot!
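For reference, an nn.Embedding lookup on an integer category is mathematically identical to multiplying a one-hot vector by the embedding matrix, so the integer encoding loses nothing; a minimal demonstration:

import torch
import torch.nn as nn
import torch.nn.functional as F

num_categories, emb_dim = 5, 8
emb = nn.Embedding(num_categories, emb_dim)

idx = torch.tensor([3])                           # integer-coded category
one_hot = F.one_hot(idx, num_categories).float()  # shape [1, 5]

via_lookup = emb(idx)                             # embedding lookup on the integer
via_matmul = one_hot @ emb.weight                 # one-hot times weight matrix
assert torch.allclose(via_lookup, via_matmul)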

Questions about ChEMBL Loader

Hi there,
Got two questions about the function _load_chembl_with_labels_dataset:

  1. I am wondering why you generate isomeric SMILES from the molecule and then choose the largest one? I didn't find this in the original loader file. Is there any specific reason for doing this?
  2. What's the difference between the SMILES generated above and the ones stored in chembl20Smiles.pckl?

Any help would be appreciated.
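For readers unfamiliar with the step being asked about: keeping the largest fragment is a common way to strip salts and solvents from multi-component ChEMBL entries. A sketch of that step with RDKit (not the repo's exact loader code):

from rdkit import Chem

def largest_fragment_smiles(mol):
    # split a possibly multi-component molecule (e.g. a salt) into fragments
    frags = Chem.GetMolFrags(mol, asMols=True, sanitizeFrags=False)
    largest = max(frags, key=lambda m: m.GetNumAtoms())
    # isomeric SMILES preserves stereochemistry information
    return Chem.MolToSmiles(largest, isomericSmiles=True)

print(largest_fragment_smiles(Chem.MolFromSmiles("CC(=O)O.[Na+]")))  # CC(=O)O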

Sharing GNN models through the Hugging Face Hub

Hi @weihua916 and snap team!

This and the OGB project are amazing. I see you host and share models in your repo. Would you be interested in sharing your models/datasets on the 🤗 Hugging Face Hub? At HF we want to collaborate with the GNN community to promote open source knowledge in this area where your work is key 😀.

This integration would allow you to freely download/upload models, and make your work more accessible and visible to the rest of the ML community. We can help you set up a SNAP-Stanford organization (examples from other Stanford organizations within HF, Stanford CRFM, and Stanford NLP).

Creating the repos and adding new models should be a relatively straightforward process. This is a step-by-step guide explaining the process in case you're interested. Please let us know if you would be interested and if you have any questions.

Some of the benefits of sharing your models through the HF Hub would be:

  • Presence in the HF Hub might lower the barrier to entry for SNAP Stanford users as well as increase its visibility.
    • Repos provide useful metadata about their tasks, languages, metrics, etc., that makes them discoverable.
  • versioning, commit history, and diffs.
  • multiple features from TensorBoard visualizations, PapersWithCode integration, and more.

Additionally, we have a library to programmatically access repositories (both downloading pre-trained models and pushing, with a lot of nice things such as filtering, caching, etc). If we want to try out this integration, I would suggest you add one or two models manually and then use the huggingface_hub library to implement downloading those models programmatically from SNAP. You might want to check our documentation to read more about it.


Happy to hear your thoughts,

Omar and the Hugging Face team

cc @napoles-uach @osanseviero @abidlabs

Validation on different dataset

Hi,
I am using your pretrained model and fine-tuning it on my own dataset. Is it possible to validate my fine-tuned model on a different validation dataset? I want to keep it completely separate. Is this possible with the pretrain-gnns chem model? If yes, how should I approach it?

Thanks.

@weihua916

Some questions about the bio dataset

Hi,

The paper says that 394,925 protein subgraphs are used for the self-supervised pre-training, but the code only loads 306,925 subgraphs (without the 88,000 labeled subgraphs?).

So, how many graphs are used in the pre-trained models?
I am wondering whether I am missing some insight behind that.

Code question

Hi:
After reading your paper and code, I have two questions.
First, in pretrain-gnns-master/model_gin there are several supervised_*.pth files; how were they generated?
Second, the Fine-tuning section of README.md mentions a dataset parameter (DOWNSTREAM_DATASET), while finetune.py and finetune_tune.sh have no such parameter. Why?

A RuntimeError: There were no tensor arguments to this function

I used my own dataset, and the following error occurred. How can I fix it? Thanks!

Traceback (most recent call last):
  File "D:/g/project/Expression2graph/learndxs/S_Pretra/my_pretrain_combine.py", line 243, in <module>
    main()
  File "D:/g/project/Expression2graph/learndxs/S_Pretra/my_pretrain_combine.py", line 235, in main
    train_loss, train_acc_pair, train_acc_node = train(args, model_list, loader, optimizer_list, device)
  File "D:/g/project/Expression2graph/learndxs/S_Pretra/my_pretrain_combine.py", line 52, in train
    for step, batch in enumerate(loader):
  File "C:\Users\11708\Desktop\minikangda\envs\pytorch\lib\site-packages\torch\utils\data\dataloader.py", line 345, in __next__
    data = self._next_data()
  File "C:\Users\11708\Desktop\minikangda\envs\pytorch\lib\site-packages\torch\utils\data\dataloader.py", line 385, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "C:\Users\11708\Desktop\minikangda\envs\pytorch\lib\site-packages\torch\utils\data\_utils\fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "D:\g\project\Expression2graph\learndxs\S_Pretra\my_dataloader.py", line 55, in <lambda>
    collate_fn=lambda data_list: BatchMaskingAndSubstructContext.from_data_list(data_list),
  File "D:\g\project\Expression2graph\learndxs\S_Pretra\my_batch.py", line 260, in from_data_list
    batch[key] = torch.cat(batch[key], dim=batch.cat_dim(key))
RuntimeError: There were no tensor arguments to this function (e.g., you passed an empty list of Tensors), but no fallback function is registered for schema aten::_cat. This usually means that this function requires a non-empty list of Tensors. Available functions are [CPUTensorId, CUDATensorId, QuantizedCPUTensorId, VariableTensorId]
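The error means torch.cat was handed an empty list for some attribute key. A defensive pattern (illustrative only, since my_batch.py is not shown) is to guard against empty lists before concatenating:

import torch

def safe_cat(tensors, dim=0):
    # torch.cat raises the RuntimeError above when given an empty list;
    # return None (or skip the key entirely) instead
    if len(tensors) == 0:
        return None
    return torch.cat(tensors, dim=dim)

# in from_data_list, the failing line could then be guarded as:
#   batch[key] = safe_cat(batch[key], dim=batch.cat_dim(key))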

ImportError

Hi,

Great job! I have a minor question: when running the code, I got the following error (the package versions match the README).

ImportError: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version CXXABI_1.3.11 not found (required by /home/tlchen/anaconda3/envs/gnnrdkit/lib/python3.6/site-packages/rdkit/DataStructs/../../../../libRDKitDataStructs.so.1)

How to solve this? Thank you!

Dependency issues

Hi,
Thanks for sharing your code. I'm trying to install the dependencies you mentioned but I'm running into several compatibility issues. Could you please add a requirements.txt file to make installation easier?

A question about calculating the loss in pretraining

# set up models, one for pre-training and one for context embeddings
model = GNN(args.num_layer, args.emb_dim, JK = args.JK, drop_ratio = args.dropout_ratio, gnn_type = args.gnn_type).to(device)
linear_pred_atoms = torch.nn.Linear(args.emb_dim, 119).to(device)
linear_pred_bonds = torch.nn.Linear(args.emb_dim, 4).to(device)

node_rep = model(batch.x, batch.edge_index, batch.edge_attr)

# loss for nodes
criterion = nn.CrossEntropyLoss()
pred_node = linear_pred_atoms(node_rep[batch.masked_atom_indices])
loss = criterion(pred_node.double(), batch.mask_node_label[:,0])

Question about evaluating results

Hi, I had one query: can we use other metrics to evaluate the results? Why did you choose roc_auc_score()?
I wanted to get a confusion matrix for my dataset using your model. I tried but could not succeed. Could you help me with it?
And I must say, wonderful work you all have done. Hats off!

Why didn't we use random scaffold split?

Hi,
Thank you for publishing such an amazing paper!
I have a question regarding the splitting of the datasets used in the experiments. As shown in the paper and in the code, you used scaffold splitting. However, it would also be reasonable to use random scaffold splitting on the small datasets reported. Is there an intuition for preferring scaffold splitting over random scaffold splitting?
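For readers new to the terminology: scaffold splitting groups molecules by their Bemis-Murcko scaffold so that structurally similar molecules never straddle the train/test boundary, whereas random scaffold splitting shuffles whole scaffold groups across splits. The scaffold itself can be computed with RDKit:

from rdkit.Chem.Scaffolds import MurckoScaffold

smiles = "CC(=O)Oc1ccccc1C(=O)O"  # aspirin
scaffold = MurckoScaffold.MurckoScaffoldSmiles(smiles=smiles, includeChirality=False)
print(scaffold)  # c1ccccc1 -- molecules sharing this scaffold stay in the same split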

Questions on sensitivity of results to package versions

Hi,

Thank you for the great work! I am currently trying to reproduce your results. Everything is going great, except for one dataset (Clintox). Would you be able to guide me on this?

In particular, I am currently finetuning the models that you provided, e.g., pretrained_models/gin_contextpred.pth. I am using finetuning code almost identical to yours (changes only made for logging and refactoring purposes).

These are the finetuning results I get for gin_mask.pth and gin_contextpred.pth, averaged over 5 runs:

datasets | bbbp | clintox | sider | tox21 | bace | toxcast
masking | 0.648 | 0.713 | 0.602 | 0.765 | 0.786 | 0.633
contextpred | 0.687 | 0.600 | 0.602 | 0.750 | 0.797 | 0.636

The numbers reported in your paper are as follows:

datasets | bbbp | clintox | sider | tox21 | bace | toxcast
masking | 0.643 | 0.718 | 0.610 | 0.767 | 0.793 | 0.642
contextpred | 0.680 | 0.659 | 0.609 | 0.757 | 0.796 | 0.639

As you can see, the contextpred result on the Clintox dataset degrades significantly (0.659 -> 0.600). The 0.600 result has a very small standard deviation (it is an average of 0.614, 0.619, 0.609, 0.567, 0.593).

I suspect that the main cause is the different package versions I use. I am curious whether you have any knowledge of how sensitive the algorithms' performance is to package versions.

I have attached my finetuning code and Dockerfile for your information. I created the Dockerfile to match the libraries of a recent paper (https://openreview.net/forum?id=DAaaaqPv9-q), but it differs slightly from yours.

files.zip

Thank you for your time! I really appreciate it.

Arguments error

hi,
when I ran the command "python pretrain_contextpred.py --output_model_file OUTPUT_MODEL_PATH" to train a model, I got the following error:

pretrain_contextpred.py: error: unrecognized arguments: --output_model_file

Maybe you changed the source code? Or replaced it with other parameters?

Question about the eval function

Hi,

I have a question about the eval function in chem/finetune.py. I list the code that I don't understand below:

    y_true = torch.cat(y_true, dim = 0).cpu().numpy()
    y_scores = torch.cat(y_scores, dim = 0).cpu().numpy()

    roc_list = []
    for i in range(y_true.shape[1]):
        #AUC is only defined when there is at least one positive data.
        if np.sum(y_true[:,i] == 1) > 0 and np.sum(y_true[:,i] == -1) > 0:
            is_valid = y_true[:,i]**2 > 0
            roc_list.append(roc_auc_score((y_true[is_valid,i] + 1)/2, y_scores[is_valid,i]))

I have the following questions,

  1. I don't understand why we loop over the columns of y_true instead of the rows.
  2. I understand that AUC is only defined when there is at least one positive example, but why do we also need np.sum(y_true[:,i] == -1) > 0 here? Does -1 represent positive data as well?
  3. What is the reason we need the +1 and /2 on the valid y_true here?

Sorry for my long questions and thank you for your time!
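For what it's worth, the snippet is consistent with the +1/-1/0 label convention discussed in other issues here: each column of y_true is one task (hence the loop over columns), -1 marks a negative label, 0 marks a missing one, and (y + 1)/2 remaps {-1, +1} to the {0, 1} targets that roc_auc_score expects. A small worked example for one column:

import numpy as np
from sklearn.metrics import roc_auc_score

# one task (column): +1 positive, -1 negative, 0 missing
y_col = np.array([1, -1, 0, 1, -1])
scores = np.array([0.9, 0.2, 0.5, 0.7, 0.4])

is_valid = y_col**2 > 0                      # drops the missing entry
y01 = (y_col[is_valid] + 1) / 2              # {-1, +1} -> {0, 1}
print(roc_auc_score(y01, scores[is_valid]))  # 1.0 for this perfect ranking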

License for Code

Thanks for releasing this code! I really like your paper and am excited to try it out in some of my projects :)

Would it be possible to add a LICENSE for this repo? If you folks are unsure what license would be good, I'd suggest MIT (flexible, friendly for many use cases).

Questions about reproducing experimental results

Hi,

I find your work very interesting and tried to reproduce some of the results with your code. However, some of my results are inconsistent with Table 1. For example, on the Clintox dataset, attribute masking gets ROC-AUC 82% while supervised + masking gets ROC-AUC 78%. These results are inconsistent with, or even contrary to, the results in the paper.

I use scaffold data splitting and fine-tune for 100 epochs.

Question about negative context representation in context prediction

Hi, I read your code and found something I don't understand. In the context prediction module (pretrain_contextpred.py), the negative context representation is obtained by cyclically shifting context_rep or substruct_rep.

# cbow
# negative contexts are obtained by shifting the indices of context embeddings
neg_context_rep = torch.cat([context_rep[cycle_index(len(context_rep), i+1)] for i in range(args.neg_samples)], dim = 0)

# skipgram
#shift indices of substructures to create negative examples
shifted_expanded_substruct_rep = []
for i in range(args.neg_samples):
    shifted_substruct_rep = substruct_rep[cycle_index(len(substruct_rep), i+1)]
    shifted_expanded_substruct_rep.append(torch.cat([shifted_substruct_rep[i].repeat((batch.overlapped_context_size[i],1)) for i in range(len(shifted_substruct_rep))], dim = 0))

My question is: why use cyclic shifting to get the negative representations? Have you tried other methods, such as using random indices or random representations directly? Is the cyclic shifting method better?
In skipgram mode, would it be more reasonable to randomly sample other nodes as the negative representations?
Looking forward to your reply, thanks a lot!!!
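For readers following along: cycle_index presumably returns a rolled permutation of [0, num), so each graph in the batch is paired with another graph's context or substructure as its negative. A sketch of that assumed behavior (not verified against the repo's utility function):

import torch

def cycle_index(num, shift):
    # a rolled permutation: [shift, shift+1, ..., num-1, 0, 1, ..., shift-1]
    arr = torch.arange(num) + shift
    arr[-shift:] = torch.arange(shift)
    return arr

print(cycle_index(5, 1))  # tensor([1, 2, 3, 4, 0]): graph i pairs with graph i+1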

Question about the regression tasks

Hi, thanks for your nice work. It seems that the code and results are all based on the classification datasets. Can you provide the code and results for the regression tasks?

Different embedding in each layer?

Thank you for the nice paper and repo!

I have a question regarding the edge features: why is the Embedding layer defined within the geometric filter? In other words, is there a reason why a different embedding is trained in each layer?

About prediction method

Hi,
I was wondering whether there is a predict method available for this model? I want to get classification predictions for a small set of unknown molecules for my downstream fine-tuned task using this pretrain-gnn model. Can I save the model and run predictions? I couldn't find it here, but if I am missing anything, please help me out.

Thank you!
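There is no dedicated predict script in the repo as far as the README shows, but inference should reduce to a standard eval loop; a sketch, assuming model, loader, and device are set up exactly as in finetune.py and that the model's forward signature matches the fine-tuning code:

import torch

# assumes `model`, `loader`, and `device` are built exactly as in finetune.py,
# with `loader` iterating over the new, unlabeled molecules
model.eval()
preds = []
with torch.no_grad():
    for batch in loader:
        batch = batch.to(device)
        logits = model(batch.x, batch.edge_index, batch.edge_attr, batch.batch)
        preds.append(torch.sigmoid(logits).cpu())  # logits -> per-task probabilities
preds = torch.cat(preds, dim=0)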

sufficient GPU

Could you please let me know what GPU you used to run your experiments? (In your publication and at https://openreview.net/forum?id=HJlWWJSFDH, computation times for a single-GPU implementation are mentioned, but I could not find information on which GPU was used.) More generally, do you know what GPU memory would be sufficient to run your experiments; e.g., would a GeForce RTX 2080 Ti with 11 GB be enough?

Prior set

Just want to make sure I understand exactly how you're using the prior set during supervised pre-training and training. From what I understand from the paper, the prior set is used during supervised pre-training along with the train and validation sets (how is it used there, as validation or train?), but it is not used during supervised training with fine-grained labels?

EDIT:
Also, related to this question, I checked the code and I see an "easy" and a "hard" set for testing. I see that these are the human organism, but what I don't understand is why one is easy and the other is hard if they're just randomly split between the two, i.e. I'm referring to this line:

test_dataset_broad, test_dataset_none, _ = random_split(test_dataset, seed = args.seed, frac_train=0.5, frac_valid=0.5, frac_test=0)

Implementation error of GATConv

The message function of GATConv in chem/model.py is written as follows:
[screenshot of the GATConv message function]
However, I think alpha should be normalized w.r.t. edge_index[1] (dst nodes) instead of edge_index[0] (src nodes), so that it is normalized among the nodes connecting to a given dst node.

Although for undirected graphs the values calculated after the scatter operations would be the same, the different indexing has a different meaning (edge_index[0] vs. edge_index[1]).

Consider this simple case, where edge_index[1] gives the desired result:
[screenshot of a worked example]
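For concreteness, normalizing attention over the incoming edges of each destination node corresponds to grouping the softmax by edge_index[1]; a small sketch using torch_geometric's softmax utility, as an illustration of the proposed fix rather than a patch to the repo:

import torch
from torch_geometric.utils import softmax

edge_index = torch.tensor([[0, 1, 2],   # source nodes
                           [2, 2, 0]])  # destination nodes
alpha = torch.tensor([1.0, 2.0, 3.0])   # one raw attention score per edge

# grouping by edge_index[1] makes the scores of edges entering the same
# destination node sum to 1 (edges 0 and 1 both enter node 2 here)
alpha_dst = softmax(alpha, edge_index[1])
print(alpha_dst)  # ~[0.269, 0.731, 1.000]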

Could you share the entire package list? Thanks!

Thanks for the amazing paper and code! However, we found it hard to set up the environment, since the packages listed in the README seem too old or incompatible with our CUDA version. We tried torch==1.7.0 with the torch-sparse/cluster/scatter wheels listed here (https://pytorch-geometric.com/whl/torch-1.7.0+cu110.html), which are compatible with our CUDA 11.0, but we cannot train the model and get this error:

ValueError: MessagePassing.propagate only supports torch.LongTensor of shape [2, num_messages] or torch_sparse.SparseTensor for argument edge_index.

Could you please share the conda package list with us, including the CUDA version? Thanks!

Docker for running code?

Hi Weihua, I love your pre-training paper, and I'm looking forward to trying out the code. I tried installing the particular library versions that you list, but I can't get torch-scatter==1.1.2 to install. Could you share a Dockerfile or image to run the code? I anticipate this would help others as well. Thanks for your help!

Error in data loading

Hi,

Thanks for providing the code.
I get an error in pretrain_contextpred.py at:

for step, batch in enumerate(tqdm(loader, desc="Iteration")):

The error reads:
AttributeError: Caught AttributeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/Users/ivasilei/opt/anaconda3/envs/my_dgl/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/Users/ivasilei/opt/anaconda3/envs/my_dgl/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/Users/ivasilei/opt/anaconda3/envs/my_dgl/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/Users/ivasilei/opt/anaconda3/envs/my_dgl/lib/python3.7/site-packages/torch_geometric/data/dataset.py", line 188, in __getitem__
    data = self.get(self.indices()[idx])
  File "/Users/ivasilei/PycharmProjects/UniversalEmbedding/aux_files/pretrain-gnns-master/chem/loader.py", line 298, in get
    slices[idx + 1])
AttributeError: 'Data' object has no attribute 'cat_dim'

Any ideas regarding this?

Thanks, Vassilis

Questions about code.

Hi,

In the code for the BIO dataset, the node embeddings in GIN are updated with a CONCAT operation, while the node embeddings in the other GNN models are updated with a PLUS operation, which differs from the description in the paper.

So, is the code wrong?
Thx.

Noisy train loss

Hi there,

Great work, thank you for the cool insights into pre-training graph models!

I've tried to reproduce the results, but when running GIN pre-training in the masking + supervised setup I encountered a somewhat unstable train loss.

[training loss curve: supervised GIN pre-training]

Is this inherent to the ChEMBL multi-task setup, or should I expect it to converge?


ImportError and RuntimeError

Hello, and thanks for sharing. Some errors occurred while running the program.

ImportError: cannot import name 'segment_csr' from 'torch_scatter'(torch-cluster ==1.2.4)

RuntimeError: scatter_add() expected at most 5 argument(s) but received 6 argument(s). Declaration: scatter_add(Tensor src, Tensor index, int dim=-1, Tensor? out=None, int? dim_size=None) -> (Tensor)(torch-scatter==2.0.4)

I don't know how to solve these. I look forward to your reply.
