
gcn's Introduction

Graph Convolutional Networks

This is a TensorFlow implementation of Graph Convolutional Networks for the task of (semi-supervised) classification of nodes in a graph, as described in our paper:

Thomas N. Kipf, Max Welling, Semi-Supervised Classification with Graph Convolutional Networks (ICLR 2017)

For a high-level explanation, have a look at our blog post:

Thomas Kipf, Graph Convolutional Networks (2016)

Installation

python setup.py install

Requirements

  • tensorflow (>0.12)
  • networkx

Run the demo

cd gcn
python train.py

Data

In order to use your own data, you have to provide

  • an N by N adjacency matrix (N is the number of nodes),
  • an N by D feature matrix (D is the number of features per node), and
  • an N by E binary label matrix (E is the number of classes).

Have a look at the load_data() function in utils.py for an example.
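As a rough sketch of the expected shapes for custom data (all names and sizes below are illustrative, not part of the repository), the three inputs could be assembled with numpy/scipy like this:

import numpy as np
import scipy.sparse as sp

N, D, E = 2708, 1433, 7  # nodes, features per node, classes (Cora-sized)

adj = sp.random(N, N, density=0.001, format="csr")       # N x N adjacency matrix
adj = adj + adj.T                                        # symmetrize (undirected graph)
features = sp.random(N, D, density=0.01, format="csr")   # N x D feature matrix
labels = np.eye(E)[np.random.randint(0, E, size=N)]      # N x E binary (one-hot) label matrix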

In this example, we load citation network data (Cora, Citeseer or Pubmed). The original datasets can be found here: http://www.cs.umd.edu/~sen/lbc-proj/LBC.html. In our version (see data folder) we use dataset splits provided by https://github.com/kimiyoung/planetoid (Zhilin Yang, William W. Cohen, Ruslan Salakhutdinov, Revisiting Semi-Supervised Learning with Graph Embeddings, ICML 2016).

You can specify a dataset as follows:

python train.py --dataset citeseer

(or by editing train.py)

Models

You can choose between the following models:

  • gcn: Graph convolutional network (Thomas N. Kipf, Max Welling, Semi-Supervised Classification with Graph Convolutional Networks, 2016)
  • gcn_cheby: Chebyshev polynomial version of the graph convolutional network, as described in (Michaël Defferrard, Xavier Bresson, Pierre Vandergheynst, Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering, NIPS 2016)
  • dense: Basic multi-layer perceptron that supports sparse inputs
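For example, to train the Chebyshev variant (flag names as defined in train.py; the values are illustrative):

python train.py --model gcn_cheby --max_degree 3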

Graph classification

Our framework also supports batch-wise classification of multiple graph instances (of potentially different size), each with its own adjacency matrix. In this case, it is best to concatenate the respective feature matrices and build a (sparse) block-diagonal matrix where each block corresponds to the adjacency matrix of one graph instance. For pooling (in the case of graph-level outputs, as opposed to node-level outputs), it is best to specify a simple pooling matrix that collects features from the respective graph instances, as illustrated below:

[figure: graph_classification — batching multiple graphs via a block-diagonal adjacency matrix and a pooling matrix]
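A minimal sketch of this batching scheme (assuming scipy sparse adjacency matrices and dense feature arrays; batch_graphs and all variable names are illustrative, not part of this repository):

import numpy as np
import scipy.sparse as sp

def batch_graphs(adjs, feats):
    # Stack the per-graph adjacencies into one block-diagonal matrix
    # and concatenate the feature matrices along the node dimension.
    adj_batch = sp.block_diag(adjs, format="csr")    # (sum N_i) x (sum N_i)
    feat_batch = np.vstack(feats)                    # (sum N_i) x D
    # Pooling matrix: one row per graph, with ones over that graph's nodes,
    # so pool @ node_outputs sums node-level outputs per graph instance.
    sizes = [a.shape[0] for a in adjs]
    rows = np.repeat(np.arange(len(adjs)), sizes)    # graph index per node
    cols = np.arange(rows.size)                      # global node index
    pool = sp.csr_matrix((np.ones(rows.size), (rows, cols)),
                         shape=(len(adjs), rows.size))
    return adj_batch, feat_batch, pool

For mean pooling instead of sum pooling, divide each row of the pooling matrix by the corresponding graph size.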

Cite

Please cite our paper if you use this code in your own work:

@inproceedings{kipf2017semi,
  title={Semi-Supervised Classification with Graph Convolutional Networks},
  author={Kipf, Thomas N. and Welling, Max},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2017}
}

gcn's Issues

about the citation dataset

Hello Thomas,
I am wondering if you have the directed versions of the citation graphs, or the scripts used to generate the data. The 140 labeled nodes and the test instances seem to be carefully chosen. Do you have any insight into this?

No node features

Hi -- I'm interested in using the GCN in a situation where the nodes don't have any features associated with them (besides the label). It seems like the thing to do is to set featureless=True in the first layer in models.py, and then change the corresponding line in train.py to

model = model_func(placeholders, input_dim=adj.shape[0], logging=True)

This runs, but the performance isn't great. Any thoughts? I would expect this to perform somewhere in the neighborhood of DeepWalk, but I may be misunderstanding the algorithm.

Thanks
Ben
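For reference, a minimal sketch of the featureless setup described in this issue (hedged: featureless=True makes the first layer use its weight matrix directly, which is equivalent to feeding a sparse identity matrix as the feature input):

import scipy.sparse as sp

# One-hot "feature" per node: an N x N sparse identity matrix,
# so the first layer effectively learns a free embedding per node.
features = sp.identity(adj.shape[0], format="csr")
model = model_func(placeholders, input_dim=adj.shape[0], logging=True)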

Confusion matrix for node-wise classifications

I'm hoping to output a confusion matrix for node-wise binary classification after training is complete.

Do you have a recommended way to go about this? I've tried using tf.confusion_matrix, where the labels input is y_test and the predictions input is the vector from model.outputs (obtained via a modification to your evaluate function). This has its issues, though, because (I think) model.outputs is the output of an activation function and not the classification itself.

Is there a better way to go about this, or is there something I'm missing?
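One possible approach (a sketch, assuming TF 1.x and that model.outputs holds the per-node class scores, as in models.py): take the argmax over classes on both sides first, then build the confusion matrix.

import tensorflow as tf

# Convert scores and one-hot labels to class indices before comparing.
preds = tf.argmax(model.outputs, axis=1)
truth = tf.argmax(placeholders['labels'], axis=1)
confusion = tf.confusion_matrix(labels=truth, predictions=preds, num_classes=2)

# Run with the test feed_dict; to score only test nodes, apply the test
# mask to preds and truth (e.g. with tf.boolean_mask) before this step.
cm = sess.run(confusion, feed_dict=feed_dict)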

How to handle the graph data with temporal changes?

I'm handling a slightly different problem with gcn:

Given a graph of N nodes, I have several temporal slices of data, say s_0, s_1, s_2, ..., s_t, each of which includes the node features X_t and the corresponding labels L_t at time t (the graph structure does not change between time slices).

The task is to predict the label of each node at a given time t+1.

In this case, I was wondering:

  1. If I understand correctly, the current gcn implementation can only handle one graph per epoch. How can I properly handle the multiple "temporal" graphs and their corresponding data?
  2. I notice that the labels actually change over time. How can I model such temporal evolution in gcn?

Any suggestion will be appreciated.
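For the first point, one simple (untested) sketch: treat each time slice as its own training example and cycle through the slices inside every epoch, reusing the fixed graph structure. construct_feed_dict is the helper from utils.py; the *_per_slice lists are hypothetical.

from gcn.utils import construct_feed_dict

for epoch in range(FLAGS.epochs):
    # support (the preprocessed adjacency) is shared across all slices,
    # since the graph structure does not change over time.
    for X_t, L_t in zip(features_per_slice, labels_per_slice):
        feed_dict = construct_feed_dict(X_t, support, L_t, train_mask, placeholders)
        outs = sess.run([model.opt_op, model.loss, model.accuracy], feed_dict=feed_dict)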

Out Of Memory Error when training with higher support or more kernels

Hi,

I tried to train your GCN network using the Chebyshev polynomials. However, on my network and features (~10,000 nodes, ~90,000 edges, 24 feature dimensions), my graphics card quickly runs out of memory when using either higher-order support polynomials (>3) or more filters (hidden1 > 20).

I am using an NVIDIA GeForce TITAN X card with 12 GB of memory.

Do you think the reason lies in the implementation and can be tweaked, or is it a natural limitation?

The exact error I get is:

totalMemory: 11.90GiB freeMemory: 11.76GiB
2017-11-15 14:14:23.596353: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: TITAN X (Pascal), pci bus id: 0000:81:00.0, compute capability: 6.1)
2017-11-15 14:14:36.928227: W tensorflow/core/common_runtime/bfc_allocator.cc:273] Allocator (GPU_0_bfc) ran out of memory trying to allocate 649.86MiB.  Current allocation summary follows.
[...]
2017-11-15 14:14:36.929437: I tensorflow/core/common_runtime/bfc_allocator.cc:683] Sum Total of in-use chunks: 10.83GiB
2017-11-15 14:14:36.929451: I tensorflow/core/common_runtime/bfc_allocator.cc:685] Stats: 
Limit:                 11992553882
InUse:                 11627460864
MaxInUse:              11991371008
NumAllocs:                     131
MaxAllocSize:           9100342016

2017-11-15 14:14:36.929470: W tensorflow/core/common_runtime/bfc_allocator.cc:277] ************__**************************************************xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
2017-11-15 14:14:36.929507: W tensorflow/core/framework/op_kernel.cc:1192] Resource exhausted: OOM when allocating tensor with shape[6814273,25]
2017-11-15 14:14:46.931803: W tensorflow/core/framework/op_kernel.cc:1192] Resource exhausted: OOM when allocating tensor with shape[47470525,2]
2017-11-15 14:14:46.932584: W tensorflow/core/framework/op_kernel.cc:1192] Resource exhausted: OOM when allocating tensor with shape[6814273,25]
	 [[Node: gradients/graphconvolution_1/SparseTensorDenseMatMul_5/SparseTensorDenseMatMul_grad/Gather_1 = Gather[Tindices=DT_INT64, Tparams=DT_FLOAT, validate_indices=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](graphconvolution_1/SparseTensorDenseMatMul_4/SparseTensorDenseMatMul, gradients/graphconvolution_1/SparseTensorDenseMatMul_5/SparseTensorDenseMatMul_grad/strided_slice_1/_43)]]
Traceback (most recent call last):
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1323, in _do_call
    return fn(*args)
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1302, in _run_fn
    status, run_metadata)
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/framework/errors_impl.py", line 473, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[6814273,25]
	 [[Node: gradients/graphconvolution_1/SparseTensorDenseMatMul_5/SparseTensorDenseMatMul_grad/Gather_1 = Gather[Tindices=DT_INT64, Tparams=DT_FLOAT, validate_indices=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](graphconvolution_1/SparseTensorDenseMatMul_4/SparseTensorDenseMatMul, gradients/graphconvolution_1/SparseTensorDenseMatMul_5/SparseTensorDenseMatMul_grad/strided_slice_1/_43)]]
	 [[Node: Mean_3/_115 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1165_Mean_3", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "train_gcn.py", line 134, in <module>
    outs = sess.run([model.opt_op, model.loss, model.accuracy], feed_dict=feed_dict)
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 889, in run
    run_metadata_ptr)
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1120, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1317, in _do_run
    options, run_metadata)
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1336, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[6814273,25]
	 [[Node: gradients/graphconvolution_1/SparseTensorDenseMatMul_5/SparseTensorDenseMatMul_grad/Gather_1 = Gather[Tindices=DT_INT64, Tparams=DT_FLOAT, validate_indices=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](graphconvolution_1/SparseTensorDenseMatMul_4/SparseTensorDenseMatMul, gradients/graphconvolution_1/SparseTensorDenseMatMul_5/SparseTensorDenseMatMul_grad/strided_slice_1/_43)]]
	 [[Node: Mean_3/_115 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1165_Mean_3", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

Caused by op 'gradients/graphconvolution_1/SparseTensorDenseMatMul_5/SparseTensorDenseMatMul_grad/Gather_1', defined at:
  File "train_gcn.py", line 112, in <module>
    model = GCN(placeholders, input_dim=features[2][1], logging=True)
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/gcn-1.0-py3.5.egg/gcn/models.py", line 144, in __init__
    self.build()
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/gcn-1.0-py3.5.egg/gcn/models.py", line 58, in build
    self.opt_op = self.optimizer.minimize(self.loss)
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/training/optimizer.py", line 343, in minimize
    grad_loss=grad_loss)
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/training/optimizer.py", line 414, in compute_gradients
    colocate_gradients_with_ops=colocate_gradients_with_ops)
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/ops/gradients_impl.py", line 581, in gradients
    grad_scope, op, func_call, lambda: grad_fn(op, *out_grads))
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/ops/gradients_impl.py", line 353, in _MaybeCompile
    return grad_fn()  # Exit early
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/ops/gradients_impl.py", line 581, in <lambda>
    grad_scope, op, func_call, lambda: grad_fn(op, *out_grads))
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/ops/sparse_grad.py", line 170, in _SparseTensorDenseMatMulGrad
    cols if not adj_a else rows)
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/ops/array_ops.py", line 2486, in gather
    params, indices, validate_indices=validate_indices, name=name)
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/ops/gen_array_ops.py", line 1834, in gather
    validate_indices=validate_indices, name=name)
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 2956, in create_op
    op_def=op_def)
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1470, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

...which was originally created as op 'graphconvolution_1/SparseTensorDenseMatMul_5/SparseTensorDenseMatMul', defined at:
  File "train_gcn.py", line 112, in <module>
    model = GCN(placeholders, input_dim=features[2][1], logging=True)
[elided 0 identical lines from previous traceback]
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/gcn-1.0-py3.5.egg/gcn/models.py", line 144, in __init__
    self.build()
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/gcn-1.0-py3.5.egg/gcn/models.py", line 46, in build
    hidden = layer(self.activations[-1])
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/gcn-1.0-py3.5.egg/gcn/layers.py", line 75, in __call__
    outputs = self._call(inputs)
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/gcn-1.0-py3.5.egg/gcn/layers.py", line 180, in _call
    support = dot(self.support[i], pre_sup, sparse=True)
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/gcn-1.0-py3.5.egg/gcn/layers.py", line 33, in dot
    res = tf.sparse_tensor_dense_matmul(x, y)
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/ops/sparse_ops.py", line 1719, in sparse_tensor_dense_matmul
    adjoint_b=adjoint_b)
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/ops/gen_sparse_ops.py", line 1818, in _sparse_tensor_dense_mat_mul
    name=name)
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 2956, in create_op
    op_def=op_def)
  File "/home/sasse/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1470, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[6814273,25]
	 [[Node: gradients/graphconvolution_1/SparseTensorDenseMatMul_5/SparseTensorDenseMatMul_grad/Gather_1 = Gather[Tindices=DT_INT64, Tparams=DT_FLOAT, validate_indices=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](graphconvolution_1/SparseTensorDenseMatMul_4/SparseTensorDenseMatMul, gradients/graphconvolution_1/SparseTensorDenseMatMul_5/SparseTensorDenseMatMul_grad/strided_slice_1/_43)]]
	 [[Node: Mean_3/_115 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1165_Mean_3", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

Thanks,

Roman

Computation of the masked accuracy

Hi, I notice that, in metrics.py, the accuracy is computed as:


def masked_accuracy(preds, labels, mask):
    """Accuracy with masking."""
    correct_prediction = tf.equal(tf.argmax(preds, 1), tf.argmax(labels, 1))
    accuracy_all = tf.cast(correct_prediction, tf.float32)
    mask = tf.cast(mask, dtype=tf.float32)
    mask /= tf.reduce_mean(mask)
    accuracy_all *= mask
    return tf.reduce_mean(accuracy_all)


Here, the mask is first divided by its mean over all samples, accuracy_all is then multiplied by this rescaled mask, and finally the values are averaged again over all samples. I don't quite understand why the average is taken over all samples. In particular, I thought we should only average over the training samples?

As I'm solving only a classification problem, not semi-supervised learning, I thought the accuracy should be:


def masked_accuracy(preds, labels, mask):
    """Accuracy with masking for supervised classification."""
    correct_prediction = tf.equal(tf.argmax(preds, 1), tf.argmax(labels, 1))
    accuracy_all = tf.cast(correct_prediction, tf.float32)
    mask = tf.cast(mask, dtype=tf.float32)
    accuracy_all *= mask
    return tf.reduce_sum(accuracy_all) / tf.reduce_sum(mask)


Is this correct?
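As a quick sanity check (a made-up toy example, not from the repository), the two formulations appear to be algebraically equivalent: dividing the mask by its mean and then averaging is the same as dividing the masked sum by the mask sum.

    import numpy as np

    # made-up values: 5 samples, 3 of them in the mask
    acc = np.array([1., 0., 1., 1., 0.])   # per-sample correctness
    mask = np.array([1., 1., 0., 0., 1.])  # which samples count

    # repository formulation: rescale the mask by its mean, then average
    v1 = np.mean(acc * (mask / np.mean(mask)))
    # proposed formulation: masked sum divided by the mask sum
    v2 = np.sum(acc * mask) / np.sum(mask)

    print(v1, v2)  # both print 0.3333... -- the two agree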

Sorry, can you help me?

In ind.cora.allx there are 1708 training samples, consisting of labeled and unlabeled samples, but in ind.cora.ally I find that each of the 1708 training samples has a label. What is going on with the unlabeled samples? We are supposed to predict the classes the unlabeled samples belong to, but they already have labels?

Prediction not fully working

Hello,

it seems that if I extract the prediction output in train.py, changing the line
outs = sess.run([model.opt_op, model.loss, model.accuracy], feed_dict=feed_dict)
to
outs = sess.run([model.opt_op, model.loss, model.accuracy, model.predict], feed_dict=feed_dict)

the code fails to extract the prediction results. Do you have any idea how to fix it?

Thank you
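One hedged guess, based on models.py, where predict is defined as a method that returns tf.nn.softmax(self.outputs): model.predict is a bound method, not a tensor, so sess.run cannot evaluate it directly; calling it first may be all that is needed.

    # hypothetical fix sketch: fetch the tensor returned by predict(), not the method
    pred_op = model.predict()  # tf.nn.softmax(self.outputs) per models.py
    outs = sess.run([model.opt_op, model.loss, model.accuracy, pred_op],
                    feed_dict=feed_dict)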

Computation of the accuracy

Hello,
Could you elaborate on the way you compute the accuracy and on which statistic you use (e.g., I know of the precision-recall statistic for measuring accuracy)?
In addition, you use a 'masked' accuracy. Could you provide some explanation and/or links about this?

Support for the multiple gcn filters

Hi, Kipf, thanks for sharing such great work with us.

I'm using GCN to perform a regression task on my dataset, in which I have around 500 nodes and 153 features. The performance is not good even on the training set (R^2 = 0.4).

I was wondering:

  1. Classical CNN models (AlexNet, VGG) often use multiple filters to extract different patterns, but the current implementation of GCN, if I understand correctly, only uses one graphConv filter per layer. Do you think multiple GCN filters could help?

  2. If yes, how can I add more filters?
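For context, a minimal sketch, reusing the GraphConvolution signature quoted later in this thread: each layer's weight matrix already maps input_dim channels to output_dim channels, so output_dim plays the role of the number of parallel filters, and widening a layer is the GCN analogue of adding CNN filters.

    # hedged sketch: more "filters" per layer = a larger output_dim
    self.layers.append(GraphConvolution(input_dim=self.input_dim,
                                        output_dim=64,  # e.g. 64 parallel filters
                                        placeholders=self.placeholders,
                                        act=tf.nn.relu,
                                        dropout=True,
                                        sparse_inputs=True,
                                        logging=self.logging))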

Demo Error

Hi,

So I am trying to run the test dataset. I downloaded the original citeseer data, and I am still getting the same error below when running the command: python train.py --dataset citeseer

Traceback (most recent call last):
  File "train.py", line 59, in <module>
    model = model_func(placeholders, input_dim=features[2][1], logging=True)
  File "build/bdist.linux-x86_64/egg/gcn/models.py", line 144, in __init__
    self.build()
  File "build/bdist.linux-x86_64/egg/gcn/models.py", line 41, in build
    self._build()
  File "build/bdist.linux-x86_64/egg/gcn/models.py", line 167, in _build
    logging=self.logging))
  File "build/bdist.linux-x86_64/egg/gcn/layers.py", line 161, in __init__
    self._log_vars()
  File "build/bdist.linux-x86_64/egg/gcn/layers.py", line 82, in _log_vars
    tf.summary.histogram(self.name + '/vars/' + var, self.vars[var])
AttributeError: 'module' object has no attribute 'summary'

Thanks!

How to prepare our own data

Hi, everyone. I have a question. If I want to use my own datasets, what format should they be in? And what are the variables corresponding to load_data()? Thank you.
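For concreteness, a toy sketch (entirely made-up data) of the three objects the model consumes; the shapes are the contract, and wiring them into load_data() is up to you:

    import numpy as np
    import scipy.sparse as sp

    N, D, E = 4, 2, 3                                  # nodes, features, classes
    adj = sp.csr_matrix(np.array([[0, 1, 0, 0],
                                  [1, 0, 1, 0],
                                  [0, 1, 0, 1],
                                  [0, 0, 1, 0]], dtype=np.float32))   # N x N adjacency
    features = sp.lil_matrix(np.random.rand(N, D).astype(np.float32))  # N x D features
    labels = np.eye(E, dtype=np.float32)[[0, 1, 2, 1]]                 # N x E one-hot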

How can we make a multi layer perceptron on graph ?

Hello @tkipf ,

Sorry for my stupid question, but I'm wondering if we can apply a multi-layer perceptron to graph data.
With graph convnets we apply convolution in the spectral domain using the graph Laplacian. What would be the equivalent of that for an MLP?

Thank you
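One way to see it, as a hedged numpy sketch of the layer update H' = sigma(A_hat H W): replacing the normalized adjacency A_hat with the identity removes all neighborhood mixing and leaves an ordinary dense (MLP) layer.

    import numpy as np

    def gcn_layer(A_hat, H, W):
        """One propagation step: H' = ReLU(A_hat @ H @ W)."""
        return np.maximum(A_hat @ H @ W, 0.)

    N, D, F = 4, 3, 2
    H = np.random.rand(N, D)       # node features
    W = np.random.rand(D, F)       # layer weights
    A_hat = np.eye(N)              # identity support: no graph structure
    out = gcn_layer(A_hat, H, W)   # == ReLU(H @ W), i.e. a plain MLP layer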

placeholder for features

Hi,
Thanks for your work. I noticed that the placeholder for features is:
tf.sparse_placeholder(tf.float32, shape=tf.constant(features[2], dtype=tf.int64))
I can't understand why the shape is tf.constant(features[2]), and when I assign this placeholder to a variable like:
var = tf.sparse_placeholder(tf.float32, shape=tf.constant(features[2], dtype=tf.int64))
this statement raises a TypeError. Can you give some information about this problem?

Thanks
Lei
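A hedged note on the shape: after preprocessing, features is the (coords, values, shape) tuple produced by sparse_to_tuple in utils.py, so features[2] is just the dense shape of the sparse feature matrix.

    # sketch, assuming features came from preprocess_features / sparse_to_tuple
    coords, values, shape = features   # e.g. shape == (2708, 1433) for Cora
    x = tf.sparse_placeholder(tf.float32,
                              shape=tf.constant(shape, dtype=tf.int64))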

Can this modle be applied to directed graphs using a normalized Laplacian matrix?

Hi
I am working on classifying nodes in a strongly connected directed graph and I am wondering if it's possible to replace the Laplacian matrix used in the paper with the normalized Laplacian matrix for directed graphs. The GCN model seems to be compatible with the replacement, but I am not sure whether the renormalization trick and the approximation of lambda_max still work after the replacement. What do you think about applying the model to directed graphs?
Cheers

preprocessing for NELL dataset

As the ICLR 2017 paper says:

We assign separate relation nodes r1 and r2 for each entity pair (e1, r, e2) as (e1, r1) and (e2, r2).

Could you give an example to explain it?

Graph Classification & Batchwise Training

Hi I have two questions:

  • I would like to repurpose this for graph classification (where I have a single label per graph). I see there is an option to choose featureless=False when defining a new GraphConvolution layer. However, the loss is still computed for each node, and I was wondering how I should change your code.
  • In the context of graph classification, how should I modify your train.py for batch-wise training?

Thanks for putting this together!
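For the first question, a hedged sketch of the block-diagonal batching idea (the names here are illustrative, not from the repo):

    import numpy as np
    import scipy.sparse as sp

    # two toy graphs with 5 and 7 nodes, D=4 features each
    adjs = [sp.random(5, 5, density=0.3), sp.random(7, 7, density=0.3)]
    feats = [np.random.rand(5, 4), np.random.rand(7, 4)]

    adj_batch = sp.block_diag(adjs)    # (5+7) x (5+7) block-diagonal adjacency
    feat_batch = np.vstack(feats)      # (5+7) x 4 stacked features

    # pooling matrix: averages node-level outputs within each graph,
    # yielding one row per graph for graph-level classification
    sizes = [a.shape[0] for a in adjs]
    pool = sp.block_diag([np.full((1, n), 1. / n) for n in sizes])  # B x total_nodes
    graph_level = pool @ feat_batch    # B x 4 graph-level readout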

Question about processing data

Hi Thomas,

I am confused about how you processed the test data as follow:

features[test_idx_reorder, :] = features[test_idx_range, :]

It seems that you shuffled them but their indexes in the graph did not change. Could you tell me why you did that?

Lee

About Cross-validaion

Hi,

I was just wondering what I should do if I want to use cross-validation with your GCN? Sorry, I'm not quite familiar with TensorFlow, so maybe you have already added a cross_val function and I didn't notice. Or do I have to manually split my dataset many times and run your GCN on each split?

Thank you for your code!
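There does not appear to be a built-in helper, but since training is driven by boolean masks, one hedged way to do k-fold CV is to regenerate the masks per fold and retrain:

    import numpy as np
    from sklearn.model_selection import KFold

    N = labels.shape[0]  # assumes `labels` is the N x E label matrix
    for train_idx, val_idx in KFold(n_splits=5, shuffle=True).split(np.arange(N)):
        train_mask = np.zeros(N, dtype=bool)
        train_mask[train_idx] = True
        val_mask = np.zeros(N, dtype=bool)
        val_mask[val_idx] = True
        # rebuild the model and rerun the training loop with these masks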

Divide by 0 warning with TensorFlow backend while load AIFB dataset

Hi,

I am trying to emulate the results reported in your papers. I am using this command:-
python train.py -d aifb --bases 0 --hidden 16 --l2norm 0. --testing

When I use the tensorflow backend in Keras I get this warning:
train.py:81: RuntimeWarning: divide by zero encountered in divide
d_inv = 1. / d
I was wondering if this is a bug? Any pointers would be appreciated.

P.S. - There is no such warning when I use Theano as the backend.

Thanks.
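A hedged guess at the warning, assuming d holds node degrees: isolated (degree-0) nodes make 1./d divide by zero; the analogous inversion in preprocess_features (quoted further down this page) guards against this by zeroing the resulting infs.

    import numpy as np

    d = np.array([2., 0., 3.])     # toy degrees; one isolated node
    d_inv = 1. / d                 # emits the divide-by-zero RuntimeWarning
    d_inv[np.isinf(d_inv)] = 0.    # common guard, cf. preprocess_features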

UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte

Hi

I just ran

python /home/zeno/.software/gcn/gcn/train.py --dataset citeseer

but this results in

Traceback (most recent call last):
  File "/home/zeno/.software/gcn/gcn/train.py", line 29, in <module>
    adj, features, y_train, y_val, y_test, train_mask, val_mask, test_mask = load_data(FLAGS.dataset)
  File "/usr/local/src/anaconda3/lib/python3.5/site-packages/gcn-1.0-py3.5.egg/gcn/utils.py", line 30, in load_data
    objects.append(pkl.load(open("data/ind.{}.{}".format(dataset_str, names[i])), encoding='latin1'))
  File "/usr/local/src/anaconda3/lib/python3.5/codecs.py", line 321, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte

Do I need to use a different Python version?

Thank you for your Feedback.
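A hedged observation: the open() call in the quoted traceback reads the pickle in text mode, which is what triggers the UTF-8 decode; in Python 3 the file typically needs to be opened in binary mode, e.g.:

    import pickle as pkl

    # hypothetical rewrite of the failing utils.py line: binary mode
    with open("data/ind.{}.{}".format(dataset_str, names[i]), 'rb') as f:
        objects.append(pkl.load(f, encoding='latin1'))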

Preprocessing code

Hi,
I need to apply this algorithm to financial data. Could you please provide the code for preprocessing the original data into the required form? Thanks in advance.

How to handle a sparsely-connected graph?

I have a connected graph of 2000 nodes, in which only 200 nodes have data attached. Among the remaining 1800 nodes, I only care about the label prediction for 100 of them. That is, there are only 300 "meaningful" nodes (200 nodes with labels + 100 nodes to predict).

If I select these meaningful nodes out of the graph, they form a set of isolated subgraphs. I was wondering what the correct way to handle a sparsely-connected graph is. Can the GCN work well on such a sparse graph?

What if I just use the whole graph in training, even though I only care about some of the nodes? Would this help performance? Any thoughts on this?

Question about NELL processing

In your ICLR 2017 paper, I noticed that each relation triple (e1, r, e2) is divided into (e1, r1) and (e2, r2) when you process the NELL dataset.

As the paper tells us: "We assign separate relation nodes r1 and r2 for each entity pair (e1, r, e2) as (e1, r1) and (e2, r2)."

What is the purpose of this process, and how is it done? Can you give an example to explain it?

Thanks

multiple "support"

Hi

I was wondering about the "num_supports=1".

Is it possible to input multiple supports? Let's say, for a graph with different types of connections, can I input two different adj matrices / supports?

       support = list()
       support.append(sp.coo_matrix(adj1))
       support.append(sp.coo_matrix(adj2))

It looks like the sparse_to_tuple function is written to deal with list inputs, and the convolution operation also seems to support it (layers.py line #240):

        supports.append(support)
    output = tf.add_n(supports)

Did you intend to have this feature, i.e., to be able to deal with two different adj matrices for the same graph?

I tried to pass a support with a list of two adj matrices; however, I get an error when I try to do this:

File "D:/gcn/gcn-master/gcn/akanda.py", line 488, in experiment
outs = sess.run([model.opt_op, model.loss, model.accuracy], feed_dict=feed_dict)
File "C:\Program Files\Python36\lib\site-packages\tensorflow\python\client\session.py", line 900, in run
run_metadata_ptr)
File "C:\Program Files\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1104, in _run
np_val = np.asarray(subfeed_val, dtype=subfeed_dtype)
File "C:\Program Files\Python36\lib\site-packages\numpy\core\numeric.py", line 492, in asarray
return array(a, dtype, copy=False, order=order)
ValueError: setting an array element with a sequence.

Thanks a lot!
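A hedged guess at the ValueError: each support must be fed in tuple form (coords, values, shape), not as a raw scipy matrix. Using helpers that exist in utils.py, something like:

    import tensorflow as tf
    from gcn.utils import preprocess_adj

    # convert each adjacency to (coords, values, shape) tuple form
    supports = [preprocess_adj(adj1), preprocess_adj(adj2)]
    num_supports = len(supports)
    placeholders['support'] = [tf.sparse_placeholder(tf.float32)
                               for _ in range(num_supports)]
    feed_dict.update({placeholders['support'][i]: supports[i]
                      for i in range(num_supports)})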

3703-dim features

Hi,
Thanks for sharing. I noticed that the node features are 3703-dimensional. Could you explain the ideas behind constructing these large 3703-dim features? Or can the 3703 dims be understood as several independent parts?

Issues running in Python 3.5 , win10

Issues

(step by step)

  • Issue 1: python setup.py install

ValueError: path 'gcn/data/' cannot end with '/'

-Method: delete the '/' (line 17) in setup.py

  • Issue 2: python train.py

ImportError: No module named 'cPickle'

-Method:
step 1: change 'cPickle' (line 2) in utils.py to 'pickle'
step 2: run "python setup.py install" again
step 3: run "python train.py" again

  • Issue 3: python train.py

File "train.py", line 29, in
adj, features, y_train, y_val, y_test, train_mask, val_mask, test_mask = load_data(FLAGS.dataset)
File "C:\Users\Xiangyong Cao\Anaconda3\lib\site-packages\gcn-1.0-py3.5.egg\gcn\utils.py", line 28, in load_data
objects.append(pkl.load(open("data/ind.{}.{}".format(dataset_str, names[i]))))
UnicodeDecodeError: 'gbk' codec can't decode byte 0x80 in position 0: illegal multibyte sequence

-Notes: I haven't solved this issue and hope the author can help deal with it (the binary-mode fix sketched above for the UTF-8 variant of this error may also apply here). Thanks!

How to distribute gcn

Hi.
First of all, good work, bro.
I have read your paper and your blog about GCN. One thing that comes to my mind:
If the input graph is very large, it seems that your algorithm tries to load all of the training data into memory, and OOM will obviously occur.

Any suggestion?

How to format data for util?

Some general questions:
How do I format my data so that it can be processed by utils.py? Should I store my data in a .txt file? What should the data look like? What are these .allx, .ally, and .graph files?

Why normalize each row of features?

If I understand correctly, the normalization is performed on each row of the feature matrix. Why? I thought it should be performed on each column, shouldn't it?

def preprocess_features(features):
    """Row-normalize feature matrix and convert to tuple representation"""
    #??? Why row-wise normalization? (may connect to Chebyshev approximation?)
    rowsum = np.array(features.sum(1))
    r_inv = np.power(rowsum, -1).flatten()
    r_inv[np.isinf(r_inv)] = 0.
    r_mat_inv = sp.diags(r_inv)
    features = r_mat_inv.dot(features)
    return sparse_to_tuple(features)
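For concreteness, a tiny worked example (made-up values) of what this row normalization does: each nonzero row is divided by its sum so that it sums to 1, and all-zero rows stay zero.

    import numpy as np
    import scipy.sparse as sp

    X = sp.csr_matrix(np.array([[1., 3.], [0., 0.], [2., 2.]]))
    rowsum = np.array(X.sum(1))
    r_inv = np.power(rowsum, -1).flatten()   # inf for the all-zero row
    r_inv[np.isinf(r_inv)] = 0.              # guard the zero row
    X_norm = sp.diags(r_inv).dot(X)
    print(X_norm.toarray())  # [[0.25 0.75], [0. 0.], [0.5 0.5]]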

MemoryError with large adjacency matrix during graph-level classification

Hello, thanks for putting together this code. We have a dataset of 156 patients, each represented by a graph of 200 nodes. We are trying to do graph-level classification, so I followed the instructions, creating a sparse, block-diagonal adjacency matrix of dimensions 31,200 (156*200). But we are getting a MemoryError during the preprocessing stage, and we traced the problem to the fact that scipy sparse matrix library fails when trying to create a matrix of that size. Even after experimenting with reducing adjacency matrix size, python still chokes on the large matrix. Any suggestions as to how this issue can be bypassed?

Formatted data for karate example from appendices?

Hi, I was wondering if data for the Zachary's Karate Club examples in the appendices to the paper are available? I know the karate dataset itself is widely available, but I'm looking for a version that's already got Planetoid-style formatting applied, so it can be loaded directly into the gcn scripts.

Problems running GCN in a loop

When I try to put the GCN model construction in a loop (to test various randomly generated hyperparameters), I get the following error: ValueError: Initializer for variable graph_convolution_22/kernel/ is from inside a control-flow construct, such as a loop or conditional. When creating a variable inside a loop or conditional, use a lambda as the initializer

Has anyone come across this error?
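Not a confirmed fix, but a common pattern when building TF1 models repeatedly from a Python loop is to give each trial its own fresh graph, so every variable and initializer is created at the top level of that graph:

    import tensorflow as tf

    for lr in [1e-2, 1e-3, 1e-4]:          # hypothetical hyperparameter samples
        with tf.Graph().as_default():      # fresh graph per trial
            # rebuild placeholders and the model from scratch here
            model = GCN(placeholders, input_dim=features[2][1], logging=True)
            with tf.Session() as sess:
                sess.run(tf.global_variables_initializer())
                # ... training loop for this trial ...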

Reproducing the Karate Club embedding example

Hi,

I am trying to reproduce the Karate Club embedding example and I am wondering if there is any implementation available.
I aim to generate node embeddings based on your GCN model, and the Zachary example is an interesting starting point.
Please let me know how I can proceed to reproduce the mentioned results.

Many thanks,
Omar

porting to keras

Hi Thomas,
first of all thank you for sharing this implementation!
I intend to port this to Keras and, before running into a dead end, I would like to ask: can you think of any design differences that would make it impossible to do this?
cheers
michael

Train acc and val acc are not improving

Hi,

I have a dataset consisting of 2500 nodes and feature vectors (X (2500,10)) with values between zero and one (the sum of each row is one, e.g., sum(X(i,:)) = 1). The training and validation accuracy are stuck at one value without any change. What do you think the problem might be?

Epoch: 0037 train_loss= 1.32362 train_acc= 0.62857 val_loss= 1.45957 val_acc= 0.51000 time= 0.31915
Epoch: 0038 train_loss= 1.28895 train_acc= 0.62857 val_loss= 1.45154 val_acc= 0.51000 time= 0.31516
Epoch: 0039 train_loss= 1.28910 train_acc= 0.62857 val_loss= 1.44433 val_acc= 0.51000 time= 0.31915
Epoch: 0040 train_loss= 1.26195 train_acc= 0.62857 val_loss= 1.43808 val_acc= 0.51000 time= 0.31615
Epoch: 0041 train_loss= 1.25370 train_acc= 0.62857 val_loss= 1.43273 val_acc= 0.51000 time= 0.31815
Epoch: 0042 train_loss= 1.24904 train_acc= 0.62857 val_loss= 1.42827 val_acc= 0.51000 time= 0.32114
Epoch: 0043 train_loss= 1.23366 train_acc= 0.62857 val_loss= 1.42465 val_acc= 0.51000 time= 0.31815
Epoch: 0044 train_loss= 1.22609 train_acc= 0.62857 val_loss= 1.42184 val_acc= 0.51000 time= 0.31815
Epoch: 0045 train_loss= 1.22172 train_acc= 0.62857 val_loss= 1.41968 val_acc= 0.51000 time= 0.31416
Epoch: 0046 train_loss= 1.21135 train_acc= 0.62857 val_loss= 1.41816 val_acc= 0.51000 time= 0.31815
Epoch: 0047 train_loss= 1.20308 train_acc= 0.62857 val_loss= 1.41732 val_acc= 0.51000 time= 0.31715
Epoch: 0048 train_loss= 1.19649 train_acc= 0.62857 val_loss= 1.41703 val_acc= 0.51000 time= 0.31815
Epoch: 0049 train_loss= 1.19196 train_acc= 0.62857 val_loss= 1.41708 val_acc= 0.51000 time= 0.31715
Epoch: 0050 train_loss= 1.18271 train_acc= 0.62857 val_loss= 1.41766 val_acc= 0.51000 time= 0.31516
Epoch: 0051 train_loss= 1.18121 train_acc= 0.62857 val_loss= 1.41835 val_acc= 0.51000 time= 0.31615
Epoch: 0052 train_loss= 1.17401 train_acc= 0.62857 val_loss= 1.41921 val_acc= 0.51000 time= 0.31715
Epoch: 0053 train_loss= 1.17111 train_acc= 0.62857 val_loss= 1.42014 val_acc= 0.51000 time= 0.31715

How to add hidden layers

Hi, thanks for your work.
I have read your paper. Now I want to add a hidden layer to reproduce your result in Appendix B.
So I do this:

    self.layers.append(GraphConvolution(input_dim=self.input_dim,
                                        output_dim=FLAGS.hidden1,
                                        placeholders=self.placeholders,
                                        act=tf.nn.relu,
                                        dropout=True,
                                        sparse_inputs=True,
                                        logging=self.logging))

    self.layers.append(GraphConvolution(input_dim=FLAGS.hidden1,
                                        output_dim=FLAGS.hidden1,
                                        placeholders=self.placeholders,
                                        act=lambda x: x,
                                        dropout=True,
                                        logging=self.logging))

    self.layers.append(GraphConvolution(input_dim=FLAGS.hidden1,
                                        output_dim=self.output_dim,
                                        placeholders=self.placeholders,
                                        act=lambda x: x,
                                        dropout=True,
                                        logging=self.logging))

And the hyperparameters I used are:

  'dataset'       : 'cora',
  'model'         : 'gcn',
  'learning_rate' : 0.01,
  'epochs'        : 400,
  'hidden1'       : 16,
  'dropout'       : 0.5,
  'weight_decay'  : 5e-4,
  'early_stopping': 0,

But my modified version only reaches 79.4% accuracy, less than the 90% in the paper.
What is my mistake?
Thanks!

Potential incompatibility with tensorflow==0.12.0rc0

When I run train.py with tensorflow==0.12.0rc0, I get the following:

$ python train.py 
Traceback (most recent call last):
  File "train.py", line 59, in <module>
    model = model_func(placeholders, input_dim=features[2][1], logging=True)
  File "build/bdist.linux-x86_64/egg/gcn/models.py", line 147, in __init__
    self.build()
  File "build/bdist.linux-x86_64/egg/gcn/models.py", line 49, in build
    hidden = layer(self.activations[-1])
  File "build/bdist.linux-x86_64/egg/gcn/layers.py", line 75, in __call__
    outputs = self._call(inputs)
  File "build/bdist.linux-x86_64/egg/gcn/layers.py", line 168, in _call
    x = sparse_dropout(x, 1-self.dropout, self.num_features_nonzero)
  File "build/bdist.linux-x86_64/egg/gcn/layers.py", line 27, in sparse_dropout
    return pre_out * tf.inv(keep_prob)
AttributeError: 'module' object has no attribute 'inv'

Is there a known working version of TF I can install for testing this out?
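A hedged note on the failing line: tf.inv was renamed to tf.reciprocal in the TF 0.12 rename wave, so the tail of sparse_dropout in layers.py can be made version-tolerant along these lines:

    # sketch of a version-tolerant tail for sparse_dropout (cf. layers.py line 27)
    reciprocal = getattr(tf, 'reciprocal', None) or tf.inv
    return pre_out * reciprocal(keep_prob)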

About the data's features

I notice that the X in f(X, A) is known. For the Cora dataset, for example, there are files named xxx.x or xxx.tx. How did you obtain the features for Cora? If I only have the adjacency matrix, how can I use your model?
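On the second question, a hedged sketch (not necessarily the author's recipe): when no node features are available, a common fallback is to feed one-hot node identities, i.e., an identity matrix as the N x N feature matrix.

    import scipy.sparse as sp

    # featureless fallback: one-hot node IDs as features
    features = sp.identity(adj.shape[0], format='csr')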

Can't run on Nell dataset

You reported the performance of GCN on NELL. I notice that you used data provided by Yang. I downloaded NELL from Yang's GitHub, https://github.com/kimiyoung/planetoid. But when I run your program on NELL, it runs into a runtime error:

"utils.py", line 51, in load_data 
    features[test_idx_reorder, :] = features[test_idx_range, :]
ValueError: row index 9897 out of bounds

It seems that it is reordering the test data points to keep them consistent with the adjacency matrix, but some indices are out of bounds.

The full stacktrace:

$ python train.py --dataset nell.0.01
Traceback (most recent call last):
  File "train.py", line 29, in <module>
    adj, features, y_train, y_val, y_test, train_mask, val_mask, test_mask = load_data(FLAGS.dataset)
  File "/Users/liqimai/anaconda3/lib/python3.5/site-packages/gcn-1.0-py3.5.egg/gcn/utils.py", line 51, in load_data
    features[test_idx_reorder, :] = features[test_idx_range, :]
  File "/Users/liqimai/anaconda3/lib/python3.5/site-packages/scipy/sparse/lil.py", line 289, in __getitem__
    return self._get_row_ranges(i, j)
  File "/Users/liqimai/anaconda3/lib/python3.5/site-packages/scipy/sparse/lil.py", line 329, in _get_row_ranges
    j_start, j_stop, j_stride, nj)
  File "scipy/sparse/_csparsetools.pyx", line 787, in scipy.sparse._csparsetools.lil_get_row_ranges (scipy/sparse/_csparsetools.c:11978)
ValueError: row index 9897 out of bounds

About the training loss

When my label is a high-dimensional vector and the labels are not sparse, I find that the training loss does not decrease (or the reduction is particularly small), and the end result is not good. How can I solve this problem? Thanks.
