matenure / mvgae
Drug Similarity Integration Through Attentive Multi-view Graph Auto-Encoders (IJCAI 2018)
Hi, thank you for your work.
I don't understand what it means to predict a specific DDI type. How do you conduct these experiments, and what is the meaning of the results? Multi-type prediction is a multi-label classification problem, so how can the ROC-AUC metric be used? Do the reported results represent the average of the prediction results over each type?
I am looking forward to your reply! Thank you!
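For context on the question above: a common way to score multi-label prediction with ROC-AUC is to compute it per label and average. A minimal scikit-learn sketch (my own illustration, not the authors' evaluation code):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Toy multi-label setup: 6 samples, 3 DDI types (labels).
y_true = np.array([
    [1, 0, 1],
    [0, 1, 0],
    [1, 1, 0],
    [0, 0, 1],
    [1, 0, 0],
    [0, 1, 1],
])
y_score = np.array([
    [0.9, 0.2, 0.8],
    [0.1, 0.7, 0.3],
    [0.8, 0.6, 0.2],
    [0.3, 0.4, 0.9],
    [0.7, 0.1, 0.7],
    [0.2, 0.9, 0.6],
])

# 'macro' computes ROC-AUC independently for each label (each DDI type),
# then averages -- i.e. "the average of the prediction result of each type".
macro_auc = roc_auc_score(y_true, y_score, average="macro")

# Per-type AUCs, for comparison: the macro score is their mean.
per_type = [roc_auc_score(y_true[:, k], y_score[:, k]) for k in range(3)]
print(macro_auc, per_type)
```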
It's my honor to read your article on drug similarity integration based on a graph-convolutional auto-encoder. Accurate prediction of DDIs is one of my research interests. I have found that articles on DDIs often emphasize methodological innovation, but rarely share or release their datasets, and the datasets used vary from one paper to another. I am very interested in your work; could you share the datasets mentioned in your paper?
Hello Ma,
I am trying to run your code, but it requires data files.
I checked the preprocessing files and https://github.com/matenure/FastGCN/tree/master/data for the data format, but couldn't find enough resources.
Since most graph convolutional networks are designed for node prediction, I think their data formats differ?
Can you provide a simple preprocessing script with a few artificial data points? That would help in understanding the shapes of all the format placeholders.
If I try to run this code on the Cora dataset, I get this error:
ValueError: shapes (1,2708) and (1,1) not aligned: 2708 (dim 1) != 1 (dim 0)
My data shapes look like this:
Can you share the correct shape format?
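For what it's worth, that ValueError is NumPy's generic inner-dimension mismatch from `np.dot`. A small reproduction (not tied to this repo's data) shows which shapes align:

```python
import numpy as np

row = np.ones((1, 2708))  # e.g. an attention weight vector reshaped to [1, N]

# Inner dimensions agree (2708 == 2708): the product has shape (1, 1).
ok = np.dot(row, np.ones((2708, 1)))
assert ok.shape == (1, 1)

# Inner dimensions disagree (2708 != 1): this reproduces the reported error,
# which suggests one operand was not the expected N x N (or N x 1) shape.
try:
    np.dot(row, np.ones((1, 1)))
except ValueError as err:
    print(err)  # shapes (1,2708) and (1,1) not aligned ...
```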
Second, I was going through the paper (https://arxiv.org/pdf/1804.10850.pdf). The paper says:
Assume we have an adjacency matrix A_u for view u. We assign attention weights g_u ∈ R^{N×N} to the graph edges, so that the integrated adjacency matrix becomes Σ_u g_u ⊙ A_u, where ⊙ is element-wise multiplication.
But in the implementation you are not using element-wise multiplication, and I am also unclear about how you concatenate with 0. If I understand correctly, the final mixedADJ has the same shape as the original adj?
def attention(self):
    # One attention weight vector per view (support); output_dim == N here.
    self.attweights = tf.get_variable(
        "attWeights", [self.num_support, self.output_dim],
        initializer=tf.contrib.layers.xavier_initializer())
    attention = []
    self.attADJ = []
    for i in range(self.num_support):
        # Row vector [1, N] times the view's adjacency [N, N] -> [1, N] scores.
        tmpattention = tf.matmul(tf.reshape(self.attweights[i], [1, -1]), self.adjs[i])
        # tmpattention = tf.reshape(self.attweights[i], [1, -1])  # non-attentive ablation
        attention.append(tmpattention)
    print("attention_size", len(attention))
    # TF >= 1.0 argument order: tf.concat(values, axis).
    attentions = tf.concat(attention, 0)
    # Softmax across views (axis 0), so each node's view weights sum to 1.
    self.attention = tf.nn.softmax(attentions, 0)
    for i in range(self.num_support):
        # Scale each view's rows by its per-node attention, then sum the views.
        self.attADJ.append(tf.matmul(tf.diag(self.attention[i]), self.adjs[i]))
    self.mixedADJ = tf.add_n(self.attADJ)
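My reading of the paper's formula, as a toy NumPy sketch (my own illustration; shapes and names are made up), next to the diag-based row scaling that the code above appears to do instead:

```python
import numpy as np

N, num_views = 4, 2
rng = np.random.default_rng(0)
adjs = [rng.random((N, N)) for _ in range(num_views)]

# Paper's formula: A_mix = sum_u g_u ⊙ A_u, with per-edge weights g_u in R^{N x N}.
g = [rng.random((N, N)) for _ in range(num_views)]
mixed_paper = sum(gu * Au for gu, Au in zip(g, adjs))  # element-wise product

# The code above instead uses a length-N attention vector per view and scales
# rows via diag(att_u) @ A_u, i.e. one weight per node rather than per edge.
att = rng.random((num_views, N))
att = np.exp(att) / np.exp(att).sum(axis=0)  # softmax over views, per node
mixed_code = sum(np.diag(att[u]) @ adjs[u] for u in range(num_views))

# Either way, the fused adjacency keeps the original N x N shape.
assert mixed_paper.shape == (N, N) and mixed_code.shape == (N, N)
```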
Thank you
Keep writing and keep sharing good work.
Looking forward to your reply, Thank you :)
Hi, thanks for your code.
For the multi-view similarity in the paper (for example, "Drug Indication" in "Multilabel Prediction of Specific DDI Types"), could you please tell me how you embed a drug into the 1702-dimensional embedding vector, and likewise for the other views?
Looking forward to your reply. Thank you!
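One plausible construction for such a feature vector (an assumption on my part, not confirmed by the paper) is a binary multi-hot encoding over a vocabulary of indication terms, with a set-overlap similarity on top:

```python
import numpy as np

# Hypothetical vocabulary of indication terms (the paper's would have 1702).
vocab = ["hypertension", "diabetes", "asthma", "migraine", "epilepsy"]
index = {term: i for i, term in enumerate(vocab)}

def multi_hot(indications):
    """Encode a drug as a binary vector: 1 where it carries an indication."""
    v = np.zeros(len(vocab))
    for term in indications:
        v[index[term]] = 1.0
    return v

drug_a = multi_hot(["hypertension", "migraine"])
drug_b = multi_hot(["hypertension", "diabetes", "migraine"])

# A view similarity can then be e.g. Jaccard over the binary vectors.
inter = np.minimum(drug_a, drug_b).sum()
union = np.maximum(drug_a, drug_b).sum()
jaccard = inter / union
print(jaccard)  # 2 shared terms out of 3 total -> 2/3
```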
Hello,
Could you please let me know the sequence in which I should run the files if I want to run semiGAE_mult.py? I don't understand what to run to generate the datasets: 'allx', 'ally', 'graph', 'adjmat', 'trainMask', 'valMask', 'testMask'.
Thanks.
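For anyone else stuck on those names: a tiny artificial-data sketch of what such inputs might look like. The shapes and conventions here are guesses based on the Planetoid/FastGCN input format, not the authors' actual preprocessing:

```python
import pickle
import numpy as np
from collections import defaultdict

# Tiny artificial dataset: 6 nodes, 4 features, 3 classes.
N, F, C = 6, 4, 3
rng = np.random.default_rng(0)

allx = rng.random((N, F))                     # node features
ally = np.eye(C)[rng.integers(0, C, size=N)]  # one-hot labels
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]

graph = defaultdict(list)  # adjacency list: node -> list of neighbors
adjmat = np.zeros((N, N))  # dense symmetric adjacency matrix
for i, j in edges:
    graph[i].append(j)
    graph[j].append(i)
    adjmat[i, j] = adjmat[j, i] = 1.0

# Boolean masks that partition the nodes into train / val / test.
trainMask = np.zeros(N, dtype=bool); trainMask[:3] = True
valMask = np.zeros(N, dtype=bool);   valMask[3] = True
testMask = np.zeros(N, dtype=bool);  testMask[4:] = True

# Serialize each object the way pickle-based loaders typically expect.
blobs = {name: pickle.dumps(obj) for name, obj in
         [("allx", allx), ("ally", ally), ("graph", dict(graph)),
          ("adjmat", adjmat), ("trainMask", trainMask),
          ("valMask", valMask), ("testMask", testMask)]}
print(sorted(blobs))
```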