Comments (11)
Hi,
Thanks for your interest in our work. The 186 target nodes are already sampled, so you don't need to sample them again. According to our paper:
"The nodes in the test set with degree larger than 10 are set as target nodes. For the Pubmed dataset, we only sample 10% of them."
This means we first obtain the test nodes with degree larger than 10 (there are 1860 of them), and then sample 10% of them. Hence, the number of target nodes is 186.
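A minimal sketch of that selection step (the helper name, the degree array, and the fixed seed are my assumptions for illustration, not the exact code used in the paper):

```python
import numpy as np

def select_target_nodes(degrees, idx_test, sample_ratio=None, seed=15):
    # Keep test nodes whose degree is larger than 10; optionally sample a
    # fraction of them (e.g. 10% for Pubmed). The argument names and the
    # seed are illustrative assumptions.
    high_degree = [int(i) for i in idx_test if degrees[i] > 10]
    if sample_ratio is not None:
        rng = np.random.default_rng(seed)
        k = int(len(high_degree) * sample_ratio)
        high_degree = sorted(int(i) for i in rng.choice(high_degree, size=k, replace=False))
    return high_degree

# Toy example: nodes 10..19 have degree 15, the rest have degree 5
degrees = np.array([5] * 10 + [15] * 10)
targets = select_target_nodes(degrees, range(20), sample_ratio=0.1)
print(len(targets))  # 1 (10% of the 10 high-degree nodes)
```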
from pro-gnn.
Thanks a lot!
Hi!
Could you release the code that generated the Nettack attack? I want to use all the target nodes as the test set (the nodes in the test set with degree larger than 10).
Thanks a lot!
Basically, we just sequentially attack those target nodes. I modified the example code as follows:
from deeprobust.graph.defense import GCN
from deeprobust.graph.targeted_attack import Nettack
from deeprobust.graph.utils import *
from deeprobust.graph.data import Dataset

def attack_all():
    degrees = adj.sum(0).A1
    node_list = select_nodes()  # obtain the nodes to be attacked
    num = len(node_list)
    print('=== Attacking %s nodes sequentially ===' % num)
    modified_adj = adj
    for target_node in node_list:
        # the perturbation budget equals the target node's degree
        n_perturbations = int(degrees[target_node])
        model = Nettack(surrogate, nnodes=modified_adj.shape[0],
                        attack_structure=True, attack_features=False,
                        device=device)
        model = model.to(device)
        model.attack(features, modified_adj, labels, target_node,
                     n_perturbations, verbose=False)
        # feed the perturbed graph into the next attack
        modified_adj = model.modified_adj
Feel free to let me know if you have further questions.
Thanks a lot!
Hi!
I ran into a problem when using Nettack to attack the polblogs dataset with n_perturbations=1. The code is as follows:
modified_adj = adj
print('=== [Poisoning] Attacking %s nodes respectively ===' % len(node_list))
for target_node in tqdm(node_list):
    model = Nettack(surrogate, nnodes=modified_adj.shape[0], attack_structure=True, attack_features=False, device=device)
    model = model.to(device)
    model.attack(features, modified_adj, labels, target_node, int(n_perturbations), verbose=False)
    modified_adj = model.modified_adj
    print(modified_adj.nnz)
modified_adj = modified_adj.tocsr()
The original graph has 33430 nnz (non-zero elements), but after sequentially attacking 443 nodes with n_perturbations=1, modified_adj has only 33364 nnz. Is that correct? Why does modified_adj have fewer edges than the original adj?
Hi, I would suggest checking the changes made to the adjacency matrix at each iteration. Nettack can remove edges as well as add them, so it may have deleted some.
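One quick way to inspect those per-iteration changes is to diff the two sparse adjacency matrices. This helper is a sketch (the function name is mine, not part of deeprobust):

```python
import numpy as np
import scipy.sparse as sp

def edge_changes(adj, modified_adj):
    # Compare two symmetric sparse adjacency matrices and return the sets of
    # added and removed undirected edges (upper triangle only, so each edge
    # is counted once).
    diff = (modified_adj - adj).tocoo()
    added, removed = set(), set()
    for i, j, v in zip(diff.row, diff.col, diff.data):
        if i < j:
            (added if v > 0 else removed).add((int(i), int(j)))
    return added, removed

# Toy check: drop edge (0, 1) and add edge (1, 2)
adj = sp.csr_matrix(np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]]))
mod = sp.csr_matrix(np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]]))
added, removed = edge_changes(adj, mod)
print(added, removed)  # {(1, 2)} {(0, 1)}
```

If `removed` is non-empty for some iterations, that explains the drop in nnz.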
When you compare the defense performance of different models under Nettack, do these models use the same dataset?
I mean, I use GCN as a surrogate model to attack the graph structure, and then train other models on this modified graph. Is this correct?
I would have thought each model should use itself as the surrogate model when testing defense performance. Is that the case?
Hi! Thanks for sharing the code; I'd like to ask about the details of the datasets.
For the Citeseer dataset, the number of LCC edges in your article is 3668, but in some other articles it is 3757.
Why is the number different?
Sorry for the late reply (I just noticed this message). I am not sure why the difference arises, but in my experiments the number is 3668. I remember it was also 3668 for Citeseer when I checked the original Nettack code.
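If it helps to reproduce the count, here is one way to compute the number of LCC edges from a symmetric sparse adjacency matrix (a sketch, not the loader deeprobust actually uses):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import connected_components

def largest_cc_edges(adj):
    # Count undirected edges in the largest connected component of a
    # symmetric sparse adjacency matrix.
    n_comp, labels = connected_components(adj, directed=False)
    largest = np.bincount(labels).argmax()
    keep = np.where(labels == largest)[0]
    sub = adj[keep][:, keep]
    return sub.nnz // 2  # a symmetric matrix stores each edge twice

# Toy graph: a triangle (0-1-2) plus a separate edge (3-4)
rows = [0, 1, 1, 2, 0, 2, 3, 4]
cols = [1, 0, 2, 1, 2, 0, 4, 3]
adj = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(5, 5))
print(largest_cc_edges(adj))  # 3
```

Differences between papers often come down to details like this (directed vs. undirected counting, self-loops, or which preprocessed copy of the dataset is loaded).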
Sorry for the late reply (I just noticed this message). I simply used GCN as the surrogate model and generated the attacked graphs. All (defense) models used the same attacked graphs.
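In other words, the attacked graph is produced once with the GCN surrogate and then shared across all defenses. A schematic sketch of that protocol with hypothetical stand-in models (the DummyDefense class and its fit interface are mine, purely to illustrate the point):

```python
class DummyDefense:
    # Hypothetical stand-in for a defense model (GCN, GCN-Jaccard, Pro-GNN, ...)
    def __init__(self, name):
        self.name = name
        self.trained_on = None

    def fit(self, features, adj, labels, idx_train):
        self.trained_on = adj  # remember which graph this model was trained on

# The attacked graph is generated once (e.g. Nettack with a GCN surrogate)...
attacked_adj = object()  # stands in for model.modified_adj

# ...and every defense is trained on that same graph.
defenses = [DummyDefense("GCN"), DummyDefense("GCN-Jaccard"), DummyDefense("Pro-GNN")]
for defense in defenses:
    defense.fit(None, attacked_adj, None, None)

print(all(d.trained_on is attacked_adj for d in defenses))  # True
```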