Comments (8)
Could you add a Linear layer at the beginning of your model:
nn.Linear(in_feats, out_feats)
so that it transforms the input feature dimension to the same size as the later layers?
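For concreteness, here is a minimal sketch of that suggestion (the `hidden_feats` size and the `ProjectedModel` name are illustrative, not from your code):

import dgl
import torch.nn as nn
from dgl.nn import GraphConv

class ProjectedModel(nn.Module):
    # Hypothetical sketch: project raw input features to the hidden size first,
    # so every later layer sees the same dimension.
    def __init__(self, in_feats, hidden_feats, num_classes):
        super().__init__()
        self.proj = nn.Linear(in_feats, hidden_feats)
        self.conv = GraphConv(hidden_feats, hidden_feats)
        self.fc = nn.Linear(hidden_feats, num_classes)

    def forward(self, g, h, embed=False, edge_weight=None):
        h = self.proj(h)  # in_feats -> hidden_feats
        h = self.conv(g, h, edge_weight=edge_weight)
        if embed:
            return h  # node embeddings of width hidden_feats
        with g.local_scope():
            g.ndata["h"] = h
            hg = dgl.mean_nodes(g, "h")
            return self.fc(hg)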
So this should only be considered when designing the model? Can PGExplainer be used if a model with a fixed structure has already finished training?
I am sorry, I misunderstood the meaning of the parameter 'num_features'. I changed it to the feature dimension of the final node representation and it works.
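Concretely, what worked (a minimal sketch; `g` and `feat` stand for any input graph and its node features):

# The second argument of PGExplainer is the node-embedding size, i.e. the
# width of what the model returns under embed=True, not the raw input width.
emb = model(g, feat, embed=True)  # shape: (num_nodes, emb_dim)
explainer = PGExplainer(model, emb.shape[-1])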
However, whether I use my own model or the official example, the edge_weight values I obtain are all close to 0. Do I need to normalize them?
Which official example are you using? Please share more details: which dataset, and what are the arguments, including hyperparameters?
#%%
import torch as th
import torch.nn as nn
import dgl
from dgl.nn import GraphConv
from dgl.data import GINDataset
from dgl.dataloading import GraphDataLoader
from dgl.nn.pytorch.explain import PGExplainer
import numpy as np
from tqdm import tqdm

# Set the device to GPU 2
device = th.device("cuda:2" if th.cuda.is_available() else "cpu")

# Define the model
class Model(nn.Module):
    def __init__(self, in_feats, out_feats):
        super().__init__()
        self.conv = GraphConv(in_feats, out_feats)
        self.fc = nn.Linear(out_feats, out_feats)
        nn.init.xavier_uniform_(self.fc.weight)

    def forward(self, g, h, embed=False, edge_weight=None):
        h = self.conv(g, h, edge_weight=edge_weight)
        if embed:
            # Return node embeddings for PGExplainer
            return h
        with g.local_scope():
            g.ndata["h"] = h
            hg = dgl.mean_nodes(g, "h")
            return self.fc(hg)

# Load dataset
data = GINDataset("MUTAG", self_loop=True)
dataloader = GraphDataLoader(data, batch_size=16, shuffle=True)

# Initialize the model and move it to the device
feat_size = data[0][0].ndata["attr"].shape[1]
model = Model(feat_size, data.gclasses).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = th.optim.Adam(model.parameters(), lr=1e-2)

# Training loop
for epoch in tqdm(range(200)):
    for bg, labels in dataloader:
        bg, labels = bg.to(device), labels.to(device)
        preds = model(bg, bg.ndata["attr"].to(device))
        loss = criterion(preds, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Initialize the explainer
explainer = PGExplainer(model, data.gclasses).to(device)

#%%
# Train the explainer with a temperature annealed from init_tmp to final_tmp
init_tmp, final_tmp = 5.0, 1.0
optimizer_exp = th.optim.Adam(explainer.parameters(), lr=0.01)
for epoch in tqdm(range(100)):
    tmp = float(init_tmp * np.power(final_tmp / init_tmp, epoch / 20))
    for bg, labels in dataloader:
        bg, labels = bg.to(device), labels.to(device)
        loss = explainer.train_step(bg, bg.ndata["attr"].to(device), tmp)
        optimizer_exp.zero_grad()
        loss.backward()
        optimizer_exp.step()

# Explain the prediction for graph 0
graph, label = data[0]
graph = graph.to(device)
graph_feat = graph.ndata.pop("attr").to(device)
probs, edge_weight = explainer.explain_graph(graph, graph_feat)
Here is the official example. The edge_weight is like
tensor([9.2120e-13, 9.2120e-13, 6.8172e-13, 9.2120e-13, 5.4758e-13, 1.2444e-12,
5.4758e-13, 5.4758e-13, 4.0203e-13, 2.3260e-13, 5.4758e-13, 1.2444e-12,
1.2444e-12, 1.2444e-12, 7.1015e-13, 1.2444e-12, 7.1015e-13, 1.1197e-10,
5.2539e-13, 4.0185e-13, 1.1197e-10, 1.5726e-10, 4.4363e-10, 5.7454e-10,
1.5726e-10, 1.2444e-12, 1.2444e-12, 1.2444e-12, 7.1015e-13, 1.2444e-12,
7.1015e-13, 5.2539e-13, 4.0185e-13, 4.0185e-13, 5.2539e-13, 5.2539e-13,
6.8172e-13, 6.8172e-13, 4.0203e-13, 6.8172e-13, 5.2539e-13, 6.8172e-13,
5.2539e-13, 7.1015e-13, 4.0185e-13, 4.0185e-13, 9.2120e-13, 7.1015e-13,
1.2444e-12, 4.0185e-13, 4.0185e-13, 7.1015e-13, 4.0185e-13, 4.0185e-13,
4.0185e-13, 7.1015e-13, 4.0185e-13, 7.1015e-13, 9.2120e-13, 1.2444e-12,
9.2120e-13, 6.8172e-13, 6.8172e-13, 6.8172e-13, 9.2120e-13, 6.8172e-13,
7.1015e-13, 9.2120e-13, 1.2444e-12, 4.4363e-10, 2.0517e-10, 2.0517e-10,
1.1654e-10, 2.0517e-10, 2.0139e-11, 2.0517e-10, 2.0139e-11],
device='cuda:2', grad_fn=<DivBackward0>)
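For reference, one common post-processing step (a hedged sketch, not from the DGL docs; the top-k value of 10 is arbitrary) is to compare the weights relatively rather than by absolute magnitude:

# Rescale edge importances to [0, 1] and rank them; only the relative
# ordering matters when picking an explanatory subgraph.
w = edge_weight.detach()
w_norm = (w - w.min()) / (w.max() - w.min() + 1e-12)  # min-max normalization
topk = th.topk(w, k=min(10, w.numel()))               # 10 strongest edges
print(w_norm)
print(topk.indices)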
I also tried to modify the feature dimension of the final node representation and changed the second argument 'num_features' correspondingly, like this. Am I right?
class Model(nn.Module):
    def __init__(self, in_feats, out_feats):
        super().__init__()
        self.conv = GraphConv(in_feats, out_feats*2)
        self.fc = nn.Linear(out_feats*2, out_feats)
        nn.init.xavier_uniform_(self.fc.weight)

    def forward(self, g, h, embed=False, edge_weight=None):
        h = self.conv(g, h, edge_weight=edge_weight)
        if embed:
            return h
        with g.local_scope():
            g.ndata["h"] = h
            hg = dgl.mean_nodes(g, "h")
            return self.fc(hg)

explainer = PGExplainer(model, data.gclasses*2).to(device)
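A quick sanity check of the dimension match (an illustrative sketch, reusing `graph` and `graph_feat` from the example above):

# The embedding width under embed=True must equal PGExplainer's second argument.
emb = model(graph, graph_feat, embed=True)
assert emb.shape[-1] == data.gclasses * 2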
@mufeili Hi, could you give some guidance on how to dig into this issue?
This issue has been automatically marked as stale due to lack of activity. It will be closed if no further activity occurs. Thank you.