
Comments (8)

BarclayII commented on August 15, 2024

Could you add a Linear layer at the beginning of your model:

nn.Linear(in_feats, out_feats)

so that it transforms the input feature dimension to the same size as the later layers?
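
For example (a minimal sketch of that suggestion; hidden_feats and num_classes are placeholder names, not from the thread):

import torch.nn as nn
import dgl
from dgl.nn import GraphConv

class Model(nn.Module):
    def __init__(self, in_feats, hidden_feats, num_classes):
        super().__init__()
        # Project the raw input features to the hidden size used by later layers.
        self.proj = nn.Linear(in_feats, hidden_feats)
        self.conv = GraphConv(hidden_feats, hidden_feats)
        self.fc = nn.Linear(hidden_feats, num_classes)

    def forward(self, g, h, embed=False, edge_weight=None):
        h = self.proj(h)  # h now matches the conv layer's input size
        h = self.conv(g, h, edge_weight=edge_weight)
        if embed:
            return h
        with g.local_scope():
            g.ndata["h"] = h
            hg = dgl.mean_nodes(g, "h")
            return self.fc(hg)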


transparency066 commented on August 15, 2024

So this should only be considered when designing the model? Can PGExplainer be used if a fixed-structure model has already finished training?

> Could you add a Linear layer at the beginning of your model:
>
> nn.Linear(in_feats, out_feats)
>
> so that it transforms the input feature dimension to the same size as the later layers?


transparency066 commented on August 15, 2024

I am sorry. I misunderstood the meaning of the parameter 'num_feature'. I changed it to the feature dimension of the final node representation and it works.


transparency066 commented on August 15, 2024

However, whether I use my own model or the official example, the edge_weight values obtained are close to 0. Do I need to normalize them?
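
For instance, would a simple min-max rescale be the right approach? Just a sketch of what I have in mind (the small epsilon is only there to avoid division by zero):

# A sketch: rescale edge weights to [0, 1] purely for visualization.
ew = edge_weight.detach()
ew_norm = (ew - ew.min()) / (ew.max() - ew.min() + 1e-12)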


Rhett-Ying commented on August 15, 2024

Which official example are you using? Please share more details: which dataset? What are the arguments, including hyperparameters?


transparency066 commented on August 15, 2024
#%%
import torch as th
import torch.nn as nn
import dgl
from dgl.nn import GraphConv
from dgl.data import GINDataset
from dgl.dataloading import GraphDataLoader
from dgl.nn.pytorch.explain import PGExplainer
import numpy as np
from tqdm import tqdm

# Set the device to GPU 2
device = th.device("cuda:2" if th.cuda.is_available() else "cpu")

# Define the model
class Model(nn.Module):
    def __init__(self, in_feats, out_feats):
        super().__init__()
        self.conv = GraphConv(in_feats, out_feats)
        self.fc = nn.Linear(out_feats, out_feats)
        nn.init.xavier_uniform_(self.fc.weight)

    def forward(self, g, h, embed=False, edge_weight=None):
        h = self.conv(g, h, edge_weight=edge_weight)
        if embed:
            return h
        with g.local_scope():
            g.ndata["h"] = h
            hg = dgl.mean_nodes(g, "h")
            return self.fc(hg)

# Load dataset
data = GINDataset("MUTAG", self_loop=True)
dataloader = GraphDataLoader(data, batch_size=16, shuffle=True)

# Initialize the model and move it to the device
feat_size = data[0][0].ndata["attr"].shape[1]
model = Model(feat_size, data.gclasses).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = th.optim.Adam(model.parameters(), lr=1e-2)

# Training loop
for epoch in tqdm(range(200)):
    for bg, labels in dataloader:
        bg, labels = bg.to(device), labels.to(device)
        preds = model(bg, bg.ndata["attr"].to(device))
        loss = criterion(preds, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Initialize the explainer
explainer = PGExplainer(model, data.gclasses).to(device)

#%%
# Train the explainer
init_tmp, final_tmp = 5.0, 1.0
optimizer_exp = th.optim.Adam(explainer.parameters(), lr=0.01)
for epoch in tqdm(range(100)):
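    # Note: the denominator 20 appears to come from a 20-epoch schedule;
    # with range(100) here, tmp keeps decaying below final_tmp after epoch 20.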
    tmp = float(init_tmp * np.power(final_tmp / init_tmp, epoch / 20))
    for bg, labels in dataloader:
        bg, labels = bg.to(device), labels.to(device)
        loss = explainer.train_step(bg, bg.ndata["attr"].to(device), tmp)
        optimizer_exp.zero_grad()
        loss.backward()
        optimizer_exp.step()

# Explain the prediction for graph 0
graph, l = data[0]
graph = graph.to(device)
graph_feat = graph.ndata.pop("attr").to(device)
probs, edge_weight = explainer.explain_graph(graph, graph_feat)

Here is the official example. The edge_weight looks like:

tensor([9.2120e-13, 9.2120e-13, 6.8172e-13, 9.2120e-13, 5.4758e-13, 1.2444e-12,
        5.4758e-13, 5.4758e-13, 4.0203e-13, 2.3260e-13, 5.4758e-13, 1.2444e-12,
        1.2444e-12, 1.2444e-12, 7.1015e-13, 1.2444e-12, 7.1015e-13, 1.1197e-10,
        5.2539e-13, 4.0185e-13, 1.1197e-10, 1.5726e-10, 4.4363e-10, 5.7454e-10,
        1.5726e-10, 1.2444e-12, 1.2444e-12, 1.2444e-12, 7.1015e-13, 1.2444e-12,
        7.1015e-13, 5.2539e-13, 4.0185e-13, 4.0185e-13, 5.2539e-13, 5.2539e-13,
        6.8172e-13, 6.8172e-13, 4.0203e-13, 6.8172e-13, 5.2539e-13, 6.8172e-13,
        5.2539e-13, 7.1015e-13, 4.0185e-13, 4.0185e-13, 9.2120e-13, 7.1015e-13,
        1.2444e-12, 4.0185e-13, 4.0185e-13, 7.1015e-13, 4.0185e-13, 4.0185e-13,
        4.0185e-13, 7.1015e-13, 4.0185e-13, 7.1015e-13, 9.2120e-13, 1.2444e-12,
        9.2120e-13, 6.8172e-13, 6.8172e-13, 6.8172e-13, 9.2120e-13, 6.8172e-13,
        7.1015e-13, 9.2120e-13, 1.2444e-12, 4.4363e-10, 2.0517e-10, 2.0517e-10,
        1.1654e-10, 2.0517e-10, 2.0139e-11, 2.0517e-10, 2.0139e-11],
       device='cuda:2', grad_fn=<DivBackward0>)
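
Even though the absolute values are tiny, the relative ordering still varies. A quick sketch of my own (not from the example) to inspect the highest-weighted edges:

# A sketch: list the top-k edges by weight to check the relative ranking.
k = 5
top_vals, top_eids = th.topk(edge_weight.detach(), k)
print(top_eids, top_vals)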

I also tried modifying the feature dimension of the final node representation and changing the second argument 'num_feature' correspondingly, like this. Am I right?

class Model(nn.Module):
    def __init__(self, in_feats, out_feats):
        super().__init__()
        self.conv = GraphConv(in_feats, out_feats*2)
        self.fc = nn.Linear(out_feats*2, out_feats)
        nn.init.xavier_uniform_(self.fc.weight)

    def forward(self, g, h, embed=False, edge_weight=None):
        h = self.conv(g, h, edge_weight=edge_weight)
        if embed:
            return h
        with g.local_scope():
            g.ndata["h"] = h
            hg = dgl.mean_nodes(g, "h")
            return self.fc(hg)

explainer = PGExplainer(model, data.gclasses*2).to(device)
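
As I understand it, the second argument should match the embedding size the model returns with embed=True, so I sanity-check it like this (just a sketch):

# A sketch: the embedding size with embed=True should equal num_features.
g0, _ = data[0]
g0 = g0.to(device)
emb = model(g0, g0.ndata["attr"], embed=True)
print(emb.shape[-1])  # should equal data.gclasses*2 here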


Rhett-Ying commented on August 15, 2024

@mufeili Hi, could you give some guidance on how to dig deeper into this issue?


github-actions commented on August 15, 2024

This issue has been automatically marked as stale due to lack of activity. It will be closed if no further activity occurs. Thank you.

