Comments (7)
I remember this paper (https://arxiv.org/pdf/1705.08415.pdf) mentioned this problem and proposed doing GNN on the line graph. On the line graph, m_{uv} and m_{wu} are connected nodes. The idea is to maintain two GNNs: the line-graph GNN computes the summation of the messages m_{uv}, while the GNN on the normal graph computes the summation of the node states x. The two GNNs cooperate by state sharing (currently just manually set/get reprs).
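A minimal sketch of the two cooperating GNNs described above, assuming a toy dict-based graph representation (the names `adj_g`, `adj_lg`, and `tail` are mine, not the paper's or DGL's API). One pass aggregates node states x over G, the other aggregates edge messages m over the line graph L(G), and the line-graph pass reads the freshly shared node states:

```python
def two_gnn_layer(x, m, adj_g, adj_lg, tail):
    # adj_g[v]  : in-neighbors of node v in G
    # adj_lg[e] : predecessor edges of edge e in L(G)
    # tail[e]   : source node u of directed edge e = (u, v)
    # Pass on G: each node sums its in-neighbors' states.
    new_x = {v: sum(x[u] for u in adj_g[v]) for v in x}
    # Pass on L(G): each edge sums its predecessor messages and reads
    # the shared node state at its tail (the "state sharing" step).
    new_m = {e: sum(m[f] for f in adj_lg[e]) + new_x[tail[e]] for e in m}
    return new_x, new_m
```

This is only meant to show the data flow between the two passes, not an efficient implementation.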
from dgl.
Just a side note: the official implementation simply builds, for each edge, a list of the indices of all its incoming edges, stored in an integer tensor. The indices start from 1, with 0 reserved for a "dummy" edge whose message is always a zero vector. The update phase is then simply an embedding lookup into the message tensor with the index tensor, followed by a sum.
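The padding trick above can be sketched with NumPy (the tensors here are made-up toy data, not the official implementation's):

```python
import numpy as np

# Row 0 is the dummy zero message; rows 1..E hold the message of
# edge i (1-indexed), each a fixed-length vector.
messages = np.array([
    [0.0, 0.0],   # index 0: dummy zero message
    [1.0, 2.0],   # edge 1
    [3.0, 4.0],   # edge 2
    [5.0, 6.0],   # edge 3
])

# For each edge, the indices of its incoming edges, padded with 0 so
# every row has the same length (here, at most 2 incoming edges).
incoming = np.array([
    [2, 3],  # edge 1 receives messages from edges 2 and 3
    [1, 0],  # edge 2 receives only edge 1's message (padded)
    [0, 0],  # edge 3 has no incoming edges
])

# "Embedding lookup" followed by a sum over the padded axis; the zero
# rows contribute nothing, which is why this only works for sum().
reduced = messages[incoming].sum(axis=1)
```

Replacing `sum` with, say, `max` or `mean` would let the zero padding leak into the result, which is the limitation mentioned below.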
@jermainewang For this reason, I don't think a line graph is necessary for most cases involving this kind of loopy belief propagation (though I'm not sure whether a line graph could be a more general solution). Also, the line graph derived from the bidirectional graph would still include a connection for every (u, v)-(v, u) pair, which needs to be excluded.
For the official implementation, does that mean the edge features are variable-length vectors whose size depends on the number of incoming edges? If so, it might be troublesome for us to batch them in a tensor.
For the line graph, the author proposes the following construction:
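My reading of the construction (a sketch, not DGL code): the nodes of L(G) are the directed edges of G, and (u, v) connects to (v, w) whenever w != u, so the backtracking pairs (u, v)-(v, u) discussed above are excluded:

```python
def nonbacktracking_line_graph(edges):
    """edges: iterable of directed (u, v) pairs of the original graph G."""
    nodes = list(edges)  # each directed edge of G becomes a node of L(G)
    lg_edges = [
        ((u, v), (v2, w))
        for (u, v) in nodes
        for (v2, w) in nodes
        if v == v2 and w != u  # head meets tail; reverse edge excluded
    ]
    return nodes, lg_edges
```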
Re the official implementation: I'm not sure what you mean by "edge features are variable-length vectors". All the message vectors have the same number of elements. While each edge does have a variable number of incoming messages, they solve that by padding with a dummy edge whose message is always a zero vector, so that every edge ends up with the same number of incoming messages. This trick only works when reducing with sum() or the like.
Re the line graph: I see. A small caveat is that this construction differs from networkx.line_graph().
For now, I guess I'll stick with the line graph approach. But it seems to me that this will introduce non-negligible overhead, particularly from constructing another DGLGraph object with node attributes duplicated onto the new edge-nodes.
A more elegant approach would be to define other primitives that support edge-to-edge (or message-to-message) updates, but I'm not sure how to do that, and I'm reluctant to add primitives solely for loopy BP. I wonder if there is any other solution...
Would it suffice to put a filter on the message reduction?
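A hypothetical sketch of what such a filter could look like (the function and its signature are mine, not DGL primitives): when reducing messages for directed edge (v, w), sum the messages on incoming edges (u, v) but drop the one from the reverse edge (w, v):

```python
def reduce_with_filter(edge, in_messages):
    """edge: the directed edge (v, w) being updated.
    in_messages: dict mapping directed edges (u, v) to scalar messages."""
    v, w = edge
    return sum(
        msg
        for (u, v2), msg in in_messages.items()
        if v2 == v and u != w  # filter out the backtracking message (w, v)
    )
```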
We decided that doing message passing on the line graph (specifically, the line graph with the backtracking (u, v)-(v, u) connections excluded; see Joan's paper above) is the more general solution.