Comments (9)
Thanks for asking.
If you have an adjacency matrix A and a degree matrix D, you can normalize it with what I call Kipf's normalization, which is a form of reduced adjacency matrix [1]:
D^(-1/2) A D^(-1/2)
But in graph CNNs this is slightly different: A becomes
A_hat = A + I
Then the normalization becomes:
D_hat^(-1/2) A_hat D_hat^(-1/2)
where D_hat is the degree matrix of A_hat. I believe what we used here was a Laplacian normalization, accounting for self-nodes by adding the identity to the adjacency matrix:
L_norm = I - D_hat^(-1/2) A_hat D_hat^(-1/2)
Line 43 in 9347d30
I did a quick test to illustrate the difference:
import networkx as nx
import numpy as np
#Kipf's normalization
A = np.asarray([[0,5,9],[5,0,8],[9,8,0]])
print("A Matrix")
print(A)
A_hat = A + np.eye(3)  # add self-loops
print("A_hat Matrix")
print(A_hat)
D_hat = np.eye(3)*2 + np.eye(3)  # degree matrix of A_hat: 2 neighbors + 1 self-loop per node
print("D_hat Matrix")
print(D_hat)
D_half_inv = np.linalg.inv(np.sqrt(D_hat))
Kipf_normalization = np.matmul(np.matmul(D_half_inv, A_hat), D_half_inv)
print("\n Kipf normalization")
print(Kipf_normalization)
#Using the normalized Laplacian (note: from_numpy_matrix was removed in networkx 3.0; use from_numpy_array there)
G = nx.from_numpy_matrix(A_hat)  # create a graph from A_hat
A_lapl = nx.normalized_laplacian_matrix(G).toarray()
print("\n Laplacian normalization")
print(A_lapl)
which yields:
A Matrix
[[0 5 9]
[5 0 8]
[9 8 0]]
A_hat Matrix
[[1. 5. 9.]
[5. 1. 8.]
[9. 8. 1.]]
D_hat Matrix
[[3. 0. 0.]
[0. 3. 0.]
[0. 0. 3.]]
Kipf normalization
[[0.33333333 1.66666667 3. ]
[1.66666667 0.33333333 2.66666667]
[3. 2.66666667 0.33333333]]
Laplacian normalization
[[ 0.93333333 -0.34503278 -0.54772256]
[-0.34503278 0.92857143 -0.50395263]
[-0.54772256 -0.50395263 0.94444444]]
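The two outputs above are related, but not through the same degree matrix: for this weighted A_hat, networkx's normalized Laplacian comes out as I - D_w^(-1/2) A_hat D_w^(-1/2), where D_w is the *weighted* degree matrix (the row sums of A_hat: 15, 14, 18), not the D_hat = 3I used in the Kipf branch above. A quick sketch to check this numerically:

```python
import numpy as np

# A_hat from the example above (weighted, with self-loops added)
A_hat = np.array([[1., 5., 9.],
                  [5., 1., 8.],
                  [9., 8., 1.]])

# Weighted degree matrix: row sums of A_hat -> diag(15, 14, 18)
d = A_hat.sum(axis=1)
D_w_half_inv = np.diag(1.0 / np.sqrt(d))

# Symmetric normalized Laplacian: I - D_w^(-1/2) A_hat D_w^(-1/2)
L_norm = np.eye(3) - D_w_half_inv @ A_hat @ D_w_half_inv
print(L_norm)  # matches the "Laplacian normalization" values printed above
```

This reproduces the networkx output entry by entry (e.g. 1 - 1/15 = 0.9333... in the top-left), which shows the two methods differ both in the sign/identity convention and in which degrees they divide by.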
Thanks for noticing this. I will look into it in more detail and either update the paper or the code, depending on which works best. According to [2], I think the normalization proposed by Kipf is meant to avoid computing the Laplacian.
[1] https://en.wikipedia.org/wiki/Laplacian_matrix
[2] https://tkipf.github.io/graph-convolutional-networks/
from social-stgcnn.
Dear Authors
Thanks for your detailed reply! The example you provided is exactly what I have observed.
Just wondering: could your way be another approach to the approximation in [2]?
#1. The original paper assumes learning will adapt theta = theta_0 = -theta_1, which leads to the proposed formula (the 2nd equation of your reply).
#2. However, if we assume theta = theta_0 = theta_1, then the RHS of equation (6) from [2] becomes theta * L * x, which is also OK (I guess).
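As a sanity check (a sketch on a toy graph, not code from the paper), the two parameter choices can be compared numerically. Starting from the first-order filter theta_0 * x + theta_1 * (L - I) x with L the symmetric normalized Laplacian: theta_0 = -theta_1 = theta gives theta * (I + D^(-1/2) A D^(-1/2)) x, while theta_0 = theta_1 = theta gives theta * L * x:

```python
import numpy as np

# Toy symmetric adjacency (no self-loops) and a feature vector -- illustrative values only
A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
x = np.array([1., 2., 3.])
theta = 0.5

D_half_inv = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_norm = D_half_inv @ A @ D_half_inv   # D^(-1/2) A D^(-1/2)
L = np.eye(3) - A_norm                 # symmetric normalized Laplacian

# First-order filter: theta_0 * x + theta_1 * (L - I) x
def first_order(theta0, theta1):
    return theta0 * x + theta1 * (L - np.eye(3)) @ x

# theta_0 = theta, theta_1 = -theta  ->  theta * (I + A_norm) x  (Kipf's choice)
assert np.allclose(first_order(theta, -theta), theta * (np.eye(3) + A_norm) @ x)

# theta_0 = theta_1 = theta  ->  theta * L * x  (the alternative suggested here)
assert np.allclose(first_order(theta, theta), theta * L @ x)
```

Both assertions hold, so algebraically both substitutions are valid first-order filters; they just propagate features with opposite signs on the neighbor term.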
Thanks again for your time and effort !
Can you point to which equation you mean in [2]?
I'm not sure I'm following the theta thing.
Sorry for the confusion.
First, in Kipf's paper, the reason we use the normalized adjacency matrix comes from the two deductions below:
g_theta * x ≈ theta_0 * x + theta_1 * (L - I) * x = theta_0 * x - theta_1 * D^(-1/2) A D^(-1/2) * x
and, with theta = theta_0 = -theta_1:
= theta * (I + D^(-1/2) A D^(-1/2)) * x
Second, if it is possible to assume theta = theta_0 = theta_1, then the above is re-formulated as:
theta * (I - D^(-1/2) A D^(-1/2)) * x = theta * L * x
So the normalized Laplacian fits the approximation.
I will get back to this by the beginning of next week, sorry for the delay.
Dear Authors
Thanks for your detailed reply.
Can I take the difference between stgcn and social-stgcnn to be as follows?
1. stgcn uses the Kipf normalization D_hat^(-1/2) A_hat D_hat^(-1/2).
2. social-stgcnn uses the normalized Laplacian, where laplacian = D_hat - A_hat.
What is the difference between the two?
Hi, I will try to answer to the best of my ability:
Kipf normalization [not sure if this is stgcn or not]:
[[0.33333333 1.66666667 3. ]
[1.66666667 0.33333333 2.66666667]
[3. 2.66666667 0.33333333]]
Laplacian normalization [Ours]
[[ 0.93333333 -0.34503278 -0.54772256]
[-0.34503278 0.92857143 -0.50395263]
[-0.54772256 -0.50395263 0.94444444]]
This is how the A matrix looks after normalization; so in ours, rows and columns sum to 0.
I didn't try Kipf's normalization method to see its outcome with our approach. You can test both and see how each affects your results @hzzzzjzyq
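For anyone who wants to test both, a small helper could compute either normalization from a raw adjacency matrix. This is a hypothetical sketch, not part of the social-stgcnn codebase (the function name and signature are my own); note that it uses the weighted degrees of A_hat, matching networkx's behavior rather than the unweighted D_hat = 3I from the example above:

```python
import numpy as np

def normalize_adjacency(A, method="kipf"):
    """Hypothetical helper: return a normalized adjacency-style matrix.

    method="kipf"      -> D_hat^(-1/2) A_hat D_hat^(-1/2)
    method="laplacian" -> I - D_hat^(-1/2) A_hat D_hat^(-1/2)
    where A_hat = A + I and D_hat uses the weighted degrees of A_hat.
    """
    n = A.shape[0]
    A_hat = A + np.eye(n)                   # add self-loops
    d = A_hat.sum(axis=1)                   # weighted degrees of A_hat
    D_half_inv = np.diag(1.0 / np.sqrt(d))
    sym = D_half_inv @ A_hat @ D_half_inv   # symmetric normalization
    if method == "kipf":
        return sym
    elif method == "laplacian":
        return np.eye(n) - sym              # symmetric normalized Laplacian
    raise ValueError(f"unknown method: {method}")

A = np.array([[0., 5., 9.], [5., 0., 8.], [9., 8., 0.]])
print(normalize_adjacency(A, "kipf"))
print(normalize_adjacency(A, "laplacian"))
```

With this A, the "laplacian" branch reproduces the networkx values quoted above, so swapping the `method` argument is enough to compare both variants on the same data.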
Hi, I just want to ask the following question: why would the rows and columns of the Laplacian-normalized matrix sum to 0? Is there a strict proof? In my test they do not:
>>> A_lapl
array([[ 0.93333333, -0.34503278, -0.54772256],
[-0.34503278, 0.92857143, -0.50395263],
[-0.54772256, -0.50395263, 0.94444444]])
>>> np.sum(A_lapl,axis=0)
array([ 0.040578 , 0.07958602, -0.10723074])
>>> np.sum(A_lapl,axis=1)
array([ 0.040578 , 0.07958602, -0.10723074])
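One possible resolution (an illustration, not the authors' answer): the zero-row-sum property holds for the *unnormalized* Laplacian D_hat - A_hat, because each diagonal degree equals its row's total weight. The symmetric normalized Laplacian generally does not have zero row sums; what survives normalization is that D^(1/2) applied to the all-ones vector lies in its null space. Using the weighted A_hat from earlier in the thread:

```python
import numpy as np

A_hat = np.array([[1., 5., 9.],
                  [5., 1., 8.],
                  [9., 8., 1.]])
d = A_hat.sum(axis=1)                 # weighted degrees (15, 14, 18)
D = np.diag(d)

L = D - A_hat                         # unnormalized Laplacian
print(L.sum(axis=1))                  # rows sum to zero by construction

D_half_inv = np.diag(1.0 / np.sqrt(d))
L_norm = D_half_inv @ L @ D_half_inv  # symmetric normalized Laplacian
print(L_norm.sum(axis=1))             # NOT zero in general...
print(L_norm @ np.sqrt(d))            # ...but L_norm @ (D^(1/2) 1) = 0 exactly
```

The proof is one line for the unnormalized case: (D - A) 1 = degrees - row sums = 0. Conjugating by D^(-1/2) rescales that null vector to D^(1/2) 1, which is why the plain row sums of L_norm above are nonzero.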