fluxml / geometricflux.jl

Geometric Deep Learning for Flux

Home Page: https://fluxml.ai/GeometricFlux.jl/stable/

License: MIT License

Julia 66.63% Shell 0.04% TeX 32.23% Ruby 1.10%
geometric-deep-learning juliagraphs flux graph-neural-networks machine-learning deep-learning

geometricflux.jl's People

Contributors

abieler, alperyilmaz, carlolucibello, chrisrackauckas, emsal0, github-actions[bot], ilancoulon, jarbus, juliatagbot, kshyatt, logankilpatrick, skyleaworlder, xvilka, yuehhua


geometricflux.jl's Issues

GAT outputs same embedding for all nodes if defined on complete graph

I am trying to train a 3-layer GAT network with multiple heads on a complete graph of 100 nodes. Each node represents a point in space, so the node's features are its coordinates in [0,1]. The output of the network is the same embedding for every node: a (128, 100) matrix with each column identical to the rest.

A toy example

using GeometricFlux, LightGraphs, Flux

nodes_c = 10
node_features = rand(2, nodes_c)

g = LightGraphs.complete_graph(nodes_c)

model = Chain(GATConv(g, 2=>4), GATConv(g, 4=>8), GATConv(g, 8=>16))

model(node_features)

which outputs the same 16d embedding for each node.

Am I misunderstanding something about GATs here, or does the GATConv implementation not work with complete graphs?

GCN example still doesn't work

Hi,
I've been trying to run your GCN example and unfortunately it doesn't work on the CPU. I found #64, which was supposed to solve this, but I'm still getting the issue described in #61 while running 14468b1.

Code for reference:
using GeometricFlux
using Flux
using Flux: onehotbatch, onecold, logitcrossentropy, throttle
using Flux: @epochs
using JLD2  # use v0.1.2
using Statistics: mean
using SparseArrays
using LightGraphs.SimpleGraphs
using LightGraphs: adjacency_matrix

@load "data/cora_features.jld2" features
@load "data/cora_labels.jld2" labels
@load "data/cora_graph.jld2" g

num_nodes = 2708
num_features = 1433
hidden = 16
target_catg = 7
epochs = 20

## Preprocessing data
train_X = Float32.(features)  # dim: num_features * num_nodes
train_y = Float32.(labels)  # dim: target_catg * num_nodes

adj_mat = Matrix{Float32}(adjacency_matrix(g))

## Model
model = Chain(GCNConv(adj_mat, num_features=>hidden, relu),
              Dropout(0.5),
              GCNConv(adj_mat, hidden=>target_catg),
              softmax)

## Loss
loss(x, y) = logitcrossentropy(model(x), y)
accuracy(x, y) = mean(onecold(model(x)) .== onecold(y))


## Training
ps = Flux.params(model)
train_data = [(train_X, train_y)]
opt = ADAM(0.05)
evalcb() = @show(accuracy(train_X, train_y))

@epochs epochs Flux.train!(loss, ps, train_data, opt, cb=throttle(evalcb, 10))
(TMP_Geometric_Demo) pkg> status
Status `~/Documents/git/TMP_Geometric_Demo/Project.toml`
  [587475ba] Flux v0.11.2
  [7e08b658] GeometricFlux v0.7.4 `~/.julia/dev/GeometricFlux`
  [033835bb] JLD2 v0.2.4
  [093fc24a] LightGraphs v1.3.0
  [2f01184e] SparseArrays
  [10745b16] Statistics

Cannot differentiate GATConv

I am trying to use the GATConv, and differentiating it seems impossible.

When running these lines of code:

using GeometricFlux
using Zygote: gradient
using Flux: params

adj =  [0. 1. 0. 1.;
        1. 0. 1. 0.;
        0. 1. 0. 1.;
        1. 0. 1. 0.]

gat = GATConv(adj, 3=>5, heads=1, concat=true)
X = rand(3, 4)
gradient(() -> sum(gat(X)), params(gat))

I get the following error:

ERROR: LoadError: Need an adjoint for constructor Base.Iterators.Pairs{Symbol,Array,Tuple{Symbol,Symbol},NamedTuple{(:M, :cluster),Tuple{Array{Float64,2},Array{Int64,1}}}}. Gradient is of type Dict{Any,Any}
Stacktrace:
 [1] error(::String) at .\error.jl:33
 [2] (::Zygote.Jnew{Base.Iterators.Pairs{Symbol,Array,Tuple{Symbol,Symbol},NamedTuple{(:M, :cluster),Tuple{Array{Float64,2},Array{Int64,1}}}},Nothing,false})(::Dict{Any,Any}) at C:\[...]\.juliapro\JuliaPro_v1.4.2-1\packages\Zygote\YeCEW\src\lib\lib.jl:306
 [3] (::Zygote.var"#380#back#193"{Zygote.Jnew{Base.Iterators.Pairs{Symbol,Array,Tuple{Symbol,Symbol},NamedTuple{(:M, :cluster),Tuple{Array{Float64,2},Array{Int64,1}}}},Nothing,false}})(::Dict{Any,Any}) at C:\[...]\.juliapro\JuliaPro_v1.4.2-1\packages\ZygoteRules\6nssF\src\adjoint.jl:49
 [4] Pairs at .\iterators.jl:169 [inlined]
 [5] (::typeof(∂(Base.Iterators.Pairs)))(::Dict{Any,Any}) at C:\[...]\.juliapro\JuliaPro_v1.4.2-1\packages\Zygote\YeCEW\src\compiler\interface2.jl:0
 [6] pairs at .\iterators.jl:226 [inlined]
 [7] (::typeof(∂(pairs)))(::Dict{Any,Any}) at C:\[...]\.juliapro\JuliaPro_v1.4.2-1\packages\Zygote\YeCEW\src\compiler\interface2.jl:0 (repeats 2 times)
 [8] #propagate#105 at C:\[...]\GeometricFlux.jl\src\layers\msgpass.jl:44 [inlined]
 [9] (::typeof(∂(#propagate#105)))(::FillArrays.Fill{Float64,2,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}}}) at C:\[...]\.juliapro\JuliaPro_v1.4.2-1\packages\Zygote\YeCEW\src\compiler\interface2.jl:0 (repeats 2 times)
 [10] GATConv at C:\[...]\GeometricFlux.jl\src\layers\conv.jl:285 [inlined]
 [11] (::typeof(∂(λ)))(::FillArrays.Fill{Float64,2,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}}}) at C:\[...]\.juliapro\JuliaPro_v1.4.2-1\packages\Zygote\YeCEW\src\compiler\interface2.jl:0
 [12] #5 at C:\[...]\failgat.jl:12 [inlined]
 [13] (::typeof(∂(#5)))(::Float64) at C:\[...]\.juliapro\JuliaPro_v1.4.2-1\packages\Zygote\YeCEW\src\compiler\interface2.jl:0
 [14] (::Zygote.var"#49#50"{Zygote.Params,Zygote.Context,typeof(∂(#5))})(::Float64) at C:\[...]\.juliapro\JuliaPro_v1.4.2-1\packages\Zygote\YeCEW\src\compiler\interface.jl:179
 [15] gradient(::Function, ::Zygote.Params) at C:\[...]\.juliapro\JuliaPro_v1.4.2-1\packages\Zygote\YeCEW\src\compiler\interface.jl:55
 [16] top-level scope at C:\[...]\failgat.jl:12
 [17] include(::Module, ::String) at .\Base.jl:377
 [18] exec_options(::Base.JLOptions) at .\client.jl:288
 [19] _start() at .\client.jl:484

I'll try to investigate, but maybe @yuehhua has already run into this kind of error?

Domain Error with negative sqrt.

When training my neural network with GCNs, I sometimes (randomly) run into the following error, which is caused by taking the square root of a negative real number. It is worth mentioning that my loss function is logit cross entropy and I do not use any square roots anywhere. Also, I am using a sigmoid activation at the last layer to make sure no such errors occur. However, this error happens at random during training with this package and I do not know how to fix it.

DomainError with -2.748779e11:
sqrt will only return a complex result if called with a complex argument. Try sqrt(Complex(x)).
in top-level scope at base\util.jl:234
in macro expansion at my_training.jl:431
in collect at base\array.jl:670
in collect_to_with_first! at base\array.jl:689
in collect_to! at base\array.jl:711
in iterate at base\generator.jl:47
in at base\none
in at Flux\IjMZL\src\layers\basic.jl:38
in applychain at Flux\IjMZL\src\layers\basic.jl:36
in Chain at Flux\IjMZL\src\layers\basic.jl:38
in applychain at Flux\IjMZL\src\layers\basic.jl:36
in Chain at Flux\IjMZL\src\layers\basic.jl:38
in applychain at Flux\IjMZL\src\layers\basic.jl:36
in at ev_training.jl:208
in at Flux\IjMZL\src\layers\basic.jl:38
in applychain at Flux\IjMZL\src\layers\basic.jl:36
in applychain at Flux\IjMZL\src\layers\basic.jl:36
in at GeometricFlux\k4atN\src\layers\conv.jl:48
in normalized_laplacian##kw at GeometricFlux\k4atN\src\graph\featuredgraphs.jl:133
in #normalized_laplacian#104 at GeometricFlux\k4atN\src\graph\featuredgraphs.jl:133
in at GeometricFlux\k4atN\src\operations\linalg.jl:127
in #normalized_laplacian#85 at GeometricFlux\k4atN\src\operations\linalg.jl:128
in _normalized_laplacian at GeometricFlux\k4atN\src\operations\linalg.jl:133
in inv_sqrt_degree_matrix##kw at GeometricFlux\k4atN\src\operations\linalg.jl:98
in #inv_sqrt_degree_matrix#83 at GeometricFlux\k4atN\src\operations\linalg.jl:98
in materialize at base\broadcast.jl:820
in copy at base\broadcast.jl:840
in copyto! at base\broadcast.jl:864
in copyto! at base\broadcast.jl:909
in macro expansion at base\simdloop.jl:77
in macro expansion at base\broadcast.jl:910
in getindex at base\broadcast.jl:564
in _broadcast_getindex at base\broadcast.jl:603
in _getindex at base\broadcast.jl:628
in _broadcast_getindex at base\broadcast.jl:604
in _broadcast_getindex_evalf at base\broadcast.jl:631
in sqrt at base\math.jl:557
in throw_complex_domainerror at base\math.jl:33
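
For context, the stack trace shows the sqrt comes from inv_sqrt_degree_matrix inside normalized_laplacian. A minimal illustration of that failure mode, assuming the degrees are obtained by summing whatever matrix ends up being used as the adjacency (which here apparently contains negative values):

A = Float32[0 1; 1 -3]         # an "adjacency" that has picked up a negative entry
degrees = vec(sum(A; dims=1))  # Float32[1.0, -2.0]
sqrt.(degrees)                 # DomainError: sqrt of a negative number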

Pkg.status()
[fbb218c0] BSON v0.2.6
[052768ef] CUDA v1.2.1
[3895d2a7] CUDAapi v4.0.0
[587475ba] Flux v0.11.0
[7e08b658] GeometricFlux v0.6.1
[bd48cda9] GraphRecipes v0.5.4
[2e9cd046] Gurobi v0.8.1
[4138dd39] JLD v0.10.0
[033835bb] JLD2 v0.1.14
[4076af6c] JuMP v0.21.3
[093fc24a] LightGraphs v1.3.3
[b8f27783] MathOptInterface v0.9.14
[fdba3010] MathProgBase v0.7.8
[91a5bcdd] Plots v1.5.8
[c36e90e8] PowerModels v0.17.2
[438e738f] PyCall v1.91.4
[82193955] SCIP v0.9.6
[47aef6b3] SimpleWeightedGraphs v1.1.1
[9f7883ad] Tracker v0.2.8
[e88e6eb3] Zygote v0.5.4
[37e2e46d] LinearAlgebra
[10745b16] Statistics

Bug with CUDA + GATConv

I just found what I believe to be a bug with the GATConv layer. The bug appeared in SeaPearl.jl so I don't have proper code to reproduce it yet, but I'll work on it early next week.

Description

The bug appears in the following context:

  • Working with a Flux.Chain containing at least 2 GATConv;
  • Loading the chain on a GPU with CUDA;
  • Forward passing on the chain.

When running the code, I receive the following error:

ERROR: LoadError: MethodError: update_batch_edge(::GATConv{NullGraph, Float32}, ::Vector{Vector{Int64}}, ::CuArray{Float32, 2}, ::CuArray{Float32, 2}, ::FillArrays.Fill{Float32, 1, Tuple{Base.OneTo{Int64}}}) is ambiguous. Candidates:
  update_batch_edge(g::GATConv, adj, E::AbstractMatrix{T} where T, X::AbstractMatrix{T} where T, u) in GeometricFlux at /home/pierre/Documents/Stage/GeometricFlux.jl/src/layers/conv.jl:308
  update_batch_edge(mp::T, adj, E::CuArray{T, 2} where T, X::CuArray{T, 2} where T, u) where T<:MessagePassing in GeometricFlux at /home/pierre/Documents/Stage/GeometricFlux.jl/src/cuda/msgpass.jl:21
  update_batch_edge(mp::T, adj, E::AbstractMatrix{T} where T, X::CuArray{T, 2} where T, u) where T<:MessagePassing in GeometricFlux at /home/pierre/Documents/Stage/GeometricFlux.jl/src/cuda/msgpass.jl:11
  update_batch_edge(mp::T, adj, E::CuArray{T, 2} where T, X::AbstractMatrix{T} where T, u) where T<:MessagePassing in GeometricFlux at /home/pierre/Documents/Stage/GeometricFlux.jl/src/cuda/msgpass.jl:16
Possible fix, define
  update_batch_edge(::T, ::Any, ::CuArray{T, 2} where T, ::CuArray{T, 2} where T, ::Any) where T<:GATConv
Stacktrace:
  [1] propagate(gn::GATConv{NullGraph, Float32}, adj::Vector{Vector{Int64}}, E::CuArray{Float32, 2}, V::CuArray{Float32, 2}, u::FillArrays.Fill{Float32, 1, Tuple{Base.OneTo{Int64}}}, naggr::Function, eaggr::Nothing, vaggr::Nothing)
    @ GeometricFlux ~/Documents/Stage/GeometricFlux.jl/src/layers/gn.jl:65
  [2] propagate(mp::GATConv{NullGraph, Float32}, adj::Vector{Vector{Int64}}, E::CuArray{Float32, 2}, X::CuArray{Float32, 2}, aggr::Function)
    @ GeometricFlux ~/Documents/Stage/GeometricFlux.jl/src/layers/msgpass.jl:57
  [3] propagate(mp::GATConv{NullGraph, Float32}, fg::FeaturedGraph{CuArray{Float32, 2}, CuArray{Float32, 2}, CuArray{Float32, 2}, FillArrays.Fill{Float32, 1, Tuple{Base.OneTo{Int64}}}}, aggr::Function)
    @ GeometricFlux ~/Documents/Stage/GeometricFlux.jl/src/layers/msgpass.jl:52
  [4] (::GATConv{NullGraph, Float32})(fg::FeaturedGraph{CuArray{Float32, 2}, CuArray{Float32, 2}, CuArray{Float32, 2}, FillArrays.Fill{Float32, 1, Tuple{Base.OneTo{Int64}}}})
    @ GeometricFlux ~/Documents/Stage/GeometricFlux.jl/src/layers/conv.jl:345
  [5] applychain(fs::Tuple{GATConv{NullGraph, Float32}}, x::FeaturedGraph{CuArray{Float32, 2}, CuArray{Float32, 2}, CuArray{Float32, 2}, FillArrays.Fill{Float32, 1, Tuple{Base.OneTo{Int64}}}}) (repeats 2 times)
    @ Flux ~/.julia/packages/Flux/goUGu/src/layers/basic.jl:36
  [6] (::Chain{Tuple{GATConv{NullGraph, Float32}, GATConv{NullGraph, Float32}}})(x::FeaturedGraph{CuArray{Float32, 2}, CuArray{Float32, 2}, FillArrays.Fill{Float32, 2, Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}}, FillArrays.Fill{Float32, 1, Tuple{Base.OneTo{Int64}}}})
    @ Flux ~/.julia/packages/Flux/goUGu/src/layers/basic.jl:38

(Call stack truncated because there is a lot of SeaPearl related code)

Explanation

After tracking the bug, I found out what is happening:

  • during the first pass in a GATConv, everything works fine, but after the computation, in
    propagate(gn, adj, E, V, u, naggr, eaggr, vaggr) @ GeometricFlux /GeometricFlux.jl/src/layers/gn.jl:65 the ef field of the FeaturedGraph is filled with E::CuArray despite it being a computation side product for the vertex features.
  • as a result the FeaturedGraph now has a CuArray instead of a FillArray for the ef field.
  • during the second pass, update_batch_edge is called with 2 CuArrays, which isn't caught in the file cuda/conv.jl, which only has
update_batch_edge(g::GATConv, adj, E::Fill{S,2,Axes}, X::CuMatrix, u) where {S,Axes} = update_batch_edge(g, adj, X)

Potential Fixes

The potential fixes I see are either:

  • Add a function signature in cuda/conv.jl (sketched below) => easiest one;
  • Do not fill the ef field for GATConv => most logical, but I'm not entirely sure how to do it;
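
A minimal sketch of the first option, mirroring the existing Fill-based method above and the definition the MethodError itself suggests (untested; the exact signature may need adjusting):

# In cuda/conv.jl: also catch the case where ef has become a CuArray (sketch only).
update_batch_edge(g::GATConv, adj, E::CuMatrix, X::CuMatrix, u) =
    update_batch_edge(g, adj, X)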

What are your thoughts about this?

Test errors on Julia 1.3 and 1.4 master

I tried with Julia 1.3 and 1.4-master, GeometricFlux v0.1.1, and got these errors:

Test Summary:                                                   | Pass  Error  Total
GeometricFlux                                                   |  244      4    248
  Test neighboring                                              |    4             4
  Test MessagePassing layer                                     |    1             1
  Test multi-thread MessagePassing layer                        |    1             1
  Test GCNConv layer                                            |    4             4
  Test ChebConv layer                                           |    7             7
  Test GraphConv layer                                          |    4      1      5
  Test GATConv layer                                            |    3      1      4
  Test GatedGraphConv layer                                     |    2      1      3
  Test EdgeConv layer                                           |    1      1      2
  Test sumpool                                                  |    2             2
  Test subpool                                                  |    2             2
  Test prodpool                                                 |    2             2
  Test divpool                                                  |    2             2
  Test maxpool                                                  |    2             2
  Test minpool                                                  |    2             2
  Test meanpool                                                 |    2             2
  Test InnerProductDecoder layer                                |    2             2
  Test VariationalEncoder layer                                 |    1             1
  Test GAE model                                                |    1             1
  Test VGAE model                                               |    1             1
  Test Linear Algebra                                           |   24            24
  Test Scatter Add                                              |   24            24
  Test Scatter Sub                                              |   24            24
  Test Scatter Max                                              |   24            24
  Test Scatter Min                                              |   24            24
  Test Scatter Mul                                              |   24            24
  Test Scatter Div                                              |    6             6
  Test Scatter Mean                                             |    6             6
  Test support of LightGraphs for GCNConv layer                 |    3             3
  Test support of LightGraphs for ChebConv layer                |    6             6
  Test support of LightGraphs for GraphConv layer               |    4             4
  Test support of LightGraphs for GATConv layer                 |    3             3
  Test support of LightGraphs for GatedGraphConv layer          |    2             2
  Test support of LightGraphs for EdgeConv layer                |    1             1
  Test support of SimpleWeightedGraphs for GCNConv layer        |    3             3
  Test support of SimpleWeightedGraphs for ChebConv layer       |    6             6
  Test support of SimpleWeightedGraphs for GraphConv layer      |    4             4
  Test support of SimpleWeightedGraphs for GATConv layer        |    3             3
  Test support of SimpleWeightedGraphs for GatedGraphConv layer |    2             2
  Test support of SimpleWeightedGraphs for EdgeConv layer       |    1             1
  Test adjlist                                                  |    4             4

GAT example not working

I changed the preprocessing part

train_X = Float32.(features) |> gpu  # dim: num_features * num_nodes
train_y = Float32.(labels) |> gpu  # dim: target_catg * num_nodes

into

train_X = Matrix{Float32}(features) |> gpu  # dim: num_features * num_nodes
train_y = Matrix{Float32}(labels) |> gpu  # dim: target_catg * num_nodes

in order to avoid some GPU errors. However, the training function is stuck at epoch 1 even after many minutes.

model(train_X) gives a 56×1 array, but train_y is a 7×2708 array. Is there a problem with the data dimensions or reshaping?

Thanks.

GATConv does not seem to have the desired behaviour

Making GATConv a MessagePassing layer is not as straightforward as one might think.

Let's look at this line: https://github.com/yuehhua/GeometricFlux.jl/blob/b072f4ed2dc7177590f386320006cd3534f53c2f/src/layers/conv.jl#L298

function message(g::GATConv, x_i::AbstractVector, x_j::AbstractVector, e_ij)
    x_i = reshape(g.weight*x_i, :, g.heads)
    x_j = reshape(g.weight*x_j, :, g.heads)
    n = size(x_i, 1)
    α = vcat(x_i, x_j+zero(x_j)) .* g.a
    α = reshape(sum(α, dims=1), g.heads)
    α = leakyrelu.(α, g.negative_slope)
    α = Flux.softmax(α) # The same line in context
    reshape(x_j .* reshape(α, 1, g.heads), n*g.heads)
end

Let's suppose we are in the case heads=1. In that case, α is an array containing a single value, which means that applying a softmax to it will always turn α into 1.
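
A quick check of that claim:

using Flux
Flux.softmax([2.3])   # [1.0] — softmax over a single value is always 1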

The question would now be: how do we do a softmax, but over every other α?
One way I have in mind would be to put that α (without applying a softmax) as the first number of the message vector. Then, we would have to dispatch on the aggregation function so we could take those alphas into account, compute their softmax, and then do the proper sum with the right coefficients.

@yuehhua Do you think it would be easy to do that? What function should I exactly dispatch?

Method error when training GCNConv with FeaturedGraph input

I made the following modifications to the GCNConv layer example to incorporate FeaturedGraph input:

train_X_fg = GeometricFlux.FeaturedGraph(adj_mat, train_X)

model = Chain(GCNConv(num_features => hidden, relu),
              Dropout(0.5),
              GCNConv(hidden => target_catg),
              x -> softmax(x.nf[])) |> gpu

train_data = [(train_X_fg, train_y)]
@epochs epochs Flux.train!(loss, ps, train_data, opt, cb=throttle(evalcb, 10))

I get the following error:

ERROR: MethodError: no method matching (::GeometricFlux.var"#adjacency_matrix#76")(::FeaturedGraph{Array{Float32,2},SparseMatrixCSC{Float32,Int64},Array{Float64,2},Array{Float64,1}})
Closest candidates are:
  adjacency_matrix(::AbstractArray{T,2} where T) at /Users/thomasfalconer/.julia/packages/GeometricFlux/k4atN/src/operations/linalg.jl:6
  adjacency_matrix(::AbstractArray{T,2} where T, ::DataType) at /Users/thomasfalconer/.julia/packages/GeometricFlux/k4atN/src/operations/linalg.jl:6
Stacktrace:
 [1] _pullback at /Users/thomasfalconer/.julia/packages/Zygote/iFibI/src/lib/grad.jl:8 [inlined]
 [2] GCNConv at /Users/thomasfalconer/.julia/packages/GeometricFlux/k4atN/src/layers/conv.jl:55 [inlined]
 [3] _pullback(::Zygote.Context, ::GCNConv{Float32,typeof(relu),FeaturedGraph{Array{Float64,2},Array{Float64,2},Array{Float64,2},Array{Float64,1}}}, ::FeaturedGraph{Array{Float32,2},SparseMatrixCSC{Float32,Int64},Array{Float64,2},Array{Float64,1}}) at /Users/thomasfalconer/.julia/packages/Zygote/iFibI/src/compiler/interface2.jl:0
 [4] applychain at /Users/thomasfalconer/.julia/packages/Flux/IjMZL/src/layers/basic.jl:36 [inlined]
 [5] _pullback(::Zygote.Context, ::typeof(Flux.applychain), ::Tuple{GCNConv{Float32,typeof(relu),FeaturedGraph{Array{Float64,2},Array{Float64,2},Array{Float64,2},Array{Float64,1}}},Dropout{Float64,Colon},GCNConv{Float32,typeof(identity),FeaturedGraph{Array{Float64,2},Array{Float64,2},Array{Float64,2},Array{Float64,1}}},var"#11#12"}, ::FeaturedGraph{Array{Float32,2},SparseMatrixCSC{Float32,Int64},Array{Float64,2},Array{Float64,1}}) at /Users/thomasfalconer/.julia/packages/Zygote/iFibI/src/compiler/interface2.jl:0
 [6] Chain at /Users/thomasfalconer/.julia/packages/Flux/IjMZL/src/layers/basic.jl:38 [inlined]
 [7] _pullback(::Zygote.Context, ::Chain{Tuple{GCNConv{Float32,typeof(relu),FeaturedGraph{Array{Float64,2},Array{Float64,2},Array{Float64,2},Array{Float64,1}}},Dropout{Float64,Colon},GCNConv{Float32,typeof(identity),FeaturedGraph{Array{Float64,2},Array{Float64,2},Array{Float64,2},Array{Float64,1}}},var"#11#12"}}, ::FeaturedGraph{Array{Float32,2},SparseMatrixCSC{Float32,Int64},Array{Float64,2},Array{Float64,1}}) at /Users/thomasfalconer/.julia/packages/Zygote/iFibI/src/compiler/interface2.jl:0
 [8] loss at ./REPL[42]:2 [inlined]
 [9] _pullback(::Zygote.Context, ::typeof(loss), ::FeaturedGraph{Array{Float32,2},SparseMatrixCSC{Float32,Int64},Array{Float64,2},Array{Float64,1}}, ::SparseMatrixCSC{Float32,Int64}) at /Users/thomasfalconer/.julia/packages/Zygote/iFibI/src/compiler/interface2.jl:0
 [10] adjoint at /Users/thomasfalconer/.julia/packages/Zygote/iFibI/src/lib/lib.jl:175 [inlined]
 [11] _pullback at /Users/thomasfalconer/.julia/packages/ZygoteRules/6nssF/src/adjoint.jl:47 [inlined]
 [12] #15 at /Users/thomasfalconer/.julia/packages/Flux/IjMZL/src/optimise/train.jl:83 [inlined]
 [13] _pullback(::Zygote.Context, ::Flux.Optimise.var"#15#21"{typeof(loss),Tuple{FeaturedGraph{Array{Float32,2},SparseMatrixCSC{Float32,Int64},Array{Float64,2},Array{Float64,1}},SparseMatrixCSC{Float32,Int64}}}) at /Users/thomasfalconer/.julia/packages/Zygote/iFibI/src/compiler/interface2.jl:0
 [14] pullback(::Function, ::Zygote.Params) at /Users/thomasfalconer/.julia/packages/Zygote/iFibI/src/compiler/interface.jl:172
 [15] gradient(::Function, ::Zygote.Params) at /Users/thomasfalconer/.julia/packages/Zygote/iFibI/src/compiler/interface.jl:53
 [16] macro expansion at /Users/thomasfalconer/.julia/packages/Flux/IjMZL/src/optimise/train.jl:82 [inlined]
 [17] macro expansion at /Users/thomasfalconer/.julia/packages/Juno/tLMZd/src/progress.jl:134 [inlined]
 [18] train!(::Function, ::Zygote.Params, ::Array{Tuple{FeaturedGraph{Array{Float32,2},SparseMatrixCSC{Float32,Int64},Array{Float64,2},Array{Float64,1}},SparseMatrixCSC{Float32,Int64}},1}, ::ADAM; cb::Flux.var"#throttled#26"{Flux.var"#throttled#22#27"{Bool,Bool,typeof(evalcb),Int64}}) at /Users/thomasfalconer/.julia/packages/Flux/IjMZL/src/optimise/train.jl:80
 [19] top-level scope at /Users/thomasfalconer/.julia/packages/Flux/IjMZL/src/optimise/train.jl:115
 [20] top-level scope at /Users/thomasfalconer/.julia/packages/Juno/tLMZd/src/progress.jl:134

I'm having trouble investigating the cause of this error, @yuehhua.

References in FeaturedGraphs

Hey there,
I'm having a problem using the GCNConv layer with e.g. SimpleGraphs.
Calling GCNConv with a SimpleGraph or SimpleWeightedGraph throws
MethodError: no method matching nv(::Base.RefValue{SimpleWeightedGraph{Int64,Float64}})

For me, this could be resolved by changing the creation of the FeaturedGraph within the GCNConv function of the respective graph file (e.g. simplegraphs.jl) from
fg = FeaturedGraph(Ref(g), Ref(nothing))
to
fg = FeaturedGraph(g, nothing), because the first line results in a reference within a reference.

Example code mistake

Hi!

I was trying to replicate your example code for the GCN and I think there's one small mistake in the accuracy function: when calculating onecold you are supposed to transpose the matrices in order to get the 7-class output.

accuracy(x, y) = mean(onecold(model(x)') .== onecold(y'))

BR.

Support for graph nets

It is great to see GeometricFlux growing fast!

Graph nets are a generic framework proposed for graph processing - see also this tutorial and Python implementation. It would be great if GeometricFlux would support this framework.

The graph network as a special case supports GCNConv but with different graphs - also discussed in #6. For example, a graph net can also operate on edge properties.

Perhaps this type of framework is too generic for what you have in mind for GeometricFlux. At least, I hope that the referred papers/tutorials help to further motivate support for differently shaped graphs, which I think is an important capability of a graph processing package.

More "mutating arrays is not supported"

Great work on this package! Really like it.

While using it I ran into two cases where I got the "mutating arrays is not supported" error, both when using FeaturedGraphs. The following minimal code triggers the error:

using LightGraphs, Flux, GeometricFlux, GraphSignals

graph = SimpleGraph(10,10)
node_features = zeros(Float32, 4, 10)

featured_graph = GeometricFlux.FeaturedGraph(graph, node_features)

model = Chain(
  GCNConv(4=>4, relu),
  node_feature,
  Dense(4, 1)
)

grad = Flux.gradient(params(model)) do
  sum(model(featured_graph))
end

The first error is caused by the code here and can be solved by changing the creation of the FeaturedGraph to the code below. Maybe there should be an earlier check to ensure the types are the same (with a more meaningful error message)?

adj_matrix = Matrix{Float32}(adjacency_matrix(graph))
featured_graph = GeometricFlux.FeaturedGraph(adj_matrix, node_features)

The second error is caused on this line and can be solved by disabling the cache on the GCNConv layer. It seems Zygote is complaining about the g.fg.graph = A part in the code, which is skipped when the cache is disabled. I'm not even sure caching like this is possible in code that will be differentiated.

Full working code:

using LightGraphs, Flux, GeometricFlux, GraphSignals

graph = SimpleGraph(10,10)
node_features = zeros(Float32, 4, 10)

adj_matrix = Matrix{Float32}(adjacency_matrix(graph))     # ADDED
featured_graph = GeometricFlux.FeaturedGraph(adj_matrix, node_features)     # CHANGED

model = Chain(
  GCNConv(4=>4, relu, cache=false),     # CHANGED
  node_feature,
  Dense(4, 1)
)

grad = Flux.gradient(params(model)) do
  sum(model(featured_graph))
end

package versions used:

  [587475ba] Flux v0.11.1
  [7e08b658] GeometricFlux v0.7.1 `https://github.com/yuehhua/GeometricFlux.jl.git#master`
  [3ebe565e] GraphSignals v0.1.1
  [093fc24a] LightGraphs v1.3.3
  [e88e6eb3] Zygote v0.5.8

LightGraphs dependency warning

I'm using GeometricFlux and building my graphs using SimpleWeightedGraph. However, it seems that to get some of the functions I need I also have to import LightGraphs itself (for example, the add_edge! function). Then, when I import GeometricFlux I get this warning:

┌ Warning: Package GeometricFlux does not have LightGraphs in its dependencies:
│ - If you have GeometricFlux checked out for development and have
│   added LightGraphs as a dependency but haven't updated your primary
│   environment's manifest file, try `Pkg.resolve()`.
│ - Otherwise you may need to report an issue with GeometricFlux
└ Loading LightGraphs into GeometricFlux from project dependency, future warnings for GeometricFlux are suppressed.

I brought this up on the #graphs channel in the Slack and Dhairya suggested I report it here.

GPU performance of scatter functions

I was considering using scatter_add! to perform some aggregation operations on large vectors. However, running on the GPU resulted in significantly slower execution rather than the expected speedup.

Ex:

using CUDAnative
using CuArrays
using GeometricFlux

nbins = 20
hist = zeros(Float32, 1, nbins)
δ = rand(Float32, 1, 10_000_000)
idx = rand(1:nbins, 10_000_000)

hist_gpu = CuArray(hist)
δ_gpu = CuArray(δ)
idx_gpu = CuArray(idx)

@time scatter_add!(hist, δ, idx)
1.426842 seconds (20.00 M allocations: 457.771 MiB, 5.32% gc time)

CuArrays.@time scatter_add!(hist_gpu, δ_gpu, idx_gpu)
5.482533 seconds (39 CPU allocations: 1.156 KiB)

Is this expected behavior?

I don't know if you have had the occasion to compare with the performance of the PyTorch Geometric scatter implementation: https://github.com/rusty1s/pytorch_scatter/. I ran into issues with the installation, so I couldn't run a benchmark.

Question: Is differentiation w.r.t graph supported?

For a project involving sensitivity analysis, I'd like to take the gradient of a GNN w.r.t to its inputs. Is this currently supported in GeometricFlux? If not, what would be blockers to add it? (I assume it'd have to do with the interaction of LightGraphs, Zygote and any sparsity tricks used?)
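
For concreteness, a rough sketch of what I mean, reusing the GCNConv(adjacency, in=>out) constructor from the other examples in this thread (whether this actually differentiates is exactly the question):

using GeometricFlux, Flux, Zygote

A = Float32[0 1 1; 1 0 1; 1 1 0]   # dense adjacency of a 3-node graph
X = rand(Float32, 4, 3)            # 4 features per node
layer = GCNConv(A, 4=>2)

# Gradient of a scalar output w.r.t. the node features (the "inputs").
dX = Zygote.gradient(x -> sum(layer(x)), X)[1]
# Gradient w.r.t. the graph structure itself (A, or a LightGraphs graph) is the part
# I'm less sure about, given sparsity tricks and the LightGraphs/Zygote interaction.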

Support for TopK Unpool

First of all, thank you so much for the great work!

I am wondering if there is support for the "gUnpool" layer mentioned in this paper Graph U-Nets. Thank you!

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

Support for sparse array computations

So far, layers accept sparse arrays but transform them into dense arrays for the subsequent computation, which is computationally inefficient when the matrices are truly sparse.
There are two main sparse array types to support: SparseMatrixCSC and CUSPARSE. In general, common computations are supported by default, but some computations are still not fully supported or not well-behaved, operations between the two different kinds of arrays produce inconsistent array types, and Zygote support is not available for sparse arrays. So the following work is needed:

  • check common matrix operations with SparseMatrixCSC (see the sketch after this list)
  • check common matrix operations with CUSPARSE
  • check that matrix operations mixing SparseMatrixCSC and CUSPARSE produce consistent array types
  • gradients for common matrix operations with SparseMatrixCSC
  • gradients for common matrix operations with CUSPARSE
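
As a starting point for the first item, a minimal sketch of this kind of check (plain SparseArrays, nothing GeometricFlux-specific; the operations listed are only examples):

using SparseArrays, LinearAlgebra

A = sprand(Float32, 100, 100, 0.05)
B = sprand(Float32, 100, 100, 0.05)

@assert A * B isa SparseMatrixCSC{Float32}    # matrix product stays sparse
@assert A + I isa SparseMatrixCSC{Float32}    # adding self-loops stays sparse
@assert 2f0 * A isa SparseMatrixCSC{Float32}  # scaling stays sparse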

Edge features in GCNConv layer

The FeaturedGraph type permits edge features as well as node features; however, I'm not too sure how these are then incorporated in the GCNConv layer. Also, what is the recommended structure of the edge features? I'm assuming a similar structure to a weighted adjacency matrix, but I'm wondering whether this is limited to a 1-dimensional feature for each edge, or whether a 3-dimensional tensor/channel structure is permitted for multiple edge features.

function (g::GCNConv)(fg::FeaturedGraph)
    X = node_feature(fg)
    A = adjacency_matrix(fg)
    g.fg isa NullGraph || (g.fg.graph[] = A)
    L = normalized_laplacian(A, eltype(X); selfloop=true)
    X_ = g.σ.(g.weight * X * L .+ g.bias)
    FeaturedGraph(A, X_)
end

Cannot give transposed features to GCNConv

When trying to transpose a matrix to then give it as features to a layer like GCNConv, the convert function triggers an error.

Here is the code:

using GeometricFlux

adj =  [0. 1. 0. 1.;
        1. 0. 1. 0.;
        0. 1. 0. 1.;
        1. 0. 1. 0.]

gcn = GCNConv(adj, 3=>5)
X = rand(4, 3)
Y = gcn(transpose(X))

And the error is:

ERROR: LoadError: MethodError: Cannot `convert` an object of type Array{Float64,2} to an object of type LinearAlgebra.Transpose{Float64,Array{Float64,2}}
Closest candidates are:
  convert(::Type{LinearAlgebra.Transpose{T,S}}, ::LinearAlgebra.Transpose) where {T, S} at D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.5\LinearAlgebra\src\adjtrans.jl:199
  convert(::Type{T}, ::T) where T<:AbstractArray at abstractarray.jl:14
  convert(::Type{T}, ::LinearAlgebra.Factorization) where T<:AbstractArray at D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.5\LinearAlgebra\src\factorization.jl:55
  ...
Stacktrace:
 [1] (::GCNConv{Float32,typeof(identity),GraphSignals.FeaturedGraph{Array{Float64,2},FillArrays.Fill{Float64,2,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}}},FillArrays.Fill{Float64,2,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}}},FillArrays.Fill{Float64,1,Tuple{Base.OneTo{Int64}}}}})(::Array{Float64,2}, ::LinearAlgebra.Transpose{Float64,Array{Float64,2}}) at C:\Users\ilanc\Dropbox\Stage-MTL\julia\GeometricFlux.jl\src\layers\conv.jl:47
 [2] (::GCNConv{Float32,typeof(identity),GraphSignals.FeaturedGraph{Array{Float64,2},FillArrays.Fill{Float64,2,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}}},FillArrays.Fill{Float64,2,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}}},FillArrays.Fill{Float64,1,Tuple{Base.OneTo{Int64}}}}})(::LinearAlgebra.Transpose{Float64,Array{Float64,2}}) at C:\Users\ilanc\Dropbox\Stage-MTL\julia\GeometricFlux.jl\src\layers\conv.jl:54
 [3] top-level scope at C:\Users\ilanc\Dropbox\Stage-MTL\julia\GeometricFlux.jl\gcn_transpose.jl:10
 [4] include(::String) at .\client.jl:457
 [5] top-level scope at REPL[2]:1
in expression starting at C:\Users\ilanc\Dropbox\Stage-MTL\julia\GeometricFlux.jl\gcn_transpose.jl:10

Commenting out the convert line works fine but creates another error on the CUDA side. Maybe there is a better way to deal with that other error? I'm not sure converting like that is very idiomatic.

Subgraph update

It would be great to be able to build training data from partial graphs of a reference graph: the training set may contain several samples, each trained with a different partial graph of the reference.
I will implement a mask in FeaturedGraph. Whenever node_feature or edge_feature is accessed, one will get the masked graph as a partial graph for training. The mask is turned off by default.
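
A rough sketch of the masking idea, with hypothetical struct and function names (this is not the actual FeaturedGraph implementation, just an illustration of the intent):

# Hypothetical sketch only — these names do not exist in GeometricFlux/GraphSignals.
struct MaskedFeaturedGraph{G,N}
    graph::G
    nf::N                      # node features, one column per node
    node_mask::Vector{Bool}    # default: all true, i.e. mask turned off
end

# Accessing the features returns only the unmasked nodes, i.e. the partial graph.
masked_node_feature(fg::MaskedFeaturedGraph) = fg.nf[:, fg.node_mask]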

Increase test cases

This package lacks test cases. More test cases are needed to ensure these APIs are functional and well-behaved.

Inappropriate promotion in GAT layer

The negative_slope argument is causing undesired promotion. This is easy enough to work around for now, but the default value should at least be 0.2f0 instead of 0.2, since the default datatype is Float32.
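
A quick illustration of the kind of promotion a Float64 default causes (plain broadcasting, independent of the GAT layer itself):

x = rand(Float32, 4)
eltype(0.2 .* x)     # Float64 — the Float64 literal promotes the whole computation
eltype(0.2f0 .* x)   # Float32 — a Float32 literal keeps everything in Float32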

GATConv aggregates by default

GATConv seems to aggregate the features by default.

So if I give an F×N matrix to a GATConv, the output is an F' vector (where N is the number of nodes in the graph, and F, F' are the dimensions of the features).

Why is that? Is it on purpose?

Because it makes it impossible to have a chain of GATConv for example.
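
A minimal sketch of what I mean, using the same constructor style as the other GAT issues here (the observed behaviour is as described above):

using GeometricFlux, LightGraphs

g = LightGraphs.complete_graph(5)
gat = GATConv(g, 3=>8)
X = rand(3, 5)        # F×N node features
size(gat(X))          # expected (8, 5), but per this issue the output is a length-8 vector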

Provide ready-to-use datasets

  • Citation network datasets
    • Cora
    • CiteSeer
    • PubMed
  • Protein-protein interaction networks
  • Reddit dataset
  • QM7b dataset
  • Relational entities networks
    • AIFB
    • MUTAG
    • BGS
    • AM

Suggestions are welcome.

Sound support of variable graphs

There are more and more applications of GNNs. In training, people need models to be trained on distinct graphs, so variable graphs are needed to customize the training process.
So far, I have implemented FeaturedGraph to equip features with a graph structure, so one can input features together with a graph. However, it is implemented experimentally to support GCNConv. To support all layers, there is more work to do.
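
A minimal sketch of the intended usage, following the FeaturedGraph and graph-free GCNConv constructors that appear in other issues here:

using GeometricFlux, Flux

A = Float32[0 1 1; 1 0 1; 1 1 0]   # adjacency of one particular graph
X = rand(Float32, 4, 3)            # 4 features per node

fg = FeaturedGraph(A, X)           # the graph travels with the features
layer = GCNConv(4=>2, relu)        # layer defined without a fixed graph
layer(fg)                          # so the same layer can be reused on different graphs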

GCNConv example fails

I tried to run the GCN example, but it seems to fail on both CPU and GPU (with different errors).

Error on GPU

type CuArray has no field f

Stacktrace:
 [1] getproperty(::CuArray{Bool,2,Nothing}, ::Symbol) at ./Base.jl:33
 [2] adjoint at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/lib/lib.jl:204 [inlined]
 [3] _pullback(::Zygote.Context, ::typeof(ZygoteRules.literal_getproperty), ::CuArray{Bool,2,Nothing}, ::Val{:f}) at /home/avik-pal/.julia/packages/ZygoteRules/6nssF/src/adjoint.jl:47
 [4] #_mapreduce#27 at /home/avik-pal/.julia/packages/GPUArrays/JqOUg/src/host/mapreduce.jl:49 [inlined]
 [5] _pullback(::Zygote.Context, ::GPUArrays.var"##_mapreduce#27", ::Colon, ::Nothing, ::typeof(GPUArrays._mapreduce), ::typeof(==), ::typeof(&), ::CuArray{Float32,2,Nothing}, ::LinearAlgebra.Adjoint{Float32,CuArray{Float32,2,Nothing}}) at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/compiler/interface2.jl:0
 [6] adjoint at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/lib/lib.jl:179 [inlined]
 [7] _pullback at /home/avik-pal/.julia/packages/ZygoteRules/6nssF/src/adjoint.jl:47 [inlined]
 [8] _pullback(::Zygote.Context, ::GPUArrays.var"#_mapreduce##kw", ::NamedTuple{(:dims, :init),Tuple{Colon,Nothing}}, ::typeof(GPUArrays._mapreduce), ::typeof(==), ::typeof(&), ::CuArray{Float32,2,Nothing}, ::LinearAlgebra.Adjoint{Float32,CuArray{Float32,2,Nothing}}) at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/compiler/interface2.jl:0
 [9] adjoint at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/lib/lib.jl:179 [inlined]
 [10] _pullback at /home/avik-pal/.julia/packages/ZygoteRules/6nssF/src/adjoint.jl:47 [inlined]
 [11] #mapreduce#25 at /home/avik-pal/.julia/packages/GPUArrays/JqOUg/src/host/mapreduce.jl:28 [inlined]
 [12] adjoint at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/lib/lib.jl:179 [inlined]
 [13] _pullback at /home/avik-pal/.julia/packages/ZygoteRules/6nssF/src/adjoint.jl:47 [inlined]
 [14] mapreduce at /home/avik-pal/.julia/packages/GPUArrays/JqOUg/src/host/mapreduce.jl:28 [inlined]
 [15] ishermitian at /home/avik-pal/.julia/packages/GPUArrays/JqOUg/src/host/mapreduce.jl:86 [inlined]
 [16] issymmetric at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.5/LinearAlgebra/src/generic.jl:1157 [inlined]
 [17] #degrees#73 at /home/avik-pal/.julia/packages/GeometricFlux/0yTVp/src/operations/linalg.jl:35 [inlined]
 [18] _pullback(::Zygote.Context, ::GeometricFlux.var"##degrees#73", ::Symbol, ::typeof(GeometricFlux.degrees), ::CuArray{Float32,2,Nothing}, ::Type{Float32}) at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/compiler/interface2.jl:0 (repeats 2 times)
 [19] #inv_sqrt_degree_matrix#75 at /home/avik-pal/.julia/packages/GeometricFlux/0yTVp/src/operations/linalg.jl:92 [inlined]
 [20] _pullback(::Zygote.Context, ::GeometricFlux.var"##inv_sqrt_degree_matrix#75", ::Symbol, ::typeof(GeometricFlux.inv_sqrt_degree_matrix), ::CuArray{Float32,2,Nothing}, ::Type{Float32}) at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/compiler/interface2.jl:0 (repeats 2 times)
 [21] #normalized_laplacian#77 at /home/avik-pal/.julia/packages/GeometricFlux/0yTVp/src/operations/linalg.jl:122 [inlined]
 [22] _pullback(::Zygote.Context, ::GeometricFlux.var"##normalized_laplacian#77", ::Bool, ::typeof(normalized_laplacian), ::CuArray{Float32,2,Nothing}, ::Type{Float32}) at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/compiler/interface2.jl:0 (repeats 2 times)
 [23] #normalized_laplacian#85 at /home/avik-pal/.julia/packages/GeometricFlux/0yTVp/src/graph/featuredgraphs.jl:68 [inlined]
 [24] _pullback(::Zygote.Context, ::GeometricFlux.var"##normalized_laplacian#85", ::Bool, ::typeof(normalized_laplacian), ::FeaturedGraph{CuArray{Float32,2,Nothing},Nothing}, ::Type{Float32}) at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/compiler/interface2.jl:0 (repeats 2 times)
 [25] GCNConv at /home/avik-pal/.julia/packages/GeometricFlux/0yTVp/src/layers/conv.jl:48 [inlined]
 [26] _pullback(::Zygote.Context, ::GCNConv{Float32,typeof(relu),FeaturedGraph{CuArray{Float32,2,Nothing},Nothing}}, ::CuArray{Float32,2,Nothing}) at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/compiler/interface2.jl:0
 [27] applychain at /home/avik-pal/.julia/packages/Flux/Fj3bt/src/layers/basic.jl:36 [inlined]
 [28] _pullback(::Zygote.Context, ::typeof(Flux.applychain), ::Tuple{GCNConv{Float32,typeof(relu),FeaturedGraph{CuArray{Float32,2,Nothing},Nothing}},Dropout{Float64,Colon},GCNConv{Float32,typeof(identity),FeaturedGraph{CuArray{Float32,2,Nothing},Nothing}},typeof(softmax)}, ::CuArray{Float32,2,Nothing}) at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/compiler/interface2.jl:0
 [29] Chain at /home/avik-pal/.julia/packages/Flux/Fj3bt/src/layers/basic.jl:38 [inlined]
 [30] _pullback(::Zygote.Context, ::Chain{Tuple{GCNConv{Float32,typeof(relu),FeaturedGraph{CuArray{Float32,2,Nothing},Nothing}},Dropout{Float64,Colon},GCNConv{Float32,typeof(identity),FeaturedGraph{CuArray{Float32,2,Nothing},Nothing}},typeof(softmax)}}, ::CuArray{Float32,2,Nothing}) at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/compiler/interface2.jl:0
 [31] loss at ./In[4]:20 [inlined]
 [32] _pullback(::Zygote.Context, ::typeof(loss), ::CuArray{Float32,2,Nothing}, ::CuArray{Float32,2,Nothing}) at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/compiler/interface2.jl:0
 [33] adjoint at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/lib/lib.jl:179 [inlined]
 [34] _pullback at /home/avik-pal/.julia/packages/ZygoteRules/6nssF/src/adjoint.jl:47 [inlined]
 [35] #17 at /home/avik-pal/.julia/packages/Flux/Fj3bt/src/optimise/train.jl:89 [inlined]
 [36] _pullback(::Zygote.Context, ::Flux.Optimise.var"#17#25"{typeof(loss),Tuple{CuArray{Float32,2,Nothing},CuArray{Float32,2,Nothing}}}) at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/compiler/interface2.jl:0
 [37] pullback(::Function, ::Zygote.Params) at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/compiler/interface.jl:174
 [38] gradient(::Function, ::Zygote.Params) at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/compiler/interface.jl:54
 [39] macro expansion at /home/avik-pal/.julia/packages/Flux/Fj3bt/src/optimise/train.jl:88 [inlined]
 [40] macro expansion at /home/avik-pal/.julia/packages/Juno/tLMZd/src/progress.jl:134 [inlined]
 [41] train!(::typeof(loss), ::Zygote.Params, ::Array{Tuple{CuArray{Float32,2,Nothing},CuArray{Float32,2,Nothing}},1}, ::ADAM; cb::Flux.var"#throttled#20"{Flux.var"#throttled#16#21"{Bool,Bool,typeof(evalcb),Int64}}) at /home/avik-pal/.julia/packages/Flux/Fj3bt/src/optimise/train.jl:81
 [42] top-level scope at ./In[4]:30

Error on CPU

Need an adjoint for constructor SparseMatrixCSC{Float32,Int64}. Gradient is of type Array{Float32,2}

Stacktrace:
 [1] error(::String) at ./error.jl:33
 [2] (::Zygote.Jnew{SparseMatrixCSC{Float32,Int64},Nothing,false})(::Array{Float32,2}) at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/lib/lib.jl:306
 [3] (::Zygote.var"#392#back#193"{Zygote.Jnew{SparseMatrixCSC{Float32,Int64},Nothing,false}})(::Array{Float32,2}) at /home/avik-pal/.julia/packages/ZygoteRules/6nssF/src/adjoint.jl:49
 [4] SparseMatrixCSC at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.5/SparseArrays/src/sparsematrix.jl:32 [inlined]
 [5] (::typeof((SparseMatrixCSC{Float32,Int64})))(::Array{Float32,2}) at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/compiler/interface2.jl:0
 [6] SparseMatrixCSC at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.5/SparseArrays/src/sparsematrix.jl:45 [inlined]
 [7] (::typeof((SparseMatrixCSC)))(::Array{Float32,2}) at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/compiler/interface2.jl:0
 [8] SparseMatrixCSC at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.5/SparseArrays/src/sparsematrix.jl:580 [inlined]
 [9] (::typeof((SparseMatrixCSC{Float32,Int64})))(::Array{Float32,2}) at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/compiler/interface2.jl:0
 [10] convert at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.5/SparseArrays/src/sparsematrix.jl:603 [inlined]
 [11] (::typeof((convert)))(::Array{Float32,2}) at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/compiler/interface2.jl:0
 [12] GCNConv at /home/avik-pal/.julia/packages/GeometricFlux/0yTVp/src/layers/conv.jl:49 [inlined]
 [13] (::typeof((λ)))(::Array{Float32,2}) at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/compiler/interface2.jl:0
 [14] applychain at /home/avik-pal/.julia/packages/Flux/Fj3bt/src/layers/basic.jl:36 [inlined]
 [15] (::typeof((applychain)))(::Array{Float32,2}) at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/compiler/interface2.jl:0
 [16] Chain at /home/avik-pal/.julia/packages/Flux/Fj3bt/src/layers/basic.jl:38 [inlined]
 [17] (::typeof((λ)))(::Array{Float32,2}) at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/compiler/interface2.jl:0
 [18] loss at ./In[5]:20 [inlined]
 [19] (::typeof((loss)))(::Float32) at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/compiler/interface2.jl:0
 [20] #174 at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/lib/lib.jl:182 [inlined]
 [21] #358#back at /home/avik-pal/.julia/packages/ZygoteRules/6nssF/src/adjoint.jl:49 [inlined]
 [22] #17 at /home/avik-pal/.julia/packages/Flux/Fj3bt/src/optimise/train.jl:89 [inlined]
 [23] (::typeof((λ)))(::Float32) at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/compiler/interface2.jl:0
 [24] (::Zygote.var"#49#50"{Zygote.Params,Zygote.Context,typeof((λ))})(::Float32) at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/compiler/interface.jl:179
 [25] gradient(::Function, ::Zygote.Params) at /home/avik-pal/.julia/packages/Zygote/YeCEW/src/compiler/interface.jl:55
 [26] macro expansion at /home/avik-pal/.julia/packages/Flux/Fj3bt/src/optimise/train.jl:88 [inlined]
 [27] macro expansion at /home/avik-pal/.julia/packages/Juno/tLMZd/src/progress.jl:134 [inlined]
 [28] train!(::typeof(loss), ::Zygote.Params, ::Array{Tuple{SparseMatrixCSC{Float32,Int64},SparseMatrixCSC{Float32,Int64}},1}, ::ADAM; cb::Flux.var"#throttled#20"{Flux.var"#throttled#16#21"{Bool,Bool,typeof(evalcb),Int64}}) at /home/avik-pal/.julia/packages/Flux/Fj3bt/src/optimise/train.jl:81
 [29] top-level scope at ./In[5]:30

GAT example fails

Hi!

Thank you for your GeometricFlux.

I was trying your example code for GAT but it failed.

ERROR: LoadError: MethodError: no method matching adjlist(::SimpleGraph{Int64})
Closest candidates are:
adjlist(!Matched::T) where T<:MessagePassing at /home/xh/.julia/packages/GeometricFlux/u3Wfx/src/layers/msgpass.jl:5
adjlist(!Matched::T) where T<:GeometricFlux.Meta at /home/xh/.julia/packages/GeometricFlux/u3Wfx/src/layers/meta.jl:3

Could you give me some hints? Thank you!

(@v1.4) pkg> status
Status ~/.julia/environments/v1.4/Project.toml
[3a865a2d] CuArrays v2.2.0
[864edb3b] DataStructures v0.17.17
[5789e2e9] FileIO v1.3.0
[587475ba] Flux v0.10.4
[7e08b658] GeometricFlux v0.5.0
[a2cc645c] GraphPlot v0.4.2
[033835bb] JLD2 v0.1.13
[093fc24a] LightGraphs v1.3.3
[2f01184e] SparseArrays
[10745b16] Statistics

XH

Conflict between Flux and ScatterNNlib inside GeometricFlux

When trying to test GeometricFlux on my computer using Julia 1.6, I get a conflict for the use of gather between Flux and ScatterNNlib. The beginning of the output is given at the end, but the following line is the only interesting one:
WARNING: both ScatterNNlib and Flux export "gather"; uses of it in module GeometricFlux must be qualified.

Maybe this is related to yuehhua/ScatterNNlib.jl#32 but I am not sure. I have the same kind of errors in my own package that uses GeometricFlux. Any idea about that error?

(GeometricFlux) pkg> test
     Testing GeometricFlux
┌ Warning: Could not use exact versions of packages in manifest, re-resolving
└ @ Pkg.Operations /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Pkg/src/Operations.jl:1524
      Status `/tmp/jl_x6mnS2/Project.toml`
  [052768ef] CUDA v2.6.2
  [864edb3b] DataStructures v0.18.9
  [1a297f60] FillArrays v0.11.7
  [587475ba] Flux v0.11.6
  [7e08b658] GeometricFlux v0.7.5 `/mnt/c/Users/ilanc/Dropbox/Stage-MTL/julia/GeometricFlux.jl`
  [21828b05] GraphMLDatasets v0.1.2
  [3ebe565e] GraphSignals v0.1.13
  [093fc24a] LightGraphs v1.3.5
  [626554b9] MetaGraphs v0.6.7
  [189a3867] Reexport v1.0.0
  [ae029012] Requires v1.1.3
  [b1168b60] ScatterNNlib v0.1.7
  [47aef6b3] SimpleWeightedGraphs v1.1.1
  [e88e6eb3] Zygote v0.6.8
  [700de1a5] ZygoteRules v0.2.1
  [37e2e46d] LinearAlgebra `@stdlib/LinearAlgebra`
  [9a3f8284] Random `@stdlib/Random`
  [2f01184e] SparseArrays `@stdlib/SparseArrays`
  [10745b16] Statistics `@stdlib/Statistics`
  [8dfed614] Test `@stdlib/Test`
      Status `/tmp/jl_x6mnS2/Manifest.toml`
  [621f4979] AbstractFFTs v1.0.1
  [1520ce14] AbstractTrees v0.3.4
  [79e6a3ab] Adapt v3.2.0
  [ec485272] ArnoldiMethod v0.1.0
  [ab4f0b2a] BFloat16s v0.1.0
  [b99e7846] BinaryProvider v0.5.10
  [a74b3585] Blosc v0.7.0
  [e1450e63] BufferedStreams v1.0.0
  [fa961155] CEnum v0.4.1
  [336ed68f] CSV v0.8.4
  [052768ef] CUDA v2.6.2
  [082447d4] ChainRules v0.7.57
  [d360d2e6] ChainRulesCore v0.9.36
  [944b1d66] CodecZlib v0.7.0
  [3da002f7] ColorTypes v0.10.12
  [5ae59095] Colors v0.12.6
  [bbf7d656] CommonSubexpressions v0.3.0
  [34da2185] Compat v3.25.0
  [8f4d0f93] Conda v1.5.1
  [9a962f9c] DataAPI v1.6.0
  [124859b0] DataDeps v0.7.7
  [864edb3b] DataStructures v0.18.9
  [e2d170a0] DataValueInterfaces v1.0.0
  [163ba53b] DiffResults v1.0.3
  [b552c78f] DiffRules v1.0.2
  [e2ba6199] ExprTools v0.1.3
  [1a297f60] FillArrays v0.11.7
  [53c48c17] FixedPointNumbers v0.8.4
  [587475ba] Flux v0.11.6
  [f6369f11] ForwardDiff v0.10.17
  [d9f16b24] Functors v0.1.0
  [0c68f7d7] GPUArrays v6.2.0
  [61eb1bfa] GPUCompiler v0.10.0
  [7e08b658] GeometricFlux v0.7.5 `/mnt/c/Users/ilanc/Dropbox/Stage-MTL/julia/GeometricFlux.jl`
  [a1251efa] GraphLaplacians v0.1.2
  [21828b05] GraphMLDatasets v0.1.2
  [3ebe565e] GraphSignals v0.1.13
  [f67ccb44] HDF5 v0.14.3
  [cd3eb016] HTTP v0.9.5
  [7869d1d1] IRTools v0.4.2
  [d25df0c9] Inflate v0.1.2
  [83e8ac13] IniFile v0.5.0
  [82899510] IteratorInterfaceExtensions v1.0.0
  [033835bb] JLD2 v0.3.3
  [692b3bcd] JLLWrappers v1.2.0
  [682c06a0] JSON v0.21.1
  [e5e0dc1b] Juno v0.8.4
  [929cbde3] LLVM v3.6.0
  [093fc24a] LightGraphs v1.3.5
  [23992714] MAT v0.9.2
  [1914dd2f] MacroTools v0.5.6
  [739be429] MbedTLS v1.0.3
  [e89f7d12] Media v0.5.0
  [c03570c3] Memoize v0.4.4
  [626554b9] MetaGraphs v0.6.7
  [e1d29d7a] Missings v0.4.5
  [872c559c] NNlib v0.7.17
  [77ba4419] NaNMath v0.3.5
  [bac558e1] OrderedCollections v1.4.0
  [69de0a69] Parsers v1.1.0
  [2dfb63ee] PooledArrays v1.2.1
  [438e738f] PyCall v1.92.2
  [189a3867] Reexport v1.0.0
  [ae029012] Requires v1.1.3
  [b1168b60] ScatterNNlib v0.1.7
  [6c6a2e73] Scratch v1.0.3
  [91c51154] SentinelArrays v1.2.16
  [699a6c99] SimpleTraits v0.9.3
  [47aef6b3] SimpleWeightedGraphs v1.1.1
  [a2af1166] SortingAlgorithms v0.3.1
  [276daf66] SpecialFunctions v1.3.0
  [90137ffa] StaticArrays v1.1.0
  [2913bbd2] StatsBase v0.33.4
  [3783bdb8] TableTraits v1.0.0
  [bd369af6] Tables v1.4.1
  [a759f4b9] TimerOutputs v0.5.8
  [3bb67fe8] TranscodingStreams v0.9.5
  [5c2747f8] URIs v1.2.0
  [81def892] VersionParsing v1.2.0
  [a5390f91] ZipFile v0.9.3
  [e88e6eb3] Zygote v0.6.8
  [700de1a5] ZygoteRules v0.2.1
  [0b7ba130] Blosc_jll v1.14.3+1
  [0234f1f7] HDF5_jll v1.12.0+1
  [5ced341a] Lz4_jll v1.9.2+2
  [458c3c95] OpenSSL_jll v1.1.1+6
  [efe28fd5] OpenSpecFun_jll v0.5.3+4
  [3161d3a3] Zstd_jll v1.4.8+0
  [0dad84c5] ArgTools `@stdlib/ArgTools`
  [56f22d72] Artifacts `@stdlib/Artifacts`
  [2a0f44e3] Base64 `@stdlib/Base64`
  [ade2ca70] Dates `@stdlib/Dates`
  [8bb1440f] DelimitedFiles `@stdlib/DelimitedFiles`
  [8ba89e20] Distributed `@stdlib/Distributed`
  [f43a241f] Downloads `@stdlib/Downloads`
  [9fa8497b] Future `@stdlib/Future`
  [b77e0a4c] InteractiveUtils `@stdlib/InteractiveUtils`
  [4af54fe1] LazyArtifacts `@stdlib/LazyArtifacts`
  [b27032c2] LibCURL `@stdlib/LibCURL`
  [76f85450] LibGit2 `@stdlib/LibGit2`
  [8f399da3] Libdl `@stdlib/Libdl`
  [37e2e46d] LinearAlgebra `@stdlib/LinearAlgebra`
  [56ddb016] Logging `@stdlib/Logging`
  [d6f4376e] Markdown `@stdlib/Markdown`
  [a63ad114] Mmap `@stdlib/Mmap`
  [ca575930] NetworkOptions `@stdlib/NetworkOptions`
  [44cfe95a] Pkg `@stdlib/Pkg`
  [de0858da] Printf `@stdlib/Printf`
  [9abbd945] Profile `@stdlib/Profile`
  [3fa0cd96] REPL `@stdlib/REPL`
  [9a3f8284] Random `@stdlib/Random`
  [ea8e919c] SHA `@stdlib/SHA`
  [9e88b42a] Serialization `@stdlib/Serialization`
  [1a1011a3] SharedArrays `@stdlib/SharedArrays`
  [6462fe0b] Sockets `@stdlib/Sockets`
  [2f01184e] SparseArrays `@stdlib/SparseArrays`
  [10745b16] Statistics `@stdlib/Statistics`
  [fa267f1f] TOML `@stdlib/TOML`
  [a4e569a6] Tar `@stdlib/Tar`
  [8dfed614] Test `@stdlib/Test`
  [cf7118a7] UUIDs `@stdlib/UUIDs`
  [4ec0a83e] Unicode `@stdlib/Unicode`
  [e66e0078] CompilerSupportLibraries_jll `@stdlib/CompilerSupportLibraries_jll`
  [deac9b47] LibCURL_jll `@stdlib/LibCURL_jll`
  [29816b5a] LibSSH2_jll `@stdlib/LibSSH2_jll`
  [c8ffd9c3] MbedTLS_jll `@stdlib/MbedTLS_jll`
  [14a3606d] MozillaCACerts_jll `@stdlib/MozillaCACerts_jll`
  [83775a58] Zlib_jll `@stdlib/Zlib_jll`
  [8e850ede] nghttp2_jll `@stdlib/nghttp2_jll`
  [3f19e933] p7zip_jll `@stdlib/p7zip_jll`
     Testing Running tests...
┌ Warning: CUDA unavailable, not loading CUDA support
└ @ ScatterNNlib ~/.julia/packages/ScatterNNlib/AFYe7/src/ScatterNNlib.jl:47
┌ Warning: CUDA unavailable, not testing GPU support
└ @ Main /mnt/c/Users/ilanc/Dropbox/Stage-MTL/julia/GeometricFlux.jl/test/runtests.jl:41
WARNING: both ScatterNNlib and Flux export "gather"; uses of it in module GeometricFlux must be qualified
WARNING: both LightGraphs and GraphSignals export "laplacian_matrix"; uses of it in module GeometricFlux must be qualified
layer with graph: Error During Test at /mnt/c/Users/ilanc/Dropbox/Stage-MTL/julia/GeometricFlux.jl/test/layers/conv.jl:127
  Got exception outside of a @test
  UndefVarError: gather not defined
  Stacktrace:
    [1] (::GeometricFlux.var"#1#2"{Vector{Int64}})(Δ::Matrix{Float32})
      @ GeometricFlux /mnt/c/Users/ilanc/Dropbox/Stage-MTL/julia/GeometricFlux.jl/src/pool.jl:72
    [2] (::GeometricFlux.var"#4#back#3"{GeometricFlux.var"#1#2"{Vector{Int64}}})(Δ::Matrix{Float32})
      @ GeometricFlux ~/.julia/packages/ZygoteRules/OjfTt/src/adjoint.jl:59
    [3] Pullback
      @ /mnt/c/Users/ilanc/Dropbox/Stage-MTL/julia/GeometricFlux.jl/src/pool.jl:137 [inlined]
    [4] (::typeof(∂(pool)))(Δ::Matrix{Float32})
      @ Zygote ~/.julia/packages/Zygote/lwmfx/src/compiler/interface2.jl:0
    [5] Pullback
      @ /mnt/c/Users/ilanc/Dropbox/Stage-MTL/julia/GeometricFlux.jl/src/layers/msgpass.jl:49 [inlined]
    [6] (::typeof(∂(aggregate_neighbors)))(Δ::Matrix{Float32})
      @ Zygote ~/.julia/packages/Zygote/lwmfx/src/compiler/interface2.jl:0
    [7] Pullback
      @ /mnt/c/Users/ilanc/Dropbox/Stage-MTL/julia/GeometricFlux.jl/src/layers/gn.jl:64 [inlined]
    [8] (::typeof(∂(propagate)))(Δ::Tuple{Nothing, Fill{Float32, 2, Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}}, Nothing})
      @ Zygote ~/.julia/packages/Zygote/lwmfx/src/compiler/interface2.jl:0
    [9] Pullback
      @ /mnt/c/Users/ilanc/Dropbox/Stage-MTL/julia/GeometricFlux.jl/src/layers/msgpass.jl:58 [inlined]
   [10] (::typeof(∂(propagate)))(Δ::Tuple{Nothing, Fill{Float32, 2, Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}}})
      @ Zygote ~/.julia/packages/Zygote/lwmfx/src/compiler/interface2.jl:0
   [11] Pullback
      @ /mnt/c/Users/ilanc/Dropbox/Stage-MTL/julia/GeometricFlux.jl/src/layers/conv.jl:237 [inlined]
   [12] (::typeof(∂(λ)))(Δ::Fill{Float32, 2, Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}})
      @ Zygote ~/.julia/packages/Zygote/lwmfx/src/compiler/interface2.jl:0
   [13] Pullback
      @ /mnt/c/Users/ilanc/Dropbox/Stage-MTL/julia/GeometricFlux.jl/test/layers/conv.jl:141 [inlined]
   [14] (::typeof(∂(λ)))(Δ::Float32)
      @ Zygote ~/.julia/packages/Zygote/lwmfx/src/compiler/interface2.jl:0
   [15] (::Zygote.var"#41#42"{typeof(∂(λ))})(Δ::Float32)
      @ Zygote ~/.julia/packages/Zygote/lwmfx/src/compiler/interface.jl:41
   [16] gradient(f::Function, args::Matrix{Float32})
      @ Zygote ~/.julia/packages/Zygote/lwmfx/src/compiler/interface.jl:59
   [17] macro expansion
      @ /mnt/c/Users/ilanc/Dropbox/Stage-MTL/julia/GeometricFlux.jl/test/layers/conv.jl:141 [inlined]
   [18] macro expansion
      @ /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Test/src/Test.jl:1151 [inlined]
   [19] macro expansion
      @ /mnt/c/Users/ilanc/Dropbox/Stage-MTL/julia/GeometricFlux.jl/test/layers/conv.jl:128 [inlined]
   [20] macro expansion
      @ /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Test/src/Test.jl:1151 [inlined]
   [21] macro expansion
      @ /mnt/c/Users/ilanc/Dropbox/Stage-MTL/julia/GeometricFlux.jl/test/layers/conv.jl:125 [inlined]
   [22] macro expansion
      @ /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Test/src/Test.jl:1151 [inlined]
   [23] top-level scope
      @ /mnt/c/Users/ilanc/Dropbox/Stage-MTL/julia/GeometricFlux.jl/test/layers/conv.jl:18
   [24] include(fname::String)
      @ Base.MainInclude ./client.jl:444
   [25] macro expansion
      @ /mnt/c/Users/ilanc/Dropbox/Stage-MTL/julia/GeometricFlux.jl/test/runtests.jl:46 [inlined]
   [26] macro expansion
      @ /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.6/Test/src/Test.jl:1151 [inlined]
   [27] top-level scope
      @ /mnt/c/Users/ilanc/Dropbox/Stage-MTL/julia/GeometricFlux.jl/test/runtests.jl:45
   [28] include(fname::String)
      @ Base.MainInclude ./client.jl:444
   [29] top-level scope
      @ none:6
   [30] eval
      @ ./boot.jl:360 [inlined]
   [31] exec_options(opts::Base.JLOptions)
      @ Base ./client.jl:261
   [32] _start()
      @ Base ./client.jl:485

missing adjoints for pooling functions with specified graph size

The adjoint is not implemented for the pooling functions when a custom c is given as input. In this case, Zygote produces nothing for the gradient.

Usually we don't specify c, but it becomes an issue if the last node in the array is disconnected: specifying c makes sure the dimensions are consistent.

example:

using GeometricFlux, Zygote

cluster = [1; 2; 2; 1]
z = ones(4, 4)

# without specifying the graph size c: works
test(z) = sum( sumpool(cluster, z) )
test'(z)    # returns a gradient

# with c specified: the adjoint is missing
test2(z) = sum( sumpool(cluster, z, 2) )
test2'(z)   # returns nothing

proposed fix:

I wonder if adding the following would fix it?

@adjoint sumpool(cluster::AbstractArray{Int}, X::AbstractArray{T}, c::Int) where {T<:Real} =
    sumpool(cluster, X, c), Δ -> (nothing, gather(zero(Δ)+Δ, cluster), nothing)

MetaGraphs.jl integration

So far, MetaGraph and related objects cannot be put into the model directly.
We should provide a way to let the user specify which node/edge features are to be used in the model.
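
A hypothetical sketch of what such an interface could look like (the keyword-based FeaturedGraph constructor in the last line does not exist; it only illustrates the idea of letting the user name the property to use):

using MetaGraphs, LightGraphs

mg = MetaGraph(SimpleGraph(3))
add_edge!(mg, 1, 2); add_edge!(mg, 2, 3)
for v in vertices(mg)
    set_prop!(mg, v, :x, rand(Float32, 4))   # node features stored as a vertex property
end

# Hypothetical API: tell GeometricFlux which property holds the node features.
# fg = FeaturedGraph(mg; node_feature = :x)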

GCNConv and SimpleWeightedGraphs

Having trouble using a SimpleWeightedGraph as input for the GCNConv layer.
My graph is an undirected simple Int64 graph with Float64 weights, and calling adjacency_matrix() on it works fine.
But during training I get the error

InexactError: Int64(-1.0384249352874253)
Stacktrace:
[1] Int64 at .\float.jl:710 [inlined]
[2] _map_zeropres!(::Type{Int64}, ::SparseMatrixCSC{Int64,Int64}, ::SparseMatrixCSC{Float64,Int64}) at D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.5\SparseArrays\src\higherorderfns.jl:246
[3] _noshapecheck_map(::Type{Int64}, ::SparseMatrixCSC{Float64,Int64}) at D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.5\SparseArrays\src\higherorderfns.jl:165
[4] copy at D:\buildbot\worker\package_win64\build\usr\share\julia\stdlib\v1.5\SparseArrays\src\higherorderfns.jl:169 [inlined]
[5] materialize(::Base.Broadcast.Broadcasted{SparseArrays.HigherOrderFns.SparseMatStyle,Nothing,Type{Int64},Tuple{SparseMatrixCSC{Float64,Int64}}}) at .\broadcast.jl:837
[6] adjacency_matrix(::SimpleWeightedGraph{Int64,Float64}, ::DataType; dir::Symbol) at C:\Users\rcars.julia\packages\SimpleWeightedGraphs\IDzOp\src\overrides.jl:32
[7] adjacency_matrix at C:\Users\rcars.julia\packages\SimpleWeightedGraphs\IDzOp\src\overrides.jl:31 [inlined]
[8] adjacency_matrix at C:\Users\rcars.julia\packages\GraphSignals\dB9OV\src\linalg.jl:7 [inlined] (repeats 2 times)
[9] _pullback at C:\Users\rcars.julia\packages\Zygote\chgvX\src\lib\grad.jl:8 [inlined]

and I don't see why it tries to change the values to Int64.
Does anyone have an idea?
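
In case it helps, a possible workaround consistent with the other examples in this thread is to convert the weighted adjacency matrix to a dense Float32 matrix up front and pass that to GCNConv instead of the graph itself (untested for this particular case; g, num_features and hidden stand for the variables in the training script):

using SimpleWeightedGraphs, GeometricFlux
using LightGraphs: adjacency_matrix

adj_mat = Matrix{Float32}(adjacency_matrix(g))   # densify, keeping the Float64 weights as Float32
layer = GCNConv(adj_mat, num_features=>hidden, relu)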
