sciml / DeepEquilibriumNetworks.jl

Implicit Layer Machine Learning via Deep Equilibrium Networks, O(1) backpropagation with accelerated convergence.

Home Page: https://docs.sciml.ai/DeepEquilibriumNetworks/stable/

License: MIT License

Julia 100.00%
deep-equilibrium-models deep-learning implicit-deep-learning julia machine-learning neural-networks nonlinear-equations nonlinear-solve

deepequilibriumnetworks.jl's People

Contributors

00krishna, anasabdelr, arnostrouwen, avik-pal, chrisrackauckas, dependabot[bot], devmotion, ranocha, thazhemadam

deepequilibriumnetworks.jl's Issues

TypeError in DEQ example: non-boolean (Nothing) used in boolean context

Please help me understand the cause of the error when running the DEQ example from the Julia blog post on Deep Equilibrium Models.

This code:

using Flux
using DiffEqSensitivity
using SteadyStateDiffEq
using OrdinaryDiffEq
#using CUDA
using Plots
using LinearAlgebra
#CUDA.allowscalar(false)

struct DeepEquilibriumNetwork{M,P,RE,A,K}
    model::M
    p::P
    re::RE
    args::A
    kwargs::K
end

Flux.@functor DeepEquilibriumNetwork

function DeepEquilibriumNetwork(model, args...; kwargs...)
    p, re = Flux.destructure(model)
    return DeepEquilibriumNetwork(model, p, re, args, kwargs)
end

Flux.trainable(deq::DeepEquilibriumNetwork) = (deq.p,)

function (deq::DeepEquilibriumNetwork)(x::AbstractArray{T}, p = deq.p) where {T}
    z = deq.re(p)(x)
    # Solving the equation f(u) - u = du = 0
    # The key part of DEQ is similar to that of NeuralODEs
    dudt(u, _p, t) = deq.re(_p)(u .+ x) .- u
    ssprob = SteadyStateProblem(ODEProblem(dudt, z, (zero(T), one(T)), p))
    return solve(ssprob, deq.args...; u0 = z, deq.kwargs...).u
end

ann = Chain(Dense(1, 5), Dense(5, 1))

deq = DeepEquilibriumNetwork(ann, DynamicSS(Tsit5(), abstol = 1.0f-2, reltol = 1.0f-2))

# Let's run a DEQ model on linear regression for y = 2x
X = reshape(Float32[1; 2; 3; 4; 5; 6; 7; 8; 9; 10], 1, :) 
Y = 2 .* X
opt = ADAM(0.05)

loss(x, y) = sum(abs2, y .- deq(x))

Flux.train!(loss, Flux.params(deq), ((X, Y),), opt)

throws the following error on the line Flux.train!(loss, Flux.params(deq), ((X, Y),), opt):

ERROR: LoadError: TypeError: non-boolean (Nothing) used in boolean context
Stacktrace:
  [1] _concrete_solve_adjoint(::SteadyStateProblem{Matrix{Float32}, false, Vector{Float32}, ODEFunction{false, var"#dudt#10"{DeepEquilibriumNetwork{Chain{Tuple{Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}, Vector{Float32}, Flux.var"#60#62"{Chain{Tuple{Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}, Tuple{DynamicSS{Tsit5{typeof(OrdinaryDiffEq.trivial_limiter!), typeof(OrdinaryDiffEq.trivial_limiter!), Static.False}, Float32, Float32, Float64}}, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, Matrix{Float32}}, UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing}, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, ::DynamicSS{Tsit5{typeof(OrdinaryDiffEq.trivial_limiter!), typeof(OrdinaryDiffEq.trivial_limiter!), Static.False}, Float32, Float32, Float64}, ::Nothing, ::Matrix{Float32}, ::Vector{Float32}; kwargs::Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
    @ DiffEqSensitivity C:\Users\D\.julia\packages\DiffEqSensitivity\Kg0cc\src\concrete_solve.jl:92
  [2] _concrete_solve_adjoint
    @ C:\Users\D\.julia\packages\DiffEqSensitivity\Kg0cc\src\concrete_solve.jl:72 [inlined]
  [3] #_solve_adjoint#56
    @ C:\Users\D\.julia\packages\DiffEqBase\0PaUK\src\solve.jl:347 [inlined]
  [4] _solve_adjoint
    @ C:\Users\D\.julia\packages\DiffEqBase\0PaUK\src\solve.jl:322 [inlined]
  [5] #rrule#54
    @ C:\Users\D\.julia\packages\DiffEqBase\0PaUK\src\solve.jl:310 [inlined]
  [6] rrule
    @ C:\Users\D\.julia\packages\DiffEqBase\0PaUK\src\solve.jl:310 [inlined]
  [7] rrule
    @ C:\Users\D\.julia\packages\ChainRulesCore\oBjCg\src\rules.jl:134 [inlined]
  [8] chain_rrule
    @ C:\Users\D\.julia\packages\Zygote\umM0L\src\compiler\chainrules.jl:216 [inlined]
  [9] macro expansion
    @ C:\Users\D\.julia\packages\Zygote\umM0L\src\compiler\interface2.jl:0 [inlined]
 [10] _pullback(::Zygote.Context, ::typeof(DiffEqBase.solve_up), ::SteadyStateProblem{Matrix{Float32}, false, Vector{Float32}, ODEFunction{false, var"#dudt#10"{DeepEquilibriumNetwork{Chain{Tuple{Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}, Vector{Float32}, Flux.var"#60#62"{Chain{Tuple{Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}, Tuple{DynamicSS{Tsit5{typeof(OrdinaryDiffEq.trivial_limiter!), typeof(OrdinaryDiffEq.trivial_limiter!), Static.False}, Float32, Float32, Float64}}, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, Matrix{Float32}}, UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing}, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, ::Nothing, ::Matrix{Float32}, ::Vector{Float32}, ::DynamicSS{Tsit5{typeof(OrdinaryDiffEq.trivial_limiter!), typeof(OrdinaryDiffEq.trivial_limiter!), Static.False}, Float32, Float32, Float64})
    @ Zygote C:\Users\D\.julia\packages\Zygote\umM0L\src\compiler\interface2.jl:9
 [11] _apply
    @ .\boot.jl:804 [inlined]
 [12] adjoint
    @ C:\Users\D\.julia\packages\Zygote\umM0L\src\lib\lib.jl:200 [inlined]
 [13] _pullback
    @ C:\Users\D\.julia\packages\ZygoteRules\AIbCs\src\adjoint.jl:65 [inlined]
 [14] _pullback
    @ C:\Users\D\.julia\packages\DiffEqBase\0PaUK\src\solve.jl:73 [inlined]
 [15] _pullback(::Zygote.Context, ::DiffEqBase.var"##solve#38", ::Nothing, ::Matrix{Float32}, ::Nothing, ::Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}, ::typeof(solve), ::SteadyStateProblem{Matrix{Float32}, false, Vector{Float32}, ODEFunction{false, var"#dudt#10"{DeepEquilibriumNetwork{Chain{Tuple{Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}, Vector{Float32}, Flux.var"#60#62"{Chain{Tuple{Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}, Tuple{DynamicSS{Tsit5{typeof(OrdinaryDiffEq.trivial_limiter!), typeof(OrdinaryDiffEq.trivial_limiter!), Static.False}, Float32, Float32, Float64}}, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, Matrix{Float32}}, UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing}, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, 
NamedTuple{(), Tuple{}}}}, ::DynamicSS{Tsit5{typeof(OrdinaryDiffEq.trivial_limiter!), typeof(OrdinaryDiffEq.trivial_limiter!), Static.False}, Float32, Float32, Float64})
    @ Zygote C:\Users\D\.julia\packages\Zygote\umM0L\src\compiler\interface2.jl:0
 [16] _apply(::Function, ::Vararg{Any, N} where N)
    @ Core .\boot.jl:804
 [17] adjoint
    @ C:\Users\D\.julia\packages\Zygote\umM0L\src\lib\lib.jl:200 [inlined]
 [18] _pullback
    @ C:\Users\D\.julia\packages\ZygoteRules\AIbCs\src\adjoint.jl:65 [inlined]
 [19] _pullback
    @ C:\Users\D\.julia\packages\DiffEqBase\0PaUK\src\solve.jl:68 [inlined]
 [20] _pullback(::Zygote.Context, ::CommonSolve.var"#solve##kw", ::NamedTuple{(:u0,), Tuple{Matrix{Float32}}}, ::typeof(solve), ::SteadyStateProblem{Matrix{Float32}, false, Vector{Float32}, ODEFunction{false, var"#dudt#10"{DeepEquilibriumNetwork{Chain{Tuple{Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}, Vector{Float32}, Flux.var"#60#62"{Chain{Tuple{Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}, Tuple{DynamicSS{Tsit5{typeof(OrdinaryDiffEq.trivial_limiter!), typeof(OrdinaryDiffEq.trivial_limiter!), Static.False}, Float32, Float32, Float64}}, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, Matrix{Float32}}, UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing}, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, ::DynamicSS{Tsit5{typeof(OrdinaryDiffEq.trivial_limiter!), typeof(OrdinaryDiffEq.trivial_limiter!), Static.False}, Float32, Float32, Float64})
    @ Zygote C:\Users\D\.julia\packages\Zygote\umM0L\src\compiler\interface2.jl:0
 [21] _apply(::Function, ::Vararg{Any, N} where N)
    @ Core .\boot.jl:804
 [22] adjoint
    @ C:\Users\D\.julia\packages\Zygote\umM0L\src\lib\lib.jl:200 [inlined]
 [23] _pullback
    @ C:\Users\D\.julia\packages\ZygoteRules\AIbCs\src\adjoint.jl:65 [inlined]
 [24] _pullback
    @ c:\Users\D\w7d\test_flux_e[ample.jl:33 [inlined]
 [25] _pullback(::Zygote.Context, ::DeepEquilibriumNetwork{Chain{Tuple{Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}, Vector{Float32}, Flux.var"#60#62"{Chain{Tuple{Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}, Tuple{DynamicSS{Tsit5{typeof(OrdinaryDiffEq.trivial_limiter!), typeof(OrdinaryDiffEq.trivial_limiter!), Static.False}, Float32, Float32, Float64}}, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, ::Matrix{Float32}, 
::Vector{Float32})
    @ Zygote C:\Users\D\.julia\packages\Zygote\umM0L\src\compiler\interface2.jl:0
 [26] _pullback
    @ c:\Users\D\w7d\test_flux_e[ample.jl:28 [inlined]
 [27] _pullback(ctx::Zygote.Context, f::DeepEquilibriumNetwork{Chain{Tuple{Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}, Vector{Float32}, Flux.var"#60#62"{Chain{Tuple{Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}}}}, Tuple{DynamicSS{Tsit5{typeof(OrdinaryDiffEq.trivial_limiter!), typeof(OrdinaryDiffEq.trivial_limiter!), Static.False}, Float32, Float32, Float64}}, Base.Iterators.Pairs{Union{}, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}}, args::Matrix{Float32})
    @ Zygote C:\Users\D\.julia\packages\Zygote\umM0L\src\compiler\interface2.jl:0
 [28] _pullback
    @ c:\Users\D\w7d\test_flux_e[ample.jl:45 [inlined]
 [29] _pullback(::Zygote.Context, ::typeof(loss), ::Matrix{Float32}, ::Matrix{Float32})
    @ Zygote C:\Users\D\.julia\packages\Zygote\umM0L\src\compiler\interface2.jl:0
 [30] _apply
    @ .\boot.jl:804 [inlined]
 [31] adjoint
    @ C:\Users\D\.julia\packages\Zygote\umM0L\src\lib\lib.jl:200 [inlined]
 [32] _pullback
    @ C:\Users\D\.julia\packages\ZygoteRules\AIbCs\src\adjoint.jl:65 [inlined]
 [33] _pullback
    @ C:\Users\D\.julia\packages\Flux\BPPNj\src\optimise\train.jl:105 [inlined]
 [34] _pullback(::Zygote.Context, ::Flux.Optimise.var"#39#45"{typeof(loss), Tuple{Matrix{Float32}, Matrix{Float32}}})   
    @ Zygote C:\Users\D\.julia\packages\Zygote\umM0L\src\compiler\interface2.jl:0
 [35] pullback(f::Function, ps::Zygote.Params)
    @ Zygote C:\Users\D\.julia\packages\Zygote\umM0L\src\compiler\interface.jl:352
 [36] gradient(f::Function, args::Zygote.Params)
    @ Zygote C:\Users\D\.julia\packages\Zygote\umM0L\src\compiler\interface.jl:75
 [37] macro expansion
    @ C:\Users\D\.julia\packages\Flux\BPPNj\src\optimise\train.jl:104 [inlined]
 [38] macro expansion
    @ C:\Users\D\.julia\packages\Juno\n6wyj\src\progress.jl:134 [inlined]
 [39] train!(loss::Function, ps::Zygote.Params, data::Tuple{Tuple{Matrix{Float32}, Matrix{Float32}}}, opt::ADAM; cb::Flux.Optimise.var"#40#46")
    @ Flux.Optimise C:\Users\D\.julia\packages\Flux\BPPNj\src\optimise\train.jl:102
 [40] train!(loss::Function, ps::Zygote.Params, data::Tuple{Tuple{Matrix{Float32}, Matrix{Float32}}}, opt::ADAM)
    @ Flux.Optimise C:\Users\D\.julia\packages\Flux\BPPNj\src\optimise\train.jl:100
 [41] top-level scope
    @ c:\Users\D\w7d\test_flux_e[ample.jl:47
in expression starting at c:\Users\D\w7d\test_flux_e[ample.jl:47

Operating System: Windows 10
Julia 1.6.5
VScode 1.63.2
Output of Pkg.status:

  [052768ef] CUDA v3.6.4
  [31a5f54b] Debugger v0.7.0
  [2b5f629d] DiffEqBase v6.81.0
  [41bf760c] DiffEqSensitivity v6.68.0
  [587475ba] Flux v0.12.8
  [5903a43b] Infiltrator v1.1.2
  [98e50ef6] JuliaFormatter v0.21.2
  [aa1ae85d] JuliaInterpreter v0.9.1
  [eb30cadb] MLDatasets v0.5.14
  [2774e3e8] NLsolve v4.5.1
  [1dea7af3] OrdinaryDiffEq v6.4.2
  [91a5bcdd] Plots v1.25.6
  [ee283ea6] Rebugger v0.2.2
  [9672c7b4] SteadyStateDiffEq v1.6.6
  [c3572dad] Sundials v4.9.1
  [e88e6eb3] Zygote v0.6.33
  [37e2e46d] LinearAlgebra
  [8dfed614] Test
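For context on what the DEQ layer in the code above is solving: the steady-state problem dudt(u) = f(u + x) - u = 0 is just a fixed-point equation in u. A minimal sketch in base Julia, with a hypothetical contractive linear layer W standing in for the Flux model, and independent of the AD machinery that fails here:

```julia
using LinearAlgebra

# Hypothetical linear "layer" W standing in for deq.re(p); the DEQ layer
# solves W*(z .+ x) .- z == 0 for the equilibrium z.
function fixed_point(W, x; iters = 200)
    z = zero(x)
    for _ in 1:iters
        z = W * (z .+ x)        # naive fixed-point iteration
    end
    return z
end

W = [0.5 0.1;
     0.0 0.4]                   # spectral radius < 1, so iteration converges
x = [1.0, 2.0]

z = fixed_point(W, x)
# For a linear layer the equilibrium has a closed form: (I - W) \ (W * x)
z_exact = (I - W) \ (W * x)
```

DynamicSS in the original code finds the same equilibrium by integrating the ODE to steady state instead of iterating; the failing part of the example is the adjoint (backpropagation) through that solve, not the solve itself.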

Quick start example fails to run when batch size increases moderately

To reproduce:

using DeepEquilibriumNetworks, Lux, Random, Zygote
# using LuxCUDA, LuxAMDGPU ## Install and Load for GPU Support

seed = 0
rng = Random.default_rng()
Random.seed!(rng, seed)
model = Chain(Dense(2 => 2),
    DeepEquilibriumNetwork(Parallel(+,
            Dense(2 => 2; use_bias=false),
            Dense(2 => 2; use_bias=false)),
        ContinuousDEQSolver(; abstol=0.1f0, reltol=0.1f0, abstol_termination=0.1f0,
            reltol_termination=0.1f0);
        save_everystep=true))

gdev = gpu_device()
cdev = cpu_device()

ps, st = Lux.setup(rng, model) |> gdev
x = rand(rng, Float32, 2, 100) |> gdev
y = rand(rng, Float32, 2, 100) |> gdev
gs = only(Zygote.gradient(p -> sum(abs2, first(first(model(x, p, st))) .- y), ps))

This gives the following error and warning on Julia 1.9 (I used a try/catch log because the original error flooded the REPL and couldn't be accessed 😅):

┌ Warning: Automatic AD choice of autojacvec failed in ODE adjoint, failing back to ODE adjoint + numerical vjp
└ @ SciMLSensitivity ~/.julia/packages/SciMLSensitivity/U8Axh/src/sensitivity_interface.jl:381
┌ Warning: AD choice of autojacvec failed in nonlinear solve adjoint
└ @ SciMLSensitivity ~/.julia/packages/SciMLSensitivity/U8Axh/src/steadystate_adjoint.jl:112
1278

and

Error encountered: MethodError: no method matching jacobian(::SciMLSensitivity.ParamGradientWrapper{ODEFunction{false, SciMLBase.FullSpecialize, DeepEquilibriumNetworks.var"#dudt#50"{Lux.Experimental.StatefulLuxLayer{Parallel{NamedTuple{(:layer_1, :layer_2), Tuple{Dense{false, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}, Dense{false, typeof(identity), typeof(glorot_uniform), typeof(zeros32)}}}, Nothing, typeof(+)}, Nothing, NamedTuple{(:layer_1, :layer_2), Tuple{NamedTuple{(), Tuple{}}, NamedTuple{(), Tuple{}}}}}}, LinearAlgebra.UniformScaling{Bool}, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, typeof(SciMLBase.DEFAULT_OBSERVED), Nothing, Nothing}, Nothing, Matrix{Float32}}, ::NamedTuple{(:ps, :x), Tuple{NamedTuple{(:layer_1, :layer_2), Tuple{NamedTuple{(:weight,), Tuple{Matrix{Float32}}}, NamedTuple{(:weight,), Tuple{Matrix{Float32}}}}}, Matrix{Float32}}}, ::SteadyStateAdjoint{0, true, Val{:central}, Bool, Nothing, NamedTuple{(), Tuple{}}})

Closest candidates are:
  jacobian(::Any, !Matched::AbstractArray{<:Number}, ::SciMLBase.AbstractOverloadingSensitivityAlgorithm)
   @ SciMLSensitivity ~/.julia/packages/SciMLSensitivity/U8Axh/src/derivative_wrappers.jl:128
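A guess at what the MethodError is saying: the closest candidate expects the parameters as an AbstractArray{<:Number}, while the Lux setup above hands the adjoint a NamedTuple of weights. A minimal base-Julia sketch of flattening such a nested parameter NamedTuple into one plain vector (roughly what a package like ComponentArrays.jl does properly), purely to illustrate the shape mismatch; the `ps` value and `flatten` helper here are illustrative, not Lux API:

```julia
# Hypothetical parameter NamedTuple shaped like the Lux `ps` above.
ps = (layer_1 = (weight = [1.0 2.0; 3.0 4.0],),
      layer_2 = (weight = [5.0 6.0; 7.0 8.0],))

# Recursively flatten nested NamedTuples of arrays into one flat vector.
flatten(x::AbstractArray) = vec(x)
flatten(x::NamedTuple) = reduce(vcat, (flatten(v) for v in values(x)))

p_flat = flatten(ps)   # 8-element Vector{Float64}, column-major per matrix
```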

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!

Separate out package code from paper code

Currently, the package has a lot of fluff that is not really needed. I will branch it out to a paper branch containing the scripts and everything needed to reproduce the results from the paper.

Stuff that needs to be removed:

  • experiments directory -- Maybe convert that into usage examples.
  • scripts directory -- Has some scripts for distributed training using MPI (will move them to FluxMPI which seems to be an appropriate place for their usage)
  • Inside src
    • solve.jl -- See #33
    • logger.jl -- Time to place it somewhere appropriate. Have been copy-pasting that file in all my projects 😓
    • datasets.jl -- Again not relevant to DEQs

Stuff to be added:

  • Some documentation would be good (even I forget how to use a few of those functions 😅)

See ap/release

Upstream DiffEq Patches

DiffEq Patches that need to go through before a proper package release:

Unbreak Nested AD

LuxDL/Lux.jl#591 + LuxDL/LuxLib.jl#53 break Zygote-over-Zygote completely for Lux, in order to bring in more performant GPU kernels (and somewhat improved memory usage on CPU).

We need to shift to the syntax described in LuxDL/Lux.jl#598, which makes nested AD extremely efficient for Lux by swapping nested reverse mode with forward-over-reverse.
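To illustrate what "forward over reverse" buys here: second-order quantities such as Hessian-vector products can be computed by pushing a forward-mode dual number through a reverse-mode gradient. A minimal base-Julia sketch, with a hand-written gradient standing in for a Zygote pullback (the Dual type and gradf are illustrative only, not Lux or Zygote API):

```julia
# Minimal forward-mode dual number: val + eps*ϵ with ϵ^2 = 0.
struct Dual
    val::Float64
    eps::Float64   # tangent component
end
Base.:+(a::Dual, b::Dual) = Dual(a.val + b.val, a.eps + b.eps)
Base.:*(a::Dual, b::Dual) = Dual(a.val * b.val, a.val * b.eps + a.eps * b.val)
Base.:*(a::Real, b::Dual) = Dual(a * b.val, a * b.eps)

# Hand-written gradient of f(x, y) = x^2 * y, standing in for a
# reverse-mode pullback: ∇f = (2xy, x^2).
gradf(x, y) = (2 * x * y, x * x)

# Forward mode through the gradient with tangent v = (1, 0) yields H*v,
# i.e. the first column of the Hessian H = [2y 2x; 2x 0].
gx, gy = gradf(Dual(1.5, 1.0), Dual(2.0, 0.0))
(gx.eps, gy.eps)   # == (4.0, 3.0) at (x, y) = (1.5, 2.0)
```

The same composition with forward mode on the outside avoids differentiating a reverse pass with another reverse pass, which is the fragile (and now broken) combination referenced above.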
