stor-i / GaussianProcesses.jl
A Julia package for Gaussian Processes
Home Page: https://stor-i.github.io/GaussianProcesses.jl/latest/
License: Other
Is it possible to draw random samples from a GP? At the very least, sampling from the prior would be very easy to implement. This would be useful for me and others as well, I'd expect. I think the conventional API would be along the lines of:
gp = GP(x,y,m,k)
rand(gp,10) # draw 10 samples from posterior
I'm also interested in sampling from the prior distribution. This is perhaps a separate issue, but I would prefer that there be a separate function fit!(gp::GP, x::Array, y::Array) to fit the model. This way I could draw samples from the prior like so:
gp = GP(m,k) # create prior
rand(gp,10) # draw 10 samples from prior
fit!(gp,x,y) # fit to data
rand(gp,10) # draw 10 samples from posterior
Edit/Update: I dug into the code a little bit and found the full_cov option, which suggests a wrapper along the lines of:
using Distributions
import Base: rand   # needed so this extends Base.rand instead of shadowing it

function rand(gp::GP, n::Int; x_ax=collect(linspace(0, 1)))
    mu, sigma = predict(gp, x_ax; full_cov=true)
    return rand(MvNormal(mu, sigma), n)
end
I think this sort of works? I tend to get a Base.LinAlg.PosDefException unless I add a small identity matrix to sigma. Also, I have had trouble using gp = GP() to sample from the prior. Any help on this is greatly appreciated!
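For anyone hitting the same PosDefException: numerically the kernel matrix is often only semi-definite, and adding a small "jitter" to the diagonal before the Cholesky factorisation is the standard fix. A dependency-free sketch of prior sampling (the SE kernel helper and all names here are my own, not the package API):

```julia
using LinearAlgebra, Random

# Squared-exponential kernel (illustrative stand-in for the package's SE)
se(x, y; ℓ=1.0, σ2=1.0) = σ2 * exp(-(x - y)^2 / (2ℓ^2))

# Draw n samples from the GP prior at the points xs.
function sample_prior(xs, n; jitter=1e-8)
    K = [se(xi, xj) for xi in xs, xj in xs]
    K += jitter * I                     # jitter keeps the Cholesky numerically PD
    L = cholesky(Symmetric(K)).L
    return L * randn(length(xs), n)     # each column is one prior draw
end

xs = range(0, 1, length=50)
draws = sample_prior(xs, 10)            # 50×10 matrix of prior samples
```

Without the jitter line, cholesky on this 50-point SE matrix fails for exactly the reason reported above.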
I'm going to start a new branch for updating the syntax for julia v0.6. I'm going to assume we're not bothered with retaining v0.5 syntax compatibility, as that's a lot more work. Is that ok?
d, n = 3, 10
x = 2π * rand(d, n)
y = Float64[sum(sin.(x[:,i])) for i in 1:n]/d
mZero = MeanZero()
kern = SE(0.0,0.0)
gp = GPE(x, y, mZero, kern)
y_pred, sig = predict_y(gp, x) # no problem
y_pred, sig = predict_y(gp, x; full_cov=true) # MethodError
MethodError: no method matching +(::PDMats.PDMat{Float64,Array{Float64,2}}, ::Float64)
Closest candidates are:
+(::Any, ::Any, ::Any, ::Any...) at operators.jl:424
+(::Bool, ::T<:AbstractFloat) where T<:AbstractFloat at bool.jl:96
+(::Float64, ::Float64) at float.jl:375
...
Stacktrace:
[1] #predict_y#79(::Bool, ::Function, ::GaussianProcesses.GPE, ::Array{Float64,2}) at /Users/imolk/Library/Julia/packages/v0.6/GaussianProcesses/src/GPE.jl:221
[2] (::GaussianProcesses.#kw##predict_y)(::Array{Any,1}, ::GaussianProcesses.#predict_y, ::GaussianProcesses.GPE, ::Array{Float64,2}) at ./<missing>:0
using GaussianProcesses
k2=RQIso(0.0,0.0,0.0)
cov(k2, 3.0)
@code_warntype cov(k2, 3.0)
produces
Variables:
#self#::Base.#cov
rq::GaussianProcesses.RQIso
r::Float64
Body:
begin
SSAValue(2) = (Core.getfield)(rq::GaussianProcesses.RQIso,:σ2)::Float64
SSAValue(1) = (Base.box)(Base.Float64,(Base.add_float)(1.0,(Base.box)(Base.Float64,(Base.div_float)(r::Float64,(Base.box)(Base.Float64,(Base.mul_float)((Base.box)(Base.Float64,(Base.mul_float)(2.0,(Core.getfield)(rq::GaussianProcesses.RQIso,:α)::Float64)),(Core.getfield)(rq::GaussianProcesses.RQIso,:ℓ2)::Float64))))))
SSAValue(0) = (Base.box)(Base.Float64,(Base.neg_float)((Core.getfield)(rq::GaussianProcesses.RQIso,:α)::Float64))
return (SSAValue(2) * $(Expr(:invoke, LambdaInfo for ^(::Float64, ::Float64), :(Base.^), SSAValue(1), SSAValue(0))))::ANY
end::ANY
I think this is actually a julia bug, but I'm not sure how to reproduce it...
The current interface for setting priors is restricted to simple models. This needs to be expanded to be applicable to the fixed kernels.
The best way forward is probably to implement parameters as objects, making it easier to assign priors to the parameters.
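A rough sketch of what "parameters as objects" could look like: each parameter stores its value together with an optional prior, so assigning priors to fixed or composite kernels becomes uniform. Everything below is hypothetical naming, not the package's actual design:

```julia
# Hypothetical sketch: a parameter object that carries its own prior.
struct NormalPrior
    μ::Float64
    σ::Float64
end
logpdf(p::NormalPrior, x) = -0.5*((x - p.μ)/p.σ)^2 - log(p.σ) - 0.5*log(2π)

mutable struct Param
    value::Float64
    prior::Union{NormalPrior,Nothing}   # nothing ⇒ improper flat prior
end
Param(v::Float64) = Param(v, nothing)

# Prior contribution to the log posterior; flat priors contribute zero.
log_prior(p::Param) = p.prior === nothing ? 0.0 : logpdf(p.prior, p.value)

ℓ = Param(0.0, NormalPrior(0.0, 1.0))   # lengthscale with a prior attached
σ = Param(0.5)                          # parameter left prior-free
log_prior(ℓ) + log_prior(σ)             # total log-prior
```

The optimizer or sampler then only needs to iterate over Param objects, regardless of which kernel owns them.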
Great package. At the moment only 2-dim regression for X = 2×n Array{Float64,2} is supported, right? Do you guys have any plans to extend this to higher dimensions?
I would have thought that the time to draw samples grows linearly with the number of input points (although I know next to nothing about GPs). It seems to grow roughly quadratically (using the setup from the README section "Sampling from the GP"):
julia> @time prior=rand(gp, linspace(-5,5,10), 10);
0.000228 seconds (348 allocations: 10.891 KB)
julia> @time prior=rand(gp, linspace(-5,5,100), 10);
0.002107 seconds (3.27 k allocations: 621.484 KB)
julia> @time prior=rand(gp, linspace(-5,5,1000), 10);
0.112869 seconds (33.06 k allocations: 54.125 MB, 8.04% gc time)
julia> @time prior=rand(gp, linspace(-5,5,10000), 10);
15.259130 seconds (339.22 k allocations: 5.223 GB, 2.18% gc time)
Is this correct?
How does GaussianProcesses.jl compare - feature-wise - with GPy?
Is it production-ready?
Thanks!
The current master produces an error when the module is loaded with Julia v0.4, as Docile is not imported and so @document is not available.
I'm wondering if the functionality in the noise branch (possibly extended to handle a full error covariance matrix) is slated to be merged into master?
Now that we can have non-Gaussian data (e.g. Vector{Bool} in the classification case), it would be useful if we could introduce an ArgumentError when initialising the GP() function.
optimize! relies heavily on these functions. Efficiency could be improved, for instance, through greater use of in-place ("bang") functions.
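As an illustration of the kind of gain meant here (a generic sketch, not the package's actual code): preallocating a buffer and updating it with in-place mul! avoids one matrix allocation per optimizer iteration.

```julia
using LinearAlgebra

# In-place ("bang") update: writes A*A' into a preallocated buffer K
# instead of allocating a fresh matrix on every call.
function update_cov!(K::Matrix, A::Matrix)
    mul!(K, A, A')
    return K
end

A = rand(100, 100)
K = similar(A)
update_cov!(K, A)   # K is reused across iterations; no new allocation
```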
In GP.jl, the "_predict" function computes Sigma, then does Sigma = max(Sigma,0) before returning it.
Unless this isn't the standard element-wise "max" function, this is obviously not what you want, since covariance matrices can have negative off-diagonal elements.
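If the intent was only to guard against tiny negative variances produced by round-off, the clamp should apply to the diagonal alone. A sketch of what I assume was meant (the helper name is mine):

```julia
# Clamp only the variances (diagonal entries); off-diagonal covariances
# may legitimately be negative and are left untouched.
function clamp_variances!(Σ::Matrix{Float64})
    for i in 1:size(Σ, 1)
        Σ[i, i] = max(Σ[i, i], 0.0)
    end
    return Σ
end

Σ = [ 1.0   -0.5 ;
     -0.5  -1e-16]    # round-off made one variance slightly negative
clamp_variances!(Σ)    # Σ[2,2] becomes 0.0; Σ[1,2] stays -0.5
```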
Hello,
I tried some performance tests for the predict function in GP.jl. For my test data, the default case was 3x-5x slower than the full_cov case.
Maybe someone can replace the original version (upper) with the faster one (lower code example).
Additionally, I suggest adding some performance tests to ensure that future changes do not degrade performance.
(Note: as always for performance tests, Julia needs some "warmup" calls before the timings stabilise.)
Original "slow" version:
function predict(gp::GP, x::Matrix{Float64}; full_cov::Bool=false)
    size(x,1) == gp.dim || throw(ArgumentError("Gaussian Process object and input observations do not have consistent dimensions"))
    if full_cov
        return _predict(gp, x)
    else
        ## calculate prediction for each point independently
        mu = Array(Float64, size(x,2))
        Sigma = similar(mu)
        for k in 1:size(x,2)
            out = _predict(gp, x[:,k:k])
            mu[k] = out[1][1]
            Sigma[k] = out[2][1]
        end
        return mu, Sigma
    end
end
Faster one:
function predict(gp::GP, x::Matrix{Float64}; full_cov::Bool=false)
    size(x,1) == gp.dim || throw(ArgumentError("Gaussian Process object and input observations do not have consistent dimensions"))
    mu, Sigma = _predict(gp, x)
    if !full_cov # use reduced predictive covariance
        Sigma = diag(Sigma)
    end
    return mu, Sigma
end
Please correct me if any information is lost in my suggestion.
Current unit tests are set up for the exact GP constructor and should be extended to GPMC as well.
Really too simple to justify a PR. Line 21 of mat52_ard.jl should be:
21 mat.ℓ2 = exp(hyp[1:(mat.dim-1)])
rather than
21 mat.ℓ = exp(hyp[1:(mat.dim-1)])
Error with kernel_data_key
Great package! Especially having the gradient ready for Optim is very neat.
It may be a good idea to base the computations on the PDMats package. The latter point was actually my motivation to include sparse matrices in PDMats.
PS: MLKernels might also be interesting to look at.
Use of this package massively increases performance of crossKern and grad_stack.
At the moment, parameter inference works on the assumption that the observations are Gaussian. This provides a closed form expression for the marginal likelihood used by the optimizer. This needs to be extended to non-Gaussian likelihoods where inference is performed using, for example, Laplace approximations or expectation-propagation.
In the 1-D regression example, logObsNoise is presented as "log standard deviation of noise", but in the code this input is never multiplied by 2 when using exp. Also, the printout of a GP seems to indicate that logObsNoise should be a variance. I think it's inconsistent as is.
As a side note, should the field name be different? Maybe logNoiseVar? It's longer and uglier, but explicit.
The Gaussian Processes fitting sometimes fails due to a PosDef error, especially when observation noise is low. This also leads to the occasional failure of the optimize! function. The error can be avoided by increasing the noise parameter of the GP, or by fixing the noise parameter in optimize!.
An automated way of dealing with this will be required; for instance, the Rasmussen package for Matlab automatically adds small amounts of noise to the covariance matrix when the noise parameter is very small.
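One possible shape for the automated fix (a sketch; the function name and defaults are mine, and GPML's actual scheme may differ in detail): retry the Cholesky factorisation with geometrically growing jitter until it succeeds.

```julia
using LinearAlgebra

# Retry Cholesky with a growing diagonal jitter until it succeeds.
function chol_with_jitter(K::AbstractMatrix; jitter=1e-10, maxtries=10)
    for _ in 1:maxtries
        F = cholesky(Symmetric(K + jitter*I); check=false)
        issuccess(F) && return F, jitter
        jitter *= 10                # grow geometrically and try again
    end
    error("matrix not positive definite even with jitter = $jitter")
end

K = ones(3, 3)              # rank-1, not positive definite
F, j = chol_with_jitter(K)  # succeeds once the jitter is large enough
```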
Commit 1edae62 might be the cause of an ERROR: MethodError: no method matching set_params!(::GaussianProcesses.GP, ::Float64; noise=true, mean=true, kern=true) when running test_optim.jl. Could anybody check if this is the case?
Thanks!
When optimising the likelihood, the distance matrix is calculated again for each change of parameters. This could be avoided by storing the distance in an object used by GP.
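The caching could look roughly like this (all names hypothetical): pairwise squared distances are computed once per dataset and reused for every hyperparameter proposal, since for isotropic kernels only the lengthscale and variance change between optimizer steps.

```julia
# Cache squared distances once; kernel re-evaluation for new
# hyperparameters then involves no distance computation.
struct CachedDistances
    sqdist::Matrix{Float64}
end

function cache_distances(x::Matrix{Float64})   # x is dim × n
    n = size(x, 2)
    D = [sum(abs2, x[:, i] .- x[:, j]) for i in 1:n, j in 1:n]
    return CachedDistances(D)
end

# SE covariance from the cached distances (ℓ2, σ2 vary per optimizer step)
se_cov(c::CachedDistances, ℓ2, σ2) = σ2 .* exp.(-c.sqdist ./ (2ℓ2))

x = rand(2, 5)
c = cache_distances(x)
K1 = se_cov(c, 1.0, 1.0)   # first hyperparameter setting
K2 = se_cov(c, 0.5, 2.0)   # reuses the same cached distances
```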
I am getting the following error and am not sure what is wrong. I am calling optimize!() with a valid gp. I wasn't having this problem in Julia 0.4, and just switched to Julia 0.5.
ERROR: LoadError: MethodError: no method matching set_params!(::GaussianProcesses.GP, ::Float64; noise=true, mean=true, kern=true)
Closest candidates are:
set_params!(::GaussianProcesses.GP, ::Array{Float64,1}; noise, mean, kern) at /home/brett/.julia/v0.5/GaussianProcesses/src/GP.jl:275
set_params!{K<:GaussianProcesses.Kernel}(::GaussianProcesses.Masked{K<:GaussianProcesses.Kernel}, ::Any) at /home/brett/.julia/v0.5/GaussianProcesses/src/kernels/masked_kernel.jl:55 got unsupported keyword arguments "noise", "mean", "kern"
in #optimize!#19(::Bool, ::Bool, ::Bool, ::Optim.ConjugateGradient{Void,Optim.##29#31,LineSearches.#hagerzhang!}, ::Array{Any,1}, ::Function, ::GaussianProcesses.GP) at /home/brett/.julia/v0.5/GaussianProcesses/src/optimize.jl:20
in optimize!(::GaussianProcesses.GP) at /home/brett/.julia/v0.5/GaussianProcesses/src/optimize.jl:17
in macro expansion; at /home/brett/GitProjects/TALAF_publications/Figures/scripts/1dBO_example.jl:130 [inlined]
in anonymous at ./<missing>:?
in include_from_node1(::String) at ./loading.jl:488
while loading /home/brett/GitProjects/TALAF_publications/Figures/scripts/1dBO_example.jl, in expression starting on line 96
At this point, it would be interesting to run some benchmarks with simple kernels and moderately large datasets to see how this package compares to others in terms of performance. Other packages of interest include:
The gradient of the log likelihood with respect to the mean parameter doesn't match its numerical approximation. In fact it seems to be the right number but the wrong sign. The gradient is computed in GP.jl as gp.dmLL[i+noise] = -dot(Mgrads[:,i], gp.alpha). I'm not sure how that's obtained, so I don't know if that minus sign should just be removed, or if the issue runs deeper.
using GaussianProcesses
# Simulate example
n = 10
x = 2π * rand(n)
μ = 0.2
y = μ + sin.(x) + 0.05*randn(n)
mConst = MeanConst(0.0) # constant mean
kern = SE(0.0,0.0)
logObsVar = -1.0
gp = GP(x,y,mConst,kern, logObsVar) # Fit the GP
GaussianProcesses.update_mll_and_dmll!(gp)
prev_mLL=gp.mLL
prev_dmLL=gp.dmLL
prev_params=GaussianProcesses.get_params(gp)
dθ=[0.0, 1.0, 0.0, 0.0]*1e-4 # increment mean parameter
GaussianProcesses.set_params!(gp, prev_params.+dθ)
GaussianProcesses.update_mll_and_dmll!(gp)
println("change in log likelihood: ", gp.mLL-prev_mLL)
println("expected change in log likelihood: ", dot(prev_dmLL, dθ))
change in log likelihood: 4.723900128666969e-5
expected change in log likelihood: -4.7253655770280115e-5
No problem for the other parameters:
prev_mLL=gp.mLL
prev_dmLL=gp.dmLL
prev_params=GaussianProcesses.get_params(gp)
dθ=[1.0, 0.0, 1.0, 1.0]*1e-4 # increment all other parameters
GaussianProcesses.set_params!(gp, prev_params.+dθ)
GaussianProcesses.update_mll_and_dmll!(gp)
println("change in log likelihood: ", gp.mLL-prev_mLL)
println("expected change in log likelihood: ", dot(prev_dmLL, dθ))
change in log likelihood: -0.0007003607397644274
expected change in log likelihood: -0.0007003395285220553
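The recipe above is a finite-difference gradient check; here is a self-contained version on a toy log-likelihood (generic code, nothing package-specific) that can be pointed at any function/analytic-gradient pair:

```julia
# Central finite differences for validating an analytic gradient.
function fd_gradient(f, θ; h=1e-6)
    g = similar(θ)
    for i in eachindex(θ)
        e = zeros(length(θ)); e[i] = h
        g[i] = (f(θ .+ e) - f(θ .- e)) / (2h)
    end
    return g
end

f(θ) = -0.5 * sum(abs2, θ)    # toy "log likelihood"
g(θ) = -θ                     # its analytic gradient

θ = [0.3, -1.2, 0.7]
maximum(abs.(fd_gradient(f, θ) .- g(θ)))   # ≈ 0 if values *and signs* agree
```

A sign error like the one reported would show up here as a discrepancy of twice the gradient's magnitude.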
I am happy to introduce GeoStats.jl, a package that contains generalizations of Gaussian processes to situations where the mean of the random field is unknown and where more general covariance metrics are needed. It includes Gaussian processes as a special case and supports many other features for geostatistical analysis.
GeoStats.jl has a lot of overlap with GaussianProcesses.jl at the implementation level, and some of you might find it useful in your work. I opened this issue to share the project with you.
I am planning to unify many other geostatistical methods in future releases; please feel free to watch the project or mention it in the README if you think it can be useful to other people in the community.
I have written a few examples to illustrate the current features:
http://nbviewer.jupyter.org/github/juliohm/GeoStats.jl/tree/master/examples/
BenchmarkLite may no longer be working.
A lot of the code for the stationary kernels could be recycled by moving this into methods for an abstract super type.
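The refactor could follow this pattern (all type and function names here are illustrative): an abstract stationary supertype defines cov in terms of distance once, and each concrete kernel supplies only its radial profile.

```julia
# Shared machinery: for stationary kernels cov depends only on r = |x - y|.
abstract type Stationary end

cov(k::Stationary, x::Real, y::Real) = profile(k, abs(x - y))  # defined once

# Concrete kernels implement only their radial profile k(r):
struct SEKernel <: Stationary
    ℓ2::Float64
    σ2::Float64
end
profile(k::SEKernel, r) = k.σ2 * exp(-r^2 / (2k.ℓ2))

struct ExpKernel <: Stationary
    ℓ::Float64
    σ2::Float64
end
profile(k::ExpKernel, r) = k.σ2 * exp(-r / k.ℓ)

cov(SEKernel(1.0, 1.0), 0.0, 0.0)    # 1.0
cov(ExpKernel(1.0, 2.0), 0.0, 1.0)   # 2exp(-1)
```

Distance precomputation and gradients with respect to r could live on the supertype in the same way.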
In the case of non-Gaussian likelihoods, we're using the mcmc function to infer the model parameters and latent function, which we can think of as our posterior. Alternatively, we could use variational inference to get an approximation to the posterior. This is done using optimisation rather than sampling, so it should be considerably faster.
We could implement the ADVI algorithm, which in our case is even simpler as our latent function and parameters are already defined on the unconstrained real space. Therefore, we'd essentially be using optimisation to find a Normal approximation to our posterior.
Is it possible to have kernels operate in different domains, as described in the GPy documentation "basic kernels, operating on different domains"?
Thanks!
The idea is to allow users to easily create new kernels with defined gradients by just specifying a covariance function. ForwardDiff might be used for this.
Is there a function for adding new observations to an existing GP?
Hi,
I'm the author of ScikitLearn.jl, a port/wrapping of the scikit-learn interface. I'd like to have an example of implementing the interface before announcing the package, and GaussianProcesses.jl is a good candidate. Since you're already storing the hyperparameters in GP, it will be straightforward. If you're interested, I will submit a PR.
For GP classification we have a probit link function. It would be nice if we could allow users to choose a different link function, e.g. logit.
There's a lot of repeated code for the sum and product kernels. These could be refashioned as subtypes of a composite kernel abstract type.
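A sketch of the composite-kernel refactor (illustrative names): SumKernel and ProdKernel share the same storage and traversal and differ only in the reduction applied.

```julia
abstract type Kernel end
abstract type CompositeKernel <: Kernel end

struct SEKern <: Kernel
    ℓ2::Float64
end
cov(k::SEKern, x, y) = exp(-(x - y)^2 / (2k.ℓ2))

# Both composites store their parts identically; only the reduction differs.
struct SumKernel <: CompositeKernel
    parts::Vector{Kernel}
end
struct ProdKernel <: CompositeKernel
    parts::Vector{Kernel}
end
cov(k::SumKernel,  x, y) = sum(cov(p, x, y) for p in k.parts)
cov(k::ProdKernel, x, y) = prod(cov(p, x, y) for p in k.parts)

k = SumKernel([SEKern(1.0), SEKern(0.5)])
cov(k, 0.0, 0.0)   # 2.0
```

Parameter getters/setters and gradient stacking could then be written once against CompositeKernel.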
Old version:
gp.cK = PDMat(crossKern(gp.x,gp.k) + exp(gp.logNoise)*eye(gp.nobsv) + 1e-8*eye(gp.nobsv))
New version:
gp.cK = PDMat(crossKern(gp.x,gp.k) + exp(2*gp.logNoise)*eye(gp.nobsv) + 1e-8*eye(gp.nobsv))
I checked the results with some test data from the GPML Matlab toolbox; maybe someone can add some logical tests too ;-)
I am using the linear_trend.jl example:
using GaussianProcesses
using PyPlot
x=[-4.0,-3.0,-1.0,0.0,2.0];
y = 2.0x + 0.5rand(5);
xpred = collect(-5.0:0.1:5.0);
mLin = MeanLin([0.5]) # linear mean function
kern = SE(0.0,0.0) # squared exponential kernel function
gp = GP(x,y,mLin,kern) # fit the GP
This works, and the gp object is correctly returned. But when I do 'plot(gp)', I get a long error:
ERROR: PyError (:PyObject_Call) <type 'exceptions.TypeError'>
TypeError('float() argument must be a string or a number',)
File "/usr/local/lib/python2.7/site-packages/matplotlib/pyplot.py", line 3161, in plot
ret = ax.plot(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/matplotlib/__init__.py", line 1819, in inner
return func(ax, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/matplotlib/axes/_axes.py", line 1383, in plot
self.add_line(line)
File "/usr/local/lib/python2.7/site-packages/matplotlib/axes/_base.py", line 1703, in add_line
self._update_line_limits(line)
File "/usr/local/lib/python2.7/site-packages/matplotlib/axes/_base.py", line 1725, in _update_line_limits
path = line.get_path()
File "/usr/local/lib/python2.7/site-packages/matplotlib/lines.py", line 938, in get_path
self.recache()
File "/usr/local/lib/python2.7/site-packages/matplotlib/lines.py", line 634, in recache
y = np.asarray(yconv, np.float)
File "/usr/local/lib/python2.7/site-packages/numpy/core/numeric.py", line 482, in asarray
return array(a, dtype, copy=False, order=order)
in pyerr_check at /usr/home/ko/.julia/v0.5/PyCall/src/exception.jl:56 [inlined]
in pyerr_check at /usr/home/ko/.julia/v0.5/PyCall/src/exception.jl:61 [inlined]
in macro expansion at /usr/home/ko/.julia/v0.5/PyCall/src/exception.jl:81 [inlined]
in #_pycall#66(::Array{Any,1}, ::Function, ::PyCall.PyObject, ::GaussianProcesses.GP, ::Vararg{GaussianProcesses.GP,N}) at /usr/home/ko/.julia/v0.5/PyCall/src/PyCall.jl:550
in _pycall(::PyCall.PyObject, ::GaussianProcesses.GP, ::Vararg{GaussianProcesses.GP,N}) at /usr/home/ko/.julia/v0.5/PyCall/src/PyCall.jl:538
in #pycall#70(::Array{Any,1}, ::Function, ::PyCall.PyObject, ::Type{PyCall.PyAny}, ::GaussianProcesses.GP, ::Vararg{GaussianProcesses.GP,N}) at /usr/home/ko/.julia/v0.5/PyCall/src/PyCall.jl:572
in pycall(::PyCall.PyObject, ::Type{PyCall.PyAny}, ::GaussianProcesses.GP, ::Vararg{GaussianProcesses.GP,N}) at /usr/home/ko/.julia/v0.5/PyCall/src/PyCall.jl:572
in #plot#85(::Array{Any,1}, ::Function, ::GaussianProcesses.GP, ::Vararg{GaussianProcesses.GP,N}) at /usr/home/ko/.julia/v0.5/PyPlot/src/PyPlot.jl:172
in plot(::GaussianProcesses.GP, ::Vararg{GaussianProcesses.GP,N}) at /usr/home/ko/.julia/v0.5/PyPlot/src/PyPlot.jl:169
in _init at /usr/local/lib/julia/sys.so:? (repeats 2 times)
in eval_user_input(::Any, ::Base.REPL.REPLBackend) at ./REPL.jl:64
in macro expansion at ./REPL.jl:95 [inlined]
in (::Base.REPL.##3#4{Base.REPL.REPLBackend})() at ./event.jl:68
Hi all,
I have a problem with passing keyword arguments from optimize! to the function optimize of the Optim.jl package.
When I try to set the number of iterations for the optimization (i.e. setting the keyword argument iterations) I get:
What is the proper way to use keyword arguments in optimize!?
I see you are using Optim with this package. As a heads-up, there will be some breaking changes in Optim v0.8: JuliaNLSolvers/Optim.jl#337 JuliaNLSolvers/Optim.jl#329
Tagging v0.8 is still some time away, but you may want to put an upper limit in REQUIRE to make sure things don't break down the line.
Create MCMC samplers to allow for Bayesian estimation of GP hyperparameters
A domain error is triggered when running
mZero = MeanZero()
kern = Mat(5/2,[0.0,0.0],0.0)
gp = GP(X,Y,mZero,kern)
optimize!(gp; method=Optim.ConjugateGradient())
I'm using
GaussianProcesses 0.4.0+
Julia Version 0.5.1
I've struggled to find a minimal example to trigger the error; the best I could do is for the values of X and Y given at the end.
It seems to me that the error comes from line 107 in GP.jl
gp.mLL = -dot((gp.y - μ),gp.alpha)/2.0 - logdet(gp.cK)/2.0 - gp.nobsv*log(2π)/2.0
which returns NaN when gp.cK has a negative determinant.
Running this code on my laptop where
GaussianProcesses 0.4.0
Julia Version 0.5.1-pre+31
generates a different error when running optimize!(gp; method=Optim.ConjugateGradient())
ERROR: MethodError: no method matching set_params!(::GaussianProcesses.GP, ::Float64; noise=true, mean=true, kern=true)
Closest candidates are:
set_params!(::GaussianProcesses.GP, ::Array{Float64,1}; noise, mean, kern) at /home/art/.julia/v0.5/GaussianProcesses/src/GP.jl:275
set_params!{K<:GaussianProcesses.Kernel}(::GaussianProcesses.Masked{K<:GaussianProcesses.Kernel}, ::Any) at /home/art/.julia/v0.5/GaussianProcesses/src/kernels/masked_kernel.jl:55 got unsupported keyword arguments "noise", "mean", "kern"
in #optimize!#19(::Bool, ::Bool, ::Bool, ::Optim.ConjugateGradient{Void,Optim.##29#31,LineSearches.#hagerzhang!}, ::Array{Any,1}, ::Function, ::GaussianProcesses.GP) at /home/art/.julia/v0.5/GaussianProcesses/src/optimize.jl:20
in (::GaussianProcesses.#kw##optimize!)(::Array{Any,1}, ::GaussianProcesses.#optimize!, ::GaussianProcesses.GP) at ./<missing>:0
I'm using the data
Y = [-0.000160691 0.000561494 -0.000308228 4.14104e-5 0.000306943 6.24922e-5 -0.00013596 9.83276e-5 -0.000105637 -2.11221e-5 0.00373866 0.000200135 -0.000462546 -0.000230539 0.0003362 -0.000120488 0.000201228 0.000141567 6.60807e-5 -0.000240906 0.00527562 0.00112132 0.000880385 -0.000602714 -0.000203268 2.6165e-5 0.000257117 0.000272523 -0.000526565 -0.000142842 0.0113493 0.00218881 0.000720203 0.000591266 0.000606935 0.000629937 1.19301e-5 0.000336753 1.32784e-5 5.98963e-5 0.00307527 0.00158222 0.000883546 0.000434129 -0.000172114 0.000570647 -0.000293091 -0.000187017 0.000111851 0.00037517][:]
X =[0.01 0.2
0.01 0.4
0.01 0.6
0.01 0.8
0.01 1.0
0.01 1.2
0.01 1.4
0.01 1.6
0.01 1.8
0.01 2.0
0.02 0.2
0.02 0.4
0.02 0.6
0.02 0.8
0.02 1.0
0.02 1.2
0.02 1.4
0.02 1.6
0.02 1.8
0.02 2.0
0.03 0.2
0.03 0.4
0.03 0.6
0.03 0.8
0.03 1.0
0.03 1.2
0.03 1.4
0.03 1.6
0.03 1.8
0.03 2.0
0.04 0.2
0.04 0.4
0.04 0.6
0.04 0.8
0.04 1.0
0.04 1.2
0.04 1.4
0.04 1.6
0.04 1.8
0.04 2.0
0.05 0.2
0.05 0.4
0.05 0.6
0.05 0.8
0.05 1.0
0.05 1.2
0.05 1.4
0.05 1.6
0.05 1.8
0.05 2.0]'
Unit tests as a minimum should verify that all mean and kernels function without error. Some verification that output is correct (for example by comparison with other packages) is also desirable.
Using the Plots.jl package would remove the need to define optional package dependencies and define plotting functions for each plotting back end. This would open up many new plotting back-ends (such as Plotly) and only the skeleton package RecipesBase.jl would need to be added to REQUIRE.
The data in the notebook directory is available through the RDatasets package:
using RDatasets
crab_data = dataset("MASS", "crabs")
Using RDatasets will eliminate the need to store the data files currently being used. It may also serve as a good example of how to construct a GP object from a DataFrame.
Hi
Thank you for creating this package. I only found out about GPR a few months ago and I was hoping to implement it in an algorithm I hope to write soon.
Going through the Readme it looks as though you can only specify a single value for the (log) observation noise which I assume is applied to all observations equally.
In my case (I do research in X-ray crystallography) each observation is given its own error. Is there support for specifying a Vector of (log) observation noise to correspond to each observation?
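For what it's worth, per-observation noise only changes one term in the standard GP equations: the scalar σ²I becomes a diagonal matrix of noise variances. A dependency-free sketch (the function name is mine), assuming lognoise[i] is the log standard deviation of observation i:

```julia
using LinearAlgebra

# GP posterior mean at the training inputs with heteroscedastic noise.
function posterior_mean(K::Matrix, y::Vector, lognoise::Vector)
    Σn = Diagonal(exp.(2 .* lognoise))   # per-observation noise variances
    α = (K + Σn) \ y
    return K * α
end

K = [1.0 0.5; 0.5 1.0]
y = [1.0, 2.0]
posterior_mean(K, y, [-2.0, -2.0])   # small equal noise ⇒ close to y
```

Whether the package's GP type can accept a Vector there instead of a scalar is exactly the question above.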