alshedivat / keras-gp

Keras + Gaussian Processes: Learning scalable deep and recurrent kernels.

License: MIT

Languages: Python 89.29%, MATLAB 10.71%
Topics: gaussian-processes, keras, machine-learning, neural-networks, tensorflow, theano

keras-gp's People

Contributors: alshedivat, andrewgordonwilson, bokorn, dustinvtran


keras-gp's Issues

Octave Executable error

Hello,
I am unable to run the actuator code. I have renamed the file to lstm-gp.py (see the first screenshot). I have also attached a screenshot of my .bash_profile, where I set the two environment variables OCTAVE_EXECUTABLE and GPML_PATH.
[two screenshots were attached: the error output and the .bash_profile]
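
For reference, a minimal sketch of setting these variables from Python instead of .bash_profile (the paths below are placeholders):

    import os

    # Placeholder paths; point these at your Octave binary and your GPML checkout.
    # These must be set before importing kgp.
    os.environ['OCTAVE_EXECUTABLE'] = '/usr/local/bin/octave'
    os.environ['GPML_PATH'] = '/path/to/gpml'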

Save and load trained keras-gp model

Dear Maruan,
I was wondering if there is a way to save and load a trained keras-gp model.
In Keras this is usually done with:

model.save('my_model.h5')
model = load_model('my_model.h5')

Done this way, loading fails with an error because the GP layer is not recognised.
Is there a way to correctly save and load a keras-gp model?

I would appreciate any hint!
Best,
Marc
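
A workaround that may help (a sketch, not confirmed against kgp's API): save and restore only the weights and rebuild the architecture in code, so the GP layer never has to be deserialized. `assemble_model` below is a hypothetical helper standing in for whatever builds your model.

    # Minimal sketch; `assemble_model` is a hypothetical function that
    # reconstructs the exact same architecture (e.g. via kgp's assemble utils).
    model.save_weights('my_model_weights.h5')

    model = assemble_model()
    model.load_weights('my_model_weights.h5')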

A strange phenomenon when gp-lstm is applied to a simulated dataset

Many thanks for your patience to read my question!

I have constructed an x sequence and a y sequence:

import numpy as np

X = np.linspace(0, 1, 2000)
y = 0.5 * X + 2 + np.random.normal(0, X)  # i.e. y[i] = 0.5 * X[i] + 2 + noise with std X[i]

However, when I applied the GP-LSTM model to this simulated data, the predicted mean sequence and the predicted volatility sequence are both constant:

predicted mean sequence (every entry identical):

[[[4.63309287]
 [4.63309287]
 .....................
 [4.63309287]]]

predicted volatility sequence (every entry identical):

[[[0.01834713]
 [0.01834713]
 ....................
 [0.01834713]]]

RMSE

Hi Maruan,

in your examples you use the RMSE on the standardized data sets.
In your paper, are you using this definition as well?
I experimented a bit with some of the data sets from your paper and got close to the reported RMSEs only by also using the standardized data sets.
There isn't any explicit statement in the paper, just that the RMSE is used.

Cheers Roman Foell
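
For concreteness, a minimal sketch of the definition in question, assuming the standardization statistics come from the training split (an assumption; the paper does not spell this out):

    import numpy as np

    # RMSE on standardized targets; mu/sigma from the training split (assumed).
    mu, sigma = y_train.mean(), y_train.std()
    z_test = (y_test - mu) / sigma                   # standardized test targets
    rmse = np.sqrt(np.mean((z_test - z_pred) ** 2))  # z_pred: predictions in standardized space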

Training GP on outputs from the last hidden layer

I've just read your paper - thank you for your great work! A measure of confidence is essential in many applications.

I have a general question on the combination of GPs and neural networks:

Suppose you've trained a model (e.g. several LSTM layers followed by a linear layer for the output $y$). If the model generalizes properly, you have low train and test error. This means that the last linear layer uses the features learned by the previous layers to generate outputs.

A few days ago I had an idea: replace the last linear layer with a GP. I would train this GP separately on the outputs of the last hidden layer and the task outputs $y$, after the "main" training of the whole deep structure (with the linear layer included).

Your paper integrates learning of the GP into the learning process; my approach is a bit different. It would be nice to hear your opinion on whether my idea would achieve the same performance.
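
A minimal sketch of the two-stage idea, using a feature extractor built from a hypothetical pre-trained `model` and scikit-learn's GP as a stand-in for the post-hoc GP (the layer name and the GP implementation are both assumptions, not kgp's API):

    from keras.models import Model
    from sklearn.gaussian_process import GaussianProcessRegressor

    # Stage 1: reuse the trained network up to its last hidden layer.
    extractor = Model(inputs=model.input,
                      outputs=model.get_layer('last_hidden').output)  # hypothetical layer name
    H_train = extractor.predict(X_train)

    # Stage 2: fit a GP on the frozen features, separately from the network training.
    gp = GaussianProcessRegressor().fit(H_train, y_train)
    y_mean, y_std = gp.predict(extractor.predict(X_test), return_std=True)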

Grid Creation

Great work, thanks for this amazing library!
Is there a way to define the grid for MSGP precisely, rather than only through the parameter 'k'?
I would like to set the GPML parameter xg manually, so I wonder whether this is possible in keras-gp.

missing startup.m in the gpml in the backend?

I tried running something that is basically the example in https://github.com/alshedivat/keras-gp/blob/master/examples/msgp_sm_kernel_mlp_kin40k.py

I believe the problematic line is https://github.com/alshedivat/keras-gp/blob/master/examples/msgp_sm_kernel_mlp_kin40k.py#L62
oct2py kept complaining about the startup function, which I assume should be provided by a startup.m. I wonder whether there should have been a startup.m file in the repository?

Or is this problem caused by something else? Thanks.

run()

File "DKLKeras-SM.py", line 123, in run
nb_train_samples=len(X_train))
File "DKLKeras-SM.py", line 198, in assemble_mlp
nb_train_samples=nb_train_samples)
File "/Users/yvonne/Envs/DKLKeras3.6/lib/python3.6/site-packages/kgp-0.3.2-py3.6.egg/kgp/layers.py", line 63, in init
self.backend = GP_BACKEND(engine, engine_kwargs, gpml_path)
File "/Users/yvonne/Envs/DKLKeras3.6/lib/python3.6/site-packages/kgp-0.3.2-py3.6.egg/kgp/backend/gpml.py", line 69, in init
self.eng.eval('startup', verbose=0)
File "/Users/yvonne/Envs/DKLKeras3.6/lib/python3.6/site-packages/kgp-0.3.2-py3.6.egg/kgp/backend/engines.py", line 99, in eval
self._eng.eval(expr, verbose=verbose)
File "/Users/yvonne/Envs/DKLKeras3.6/lib/python3.6/site-packages/oct2py/core.py", line 304, in eval
out_file=self._reader.out_file)
File "/Users/yvonne/Envs/DKLKeras3.6/lib/python3.6/site-packages/oct2py/core.py", line 625, in evaluate
raise Oct2PyError(msg)
oct2py.utils.Oct2PyError: Oct2Py tried to run:

A question about hyperparameters

Dear developer:

If we set gp_hypers = {'lik': -2.0, 'cov': [[-0.7], [0.0]]} for the Gaussian process, will the three hyperparameters stay constant during training? Or will the length-scale, signal std dev, and noise variance change as training proceeds? Thanks!
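
For context, a minimal sketch of how such numbers are commonly interpreted under GPML's log-scale convention for covSEiso and likGauss (an assumed convention; please verify against the backend):

    import numpy as np

    # GPML stores hyperparameters on a log scale (assumed convention):
    gp_hypers = {
        'lik': np.log(0.135),      # log noise std:    -2.0 ~= log(0.135)
        'cov': [[np.log(0.5)],     # log length-scale: -0.7 ~= log(0.5)
                [np.log(1.0)]],    # log signal std:    0.0  = log(1.0)
    }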

gpml backend error

I get the following error when trying to run the MSGP-LSTM example (the GP-LSTM example runs fine):

Oct2PyError: Oct2Py tried to run:
"""

[dlik_dx ] = dlik(hyp, {@meanzero}, {@covGrid, {@covSEiso}, xg}, {@likGauss}, [], X_tr, y_tr);

"""
Octave returned:
error: binary operator '*' not implemented for 'sparse matrix' by 'float matrix' operations
error: called from
apx>@ at line 135 column 18
ldB2_grid>@ at line 221 column 35
apx>conjgrad at line 285 column 36
apx>linsolve at line 276 column 23
ldB2_grid>@ at line 222 column 21
infGaussLik at line 25 column 7
dlikGrid at line 36 column 11
@ at line 1 column 20

I'm running Octave 4.2.1 with the latest statistics package.

How can I get one standard deviation of the predictive distributions?

In Figure 5 of the paper "Learning Scalable Deep Kernels with Recurrent Structure", the predictive uncertainty of the GP-LSTM model is shown with contour plots and error bars; the latter denote one standard deviation of the predictive distributions.
My question is: how can I get one standard deviation of the predictive distributions when using keras-gp? I haven't found any interface function for it. Could you please help me with this?
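
A minimal sketch of what retrieving the standard deviation might look like, assuming (an assumption, not a documented kgp call) that the model's predict can return the predictive variance:

    import numpy as np

    # Hypothetical usage; assumes a `return_var` flag on predict.
    y_mean, y_var = model.predict(X_test, return_var=True)
    y_std = np.sqrt(y_var)                         # one standard deviation
    lower, upper = y_mean - y_std, y_mean + y_std  # error bars as in Figure 5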

P-Dimensional GP input shape

Dear Maruan,

I am working with a high-dimensional problem (n, L, p), where n = #samples, L = the input sequence length per feature, and p = #features.
I read your paper on recurrent deep kernels, and keras-gp seems able to handle a p-dimensional GP input.
However, I tried to run your code with a larger GP input shape.
Following the notation in your MSGP actuator example, I replaced the dataset and changed the parameter

gp_input_shape=p (line 55)

to match my number of features.
The training time seems to explode, even though the scalable MSGP model should handle the higher-dimensional GP input efficiently, as I read in your paper.
Is there something I have to adjust before working with higher GP input shapes?

I would be very thankful for any help!

Best,
Marc
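
One possible explanation to check (an assumption about MSGP's grid construction, not a confirmed diagnosis): if the grid is a full product grid with k points per dimension, its size grows exponentially in the GP input dimension.

    # Sketch of the suspected blow-up, assuming a full product grid:
    k, p = 70, 10            # k points per dimension, p GP input dimensions
    grid_points = k ** p     # ~2.8e18 grid points: intractable for moderate p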

Missing actuator.mat file

Hi,

thanks for the paper and the code.
I was trying to run the examples, but the actuator.mat file no longer seems to be located at the given link.
Is it possible to give a new link for the dataset?
Thanks a lot!

TensorFlow error when running gp_lstm_actuator.py

I tried to run gp_lstm_actuator.py as-is after running setup.py install, and I get the following error (Windows 7, Anaconda 3):

C:\Anaconda3\python.exe C:/Projects/ML/kgp/examples/gp_lstm_actuator.py
Using TensorFlow backend.
Using GPML backend with Octave engine as the default one.
Loading data...C:\Projects\ML\kgp\kgp\datasets\sysid.py:70: UserWarning: Cannot find DATA_PATH variable in the environment. Using <current_working_directory>/data/ instead.
warnings.warn("Cannot find DATA_PATH variable in the environment. "
Done.

# of loaded points: 1024

Traceback (most recent call last):
File "C:/Projects/ML/kgp/examples/gp_lstm_actuator.py", line 109, in
main()
File "C:/Projects/ML/kgp/examples/gp_lstm_actuator.py", line 83, in main
model = assemble('GP-LSTM', [nn_configs['1H'], gp_configs['GP']])
File "C:\Projects\ML\kgp\kgp\utils\assemble.py", line 93, in assemble
return assemble_gprnn(*params)
File "C:\Projects\ML\kgp\kgp\utils\assemble.py", line 191, in assemble_gprnn
RNN = assemble_rnn(nn_params, final_reshape=False)
File "C:\Projects\ML\kgp\kgp\utils\assemble.py", line 170, in assemble_rnn
previous = Layer(previous)
File "C:\Anaconda3\lib\site-packages\keras-2.0.4-py3.5.egg\keras\layers\recurrent.py", line 243, in call
return super(Recurrent, self).call(inputs, **kwargs)
File "C:\Anaconda3\lib\site-packages\keras-2.0.4-py3.5.egg\keras\engine\topology.py", line 558, in call
self.build(input_shapes[0])
File "C:\Anaconda3\lib\site-packages\keras-2.0.4-py3.5.egg\keras\layers\recurrent.py", line 1012, in build
constraint=self.bias_constraint)
File "C:\Anaconda3\lib\site-packages\keras-2.0.4-py3.5.egg\keras\legacy\interfaces.py", line 88, in wrapper
return func(*args, **kwargs)
File "C:\Anaconda3\lib\site-packages\keras-2.0.4-py3.5.egg\keras\engine\topology.py", line 391, in add_weight
weight = K.variable(initializer(shape), dtype=dtype, name=name)
File "C:\Anaconda3\lib\site-packages\keras-2.0.4-py3.5.egg\keras\layers\recurrent.py", line 1004, in bias_initializer
self.bias_initializer((self.units * 2,), *args, **kwargs),
File "C:\Anaconda3\lib\site-packages\keras-2.0.4-py3.5.egg\keras\backend\tensorflow_backend.py", line 1681, in concatenate
return tf.concat([to_dense(x) for x in tensors], axis)
File "C:\Anaconda3\lib\site-packages\tensorflow\python\ops\array_ops.py", line 1000, in concat
dtype=dtypes.int32).get_shape(
File "C:\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 669, in convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "C:\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py", line 176, in _constant_tensor_conversion_function
return constant(v, dtype=dtype, name=name)
File "C:\Anaconda3\lib\site-packages\tensorflow\python\framework\constant_op.py", line 165, in constant
tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
File "C:\Anaconda3\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 367, in make_tensor_proto
_AssertCompatible(values, dtype)
File "C:\Anaconda3\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 302, in _AssertCompatible
(dtype.name, repr(mismatch), type(mismatch).name))
TypeError: Expected int32, got list containing Tensors of type '_Message' instead.

Code for Pytorch

Hi,

Do you have a similar implementation in PyTorch as well? If so, could you please point me in that direction?

Thanks

Recurrent structure

Hello

What does the recurrent structure look like for the GP-LSTM? My question concerns the output data y (or the output layer) and the first input layer. Did you feed the outputs of all hidden layers, including the first one, back recurrently, and/or did you also feed the output y back recurrently into the output layer?

Thanks.

Roman

Training Error (Model object has no attribute '_check_num_samples')

Hi, I'm very interested in your algorithm.
I downloaded the latest keras-gp code and am using Keras v2.0.2.

When running the example codes, *_actuator.py, I get the following error message:

Training...
Traceback (most recent call last):
File "./gp_lstm_actuator.py", line 109, in
main()
File "./gp_lstm_actuator.py", line 93, in main
epochs=epochs, batch_size=batch_size, verbose=2)
File "/home/myname/opt/kgp/kgp/utils/experiment.py", line 67, in train
**fit_kwargs)
File "/home/myname/opt/kgp/kgp/models.py", line 128, in fit
**kwargs)
File "/home/myname/opt/anaconda2-4.3.0/lib/python2.7/site-packages/keras/engine/training.py", line 1485, in fit
initial_epoch=initial_epoch)
File "/home/myname/opt/kgp/kgp/tweaks.py", line 72, in _fit_loop
num_train_samples = self._check_num_samples(ins, batch_size,
AttributeError: 'Model' object has no attribute '_check_num_samples'

Keras itself works fine.
I cannot run the other example codes, because I cannot figure out how to build the kin40k.npz your sample code expects from the original kin40k dataset (sorry, that is a separate problem...).
Could you let me know if any other information about my environment is required?

Best regards.
Masaru.

Octave returned: error: Two few unique points.

Hi,

Thanks for your wonderful paper and code. I enjoyed reading the paper and running the code.
I have a question, though. When I tried msgp_mlp_kin40k, I got stuck with the error "Two few unique points."

My DNN is quite large:

# Imports assumed for this snippet (Model here is kgp's Model, as in the library's examples):
from keras.layers import Input, Dense, Dropout
from kgp.layers import GP
from kgp.models import Model
import numpy as np

inputs = Input(shape=input_shape)
hidden = Dense(1000, activation='relu', name='dense1')(inputs)
hidden = Dropout(0.5)(hidden)
hidden = Dense(1000, activation='relu', name='dense2')(hidden)
hidden = Dropout(0.5)(hidden)
hidden = Dense(500, activation='relu', name='dense3')(hidden)
hidden = Dropout(0.5)(hidden)
hidden = Dense(50, activation='relu', name='dense4')(hidden)
hidden = Dropout(0.25)(hidden)
hidden = Dense(2, activation='relu', name='dense5')(hidden)
gp = GP(hyp={'lik': np.log(0.3),
             'mean': [],
             'cov': [[0.5], [1.0]]},
        inf='infGrid', dlik='dlikGrid',
        opt={'cg_maxit': 2000, 'cg_tol': 1e-6},
        mean='meanZero', cov='covSEiso',
        update_grid=1,
        grid_kwargs={'eq': 1, 'k': 70.},
        batch_size=batch_size,
        nb_train_samples=nb_train_samples)
outputs = [gp(hidden)]
return Model(inputs=inputs, outputs=outputs)

I only hit this error with this reasonably large DNN; with a smaller DNN I didn't encounter it at all.

Do you have any idea of how to solve the problem?

Thanks a lot.


Training...
Function evaluation 0; Value 3.462581e+04
Epoch 1/500
8704/10000 [=========================>....] - ETA: 0s - loss: 1.0904 - gp_1_mse: 0.6684 - gp_1_nlml: 34598.8203 - mse: 0.6684 - nlml: 34598.8203/usr/local/lib/python2.7/dist-packages/keras/callbacks.py:405: RuntimeWarning: Can save best model only with val_loss available, skipping.
'skipping.' % (self.monitor), RuntimeWarning)
10000/10000 [==============================] - 1s - loss: 1.0836 - gp_1_mse: 0.6684 - gp_1_nlml: 34598.8203 - mse: 0.6684 - nlml: 34598.8203 - val_mse: 1.2968 - val_nlml: 34598.8218
Function evaluation 0; Value 5.264150e+04
Epoch 2/500
10000/10000 [==============================] - 0s - loss: 1.0001 - gp_1_mse: 0.9989 - gp_1_nlml: 52614.5039 - mse: 0.9989 - nlml: 52614.5039 - val_mse: 0.9924 - val_nlml: 52614.5037
Function evaluation 0; Value 5.264920e+04
Epoch 3/500
10000/10000 [==============================] - 0s - loss: 1.0000 - gp_1_mse: 0.9998 - gp_1_nlml: 52622.2109 - mse: 0.9998 - nlml: 52622.2109 - val_mse: 0.9922 - val_nlml: 52622.2115
Function evaluation 0; Value 5.259500e+04
Epoch 4/500
10000/10000 [==============================] - 0s - loss: 0.9999 - gp_1_mse: 0.9993 - gp_1_nlml: 52568.0078 - mse: 0.9993 - nlml: 52568.0078 - val_mse: 0.9923 - val_nlml: 52568.0072
Traceback (most recent call last):
File "./examples/msgp_mlp_kin40k.py", line 133, in
main()
File "./examples/msgp_mlp_kin40k.py", line 123, in main
epochs=epochs, batch_size=batch_size, verbose=1)
File "/home/hoangcuong2011/Desktop/kgp/kgp/utils/experiment.py", line 67, in train
**fit_kwargs)
File "/home/hoangcuong2011/Desktop/kgp/kgp/models.py", line 128, in fit
**kwargs)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1430, in fit
initial_epoch=initial_epoch)
File "/home/hoangcuong2011/Desktop/kgp/kgp/tweaks.py", line 98, in _fit_loop
callbacks.on_epoch_begin(epoch)
File "/usr/local/lib/python2.7/dist-packages/keras/callbacks.py", line 63, in on_epoch_begin
callback.on_epoch_begin(epoch, logs)
File "/home/hoangcuong2011/Desktop/kgp/kgp/callbacks.py", line 58, in on_epoch_begin
gp.backend.update_grid('tr', verbose=self.verbose)
File "/home/hoangcuong2011/Desktop/kgp/kgp/backend/gpml.py", line 157, in update_grid
self.eng.eval(_gp_create_grid.format(**self.config), verbose=verbose)
File "/home/hoangcuong2011/Desktop/kgp/kgp/backend/engines.py", line 99, in eval
self._eng.eval(expr, verbose=verbose)
File "/usr/local/lib/python2.7/dist-packages/oct2py/core.py", line 304, in eval
out_file=self._reader.out_file)
File "/usr/local/lib/python2.7/dist-packages/oct2py/core.py", line 625, in evaluate
raise Oct2PyError(msg)
oct2py.utils.Oct2PyError: Oct2Py tried to run:
"""

xg = covGrid('create', X_tr, eq, k);

"""
Octave returned:
error: Two few unique points.
error: called from
apxGrid>creategrid at line 395 column 46
apxGrid at line 169 column 5
covGrid at line 8 column 44

ImportError: cannot import name '_to_list' from 'keras.engine.topology'

I'm trying to add a GP layer to the RNN that I use as the generator network in a GAN.

I used the example code from the repository, but got the following error:

Using GPML backend with Octave engine as the default one.
Traceback (most recent call last):
  File "3ddiffusion.py", line 20, in <module>
    from kgp.layers import GP
  File "/home/ruben/repositories/kgp/kgp/__init__.py", line 9, in <module>
    from . import models
  File "/home/ruben/repositories/kgp/kgp/models.py", line 13, in <module>
    from keras.engine.topology import _to_list
ImportError: cannot import name '_to_list' from 'keras.engine.topology'

Do you have any idea what is wrong? Thank you!
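
One possible workaround (a sketch; it assumes the helper was simply relocated in newer Keras releases, which should be verified before patching kgp/models.py):

    # Hypothetical compatibility shim; assumes keras.utils.generic_utils.to_list
    # is an equivalent replacement for the removed private helper.
    try:
        from keras.engine.topology import _to_list
    except ImportError:
        from keras.utils.generic_utils import to_list as _to_list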

No dataset found for examples

Hey Maruan, when I call gp_lstm_actuator.py from the examples directory, I get:

"Cannot find DATA_PATH variable in the environment. "
Exception: Cannot find DATA_PATH variable in the environment. DATA_PATH should be the folder that contains `sysid/` directory with the data. Please export DATA_PATH before loading the data.

which I take to mean either (1) I didn't download the actuator data and need to, or (2) the setup script downloaded the data but didn't set DATA_PATH.

In any case, can you comment on where this data should reside and what the folder structure should look like to run the examples?

Thanks!
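
For what it's worth, a minimal sketch of the layout the error message seems to imply (the exact structure is an assumption inferred from the message):

    import os

    # Assumed layout: $DATA_PATH/sysid/actuator.mat
    os.environ['DATA_PATH'] = '/path/to/data'  # set before importing kgp.datasets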

Gaussian Process as non-final layer

Hi,

This is a fantastic job gluing Keras to GPML. I got your examples and a toy problem of my own working, so I'm quite happy. The next thing I wanted to try is connecting several GP layers to a Dense Keras layer before the output. I'm getting some errors when attempting this, even though the model compiles.

Here's my code:

    from keras.layers import Input, Lambda, Concatenate, Reshape, Dense
    from keras.optimizers import Adam
    from kgp.layers import GP
    from kgp.models import Model
    import numpy as np

    def make_GP_layer(batch_size, nb_train_samples):
        return GP(inf='infGrid',
                  lik='likGauss',
                  dlik='dlikGrid',
                  cov='covSEiso',
                  opt={'cg_maxit': 2000, 'cg_tol': 1e-6},
                  mean='meanConst',
                  grid_kwargs={'eq': 1, 'k': 150.0},  # equispaced grid w/ 150 pts over data range
                  update_grid=1,
                  batch_size=batch_size,
                  nb_train_samples=nb_train_samples,
                  hyp={'lik': float(np.log(2.0)),  # 'lik' hyperparameter is the log std dev
                       'cov': [[1.0], [0.5]],      # initial cov hypers are tiled for all dims
                       'mean': float(0.1)})

#note: passing information to the MATLAB/Octave engine requires plain Python types, not NumPy types.
def assemble_hierarchal_model(input_shape,chunk_D,batch_size,nb_train_samples):
    inp = Input(shape=input_shape)
    slice_1 = Lambda(lambda x: x[...,0:chunk_D])(inp)
    slice_2 = Lambda(lambda x: x[...,chunk_D:(chunk_D*2)])(inp)
    gp1 = make_GP_layer(batch_size,nb_train_samples)
    gp2 = make_GP_layer(batch_size,nb_train_samples)
    g_1 = gp1(slice_1)
    g_2 = gp2(slice_2)
    slurp = Concatenate()([g_1,g_2])
    sslurp = Reshape((2,))(slurp)
    out = Dense(1,use_bias=False)(sslurp)
    model = Model(inputs=inp, outputs=out)
    #loss = [gen_gp_loss(x) for x in [g_1,g_2]]
    model.compile(optimizer=Adam(1e-4), loss='mse')
    return model

As mentioned, the model compiles. The Reshape layer is necessary to keep the TensorFlow back-end happy: somehow it can detect the size of the GP output layers and correctly concatenate them, but when I add the Dense layer after the Concatenate, it acts as if it doesn't know the size. Reshape fixes this.

If you run this model, though, it gives an odd error:

/Users/fdfuller/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc in control_dependencies(self, control_inputs)
   3312     current = self._current_control_dependencies()
   3313     for c in control_inputs:
-> 3314       c = self.as_graph_element(c)
   3315       if isinstance(c, Tensor):
   3316         c = c.op

/Users/fdfuller/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc in as_graph_element(self, obj, allow_tensor, allow_operation)
   2403 
   2404     with self._lock:
-> 2405       return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
   2406 
   2407   def _as_graph_element_locked(self, obj, allow_tensor, allow_operation):

/Users/fdfuller/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc in _as_graph_element_locked(self, obj, allow_tensor, allow_operation)
   2492       # We give up!
   2493       raise TypeError("Can not convert a %s into a %s."
-> 2494                       % (type(obj).__name__, types_str))
   2495 
   2496   def get_operations(self):

TypeError: Can not convert a int into a Tensor or Operation.

I'm digging into your backend to try to understand this. If I use the GP loss function, it complains that the Dense layer doesn't know about dh/dx and won't compile. With the MSE loss it compiles but gives this error, possibly because the kgp Model doesn't have this loss registered?

Anyways, any tips would be helpful.
