takikadiri / kedro-boot

A kedro plugin that streamlines the integration between Kedro projects and third-party applications, making it easier for you to develop end-to-end production-ready data science applications.

License: Apache License 2.0


kedro-boot's Introduction


What is kedro-boot?

Kedro Boot simplifies the integration of your Kedro pipelines with any application. It's a framework for creating APIs and SDKs for your Kedro projects. It offers these key functionalities:

  • 💉 Data injection: streamline the process of feeding data from any application into Kedro pipelines.
  • ⚡ Low latency: execute multiple pipeline runs with minimal delay, with optimisations for time-sensitive web applications.

This enables using Kedro pipelines in a wide range of use cases, including model serving, data apps (streamlit, dash), statistical simulations, parallel processing of unstructured data, streaming... It also streamlines the deployment of your Kedro pipelines into various Data & AI Platforms (e.g. Databricks, DataRobot...).

If you're new to kedro, we invite you to visit the kedro docs.

How do I use Kedro Boot?

Any application can consume kedro pipelines through REST APIs or as a library (SDK). kedro-boot provides utilities and abstractions for each of these integration patterns.

Getting started

Use case 1: The standalone mode - How to run a kedro pipeline from another application

In this section, we assume you want to trigger the run of a kedro pipeline from another application which holds the entry point. This refers to applications that have their own CLI entry points (e.g. streamlit run) and cannot be embedded in kedro's entry point (e.g. you cannot open streamlit with kedro run). Low-code UIs (Streamlit, Dash...) and Data & AI Platforms are examples of such applications.

Important

The 1st key concept of kedro-boot is the KedroBootSession. It is basically a standard KedroSession with 2 main differences:

  • you can run the same session multiple times with many speed optimisations (including dataset caching)
  • you can pass data and parameters at runtime: session.run(inputs={"your_dataset_name": your_data}, itertime_params={"my_param": your_new_param})

The KedroBootSession should be created with either boot_project or boot_package (if the project has been previously packaged with kedro package). A basic example would be the following:

from kedro_boot.app.booter import boot_project
from kedro_boot.app.booter import boot_package
from kedro_boot.framework.compiler.specs import CompilationSpec

session = boot_project(
    project_path="<your_project_path>",
    compilation_specs=[CompilationSpec(inputs=["your_dataset_name"])],  # inferred if not given
    kedro_args={  # kedro run args
        "pipeline": "your_pipeline",  # IMPORTANT: create one KedroBootSession per pipeline, except for namespaced pipelines
        "conf_source": "<your_conf_source>",
    },
)

# session = boot_package(
#     package_name="<your_package_name>",
#     compilation_specs=[CompilationSpec(inputs=["your_dataset_name"])],
#     kedro_args={
#         "pipeline": "your_pipeline",
#         "conf_source": "<your_conf_source>",
#     },
# )

run_results = session.run(inputs={"your_dataset_name": your_data})
run_results2 = session.run(inputs={"your_dataset_name": your_data2})

You can find a complete example of a streamlit app that serves an ML model in the Kedro Boot Examples project. We invite you to test it to gain a better understanding of Kedro Boot's boot_project and boot_package interfaces.

Tip

The CompilationSpec gives you advanced control over the compilation behaviour (which datasets to preload and cache, which arguments to pass on each iteration...). See the documentation for more details.
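For intuition, here is a minimal sketch of an explicit spec. Only the inputs argument appears elsewhere in this README; the namespace and outputs argument names are assumptions for illustration, so check the documentation for the exact signature:

from kedro_boot.framework.compiler.specs import CompilationSpec

# Sketch: explicitly expose one input and one output dataset to the application.
# "namespace" and "outputs" are assumed argument names; only "inputs" is shown
# elsewhere in this README.
spec = CompilationSpec(
    namespace="inference",
    inputs=["inference.features_store"],
    outputs=["inference.predictions"],
)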

Use case 2: The embedded mode - Launching an application within kedro

Important

The 2nd key concept of kedro-boot is the KedroBootApp, an implementation of the AbstractKedroBootApp. When used inside a kedro project, this class automatically creates a KedroBootSession which is passed to a _run abstract method. You can inherit from it to customize the way your pipeline is run (e.g. running it multiple times) or to start an application inside a kedro run (e.g. serving the pipeline as an API).

This mode involves using kedro-boot to embed an application inside a Kedro project, leveraging kedro's entry points, session and config loader to manage the application lifecycle. It's suitable for use cases where the application is lightweight and owned by the same team that develops the kedro pipelines.

Hereafter is an example of a KedroBootApp that loops over a pipeline for a number of iterations passed through a config file:

from kedro_boot.app import AbstractKedroBootApp
from kedro_boot.framework.session import KedroBootSession


class KedroBootApp(AbstractKedroBootApp):
    def _run(self, kedro_boot_session: KedroBootSession):
        # leveraging config_loader to manage app's configs
        my_app_configs = kedro_boot_session.config_loader[
            "my_app"
        ]  # You should declare this config pattern in settings.py (see the sketch below)

        for _ in range(my_app_configs.get("num_iteration")):  # doing multiple pipeline runs
            kedro_boot_session.run(
                namespace="my_namespace",
            )
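As the comment above notes, the "my_app" config pattern must be declared in settings.py. A minimal sketch, where the pattern globs are illustrative:

# settings.py (sketch): expose "my_app" configs to the config loader so that
# kedro_boot_session.config_loader["my_app"] resolves.
CONFIG_LOADER_ARGS = {
    "config_patterns": {
        "my_app": ["my_app*", "my_app*/**"],
    }
}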

The Kedro Boot App can be declared either in kedro's settings.py or through kedro boot run CLI args:

  • Declaring the KedroBootApp through settings.py:

from your_package.your_module import KedroBootApp

APP_CLASS = KedroBootApp
APP_ARGS = {}  # any class init args

The Kedro Boot App can then be started with:

kedro boot run <kedro_run_args>

  • Declaring the Kedro Boot App through the kedro boot run CLI (takes precedence over settings.py):

kedro boot run --app path.to.your.KedroBootApp <kedro_run_args>

You can find an example of a Monte Carlo App embedded into a kedro project.

Advanced use cases

Consuming Kedro pipeline through REST API

kedro-boot natively provides some KedroBootApp implementations, as described in use case 2.

You can serve your kedro pipelines as a REST API using the kedro-boot FastAPI Server.

First, install kedro-boot with the fastapi extra dependency:

pip install kedro-boot[fastapi]

Then you can serve your fastapi app with:

kedro boot fastapi --app path.to.your.fastapi.app <kedro_run_args>

Your fastapi app objects will be mapped to kedro pipeline objects, and the run results will be injected into your KedroFastAPI object through FastAPI dependency injection. Here is an illustration of the kedro <-> fastapi objects mapping:

Kedro FastAPI objects mapping

A default FastAPI app is used if no FastAPI app is given. It serves a single endpoint that runs your selected pipeline in the background:

kedro boot fastapi <kedro_run_args>

These production-ready features are natively included in your FastAPI apps:

  • Embedded Gunicorn web server (only for Linux and macOS)
  • Pyctuator, which reports service health metrics and application states. It's usually used by service orchestrators (kubernetes) or monitoring tools to track service health and ensure its high availability
  • Multiple environment configurations, leveraging kedro's OmegaConfigLoader. The ["fastapi*/"] config pattern can be used to configure the web server (see the sketch below). Configs can also be passed as CLI args (refer to --help)
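For example, a minimal conf/base/fastapi.yml picked up by that pattern might look like the following (the host and port keys are assumptions for illustration; refer to --help for the supported options):

# conf/base/fastapi.yml (sketch; key names assumed)
host: 127.0.0.1
port: 8080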

You can learn more by testing the spaceflights Kedro FastAPI example, which showcases serving multiple endpoint operations mapped to different pipeline namespaces.
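If you need full control over the web layer, the standalone booter from use case 1 can also back a hand-rolled FastAPI app. A minimal sketch, borrowing the spaceflights dataset names used later in this README (not a drop-in recipe):

from fastapi import FastAPI
from kedro_boot.app.booter import boot_project
from kedro_boot.framework.compiler.specs import CompilationSpec

# Boot the project once at startup; every request then reuses the compiled session.
session = boot_project(
    project_path=".",
    compilation_specs=[CompilationSpec(inputs=["inference.features_store"])],
    kedro_args={"pipeline": "__default__"},
)

app = FastAPI()

@app.post("/predict")
def predict(features: list[dict]):
    # Inject the request payload as the pipeline's input dataset
    # and return the run results.
    return session.run(inputs={"inference.features_store": features})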

Understanding the integration process

Any Python application can consume kedro pipelines as a library. The integration process involves two steps:

  • Registering kedro pipelines
  • Creating a KedroBootSession

Registering Kedro pipelines

Kedro Boot prepares the catalog for application consumption through a compilation process that follows compilation specs.

Compilation specs define the namespaces and their datasets (inputs, outputs, parameters) that will be exposed to the application. They also specify whether artifact datasets should be inferred during the compilation process (artifact datasets are loaded into MemoryDataset at runtime in order to speed up iteration time).

The compilation specs are either given by the application or inferred from the Kedro pipeline.

Here is an example of registering a pipeline that contains inference and evaluation namespaces:

from kedro.pipeline import Pipeline, pipeline


def register_pipelines() -> dict[str, Pipeline]:
    # create inference namespace. All the inference pipeline's datasets will be
    # exposed to the app, except "regressor" and "model_options".
    inference_pipeline = pipeline(
        [features_nodes, prediction_nodes],
        inputs={"regressor": "training.regressor"},
        parameters="model_options",
        namespace="inference",
    )
    # create evaluation namespace. All the evaluation pipeline's datasets will be
    # exposed to the app, except "features_store", "regressor" and "model_options".
    evaluation_pipeline = pipeline(
        [model_input_nodes, prediction_nodes, evaluation_nodes],
        inputs={"features_store": "features_store", "regressor": "training.regressor"},
        parameters="model_options",
        namespace="evaluation",
    )

    spaceflights_pipelines = inference_pipeline + evaluation_pipeline

    return {"__default__": spaceflights_pipelines}

In this example, all the namespaces and their namespaced datasets (inputs, outputs, parameters) will be used to infer compilation specs, and will therefore be exposed to the KedroBootApp.

You can use kedro-viz to visualize the datasets that would be exposed to the kedro boot apps. In the figure below, for the inference namespace, we see clearly that inference.features_store and inference.predictions will be exposed to the application (the blue ones).

pipeline_namespace

Below are the different categories of datasets that form the compiled catalog:

  • Inputs: input datasets that are injected by the app at iteration time.
  • Outputs: output datasets that hold the run results.
  • Parameters: parameters that are injected by the app at iteration time.
  • Artifacts: artifact datasets that are materialized (loaded as MemoryDataset) at startup time.
  • Templates: template datasets that contain ${itertime_params:param_name}. Their attributes are interpolated at iteration time (see the example below).
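As an example of a template dataset, a catalog entry can embed an itertime parameter in one of its attributes, as in this SQL query dataset (the same resolver is discussed in the issues further down):

shuttles:
  type: pandas.SQLQueryDataset
  query: "select shuttle, shuttle_id from spaceflights.shuttles where shuttle_id = ${itertime_params:shuttle_id,1234}"

At each session.run, the shuttle_id value can then be supplied through itertime_params, falling back to 1234 when omitted.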

You can compile the catalog without actually using it in a Kedro Boot App. This is helpful for verifying whether the expected artifact datasets are correctly inferred or the template datasets are correctly detected. Here is an example of the catalog compilation report for a pipeline that contains an inference namespace:

kedro boot compile

Compilation results:

INFO  Catalog compilation completed for the namespace 'inference'. Here is the report:
    - Input datasets to be replaced/rendered at iteration time: {'inference.features_store'}
    - Output datasets that hold the results of a run at iteration time: {'inference.predictions'}
    - Parameter datasets to be replaced/rendered at iteration time: set()
    - Artifact datasets to be materialized (preloaded as memory dataset) at startup time: {'training.regressor'}
    - Template datasets to be rendered at iteration time: set()

INFO  Catalog compilation completed.

We can see that training.regressor is inferred as an artifact; it will be loaded as a memory dataset to speed up iterations and prevent memory leaks in a web app use case.

Note that when inferring compilation specs, a pipeline that has no namespaces is also exposed to the kedro boot apps (it has a compilation spec), but does not expose any datasets. Applications can provide their own compilation specs in order to specify the datasets that need to be exposed.

Why does Kedro Boot exist?

Kedro Boot unlocks the value of your kedro pipelines by giving you a structured way to integrate them into a larger application. We developed kedro-boot to achieve the following:

  • Streamline deployment of kedro pipelines in batch, streaming and web app context.
  • Encourage reuse and prevent the team that owns the front application from rewriting pipeline logic
  • Leverage kedro's capabilities for business logic separation and authoring
  • Leverage kedro's capabilities for managing an application lifecycle

Kedro Boot apps utilize Kedro's pipeline as a means to construct and manage business logic and some parts of I/O. Kedro's underlying principles and internals ensure the maintainability, clarity, reuse, and visibility (kedro-viz) of business logic within Kedro Boot Apps, thanks to Kedro's declarative nature.

Where do I test Kedro Boot?

You can refer to the Kedro Boot Examples project; it will guide you through four examples of Kedro Boot usage.

Can I contribute?

We'd be happy to receive help to maintain and improve the package. Any PR will be considered.

kedro-boot's People

Contributors

deepyaman, derluke, galileo-galilei, github-actions[bot], m-gris, mpkrass7, takikadiri


kedro-boot's Issues

Compatibility with other plugins?

Does Kedro-boot work with Kedro-docker? I am looking to containerize a kedro pipeline exposed as a FastAPI app in order to then scale it with Kubernetes. Will this be possible with Kedro-boot?

Proofreading the README

I started to follow the README step by step to try to make it more beginner friendly. The ultimate goal is to have a "copy-pasteable" example so users can try kedro-boot out easily. This is just the beginning, and I'll resume it later.

Here is my journey:

Step 1: Start from scratch from an empty starter

kedro new --starter=spaceflights-pandas 
pip install kedro-boot==0.2.0
conda create -n temp python=3.10 -y
kedro registry list # warning

PB

Failed to load kedro_boot.app.fastapi.cli commands from EntryPoint(name='fastapi', value='kedro_boot.app.fastapi.cli:fastapi_command', utils.py:11 group='kedro_boot'). Full exception: No module named 'uvicorn'

Solution

pip install -e . # to get pandas and all. This is more due to the starter than kedro-boot, but confusing

Step 2: try to start a Kedro Server

kedro boot fastapi

PB

no command name fastapi

Solution

This is due to pyctuator not being installed => indeed the readme shows "pip install kedro-boot[fastapi]", my bad, but the error message is very confusing.

PB 2

"No config patterns were found for 'fastapi' in your config loader" # weird because there is a try/except but it is a MissingConfigError instead of a KeyError

except MissingConfigException:
LOGGER.warning(

Solution 2

  • add an empty fastapi.yml

💡 Maybe we should add a "kedro boot fastapi init" command?

  • update settings.py
CONFIG_LOADER_ARGS = {
    "base_env": "base", # needed
    "default_run_env": "local", # needed
    "config_patterns": {
        "fastapi": ["fastapi*/"],
    },
}

Step 3

The result of the previous step serves {"detail":"Not Found"} on localhost:5000. This is not informative. I guess the next step is to create a pydantic model for the dataset, but this is not very clear.

💡 Maybe we should add a resolver and read it from metadata in the catalog?

# catalog.yml

my_data: 
    type: ...
    filepath: ...
    metadata: 
        fastapi: 
            schema: ${pydantic:path/to/file.py}

Kedro Boot Session for Standalone Apps

Some applications cannot run inside kedro's entry point, therefore they can't be Kedro Boot Apps. Streamlit is a good example of such an application: it provides its own CLI entry point that cannot be embedded in another application. Data & AI platforms are also examples of such standalone applications having their own entry points.

Kedro Boot proposes a booter that can be used by those standalone apps in order to get a Kedro Boot Session. The booter is still an MVP and lacks some features that we need to address:

  • Standalone apps could consume either a kedro project or a kedro package. We should provide an interface for each case: boot_project and boot_package
  • Standalone app developers may not have access to the kedro package code (developed by different teams), or may not want to alter the kedro project code, therefore they cannot rely only on compilation specs inferred from the selected pipeline. We should provide a way to specify compilation specs explicitly through the booter

Support dataset factories

kedro introduced dataset factories in 0.18.12, which use pattern matching to reduce the number of catalog entries.

The datasets are matched/resolved lazily at load time, which doesn't make them natively compatible with kedro-boot, which compiles everything relative to the catalog before the first pipeline run, and therefore before the first dataset loading.

Currently kedro-boot supports it through this contribution, but we should go further by aligning with how kedro handles dataset factories and implementing it at the catalog compiler level.

Literal String Instead of List of Lists in Template_Params Causes Unexpected Behavior in CustomDataset Constructor

We are experiencing an issue where the literal string '[[ data_ids ]]' is being passed to the constructor of our dataset, instead of the expected list of lists. This problem occurs in the context of a FastAPI route with a Kedro session and a custom dataset configuration using a catalog entry.

Here are the details of the issue and our current workaround:

  1. FastAPI Route with Kedro Session:

    In our FastAPI application, we have a route that uses a Kedro session to run a task. The data_ids parameter, which is intended to be a list of integers, is being incorrectly passed as a string. Here's the relevant part of the route:

    # ... [previous code for route setup] ...
    
    @app.post("/api/v1/data", response_model=OutputData)
    async def get_output_data(input_data: InputData):
        data_ids = input_data.data_ids  # data_ids is expected to be a list of ints
    
        # ... [additional code and logging] ...
    
        output = session.run(
            name="web_api", 
            inputs={"input_data": input_data},
            template_params={"data_ids": data_ids},
        )
  2. Catalog Entry Issue and Workaround:

    The issue also manifests in our catalog entry configuration for a custom dataset:

    result:
      type: performance_optimisation_engine.extras.datasets.web_api_dataset.ResultDataset
      credentials: mysql
      data_ids: [[ data_ids ]]

    With this setup, the string '[[ data_ids ]]' is passed to the constructor instead of the list of lists. To address this, we modified the catalog entry to pass data_ids as a string explicitly:

    result:
      type: performance_optimisation_engine.extras.datasets.web_api_dataset.ResultDataset
      credentials: mysql
      data_ids: '[[ data_ids ]]'

    Then, in the constructor of our dataset, we convert this string back into a list of lists:

    s = str(self.data_ids).replace("'", '"')
    self.data_ids = json.loads(s)

While this workaround is functional, it's not ideal. We suspect this might be a bug or at least a feature that requires clarification in the documentation. Any assistance in resolving this issue more elegantly would be greatly appreciated.

Thank you for your time and consideration.

Support kedro 0.19.x

kedro-boot is slightly impacted by these kedro 0.19.x breaking changes:

  • Changes in dataset arguments, attributes and method naming (e.g. from data_sets to datasets)
  • Removal of the create_default_data_set() method in the Runner in favour of using dataset factories to create default dataset instances.

Since the next kedro-boot release will break a lot of user-facing interfaces (AppPipeline, template params, some refactoring), I think we need to make an effort to support kedro 0.18.x alongside 0.19.x. I'm not sure how rapidly users will migrate to 0.19.x.

ConfigLoader -> OmegaConfigLoader

Hi, I am building a pipeline to use with Kedro-boot, but get this error:

ImportError: cannot import name 'ConfigLoader' from 'kedro.config' (/opt/homebrew/Caskroom/miniforge/base/envs/cogniteds/lib/python3.11/site-packages/kedro/config/__init__.py)
Traceback (most recent call last):

According to the new release notes, ConfigLoader has been replaced with OmegaConfigLoader. Are you planning to release a new version?

Aligning template params with kedro configurations management

kedro-boot introduces a new kind of params; we used to call them template params. They are resolved at each iteration time (each kedro boot session run), enabling injection of params from the kedro boot app into the kedro project. Currently, template params use Jinja expressions that follow the pattern [[expression]].

Kedro stopped using Jinja and adopted OmegaConf for all configurations. In order to align with kedro and reduce users' cognitive effort, we should also try to handle our template params with OmegaConf. Here is a possible way of integrating it:

User interface:
kedro recently named the runtime parameters provided through the CLI runtime_params; they are used to override the values of certain keys in configuration (catalog, parameters, ...). In order to align with this naming convention, we can name our iteration params itertime_params instead of template_params. Here is an example of a dataset that has an itertime_params in one of its keys:

shuttles:
  type: pandas.SQLQueryDataset
  query: "select shuttle, shuttle_id from spaceflights.shuttles where shuttle_id = ${itertime_params:shuttle_id,1234}"

The signature of the resolver could be:
${itertime_params:<param_name>,<default_value>}

Backend :
Backend:
In the backend, it's an OmegaConf resolver that will be added to the ConfigLoader:

"custom_resolvers": {
    "itertime_params": lambda variable, default_value=None: f"${{oc.select:{variable},{default_value}}}",
}

At kedro booting time, the configuration files are materialized as python objects and all the params are resolved. At the end of the booting process, we end up with some datasets that have ${oc.select:{variable},{default_value}} in some of their attributes' values. Those values will be lazily resolved at each iteration, using iteration params injected by the kedro boot apps.
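Wired into a project, this could look like the following settings.py sketch (kedro's OmegaConfigLoader accepts custom resolvers through CONFIG_LOADER_ARGS):

# settings.py (sketch): register the itertime_params resolver so that catalog
# values like ${itertime_params:shuttle_id,1234} defer resolution to iteration time.
from kedro.config import OmegaConfigLoader

CONFIG_LOADER_CLASS = OmegaConfigLoader
CONFIG_LOADER_ARGS = {
    "custom_resolvers": {
        "itertime_params": lambda variable, default_value=None: f"${{oc.select:{variable},{default_value}}}",
    }
}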

Related: #6

Is dataset caching persistent across runs?

Thanks for creating this awesome project! I am excited to use it as a plugin for Kedro pipeline-parameter sweeps (e.g. via Hydra or Optuna).

I was interested in this point in the README:

you can run the same session multiple times with many speed optimisations (including dataset caching)

but I couldn't find any information about it in the code-base. Is it implemented? If so, is the dataset cached to disk across session runs, or is it just kedro.io.CachedDataSet under the hood?

Drop AppPipeline in favor of pipeline namespaces

kedro-boot introduces AppPipeline, which is a kedro Pipeline that has some additional metadata used by kedro-boot as specs for the compilation process.

Kedro's pipeline terminology can already be confusing. Adding another kind of pipeline will make kedro-boot hard to grasp for users, and make it harder for us to maintain and document. What if we could infer the kedro-boot compilation specs from the Pipeline object alone, leveraging namespaces? Here is a potential mapping between AppPipeline attributes and the inferred Pipeline attributes:

AppPipeline name --> Pipeline namespace or 'None' if no namespace
AppPipeline inputs --> namespaced free inputs datasets
AppPipeline outputs --> namespaced outputs datasets
AppPipeline parameters --> namespaced parameters datasets
AppPipeline artifacts --> automatically inferred with this spec: free inputs - namespaced free inputs (artifacts inference can be disabled)

This will also allow us to leverage kedro-viz, as we could visualize these attributes:

namespace_example

Here we see clearly which datasets will be "touched" by the kedro boot apps (the namespaced part, the blue one). The "regressor" in this case is not a namespaced free input dataset, so it will be inferred as an artifact.

Note that kedro boot apps can explicitly give their own compilation specs, which take precedence over these automatically inferred specs.

Kedro Boot Apps Settings and CLIs

Kedro Boot Apps are currently set through the CLI. Here is an example of setting an app using the Kedro Boot CLI command kedro boot --app path.to.app_class <kedro_params>. Even if this works for regular use cases, it can be limiting when we need to set the app class globally (at source level) and when we need a dedicated CLI command for a Kedro Boot App.

Setting Kedro Boot Apps globally through Kedro project settings

Setting the app class through the CLI can be repetitive in some cases (similar issue with kedro runner). Moreover, CLI args are runtime/environment settings; sometimes we need to set the app class globally (at source level). We propose to use Kedro's settings.py as an additional channel to set the app class and its init args.

from kedro.config import OmegaConfigLoader  # noqa: E402

CONFIG_LOADER_CLASS = OmegaConfigLoader
CONFIG_LOADER_ARGS = {
    "config_patterns": {
        "<app_config_name>": ["<app_config_files>", ...],
    }
}

from my_app_package import MyApp

APP_CLASS = MyApp
APP_ARGS = {
    "init_attribute": "value"
}

The APP_ARGS are used to init the Kedro Boot App class; they are not runtime args. Runtime/env args are set through a config file (application.yml by default). Users can explicitly add a config file for their app by adding the associated config pattern to the CONFIG_LOADER_ARGS. Since the app object has access to the config loader through the Kedro Boot Session, it can load the appropriate configs using the same kedro config mechanisms.

Note that the app class given by the CLI command takes precedence over the app class given by the project settings.

Extend Kedro Boot commands with Apps commands

A Kedro Boot App could have a CLI command. We need to provide a mechanism to extend the Kedro Boot CLI commands with app commands. Here is a proposed solution path:

  • Add a subcommand level in Kedro Boot CLI commands: kedro boot <app_command_name> <app_command_params> <kedro_params>
  • Include run and dryrun as project-specific commands of Kedro Boot. kedro boot --app path.to.app_class <kedro_params> would become kedro boot run --app path.to.app_class <kedro_params>
  • Create the app command through the Kedro Boot command factory, in order to keep the CLI booting logic and kedro CLI params (pipeline, extra params, tags, ...). Here is an example of using the Kedro Boot command factory:
import click
from kedro_boot.framework.cli import kedro_boot_command_factory

from my_fastapi_package import MyFastapiApp

fastapi_command_params = [
    click.option(
        "--host",
        type=str,
        default="127.0.0.1",
        help="web server host",
    ),
    click.option(
        "--port",
        type=str,
        default="8080",
        help="web server port",
    ),
]

fastapi_command = kedro_boot_command_factory(
    name="fastapi", app_class=MyFastapiApp, app_params=fastapi_command_params
)

The command would then be declared as a kedro_boot entry point in pyproject.toml:

[project.entry-points."kedro_boot"]
fastapi = "my_fastapi_package.command_module:fastapi_command"

The app command could then be used as follows: kedro boot fastapi --host 127.0.0.1 --port 8000 <kedro_params>
