
whynot's Introduction


WhyNot is a Python package that provides an experimental sandbox for decisions in dynamics, connecting tools from causal inference and reinforcement learning with challenging dynamic environments. The package facilitates developing, testing, benchmarking, and teaching causal inference and sequential decision making tools.

For an introduction to WhyNot and a brief tutorial, see our walkthrough video. For more detailed information, check out the documentation.

Table of Contents

  1. Basic installation instructions
  2. Quick start examples
  3. Simulators in WhyNot
  4. Using causal estimators in R
  5. Frequently asked questions
  6. Citing WhyNot

WhyNot is still under active development! If you find bugs or have feature requests, please file a GitHub issue. We welcome all kinds of issues, especially those related to correctness, documentation, performance, and new features.

Basic installation instructions

  1. (Optionally) create a virtual environment
python3 -m venv whynot-env
source whynot-env/bin/activate
  2. Install via pip
pip install whynot

You can also install WhyNot directly from source.

git clone https://github.com/zykls/whynot.git
cd whynot
pip install -r requirements.txt
pip install .
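
As a quick sanity check, confirm the package imports and print its version string:

python -c "import whynot as wn; print(wn.__version__)"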

Quick start examples

Causal inference

Every simulator in WhyNot comes equipped with a set of experiments probing different aspects of causal inference. In this section, we show how to run experiments probing average treatment effect estimation on the World3 simulator. World3 is a dynamical systems model that studies the interplay between natural resource constraints, population growth, and industrial development.

First, we examine all of the experiments available for World3.

import whynot as wn
experiments = wn.world3.get_experiments()
print([experiment.name for experiment in experiments])
#['PollutionRCT', 'PollutionConfounding', 'PollutionUnobservedConfounding', 'PollutionMediation']

These experiments generate datasets both in the setting of a pure randomized controlled trial (PollutionRCT) and in settings with (unobserved) confounding and mediation. We will run a randomized controlled experiment. The description property offers specific details about the experiment.

rct = wn.world3.PollutionRCT
rct.description
#'Study effect of intervening in 1975 to decrease pollution generation on total population in 2050.'

We can run the experiment using the experiment's run function, specifying a desired sample size num_samples. The experiment returns a causal Dataset consisting of the covariates, treatment assignment, outcome, and ground-truth causal effect for each unit. All of this data is contained in NumPy arrays, which makes it easy to connect to causal estimators.

import numpy as np

dataset = rct.run(num_samples=200, seed=1111, show_progress=True)
(X, W, Y) = dataset.covariates, dataset.treatments, dataset.outcomes
treatment_effect = np.mean(dataset.true_effects)

# Plug in your favorite causal estimator
estimated_ate = np.mean(Y[W == 1.]) - np.mean(Y[W == 0.])

WhyNot also enables you to run a large collection of causal estimators on the data for benchmarking and comparison. The main function to do this is the causal_suite which, given the causal dataset, runs all of the estimators on the dataset and returns an InferenceResult for each estimator containing its estimated treatment effects and uncertainty estimates like confidence intervals.

# Run the suite of estimators
estimated_effects = wn.causal_suite(
    dataset.covariates, dataset.treatments, dataset.outcomes)

# Evaluate the relative error of the estimates
true_sate = dataset.sate
for estimator, estimate in estimated_effects.items():
    relative_error = np.abs((estimate.ate - true_sate) / true_sate)
    print("{}: {:.2f}".format(estimator, relative_error))
# ols: 1.06
# propensity_score_matching: 1.38
# propensity_weighted_ols: 1.37

In addition to experiments studying average treatment effects, WhyNot also supports causal inference experiments studying

  1. Heterogeneous treatment effects (see the sketch below),
  2. Time-varying treatment policies, and
  3. Causal structure discovery.
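
For the first of these, the RCT dataset generated above already contains what is needed for a quick look at effect heterogeneity. The following is a minimal sketch, not a dedicated WhyNot API; it reuses the X and dataset objects from the example above and assumes the covariates form a 2-D NumPy array.

# Ground-truth unit-level effects expose heterogeneity directly.
effects = np.array(dataset.true_effects)
print("Std. dev. of unit-level effects: {:.2f}".format(effects.std()))

# Compare subgroup effects by splitting on the median of an arbitrary covariate.
below = X[:, 0] <= np.median(X[:, 0])
print("Effect below median: {:.2f}, above: {:.2f}".format(
    effects[below].mean(), effects[~below].mean()))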

Sequential decision making

WhyNot supports experimentation with sequential decision making and reinforcement learning via a unified interface with OpenAI Gym. In this section, we give a simple example showing how to use the HIV simulator for sequential decision making experiments.

First, we initialize the environment and set the random seed.

import whynot.gym as gym

env = gym.make('HIV-v0')
env.seed(1)

Observations in the simulator consist of 6 state variables, capturing counts of infected and uninfected T-lymphocytes and macrophages, the strength of the immune response, and the number of copies of free virus. Actions correspond to choosing between different drugs and dosages for treatment.
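
Both spaces are exposed through the standard Gym attributes, so you can inspect them directly:

print(env.observation_space)  # the 6-dimensional state space
print(env.action_space)       # the available treatment choices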

For illustration, we repeatedly choose actions, which correspond to treatment policy decisions, in the environment and measure both the next observation and the reward. In this case, the reward weighs the strength of the immune response, the virus count, and the cost of the chosen treatment.

observation = env.reset()
for _ in range(100):
    action = env.action_space.sample()  # Replace with your treatment policy
    observation, reward, done, info = env.step(action)
    if done:
        observation = env.reset()

For more details on the simulation, as well as a fully worked out policy gradient example, see this notebook.

Strategic classification

Beyond settings typically studied in sequential decision making, WhyNot also supports experiments with standard supervised learning algorithms in dynamic settings. In this section, we show how to use WhyNot to study the performance of classifiers when individuals being classified behave strategically to improve their outcomes, a problem sometimes called strategic classification.

First, we set up the credit environment.

import whynot.gym as gym

env = gym.make('Credit-v0')
env.seed(1)

Observations in this environment correspond to a dataset, drawn from the Kaggle GiveMeSomeCredit dataset, of features for each individual and a label indicating whether they experienced financial distress.

dataset = env.reset()

Actions in the environment correspond to choosing a classifier to predict default. In response, individuals then strategically adapt their features in order to obtain a more favorable credit score. The subsequent observation is the adapted features, and the reward is the classifier's loss on this new distribution.

theta = env.action_space.sample() # Your classifier
dataset, loss, done, info = env.step(theta)

We can then experiment with the long-term equilibrium arising from repeatedly updating the classifier to cope with strategic response.

def learn_classifier(features, labels):
    # Replace with your learning algorithm
    return env.action_space.sample()

dataset = env.reset()
for _ in range(100):
    theta = learn_classifier(dataset["features"], dataset["labels"])
    dataset, loss, _, _ = env.step(theta)
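
As a concrete stand-in for the placeholder learn_classifier above, one could fit a logistic regression with scikit-learn. This is only a sketch under an assumption: that the action theta is a flat parameter vector (weights followed by an intercept) matching env.action_space; verify the exact parameterization against the Credit simulator documentation.

import numpy as np
from sklearn.linear_model import LogisticRegression

def learn_classifier(features, labels):
    # Fit a plain logistic regression on the current, strategically
    # adapted data distribution.
    model = LogisticRegression(max_iter=1000).fit(features, labels)
    # ASSUMPTION: the environment expects the flat vector [weights, bias].
    return np.append(model.coef_.ravel(), model.intercept_)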

For more details on the simulation and a complete example showing how standard retraining procedures perform in a strategic setting, see this notebook.

Beyond strategic classification, WhyNot also supports simulators and experiments evaluating other aspects of machine learning, e.g. fairness criteria, in dynamic settings.

For more examples and demonstrations of how to design and conduct experiments in each of these settings, check out usage and our collection of examples.

Simulators in WhyNot

WhyNot provides a large number of simulated environments from fields ranging from economics to epidemiology. Each simulator comes equipped with a representative set of causal inference experiments and exports a uniform Python interface that makes it easy to construct new causal inference experiments, as well as an OpenAI Gym interface for running reinforcement learning experiments.

The simulators in WhyNot currently include, among others, the World3 model, an HIV treatment simulator, opioid epidemic and Zika models, a civil violence model, the DICE economic model, and the credit and delayed impact simulators.

For a detailed overview of these simulators, please see the simulator documentation.

Using causal estimators in R

WhyNot ships with a small set of causal estimators written in pure Python. To access other estimators, please install the companion library whynot_estimators, which includes a host of state-of-the-art causal inference methods implemented in R.

To get the basic framework, run

pip install whynot_estimators

If you have R installed, you can install the causal_forest estimator by using

python -m whynot_estimators install causal_forest

To see all of the available estimators, run

python -m whynot_estimators show_all

See whynot_estimators for instructions on installing specific estimators, especially if you do not have an existing R build.

Frequently asked questions

1. Why is it called WhyNot?

Why not?

2. What are the intended use cases?

WhyNot supports multiple use cases, some technical, some pedagogical, each suited for a different group of users. We envision at least five primary use cases:

  • Developing: Researchers can use WhyNot in the process of developing new methods for causal inference and decision making in dynamic settings. WhyNot can serve as a substitute for ad-hoc synthetic data where needed, providing a greater set of challenging test cases.

  • Testing: Researchers can use WhyNot to design robustness checks for methods and gain insight into the failure cases of these methods.

  • Benchmarking: Practitioners can use WhyNot to compare multiple methods on the same set of tasks. WhyNot does not dictate any particular benchmark, but rather supports the community in creating useful benchmarks.

  • Learning: Students of causality and dynamic decision making might find WhyNot to be a helpful training resource. WhyNot is easy to use and does not require much prior experience to get started.

  • Teaching: Instructors can use WhyNot as a tool students engage with to learn and solve problems.

3. What uses are not intended?

  • Basis of real-world policy and interventions: The simulators included in WhyNot were selected because they offer realistic technical challenges for causal inference and dynamic decision making tools, not because they offer faithful models of the real world. In many cases, they have been contested or criticized as representations of the real world. For this reason, the simulators should not directly be used to design real-world interventions or policy.

  • Substitute for healthy debate: Success in simulated environments does not guarantee success in real scenarios, although failure in simulated environments can nonetheless yield insight into the weaknesses of a particular approach. WhyNot does not obviate the need for debate around common assumptions in causal inference.

  • Substitute for real world experiments and data: WhyNot does not substitute for high-quality empirical work on real data sets. WhyNot is a tool for understanding and evaluating methods for causal inference and decision making in dynamics, not certifying their validity in real-world scenarios.

  • Substitute for theory: WhyNot can help create understanding in contexts where theoretical analysis is challenging, but does not reduce the need for theoretical guarantees and formal analysis.

4. Why start from dynamical systems?

Dynamical systems provide a natural setting to study causal inference. The physical world is a dynamical system, and causal inference inevitably has to grapple with data generated from some dynamical process. Moreover, the temporal structure of the dynamics gives rise to nontrivial problem instances with both confounding and mediation. Dynamics also naturally lead to time-varying causal effects and allow for time-varying treatments and sequential decision making.

5. What simulators are included and why?

WhyNot contains a range of different simulators, and an overview is provided in the documentation here.

6. What’s the difference between WhyNot and CauseMe?

CauseMe is an online platform for benchmarking causal discovery methods. Users can register and evaluate causal discovery methods on an existing repository of data sets, or contribute their own data sets with known ground truth. CauseMe is an excellent platform that we recommend in addition to WhyNot. We encourage users to export data sets derived from WhyNot and make them accessible through CauseMe. In this case, we ask that you reference WhyNot.

7. What’s the difference between WhyNot and CausalML?

CausalML is a Python package that provides a range of causal inference methods. The estimators provided by CausalML are available in WhyNot via the whynot_estimators package. While WhyNot provides simulators and derived experimental designs on synthetic data, CausalML focuses on providing estimators. We made these estimators available for use on top of WhyNot.

8. What’s the difference between WhyNot and EconML?

EconML is a Python package that provides tools from machine learning and econometrics for causal inference. Like CausalML, EconML focuses on providing estimators, and we made these estimators available for use on top of WhyNot.

9. How can I best contribute to WhyNot?

Thanks so much for considering contributing to WhyNot. The package is open source and MIT licensed. We invite contributions in a number of areas, including new simulators, causal estimators, sequential decision making algorithms, documentation, performance improvements, code quality, and tests.

Citing WhyNot

If you use WhyNot for published work, we encourage you to cite the project. Please use the following BibTeX entry:

@software{miller2020whynot,
  author       = {John Miller and
                  Chloe Hsu and
                  Jordan Troutman and
                  Juan Perdomo and
                  Tijana Zrnic and
                  Lydia Liu and
                  Yu Sun and
                  Ludwig Schmidt and
                  Moritz Hardt},
  title        = {WhyNot},
  year         = 2020,
  publisher    = {Zenodo},
  doi          = {10.5281/zenodo.3875775},
  url          = {https://doi.org/10.5281/zenodo.3875775}
}


whynot's Issues

Creating multiple instances of one environment with different configurations

I'd like to create multiple instances of one environment (e.g., Credit) with different configurations (e.g., each with only a subset of the data).

How can I do that? It seems that gym.make doesn't support this, so even creating my own modified version of a simulator won't work (short of creating multiple sub-packages, one for each configuration I want).

Split requirements

Currently the requirements file contains the packages needed to build the docs, but not all of the packages necessary to install whynot.
I propose using the setup.py file to separate the different sets of packages:

    install_requires=[
        # core runtime dependencies of whynot
    ],
    extras_require={
        "test": [
            # packages needed only for the test suite
        ],
        "examples": [
            # packages needed only for the example notebooks
        ],
        "docs": [
            # packages needed only to build the documentation
        ],
    }

That will allow us to:

  1. delete the requirements.txt file, as it will no longer be needed
  2. let users choose which sets of packages they wish to install:
    pip install -e . will install whynot alone.
    pip install -e ".[test, docs]" will install both whynot and the packages necessary for tests and docs, for example.

conda package for whynot

whynot is almost eligible for a conda package. For that we need:

  • a versioning system built into GitHub (tagged releases)
  • the merge of conda-forge/staged-recipes#23105, which seems complicated? I thought it'd be a simple noarch package, but apparently something went wrong on the pyminiracer side when creating the sdist. Or something weird.

Tests broken in main

A few tests are broken. Here's a list:

================================== short test summary info ===================================
FAILED tests/test_delayed_impact.py::test_repayment_fcn - TypeError: Index.get_loc() got an unexpected keyword argument 'method'
FAILED tests/test_dynamics.py::test_basestate - AssertionError: assert 4 == 3
FAILED tests/test_simulators.py::test_dynamics_initial_state[delayed_impact] - TypeError: Index.get_loc() got an unexpected keyword argument 'method'
FAILED tests/test_simulators.py::test_dynamics_initial_state[credit] - IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed
FAILED tests/test_simulators.py::test_dynamics_initial_state[world3] - TypeError: 'JSObject' object is not iterable
FAILED tests/test_simulators.py::test_dynamics_intervention[delayed_impact] - TypeError: Index.get_loc() got an unexpected keyword argument 'method'
FAILED tests/test_simulators.py::test_dynamics_intervention[world3] - TypeError: 'JSObject' object is not iterable
FAILED tests/test_simulators.py::test_simulator_experiments[delayed_impact] - TypeError: Index.get_loc() got an unexpected keyword argument 'method'
FAILED tests/test_simulators.py::test_simulator_experiments[world3] - TypeError: 'JSObject' object is not iterable
FAILED tests/test_simulators.py::test_parallelize[delayed_impact] - TypeError: Index.get_loc() got an unexpected keyword argument 'method'
FAILED tests/test_simulators.py::test_parallelize[world3] - TypeError: 'JSObject' object is not iterable
FAILED tests/test_world3.py::test_set_state - TypeError: 'JSObject' object is not iterable
FAILED tests/gym_tests/test_envs.py::test_env[spec0] - IndexError: tuple index out of range
FAILED tests/gym_tests/test_envs.py::test_env[spec1] - TypeError: RandomNumberGenerator._generator_ctor() takes from 0 to 1 positional arguments...
FAILED tests/gym_tests/test_envs.py::test_env[spec2] - TypeError: RandomNumberGenerator._generator_ctor() takes from 0 to 1 positional arguments...
FAILED tests/gym_tests/test_envs.py::test_env[spec3] - TypeError: RandomNumberGenerator._generator_ctor() takes from 0 to 1 positional arguments...
FAILED tests/gym_tests/test_envs.py::test_env[spec4] - TypeError: RandomNumberGenerator._generator_ctor() takes from 0 to 1 positional arguments...
FAILED tests/gym_tests/test_envs.py::test_random_rollout[HIV-v0] - TypeError: RandomNumberGenerator._generator_ctor() takes from 0 to 1 positional arguments...
FAILED tests/gym_tests/test_envs.py::test_random_rollout[world3-v0] - TypeError: RandomNumberGenerator._generator_ctor() takes from 0 to 1 positional arguments...
FAILED tests/gym_tests/test_envs.py::test_random_rollout[opioid-v0] - TypeError: RandomNumberGenerator._generator_ctor() takes from 0 to 1 positional arguments...
FAILED tests/gym_tests/test_envs.py::test_config[HIV-v0] - TypeError: RandomNumberGenerator._generator_ctor() takes from 0 to 1 positional arguments...
FAILED tests/gym_tests/test_envs.py::test_config[world3-v0] - TypeError: RandomNumberGenerator._generator_ctor() takes from 0 to 1 positional arguments...
FAILED tests/gym_tests/test_envs.py::test_config[opioid-v0] - TypeError: RandomNumberGenerator._generator_ctor() takes from 0 to 1 positional arguments...
FAILED tests/gym_tests/test_envs.py::test_config[Zika-v0] - TypeError: RandomNumberGenerator._generator_ctor() takes from 0 to 1 positional arguments...
FAILED tests/gym_tests/test_envs.py::test_credit_config - IndexError: tuple index out of range
FAILED tests/gym_tests/test_envs.py::test_credit_initial_state - IndexError: tuple index out of range
FAILED tests/gym_tests/test_registration.py::test_make - TypeError: RandomNumberGenerator._generator_ctor() takes from 0 to 1 positional arguments...
FAILED tests/gym_tests/test_registration.py::test_malformed_lookup - AssertionError: Unexpected message: Malformed environment ID: “Breakout-v0”.(Currently al...
=================== 28 failed, 60 passed, 56 warnings in 573.65s (0:09:33) ===================
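
For reference, the summary above is from a standard pytest run; something like the following should reproduce it (assuming pytest is installed):

python -m pytest tests/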

Are these tests still relevant? Should we fix them? Did they ever run?

Potential Multicollinearity Issue with Credit Simulator

In the GiveMeSomeCredit dataset there are three features that are highly correlated with each other, with Pearson's r ~ 0.98 (NumberOfTime30-59DaysPastDueNotWorse, NumberOfTimes90DaysLate, NumberOfTime60-89DaysPastDueNotWorse). This multicollinearity might cause instability when fitting the logistic regression model. A possible mitigation is to drop these features and include only their sum; the model achieves the same accuracy on the dataset after this change.
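
A sketch of the proposed mitigation, assuming the Kaggle data has been downloaded locally and loaded into a pandas DataFrame (the file name below is the standard Kaggle export):

import pandas as pd

df = pd.read_csv("cs-training.csv")  # local copy of the GiveMeSomeCredit data

past_due = [
    "NumberOfTime30-59DaysPastDueNotWorse",
    "NumberOfTimes90DaysLate",
    "NumberOfTime60-89DaysPastDueNotWorse",
]
# Replace the three nearly collinear counts with a single aggregate feature.
df["TotalPastDue"] = df[past_due].sum(axis=1)
df = df.drop(columns=past_due)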

Versioning system

Hi all,

Great project!
I'd like to propose a versioning system so we can easily track changes, and so users can pin this package and have it working with everything else they use (as long as the other versions are pinned too).
Besides being good practice for open science, since it enables easy reproducibility, it's also a good software design choice and would allow whynot to be present in the conda ecosystem.
So, once we have this in place, I can create a conda package for it.

The system I'd like to propose is semantic versioning, so simply v1.0.0, if you all agree.

However, before releasing a first version, I'd like to merge the PR I'm writing that will fix the current issues with dependencies.

Gym registry issue prevents import of whynot

Hi,

I just started looking into this repo and ran into trouble when trying to import the package in Python. I set up a new virtual environment with only whynot and its dependencies installed (Python 3.8.14). Then, when running
import whynot as wn, I get the following error:

ImportError Traceback (most recent call last)
Cell In [2], line 4
1 import matplotlib.pyplot as plt
2 import numpy as np
----> 4 import whynot as wn
5 import whynot.gym as gym
7 import scripts.utils as utils

File ~/opt/miniconda3/envs/testenv/lib/python3.8/site-packages/whynot/__init__.py:5
3 __version__ = "0.12.0"
4 from whynot.algorithms import *
----> 5 from whynot.simulators import *
6 from whynot import causal_graphs, dynamics, framework
7 from whynot.dynamics import (
8 DynamicsExperiment,
9 Run,
10 )

File ~/opt/miniconda3/envs/testenv/lib/python3.8/site-packages/whynot/simulators/__init__.py:4
1 """Simulator initialization."""
3 from whynot.simulators import civil_violence
----> 4 from whynot.simulators import credit
5 from whynot.simulators import delayed_impact
6 from whynot.simulators import dice

File ~/opt/miniconda3/envs/testenv/lib/python3.8/site-packages/whynot/simulators/credit/__init__.py:15
3 from whynot.simulators.credit.simulator import (
4 agent_model,
5 Config,
(...)
10 State,
11 )
13 from whynot.simulators.credit.dataloader import CreditData
---> 15 from whynot.simulators.credit.environments import *
17 SUPPORTS_CAUSAL_GRAPHS = True

File ~/opt/miniconda3/envs/testenv/lib/python3.8/site-packages/whynot/simulators/credit/environments.py:4
1 """Interactive environment for the credit simulator."""
2 import numpy as np
----> 4 from whynot.gym import spaces
5 from whynot.gym.envs import ODEEnvBuilder, register
6 from whynot.simulators.credit import (
7 agent_model,
8 CreditData,
(...)
13 State,
14 )

File ~/opt/miniconda3/envs/testenv/lib/python3.8/site-packages/whynot/gym/__init__.py:12
9 from gym.core import Env
10 from gym import logger
---> 12 from whynot.gym.envs import make, spec, register
14 __all__ = ["Env", "make", "spec", "register"]

File ~/opt/miniconda3/envs/testenv/lib/python3.8/site-packages/whynot/gym/envs/__init__.py:3
1 """Environments based on WhyNot simulators."""
----> 3 from whynot.gym.envs.registration import registry, register, make, spec
4 from whynot.gym.envs.ode_env import ODEEnvBuilder

File ~/opt/miniconda3/envs/testenv/lib/python3.8/site-packages/whynot/gym/envs/registration.py:4
1 """Global registry of environments, for consistency with openai gym."""
2 import importlib
----> 4 from gym.envs.registration import EnvRegistry
6 # Keep for consistency with original API
7 # pylint:disable-msg=invalid-name
8 # Have a global registry
9 registry = EnvRegistry()

ImportError: cannot import name 'EnvRegistry' from 'gym.envs.registration' (/Users/.../opt/miniconda3/envs/testenv/lib/python3.8/site-packages/gym/envs/registration.py)

Because this seems to be related to gym (by default, version 0.26.2 got installed), I tried out different gym versions, and indeed with gym version 0.23.0 or lower the import works. So a quick fix could be to restrict the gym version. It seems that from version 0.24.0 onwards, the gym registry module/API changed, causing an issue when importing whynot.
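
One possible form of the pin, based on the versions reported above:

pip install "gym<=0.23.0"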

Having gym 0.23.0 installed allows me to run the performative prediction example, which is what I'm currently interested in. However, in the walkthrough example, an error still occurs when running the World3 simulator:

dataset = experiment.run(num_samples=200, seed=1234, show_progress=True)

RuntimeError: Native library not available at /Users/mgorecki/opt/miniconda3/envs/testenv/lib/python3.8/site-packages/py_mini_racer/libmini_racer.dylib

I couldn't resolve that so far.

Moving to gymnasium compatibility

I'm creating this issue to discuss my current ideas/progress on making whynot compatible with the latest version of gymnasium.

I'm thinking about erasing registration.py and instead just initializing the environment in the main file, like the examples in PettingZoo:

from tutorial3_action_masking import CustomEnvironment

from pettingzoo.test import parallel_api_test

if __name__ == "__main__":
    env = CustomEnvironment()
    parallel_api_test(env, num_cycles=1_000_000)

There's no longer support for registration, and it doesn't seem really necessary? But maybe I'm missing something and we should support registration ourselves?

Add graphs for causal inference

It seems like it should be pretty straightforward!
whynot already gives us the graph topology (wn.causal_graphs.build_dynamics_graph), so we just need to model it and use a graphics library to render it.
I think I have an idea of how to use graphviz for this, but I'm open to suggestions and to other people taking this on.
It's mostly a suggestion for what I perceive as low-hanging fruit.
