
facebookresearch / balance


The balance python package offers a simple workflow and methods for dealing with biased data samples when looking to infer from them to some target population of interest.

Home Page: https://import-balance.org

License: GNU General Public License v2.0


balance's Introduction

balance_logo_horizontal

balance: a python package for balancing biased data samples

balance is currently in beta and under active development. Follow us on github!

What is balance?

balance is a Python package offering a simple workflow and methods for dealing with biased data samples when looking to infer from them to some population of interest.

Biased samples often occur in survey statistics, when respondents exhibit non-response bias or the survey suffers from sampling bias (i.e., the data are not missing completely at random). A similar issue arises in observational studies when comparing treated vs. untreated groups, and generally in any data that suffers from selection bias.

Under the missing at random (MAR) assumption, bias in samples can sometimes be (at least partially) mitigated by relying on auxiliary information (a.k.a. “covariates” or “features”) that is present for all items in the sample, as well as in a sample of items from the target population. For example, if we want to infer from a sample of respondents to some survey, we may wish to adjust for non-response using demographic information such as age, gender, and education. This can be done by weighting the sample to the population using the auxiliary information.
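As a toy numeric illustration of this idea (hypothetical numbers, plain NumPy rather than the package API): if young respondents are under-represented in the sample, weighting them up by their population share recovers a better population estimate.

```python
import numpy as np

# Hypothetical sample: 2 young and 8 old respondents, while the
# population is split 50/50. "outcome" is some survey response.
outcome = np.array([10, 12, 30, 31, 29, 32, 30, 28, 31, 30], dtype=float)
is_young = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0], dtype=bool)

# The naive (unweighted) mean is dominated by the over-represented group.
naive = outcome.mean()

# Weight each respondent by (population share) / (sample share) of its group.
w = np.where(is_young, 0.5 / 0.2, 0.5 / 0.8)
weighted = np.average(outcome, weights=w)

print(round(naive, 2), round(weighted, 2))  # the weighted mean moves toward the population mean
```

Here the weighted mean equals the equally weighted average of the two group means, which is what we would get from a full-population survey under MAR.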

The package is intended for researchers who are interested in balancing biased samples, such as those coming from surveys. This need may arise for survey methodologists, demographers, UX researchers, market researchers, and generally data scientists, statisticians, and machine-learning practitioners.

More about the methodological background can be found in Sarig, T., Galili, T., & Eilat, R. (2023). balance – a Python package for balancing biased data samples.

Installation

Requirements

You need Python 3.8 or later to run balance. balance can be built and run from Linux, OSX, and Windows (NOTE: method="ipw" is currently not supported on Windows).

The required Python dependencies are:

REQUIRES = [
    "numpy",
    "pandas<=1.4.3",
    "ipython",
    "scipy<=1.9.2",
    "patsy",
    "seaborn<=0.11.1",
    "plotly",
    "matplotlib",
    "statsmodels",
    "scikit-learn",
    "ipfn",
    "session-info",
]

Note that glmnet_python must be installed from its GitHub source (see below).

See setup.py for more details.

Installing balance

As a prerequisite, you must install glmnet_python from source:

python -m pip install git+https://github.com/bbalasub1/glmnet_python.git@1.0

Installing via PyPI

We recommend installing balance from PyPI via pip for the latest stable version:

python -m pip install balance

Installation will use Python wheels from PyPI, available for OSX, Linux, and Windows.

Installing from Source/Git

You can install the latest (bleeding edge) version from Git:

python -m pip install git+https://github.com/facebookresearch/balance.git

Alternatively, if you have a local clone of the repo:

cd balance
python -m pip install .

Getting started

balance’s workflow at a high level

The core workflow in balance deals with fitting and evaluating weights to a sample. For each unit in the sample (such as a respondent to a survey), balance fits a weight that can be (loosely) interpreted as the number of people from the target population that this respondent represents. This aims to help mitigate the coverage and non-response biases, as illustrated in the following figure.

total_survey_error_img

The weighting of survey data through balance is done in the following main steps:

  1. Load the data of the survey's respondents.
  2. Load data about the target population we would like to correct for.
  3. Run diagnostics on the sample covariates to evaluate whether weighting is needed.
  4. Adjust the sample to the target.
  5. Evaluate the results.
  6. Use the weights to produce population-level estimates.
  7. Save the output weights.

You can see a step-by-step description (with code) of the above steps in the General Framework page.

Code example of using balance

You may run the following code to play with balance's basic workflow (these are snippets taken from the quickstart tutorial):

We start by loading data, and adjusting it:

from balance import load_data, Sample

# load simulated example data
target_df, sample_df = load_data()

# Import sample and target data into a Sample object
sample = Sample.from_frame(sample_df, outcome_columns=["happiness"])
target = Sample.from_frame(target_df)

# Set the target to be the target of sample
sample_with_target = sample.set_target(target)

# Check basic diagnostics of sample vs target before adjusting:
# sample_with_target.covars().plot()

You can read more on evaluation of the pre-adjusted data in the Pre-Adjustment Diagnostics page.

Next, we adjust the sample to the population by fitting balancing survey weights:

# Using ipw to fit survey weights
adjusted = sample_with_target.adjust()

You can read more on adjustment process in the Adjusting Sample to Population page.

The above code gets us an adjusted object with weights. We can evaluate the benefit of the weights to the covariate balance, for example by running:

print(adjusted.summary())
    # Covar ASMD reduction: 62.3%, design effect: 2.249
    # Covar ASMD (7 variables):0.335 -> 0.126
    # Model performance: Model proportion deviance explained: 0.174

adjusted.covars().plot(library = "seaborn", dist_type = "kde")


We can also check the impact of the weights on the outcome using:

# For the outcome:
print(adjusted.outcomes().summary())
    # 1 outcomes: ['happiness']
    # Mean outcomes:
    #             happiness
    # source
    # self        54.221388
    # unadjusted  48.392784
    #
    # Response rates (relative to number of respondents in sample):
    #    happiness
    # n     1000.0
    # %      100.0
adjusted.outcomes().plot()

You can read more on evaluation of the post-adjusted data in the Evaluating and using the adjustment weights page.
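For intuition, what the outcome summary reports boils down to weighted vs. unweighted means of the outcome column; a toy pandas sketch of that computation (hypothetical data, not the package internals):

```python
import numpy as np
import pandas as pd

# Hypothetical respondents with an outcome and fitted balancing weights.
df = pd.DataFrame({
    "happiness": [40.0, 50.0, 60.0, 55.0],
    "weight": [0.5, 1.0, 2.0, 0.5],
})

# Unadjusted estimate: plain mean; adjusted estimate: weighted mean.
unadjusted = df["happiness"].mean()
adjusted = np.average(df["happiness"], weights=df["weight"])
print(f"unadjusted={unadjusted:.2f} adjusted={adjusted:.2f}")
```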

Finally, the adjusted data can be downloaded using:

adjusted.to_download()  # Or:
# adjusted.to_csv()

To see a more detailed step-by-step code example with code output prints and plots (both static and interactive), please go over to the tutorials section.

Implemented methods for adjustments

balance currently implements various adjustment methods. Click the links to learn more about each:

  1. Inverse propensity weighting (ipw): logistic regression using L1 (LASSO) penalization.
  2. Covariate Balancing Propensity Score (cbps).
  3. Post-stratification.
  4. Raking (rake).
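Raking, for example, can be illustrated in a few lines of NumPy. This is a toy sketch of iterative proportional fitting on a 2x2 contingency table, not balance's implementation (which builds on the ipfn dependency):

```python
import numpy as np

# Sample counts for two binary covariates, and known target margins.
cell = np.array([[20.0, 30.0],
                 [25.0, 25.0]])          # sample counts
row_target = np.array([40.0, 60.0])      # target margin for covariate A
col_target = np.array([50.0, 50.0])      # target margin for covariate B

# Alternately rescale rows and columns until both margins match.
for _ in range(100):
    cell *= (row_target / cell.sum(axis=1))[:, None]
    cell *= (col_target / cell.sum(axis=0))[None, :]

print(np.round(cell.sum(axis=1), 3), np.round(cell.sum(axis=0), 3))
```

Each unit's raking weight is then the ratio of its adjusted cell value to the original sample count of that cell.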

Implemented methods for diagnostics/evaluation

For diagnostics, the main tools (comparing the sample before and after applying weights against the target population) are:

  1. Plots
    1. barplots
    2. density plots (for weights and covariates)
    3. qq-plots
  2. Statistical summaries
    1. Weights distributions
      1. Kish’s design effect
      2. Main summaries (mean, median, variances, quantiles)
    2. Covariate distributions
      1. Absolute Standardized Mean Difference (ASMD). For continuous variables this is Cohen's d. Categorical variables are one-hot encoded, Cohen's d is calculated for each category, and the ASMD of a categorical variable is defined as the average of Cohen's d across all its categories.
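Both Kish's design effect and the ASMD are simple to compute; a minimal sketch under common conventions (balance's exact scaling choices, e.g. which standard deviation is used in the denominator, may differ):

```python
import numpy as np

def kish_deff(w):
    """Kish's design effect: n * sum(w^2) / (sum(w))^2; 1.0 means equal weights."""
    w = np.asarray(w, dtype=float)
    return len(w) * (w ** 2).sum() / w.sum() ** 2

def asmd(sample_x, target_x):
    """Absolute standardized mean difference, scaled here by the target std."""
    return abs(np.mean(sample_x) - np.mean(target_x)) / np.std(target_x)

weights = [1.0, 1.0, 2.0, 4.0]
print(kish_deff(weights))
print(asmd([1, 2, 3], [2, 3, 4]))
```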

You can read more on evaluation of the post-adjusted data in the Evaluating and using the adjustment weights page.

Other resources

More details

Getting help, submitting bug reports and contributing code

You are welcome to:

Citing balance

Sarig, T., Galili, T., & Eilat, R. (2023). balance – a Python package for balancing biased data samples. https://arxiv.org/abs/2307.06024

BibTeX:

@misc{sarig2023balance,
      title={balance - a Python package for balancing biased data samples},
      author={Tal Sarig and Tal Galili and Roee Eilat},
      year={2023},
      eprint={2307.06024},
      archivePrefix={arXiv},
      primaryClass={stat.CO}
}

License

The balance package is licensed under the GPLv2 license, and all the documentation on the site is under CC-BY.

News

You can follow updates on our:

Acknowledgements / People

The balance package is actively maintained by people from the Core Data Science team (in Tel Aviv and Boston), by Tal Sarig, Tal Galili and Steve Mandala.

The balance package was (and is) developed by many people, including: Roee Eilat, Tal Galili, Daniel Haimovich, Kevin Liou, Steve Mandala, Adam Obeng (author of the initial internal Meta version), Tal Sarig, Luke Sonnet, Sean Taylor, Barak Yair Reif, and others. If you worked on balance in the past, please email us to be added to this list.

The balance package was open-sourced by Tal Sarig, Tal Galili and Steve Mandala in late 2022.

Branding created by Dana Beaty, from the Meta AI Design and Marketing Team. For logo files, see here.

balance's People

Contributors

ahakso, amyreese, antonk52, dependabot[bot], dhirschfeld, eltociear, facebook-github-bot, igorsugak, jknoxville, sarigt, talgalili, zbraiterman


balance's Issues

[BUG] simulation data of target is not the same as sample

This makes the comparison of happiness after weighting wrong.
The purpose of having the outcome in the target population was to show how adjustment gets us closer to it.

This:
https://github.com/facebookresearch/balance/blob/main/balance/datasets/__init__.py#L62
Should be made to be like this:
https://github.com/facebookresearch/balance/blob/main/balance/datasets/__init__.py#L89

After the change, the seed would change a bit - so tests would need to be fixed.

Migrate license from GPL2 to MIT

This issue is for anyone interested in tracking the task of changing the license of the balance package.

If you have direct value in us making this change, please leave a comment with your use-case. Such comments would help us prioritize this work.

[BUG] libgfortran.so.3: cannot open shared object file when running sample_with_target.adjust(max_de=None)

Describe the bug

I got OSError: libgfortran.so.3: cannot open shared object file: No such file or directory when I ran sample_with_target.adjust(max_de=None)

Session information

Please paste here the output of running the following in your notebook/terminal:
Already satisfied all the requirement in the overview pages and installed glmnet_python and balance using the sample code

OSError                                   Traceback (most recent call last)
OSError: libgfortran.so.3: cannot open shared object file: No such file or directory
# Sessions info python 3.8.16

Screenshots


Reproducible example

Please provide us with (any that apply):

  1. Code: I ran this code from the tutorial:
    # Using ipw to fit survey weights
    adjusted = sample_with_target.adjust(max_de=None)
  2. Reference: https://import-balance.org/docs/docs/overview/#code-example-of-using-balance

Additional context

Add any other context about the problem here that might help us solve it.

[FEATURE] Changing the way Sample takes a strategy for adjust

The existing implementation of the adjust function in the literal module, which relies on a separate mapper to import the appropriate function, complicates the process of adding new strategies. To address this issue, I propose introducing a new function signature that clearly indicates the minimal arguments for the Sample object and specifies the return type as TypedDict. Here's an example of how it could be implemented:

e.g:

return_func_sig = TypedDict(
    "return_func_sig",
    {
        "weight": pd.DataFrame,
        "model": Dict[str, Any],  # e.g. {"method": str, "X_matrix_columns": List[str], ...}
    },
)

Callable[[pd.DataFrame, pd.DataFrame, ...], return_func_sig]

It should be possible to pass this callable as a strategy to adjust function and it would call this function instead.
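A runnable sketch of such a pluggable strategy (the uniform-weights logic is only a placeholder, and the field names follow the proposal above, not an existing balance API):

```python
from typing import Any, Dict, TypedDict

import pandas as pd

class ReturnFuncSig(TypedDict):
    """Minimal version of the proposed return signature."""
    weight: pd.DataFrame
    model: Dict[str, Any]

def uniform_strategy(sample_df: pd.DataFrame, target_df: pd.DataFrame) -> ReturnFuncSig:
    # Placeholder logic: every sample unit gets the same weight.
    # A real strategy would use target_df to fit balancing weights.
    return {
        "weight": pd.DataFrame({"weight": [1.0] * len(sample_df)}, index=sample_df.index),
        "model": {"method": "uniform", "X_matrix_columns": list(sample_df.columns)},
    }

# A hypothetical Sample.adjust(strategy=...) hook could then call this callable.
out = uniform_strategy(pd.DataFrame({"x": [1, 2, 3]}), pd.DataFrame({"x": [1]}))
print(out["model"]["method"])
```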

Incompatible with Python 3.11

I'm unable to pip install this package in a clean environment with Python 3.11 installed on my machine. The problem seems to be the dependency on scipy<=1.8.1. The maximum version (1.8.1) is only compatible with Python versions 3.8 - 3.10.

Scipy v1.6.1 is not flagged as incompatible with Python 3.11, and is the version picked automatically when I pip install balance. However, this fails with the error, metadata-generation-failed - which also seems to be a versioning issue. I suspect all versions of scipy prior to 1.9.0 are incompatible with Python 3.11.

Request: can you make the package compatible with scipy v1.10.0 please? (which installs in Python 3.11 with no problems)

Failing that, can anyone suggest any workarounds?

[BUG] rake doesn't support trimming - but also doesn't indicate it to the user

This is violating user expectation.

E.g.:

weights_untrimmed = sample.adjust(
    variables=weighting_variables,
    method="rake",
    weight_trimming_mean_ratio=0,
    transformations=auto_recodes,
)

By expectation violation I mean,

  1. there isn’t anything in the docs that suggests that rake doesn’t support weight_trimming_mean_ratio or weight_trimming_percentile
  2. there isn’t an error thrown when a value is passed to either weight_trimming_mean_ratio or weight_trimming_percentile suggesting that rake doesn’t support it.
  3. this can result people thinking the weights are getting trimmed when they aren’t.

(Reported by David Lovis-McMahon)
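For reference, the semantics users expect from weight_trimming_mean_ratio is roughly "clip each weight from above at ratio times the mean weight"; a hedged sketch of that expected behavior (not the package code, which may also renormalize the trimmed weights):

```python
import numpy as np

def trim_weights_mean_ratio(weights, mean_ratio):
    """Clip weights at mean_ratio * mean(weights).

    Sketch of the expected semantics of `weight_trimming_mean_ratio`;
    the actual balance implementation may differ in details.
    """
    w = np.asarray(weights, dtype=float)
    return np.minimum(w, mean_ratio * w.mean())

# One extreme weight gets capped at 2 * mean([1, 1, 1, 9]) = 6.
trimmed = trim_weights_mean_ratio([1.0, 1.0, 1.0, 9.0], mean_ratio=2.0)
print(trimmed)
```

With rake, the report above suggests this clipping simply never happens, so surfacing a warning (or error) would make the behavior explicit.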

module 'balance.adjustment' has no attribute 'apply_transformations'

Just trying the CBPS sample notebook and ran up this error.

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-58-ee26876ffb7a> in <module>
----> 1 adjusted_cbps = sample_with_target.adjust(method = "cbps")

1 frames
/usr/local/lib/python3.8/dist-packages/balance/weighting_methods/cbps.py in cbps(sample_df, sample_weights, target_df, target_weights, variables, transformations, na_action, formula, balance_classes, cbps_method, max_de, opt_method, opt_opts, weight_trimming_mean_ratio, weight_trimming_percentile, random_seed, *args, **kwargs)
    434     # should be transformed with the *identity function*,
    435     # otherwise will be dropped from the model
--> 436     sample_df, target_df = balance_adjustment.apply_transformations(
    437         (sample_df, target_df), transformations=transformations
    438     )

AttributeError: module 'balance.adjustment' has no attribute 'apply_transformations'

[FEATURE] Migrate from glmnet_python to sklearn

Currently the usage of glmnet_python in the ipw function is a blocker to:

  1. Using ipw on Windows (due to the installation of glmnet_python) (issue: #26)
  2. Migrating the license from GPL2 to MIT (issue: #16)
  3. Shorter installation (no need to install glmnet)

Current plan is to move from glmnet_python to sklearn during May-June 2023. This issue is to follow on progress.

Can't use `method = "cbps"` (windows, Python 3.10)

And yet another - I'm so sorry...

Also just trying to complete the quick start example in a Python 3.10 environment, I just tried switching the method, so running this line instead:

adjusted_cbps = sample_with_target.adjust(method = "cbps")

This fails with the following error:

AttributeError: module 'balance.adjustment' has no attribute 'apply_transformations'

Sorry I have no guesses as to what's causing this one.

When using the RStudio IDE to run Python: Seaborn plots not working

Hi guys,

As reported before, the Seaborn-based plots don't seem to be working in a Python 3.10 environment on my Windows machine e.g. this line from the quick start: adjusted.covars().plot(library = "seaborn", dist_type = "kde"). It doesn't throw an error, just doesn't output any plots.

Here is my session info as requested by @talgalili - hope it helps!

-----
balance             0.3.0
pandas              1.4.3
session_info        1.0.0
-----
IPython             8.9.0
PIL                 9.4.0
asttokens           NA
backcall            0.2.0
beta_ufunc          NA
binom_ufunc         NA
coxnet              NA
cvcompute           NA
cvelnet             NA
cvfishnet           NA
cvglmnet            NA
cvglmnetCoef        NA
cvglmnetPredict     NA
cvlognet            NA
cvmrelnet           NA
cvmultnet           NA
cycler              0.10.0
cython_runtime      NA
dateutil            2.8.2
decorator           5.1.1
elnet               NA
executing           1.2.0
fishnet             NA
glmnet              NA
glmnetCoef          NA
glmnetControl       NA
glmnetPredict       NA
glmnetSet           NA
glmnet_python       NA
hypergeom_ufunc     NA
jedi                0.18.2
joblib              1.2.0
kiwisolver          1.4.4
loadGlmLib          NA
lognet              NA
matplotlib          3.6.3
mpl_toolkits        NA
mrelnet             NA
nbinom_ufunc        NA
nt                  NA
numpy               1.24.1
packaging           23.0
parso               0.8.3
patsy               0.5.3
pickleshare         0.7.5
pkg_resources       NA
plotly              5.13.0
prompt_toolkit      3.0.36
pure_eval           0.2.2
pygments            2.14.0
pyparsing           3.0.9
pytz                2022.7.1
rpycall             NA
rpytools            NA
scipy               1.8.1
seaborn             0.11.1
setuptools          65.6.3
six                 1.16.0
sklearn             1.2.1
stack_data          0.6.2
statsmodels         0.13.5
tenacity            NA
threadpoolctl       3.1.0
traitlets           5.9.0
wcwidth             0.2.6
wtmean              NA
-----
Python 3.10.9 | packaged by conda-forge | (main, Jan 11 2023, 15:15:40) [MSC v.1916 64 bit (AMD64)]

[FEATURE] Raise error (or warning?!) when providing weights with 'None'

Not sure yet if it's better to raise an error, or a warning (and impute it with 0s). But let's go with an error.
But at least it's worth printing the userids with None weights (or at least the "head" of them, maybe with some of their features as well).

A good place to do this is by adding the check here:
https://github.com/facebookresearch/balance/blob/main/balance/sample_class.py#L268

And add tests that verify it works here:
https://github.com/facebookresearch/balance/blob/main/tests/test_sample.py

[FEATURE] Import the Empirical Calibration package to `adjust`

The Empirical Calibration package, developed by Google, provides a method to compute empirical calibration weights using convex optimization. This approach balances out the marginal distribution of covariates directly while reducing the inflation of variance. This is similar to performing raking while trying to keep the weights to be as equal as possible. It offers a bias correction solution that resembles the raking and CBPS methods that are implemented in the balance package.

It might be worth importing it into balance in the future.

Reference

Title: A Python Library For Empirical Calibration
Authors: Xiaojing Wang, Jingang Miao, Yunting Sun
Year: 2019
Journal: arXiv preprint arXiv:1906.11920
URL: https://doi.org/10.48550/arXiv.1906.11920

[BUG] method = 'rake' return AttributeError

Describe the bug

The same code runs with no error with method="ipw" and method="cbps", but returns the error below when using raking.
The code below returns the error:

sample_with_target.adjust(method = "rake", variables = variables)
    # ---> table_current.loc[feature, weight_col]
    # AttributeError: 'numpy.int64' object has no attribute 'loc'

### Update on 2023/03/08 ###
This bug occurs because some of the bins that appear in the sample never appear in the target.
Once I added the sample to the target to make sure all bins appear in the target, the bug disappears.

Session information

Please paste here the output of running the following in your notebook/terminal:

# Sessions info
import session_info
session_info.show(html=False, dependencies=True)

balance 0.9.1
balance_functions NA
boto3 1.28.28
dateutil 2.8.2
matplotlib 3.7.2
numpy 1.24.4
pandas 1.4.3
psutil 5.9.5
seaborn 0.12.2
session_info 1.0.0
tqdm 4.65.0

OpenSSL 23.2.0
PIL 10.0.0
anyio NA
arrow 1.2.3
asttokens NA
attr 23.1.0
attrs 23.1.0
babel 2.12.1
backcall 0.2.0
beta_ufunc NA
binom_ufunc NA
botocore 1.31.28
brotli NA
certifi 2023.05.07
cffi 1.15.1
charset_normalizer 3.2.0
cloudpickle 2.2.1
colorama 0.4.4
comm 0.1.3
coxnet NA
cryptography 41.0.2
cvcompute NA
cvelnet NA
cvfishnet NA
cvglmnet NA
cvglmnetCoef NA
cvglmnetPredict NA
cvlognet NA
cvmrelnet NA
cvmultnet NA
cycler 0.10.0
cython_runtime NA
debugpy 1.6.7
decorator 5.1.1
defusedxml 0.7.1
elnet NA
executing 1.2.0
fastjsonschema NA
fishnet NA
fqdn NA
fsspec 2023.6.0
glmnet NA
glmnetCoef NA
glmnetControl NA
glmnetPredict NA
glmnetSet NA
glmnet_python NA
google NA
hypergeom_ufunc NA
idna 3.4
ipfn NA
ipykernel 6.24.0
ipython_genutils 0.2.0
ipywidgets 8.0.7
isoduration NA
jedi 0.18.2
jinja2 3.1.2
jmespath 1.0.1
joblib 1.3.1
json5 NA
jsonpointer 2.4
jsonschema 4.18.4
jsonschema_specifications NA
jupyter_events 0.6.3
jupyter_server 2.7.0
jupyterlab_server 2.23.0
kiwisolver 1.4.4
loadGlmLib NA
lognet NA
markupsafe 2.1.3
matplotlib_inline 0.1.6
mpl_toolkits NA
mrelnet NA
nbformat 5.9.1
nbinom_ufunc NA
ncf_ufunc NA
overrides NA
packaging 21.3
parso 0.8.3
patsy 0.5.3
pexpect 4.8.0
pickleshare 0.7.5
pkg_resources NA
platformdirs 3.9.1
plotly 5.15.0
prometheus_client NA
prompt_toolkit 3.0.39
ptyprocess 0.7.0
pure_eval 0.2.2
pyarrow 12.0.1
pydev_ipython NA
pydevconsole NA
pydevd 2.9.5
pydevd_file_utils NA
pydevd_plugins NA
pydevd_tracing NA
pygments 2.15.1
pyparsing 3.0.9
pythonjsonlogger NA
pytz 2023.3
referencing NA
requests 2.31.0
rfc3339_validator 0.1.4
rfc3986_validator 0.1.1
rpds NA
s3fs 0.4.2
scipy 1.9.1
send2trash NA
six 1.16.0
sklearn 1.3.0
sniffio 1.3.0
socks 1.7.1
stack_data 0.6.2
statsmodels 0.14.0
tenacity NA
threadpoolctl 3.2.0
tornado 6.3.2
traitlets 5.9.0
typing_extensions NA
uri_template NA
urllib3 1.26.14
wcwidth 0.2.6
webcolors 1.13
websocket 1.6.1
wtmean NA
yaml 6.0
zmq 25.1.0

IPython 8.14.0
jupyter_client 8.3.0
jupyter_core 5.3.1
jupyterlab 4.0.3
notebook 6.5.4

Python 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:40:32) [GCC 12.3.0]
Linux-5.10.209-198.812.amzn2.x86_64-x86_64-with-glibc2.26

Session information updated at 2024-03-05 04:21

Screenshots

If applicable, add screenshots to help explain your problem.

Reproducible example

Please provide us with (any that apply):

  1. Code: code we can run to reproduce the issue (in terminal or python notebook)
    sample = Sample.from_frame(sample_df2[:50])
    target = Sample.from_frame(target_df2[:500])
    sample_with_target = sample.set_target(target)
    adjusted_ads_weight = sample_with_target.adjust(method = "rake", variables = variables_subset2)

    sample_df2 and target_df2 are dataframes with two numerical columns.

  2. Reference: If the issue is in a tutorial, please provide the link to it, and the exact place in which the code fails.

Additional context

Add any other context about the problem here that might help us solve it.

Page edit not working

Hi team! Here is your first issue:

On trying to edit any wiki page it redirects to non existing page.


[FEATURE] Weight trimming to match a specific design effect

Regarding balance/graviton (once we deal with the glmnet->sklearn transition).

How about we just move to having max_de be based on weight trimming only (instead of shrinkage in the LASSO lambda stage)?
And if the results using weight trimming (for some max_de), are tolerable (similar to existing solution), then transitioning to it / maintaining it should be easier than the current CV solution.

TBD...
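The idea could be sketched as a simple search over trimming ratios until Kish's design effect falls below max_de (illustrative only; the function name, the ratio grid, and the fallback behavior are all made up here, not the planned implementation):

```python
import numpy as np

def kish_deff(w):
    """Kish's design effect: n * sum(w^2) / (sum(w))^2."""
    w = np.asarray(w, dtype=float)
    return len(w) * (w ** 2).sum() / w.sum() ** 2

def trim_to_max_de(weights, max_de, ratios=np.linspace(10, 1, 91)):
    """Clip weights at the loosest mean-ratio whose design effect is <= max_de.

    Falls back to the smallest ratio tried if max_de is unattainable.
    """
    w = np.asarray(weights, dtype=float)
    trimmed = w
    for r in ratios:  # from loose to aggressive trimming
        trimmed = np.minimum(w, r * w.mean())
        if kish_deff(trimmed) <= max_de:
            break
    return trimmed

w = np.array([1.0, 1.0, 1.0, 1.0, 16.0])
print(kish_deff(w), kish_deff(trim_to_max_de(w, max_de=2.1)))
```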

[BUG]

Running the following code line

from balance import Sample

results in the following error message:

ModuleNotFoundError: No module named 'cvglmnet'

Installation of the required module is not possible on WSL 2 despite running it with pip (see below)

ERROR: Could not find a version that satisfies the requirement cvglmnet (from versions: none)
ERROR: No matching distribution found for cvglmnet

The module does not seem to exist anywhere.

Can't use `method = ipw` in Windows

Hi guys,

I'm so sorry to raise another issue, but now I have balance successfully loaded in a Python 3.10 environment, I'm trying to run through the quick start example. I can load the datasets and set targets fine, but this line:

adjusted = sample_with_target.adjust(max_de=None)

fails with the error:

ValueError: loadGlmlib does not currently work for windows

Judging by your docs on the method it's probably something to do with glmnet-python which I installed from its GitHub repo.

Treatment in balance library for sample and target population

I have one question regarding balance. You are using ipw as one of the methods to balance the covariates of the sample dataset. Are we considering treatment 0 for the sample df and treatment 1 for the target df, and then running the weighting algorithm on top of the combined dataset (concat(sample_df, target_df))?
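The formulation described in the question can be sketched directly. This is a conceptual illustration with a single categorical covariate, where the propensity model reduces to per-cell frequencies; balance's actual ipw fits a regularized logistic regression instead:

```python
import pandas as pd

# Concatenate sample (label 0) and target (label 1), as the question describes.
sample = pd.DataFrame({"age_group": ["young"] * 2 + ["old"] * 8})
target = pd.DataFrame({"age_group": ["young"] * 5 + ["old"] * 5})
both = pd.concat([sample.assign(is_target=0), target.assign(is_target=1)])

# With one categorical covariate, p(target | x) is just the cell frequency.
p = both.groupby("age_group")["is_target"].mean()

# Inverse-propensity weight for each sample unit: p / (1 - p).
weights = sample["age_group"].map(p / (1 - p))
print(weights.groupby(sample["age_group"]).first())
```

Under-represented cells (here "young") get weights above 1, pulling the weighted sample toward the target composition.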
