
pzivich / delicatessen


Delicatessen: the Python one-stop sandwich (variance) shop 🥪

Home Page: https://deli.readthedocs.io/en/latest/index.html

License: MIT License

Python 100.00%
estimating-equations estimating-functions m-estimation m-estimator m-estimators python robust-statistics sandwich statistics

delicatessen's Introduction

I am an epidemiologist primarily working in infectious diseases, causal inference, statistics, and open-source software. You can find more on my personal website: https://pzivich.github.io/

Paul's GitHub stats

delicatessen's People

Contributors

mwklose, pzivich


Forkers

ehsanx, jeannyww

delicatessen's Issues

Add docs on developing wrapper functions

Is your feature request related to a problem? Please describe.
Documentation could be improved by adding details on writing wrapper functions. I do this myself, but it would be helpful for others who want to use delicatessen without always having to interact with the estimating equations directly.
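As a rough illustration of what such a page might show, here is a minimal wrapper sketch (the function name and its return values are hypothetical, not part of delicatessen):

import numpy as np
from delicatessen import MEstimator
from delicatessen.estimating_equations import ee_regression

def fit_linear_regression(X, y):
    # Hypothetical wrapper: hides psi and MEstimator from the user entirely
    X, y = np.asarray(X), np.asarray(y)

    def psi(theta):
        return ee_regression(theta=theta, X=X, y=y, model='linear')

    estr = MEstimator(psi, init=[0., ]*X.shape[1])
    estr.estimate()
    return estr.theta, estr.variance    # point estimates and sandwich covariance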

Describe the solution you'd like
A new page on RTD that provides examples and advice

Describe alternatives you've considered
None

Additional context
None

Accelerated Failure Time Models

Is your feature request related to a problem? Please describe.
Right now, only the Weibull accelerated failure time model is supported. If we implement the generalized gamma model, then various other distributions follow automatically (Weibull, exponential, gamma, inverse-Weibull, inverse-gamma). This could replace much of the survival.py estimating equations.
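For reference, in one common (Stacy-type) parameterization the generalized gamma density can be written as (the symbols below are illustrative, not a proposed API):

$$f(t; \theta, \beta, k) = \frac{|\beta| \; t^{\beta k - 1}}{\theta^{\beta k} \; \Gamma(k)} \exp\left\{ -\left(\frac{t}{\theta}\right)^{\beta} \right\}, \quad t > 0$$

with the Weibull arising at $k = 1$, the gamma at $\beta = 1$, the exponential at $\beta = k = 1$, and the inverse-gamma and inverse-Weibull arising when $\beta < 0$ (e.g., $\beta = -1$, and $k = 1$ with $\beta < 0$, respectively).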

Describe the solution you'd like
Implement the generalized-gamma AFT model.

Describe alternatives you've considered
Bespoke estimating equations, which would be burdensome to implement and upkeep.

Additional context
Some references

https://www.sciencedirect.com/science/article/abs/pii/S0167947316302390

https://link.springer.com/article/10.1007/s00362-022-01300-4

https://univ-pau.hal.science/hal-02953269/

https://www.hindawi.com/journals/mpe/2011/150294/

https://lifelines.readthedocs.io/en/latest/fitters/regression/GeneralizedGammaRegressionFitter.html

Compute covariance without calling `MEstimator`

Is your feature request related to a problem? Please describe.

A nice addition would be to allow computation of the covariance matrix given an estimating equation and the parameter values. Essentially, bypass the root-finding procedure and just use delicatessen to compute the variance.

Describe the solution you'd like

A separate function that just computes the covariance given an input estimating equation and theta values.
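A minimal sketch of what such a function could look like, assuming psi follows the usual convention of returning a v-by-n array of estimating function contributions and that SciPy >= 1.9 is available (the function name and arguments here are hypothetical):

import numpy as np
from scipy.optimize import approx_fprime

def sandwich_covariance(psi, theta):
    # Evaluate the estimating functions at the provided theta (no root-finding)
    theta = np.asarray(theta, dtype=float)
    ef = np.asarray(psi(theta))                       # v-by-n array
    n = ef.shape[1]

    # Bread: minus the derivative of the summed estimating functions, scaled by n
    sum_ef = lambda t: np.sum(np.asarray(psi(t)), axis=1)
    bread = -approx_fprime(theta, sum_ef) / n

    # Meat: empirical outer product of the estimating functions, scaled by n
    meat = ef @ ef.T / n

    # Sandwich variance for theta
    bread_invert = np.linalg.inv(bread)
    return bread_invert @ meat @ bread_invert.T / n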

Describe alternatives you've considered

Run the root-finding procedure. But this is not ideal for running M-estimators within M-estimators.

Additional context

None

Fast Jacobian & SciPy version

Is your feature request related to a problem? Please describe.
The current approach to evaluating the bread matrix is very slow. I want to use scipy.optimize.approx_fprime to get the Jacobian of the estimating equations going forward. This is much faster than my current approach to getting the bread. This becomes important when the estimating equations have lots of parameters (the differences are minor for <20 parameters). Regardless, not making the switch leaves the code much slower than it needs to be.

However, this new feature in SciPy requires v1.9.0+. This means future versions will drop support for Python versions <3.8, and the dependencies will become scipy>=1.9.0 and numpy>=1.18.5.

The roadmap is to drop support for the earlier versions of SciPy, NumPy, and Python for the v1.0 release.

Describe the solution you'd like
I would like to keep the current version support and still have faster bread matrix calculations (but I don't want to solve this issue on my own).

Describe alternatives you've considered
We can either be slow (and cover more versions in the future) or be fast for more 'modern' versions. The latter seems preferable.

Additional context
None

Pharmacokinetic models

Is your feature request related to a problem? Please describe.

Add more support for pharmacokinetic models (like existing dose-response models). This topic area is a bit unfamiliar to me, but I can add the models from Bonate's book as I read through it. This issue will be a place for me to collect those models. I will also need to look around for external software or examples to compare to.

These would replace the current estimating_equations/dose_response.py file with the more general estimating_equations/pharmacokinetics.py file. Since all imports are done in estimating_equations/__init__.py this won't change anything on the user side.

Describe the solution you'd like

Add corresponding estimating equations

Describe alternatives you've considered

None

Additional context

Bonate PL. "Pharmacokinetic-Pharmacodynamic Modeling and Simulation" 2nd edition.

G-estimation of log-linear SNMM

Is your feature request related to a problem? Please describe.
Add log-linear SNMM support for g-estimation

Describe the solution you'd like
An option using the existing model argument. This is built in already, but I need to confirm that I have implemented it correctly against some outside software.

Describe alternatives you've considered
Right now, there is no log-linear option supported.

Additional context
Related to #26 #28

Incremental root-finding

Is your feature request related to a problem? Please describe.

Solving for the roots of large stacks of estimating equations is difficult at times. Here, one can usually break the stack into pieces and then solve each piece separately. I currently do this by hand for practical problems that involve lots of parameters, but that requires doing it manually.
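As a sketch of the by-hand approach that this feature would automate, assuming two blocks of parameters where the second block depends on the first (psi_block1, psi_block2, psi_full, and the init values are hypothetical placeholders):

import numpy as np
from scipy.optimize import root
from delicatessen import MEstimator

# Step 1: solve the first block of estimating equations on its own
theta1_hat = root(lambda t: np.sum(psi_block1(t), axis=1), x0=init1).x

# Step 2: solve the second block, holding the first block fixed at its solution
theta2_hat = root(lambda t: np.sum(psi_block2(t, theta1_hat), axis=1), x0=init2).x

# Step 3: run MEstimator on the full stack starting from the pre-solved values,
#         so the sandwich variance still accounts for estimation of both blocks
estr = MEstimator(psi_full, init=list(theta1_hat) + list(theta2_hat))
estr.estimate()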

Describe the solution you'd like

A new optional argument that divides the root finding into separate tasks. This would automate what I implement by hand in these cases

Describe alternatives you've considered

Do it by hand, which can be tedious...

Additional context

None

AIPW alternative implementations

Is your feature request related to a problem? Please describe.

Currently, I use the plug-in AIPW. This can have subpar performance with extreme weights because it is unbounded. The AIPW implementation via weighted regression avoids this.

This is an easy option to add for ee_aipw
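For reference, a rough by-hand sketch of the weighted-regression variant might look like the following (this is not the current ee_aipw; the arrays a, y, W, and n are assumed to exist in the surrounding scope, with W a confounder design matrix that includes an intercept, and ee_regression's optional weights argument is used):

import numpy as np
from delicatessen.estimating_equations import ee_regression
from delicatessen.utilities import inverse_logit

def psi(theta):
    mu1, mu0 = theta[0], theta[1]                # causal means under a=1 and a=0
    alpha = theta[2:2 + W.shape[1]]              # propensity score model parameters
    beta = theta[2 + W.shape[1]:]                # weighted outcome model parameters

    # Propensity score model and inverse probability weights
    ee_ps = ee_regression(theta=alpha, X=W, y=a, model='logistic')
    pscore = inverse_logit(np.dot(W, alpha))
    ipw = a / pscore + (1 - a) / (1 - pscore)

    # Outcome model fit by weighted regression (this is what keeps the estimator bounded)
    X_out = np.column_stack([a, W])
    ee_out = ee_regression(theta=beta, X=X_out, y=y, model='linear', weights=ipw)

    # g-computation step using the weighted outcome model
    ee_mu1 = np.dot(np.column_stack([np.ones(n), W]), beta) - mu1
    ee_mu0 = np.dot(np.column_stack([np.zeros(n), W]), beta) - mu0

    return np.vstack([ee_mu1, ee_mu0, ee_ps, ee_out])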

Describe the solution you'd like

Add optional argument to decide which AIPW estimator to apply

Describe alternatives you've considered

Ignore

Additional context

None

Restructure /tests

As I've added more estimating equations, the test scripts for the built-in estimating equations have become harder to manage. I am going to restructure things so that each script in /estimating_equations gets its own test script. This should make management a bit easier going forward.

Twice differentiable LASSO

Is your feature request related to a problem? Please describe.

The current LASSO is technically invalid with the sandwich, because the score is not differentiable at zero. There are alternative penalties that mimic LASSO where the score is differentiable everywhere. These might be better to include as a sort of default option (the other LASSO will be maintained, but it would be good to have a valid version as the default).

Describe the solution you'd like

There are various penalties that could be added besides the bridge penalty. For LASSO specifically, the dLASSO is one option:

https://arxiv.org/pdf/1609.04985.pdf

The only issue is that this penalty will be biased. Some SCAD-dLASSO hybrid would be ideal, as it would address both the twice-differentiability and the bias of the penalization. I haven't seen this proposed, so I would either need to find such a paper or put out some methods paper on it first.

Describe alternatives you've considered

Leave as-is. Don't add any more penalized regression.

Additional context

None

Estimating Equation: Regression Calibration

Is your feature request related to a problem? Please describe.

Regression calibration is a method to correct for measurement error in the exposure. It actually is pretty easy to implement (and fairly intuitive). Basically, one fits a regression model for the true exposure and an outcome model with the mismeasured exposure. The coefficient for the mismeasured exposure is then rescaled using the first model.

As estimating equations, this is fairly simple to implement. Basically, you have two regression models and that's it, so the entire thing can be built using ee_regression. M-estimation and the sandwich variance are particularly valuable here, since they automate the whole process (regression calibration can be annoying otherwise).

One caution is that regression calibration assumes measurement error is non-differential. So, that is a limitation. MIME still works in that context.

Describe the solution you'd like

Extend measurement.py to include regression calibration as a built-in equation. This is a nice complement to Rogan-Gladen, which deals with outcome measurement error. From that perspective, I cover two major types of measurement error.

The inputs would be two (?) design matrices and one outcome variable. I should use ee_glm as the backbone for the outcome model to allow for maximum flexibility in that model specification. For the calibration model, I can just use linear regression (that is the current standard).
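A rough sketch of how this could be stacked by hand, simplified to linear models for both pieces (rather than the ee_glm backbone described above); x_star, x_true, r (validation-subset indicator), y, and n are hypothetical arrays assumed to exist in the surrounding scope:

import numpy as np
from delicatessen.estimating_equations import ee_regression

def psi(theta):
    beta_corr = theta[0]          # measurement-error-corrected exposure coefficient
    beta_naive = theta[1:3]       # naive outcome model: intercept, mismeasured exposure
    gamma = theta[3:5]            # calibration model: intercept, slope

    # Outcome model using the mismeasured exposure
    X_out = np.column_stack([np.ones(n), x_star])
    ee_y = ee_regression(theta=beta_naive, X=X_out, y=y, model='linear')

    # Calibration model for the true exposure, restricted to the validation subset via weights
    ee_c = ee_regression(theta=gamma, X=X_out, y=np.where(r == 1, x_true, 0),
                         model='linear', weights=r)

    # Rescale the naive coefficient using the calibration slope
    ee_b = np.ones(n) * (beta_corr * gamma[1] - beta_naive[1])

    return np.vstack([ee_b, ee_y, ee_c])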

Describe alternatives you've considered

None.

Additional context

Some references

Boe, L. A., Shaw, P. A., Midthune, D., Gustafson, P., Kipnis, V., Park, E., ... on behalf of the Measurement Error and Misclassification Topic Group of the STRATOS Initiative. (2023). Issues in Implementing Regression Calibration Analyses. American Journal of Epidemiology, 192(8), 1406-1414.

https://academic.oup.com/aje/article-abstract/132/4/734/102293?redirectedFrom=fulltext&login=false

https://onlinelibrary.wiley.com/doi/10.1002/sim.4780080905

https://academic.oup.com/aje/article-abstract/136/11/1400/79365?redirectedFrom=fulltext&login=false

Plot robust loss functions

Is your feature request related to a problem? Please describe.
Add a plot of the robust loss functions (or rather, their derivatives) over their domain to the RTD documentation.

Describe the solution you'd like
Create a simple plot (like stats odds & ends or statsmodels documentation).
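A minimal sketch of the kind of plot intended, using Huber's function as an example (purely illustrative; it does not call delicatessen's implementations):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-4, 4, 500)
k = 1.345                                    # a common tuning constant for Huber's function
plt.plot(x, x, linestyle='--', label='Identity (mean)')
plt.plot(x, np.clip(x, -k, k), label='Huber')
plt.xlabel(r'$x$')
plt.ylabel(r'$\psi(x)$')
plt.legend()
plt.tight_layout()
plt.show()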

Describe alternatives you've considered
NA

Additional context
This applies to v0.6+, where additional loss functions are added.

Smoothing approximations of `psi`

Is your feature request related to a problem? Please describe.

M-estimation has issues when the estimating functions are not smooth. The most obvious example is percentiles. This issue comes up for the root-finding procedure (there can be multiple valid solutions) and variance estimation (the derivative is not defined at particular points). To get around these issues, it has been proposed to use some smoothing approximation. Coverage of this would expand possible use-cases.

Describe the solution you'd like

Not sure exactly. At the least, it would be good to have some examples on the website that showcase the smoothing approximation method. That would make it easier to adapt. I could also then remove some of the warnings in the documentation for the median or percentile built-in estimating equations.

More broadly, some smoothing functions that can be called from utils would be useful. However, I'm not sure if spline smoothing is fine in all cases. I need to consider this further.
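As a sketch of the sort of example that could be showcased, a smoothed percentile estimating function might replace the indicator with a logistic approximation (the function name, its signature, and the bandwidth choice are all hypothetical):

import numpy as np
from delicatessen.utilities import inverse_logit

def psi_smooth_percentile(theta, y, q=0.5, bandwidth=0.1):
    # Smooth stand-in for q - I(y <= theta): the logistic function approaches the
    # indicator as the bandwidth shrinks, making the estimating function differentiable
    y = np.asarray(y)
    return q - inverse_logit((theta[0] - y) / bandwidth)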

Describe alternatives you've considered

Continue recommending against percentile functions and other highly non-smooth estimating equations. It isn't ideal, but it can be fine. For example, the robust mean does reduce the influence of outliers (so it functions somewhat like the median), but this doesn't naturally extend to percentiles.

Additional context

Shang, Z. (2010). Convergence rate and Bahadur type representation of general smoothing spline M-estimates.

Hampel, F., Hennig, C., & Ronchetti, E. (2011). A smoothing principle for the Huber and other location M-estimators. Computational Statistics & Data Analysis, 55(1), 324-337.

Extended Rogan-Gladen

Is your feature request related to a problem? Please describe.

The upcoming release will have the standard Rogan-Gladen. I should also add an extended version of the Rogan-Gladen. This extended version would allow for logistic models for sensitivity, specificity, and the outcome. Notice that the estimating function would differ:

$$R \left\{ \mu - \frac{\mu^* + Sp - 1}{Se + Sp - 1} \right\}$$

as there need to be subject-specific adjustments to the outcome measures rather than an overall update (as in the standard Rogan-Gladen).

Describe the solution you'd like

Another estimating equation built in for the extended Rogan-Gladen. Due to their differences in form and what is needed, I think it is best to have it as a separate class. I've already experimented a bit with this.

Describe alternatives you've considered

User can do by-hand.

Additional context

None

Efficient g-estimation

Is your feature request related to a problem? Please describe.

Currently, the g-estimation approach implemented follows the equations in Robins et al. 1992. The alternative estimating equations in Robins et al. 1994 are doubly robust; the '94 paper provides the efficient g-estimator. In certain cases, these two estimators are asymptotically equivalent. To quote from Vansteelandt & Sjolander, Epidemiol. Methods 2016; 5(1): 37–56:

It follows in that case that the efficient g-estimator of ψ is asymptotically equivalent (and under some conditions also mathematically identical – not shown) to the proposed estimator, i.e. the solution to eq. [18].

Equation [18] above is the current implementation in ee_gestimation_snmm. So, these solutions won't necessarily be identical in finite samples (if my reading is correct, which also agrees with the examples I've built). What the mathematical conditions are for them to be equal is unclear to me... but that doesn't really matter here.

Describe the solution you'd like

Request is to add the efficient g-estimator in ee_gestimation_snmm. To support both options (since they are not identical), there should be an optional argument, like approach='efficient'. While it will change future behavior, the default behavior should be the efficient estimator. This default will also make the log-linear SNMM easier (see below).

I will also need to add an argument for the outcome process model. This will only be used by the efficient estimator and is what provides the double robustness and efficiency. Due to the double robustness, this model can also simply not be specified (the predicted values from this 'model' are then just set to all zeroes). So, the default behavior will be no outcome process model. This will make it easier to support both g-estimators.

The following code provides a simple implementation of the efficient g-estimator for a linear SNMM.

import numpy as np
from delicatessen.estimating_equations import ee_regression
from delicatessen.utilities import inverse_logit

# Assumed to exist in the surrounding scope: y (outcome), A (treatment, i.e., d['A']),
# V (SNMM design matrix), Wa (propensity score design matrix), Wy (outcome process
# design matrix), and the data set d

def psi(theta):
    # Breaking out parameters
    phi = theta[0:2]      # SNMM parameters
    alpha = theta[2:6]    # propensity score model parameters
    beta = theta[6:]      # outcome process model parameters

    # H(phi), the 'blipped-down' outcome
    h_phi = y - np.dot(V*A[:, None], phi)

    # Propensity score model
    ee_pra = ee_regression(theta=alpha, X=Wa, y=d['A'],
                           model='logistic')
    pi_a = inverse_logit(np.dot(Wa, alpha))
    # Outcome process model
    ee_out = ee_regression(theta=beta, X=Wy, y=h_phi,
                           model='linear')
    yhat = np.dot(Wy, beta)

    # Estimating equations for SNMM
    a_resid = (A - pi_a)[:, None]
    y_resid = (y - yhat)[:, None]
    snm = np.dot(V, phi)[:, None]
    ee_snm = (a_resid * (y_resid - snm*a_resid) * V).T

    return np.vstack([ee_snm, ee_pra, ee_out])

Note that this issue will also help to solve #30 as the estimating equations for the efficient log-linear SNMM are provided in Section 3 of Vansteelandt & Joffe Statistical Science 2014; 29(4): 707–731. I have not been as successful in finding the inefficient log-linear SNMM.

Describe alternatives you've considered

None.

Additional context

None.

[BUG] Powell Hybrid error tolerance specification

Expected behavior
The tolerance parameter in MEstimator.estimate is expected to control the error tolerance of the chosen root-finding algorithm. However, this is not the case for the Powell hybrid method ('hybr').

Describe the bug
On line 482, xtol is meant to be set; however, what is actually being set there is maxfev (per the SciPy documentation). Thankfully, this bug only leads to the hybrid method using SciPy's default error tolerance. Therefore, it should have minimal impact on analyses run with previous versions.

Relevant error output
None

Versions:

  • OS: Windows
  • Python version: 3.9.4
  • delicatessen: v1.2
  • NumPy: v1.22.2
  • SciPy: v1.9.2

To Reproduce
N/A

Additional context
N/A

Rescale generated spline terms

Is your feature request related to a problem? Please describe.
I should add normalized splines. Essentially, I should divide by (knots[-1] - knots[0]) ** term to rescale the terms. This will make some of the background model estimation easier (less extreme covariate values).

The normalized splines would initially be optional, to preserve historical behavior. In a 3.0 release, I would like the rescaled splines to become the default. This would be in a major release, since it changes the default behavior.

Describe the solution you'd like
Add a normed argument to the splines. It would add (knots[-1] - knots[0]) ** term as a divisor to each spline term.
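For illustration, a hypothetical truncated power basis with the proposed divisor might look like the following (this is a standalone sketch, not delicatessen's spline utility):

import numpy as np

def truncated_power_terms(x, knots, power=3, normed=False):
    # Each column is a truncated power term; normed divides by (knots[-1] - knots[0]) ** power
    x = np.asarray(x, dtype=float)
    divisor = (knots[-1] - knots[0]) ** power if normed else 1.0
    return np.vstack([np.where(x > k, (x - k) ** power, 0.) / divisor for k in knots]).T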

Describe alternatives you've considered
Rescaling covariates prior to using the M-estimator. That is a bit annoying, and I can simplify it with this trick.

Additional context
None

Convert time and delta to arrays in survival estimating equations [BUG]

Expected behavior
Currently, the time and delta arguments are expected to be NumPy arrays. Generally, I use np.asarray to convert the inputs to ensure the object typing is correct. I do not do this for the survival estimating equations as of v0.5.

Describe the bug
Built-in survival estimating equations will return an error when provided list objects as arguments.

Relevant error output
NA

Versions:

  • delicatessen: v0.5 and lower

To Reproduce
NA

Additional context
NA

G-estimation of Structural Nested Models

Is your feature request related to a problem? Please describe.
Unrelated to a problem.

Describe the solution you'd like
Add g-estimation of structural nested models as a built-in estimating equation. G-estimation will complete the set of g-methods available in deli.

Describe alternatives you've considered
None

Additional context
See Technical Point 14.2 of Chapter 14 of Hernan & Robins for the estimating equation. Also relevant is Technical Point 14.1

Pooled logistic for survival analyses

Is your feature request related to a problem? Please describe.

Add an estimating equation for pooled (logistic) regression to support survival analysis operations. This is a finite-dimension M-estimator, so standard theory would apply. This also opens up various survival analysis options, like computing IPCW, g-computation, and others.

Describe the solution you'd like

Build an estimating equation for pooled logistic regression. Note that it would not require a long data set. Specifically, we should evaluate something like the following
$$\sum_{i=1}^n \left( \sum_{k \in R} (\Delta_i t_k - m(W_i, S_i; \beta)) \left[ W_i, S_i \right]^T \right) = 0$$
This makes for a compact estimating equation that avoids expansion into a long data set, which in turn avoids mistakes potentially introduced in the users' data processing steps. This is the advantage of working with the score! However, it requires some finesse to specify the estimating equation programmatically, particularly the design matrix for time (i.e., $S$), which depends on $k$ (a rough sketch follows the list of challenges below).

Challenges here:

  • Need to process the time design matrix if we don't convert to a long data structure
  • Weights can be time-dependent, which complicates the implementation that doesn't require a long data structure (weights are a matrix instead of a vector in that case)
  • Users can't directly control who contributes, as the compact structure sums over time internally. This can be modified by using the weights argument, but is more opaque.
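A rough sketch of what the compact form might look like without expansion to a long data set (everything here is hypothetical: W is the baseline design matrix, S holds one row of time terms per interval, t_index is each unit's exit interval, and delta the event indicator):

import numpy as np
from delicatessen.utilities import inverse_logit

def psi_pooled_logistic(theta):
    contributions = 0
    for k in range(S.shape[0]):
        in_risk_set = (t_index >= k)                              # still at risk in interval k
        d_k = delta * (t_index == k)                              # event during interval k
        X_k = np.hstack([W, np.tile(S[k, :], (W.shape[0], 1))])   # unit-level design matrix at k
        p_k = inverse_logit(np.dot(X_k, theta))                   # discrete-time hazard
        contributions = contributions + (in_risk_set * (d_k - p_k))[:, None] * X_k
    return contributions.T                                        # stacked contributions, one column per unit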

Describe alternatives you've considered

Code it from scratch each time (I would rather not, and a built-in would be good support for users).

Additional context

Abbott, R. D. (1985). Logistic regression in survival analysis. American Journal of Epidemiology, 121(3), 465-471.

D'Agostino, R. B., Lee, M. L., Belanger, A. J., Cupples, L. A., Anderson, K., & Kannel, W. B. (1990). Relation of pooled logistic regression to time dependent Cox regression analysis: the Framingham Heart Study. Statistics in Medicine, 9(12), 1501-1515.

Hernán, M. A. (2010). The hazards of hazard ratios. Epidemiology, 21(1), 13-15.

Ngwa, J. S., Cabral, H. J., Cheng, D. M., Pencina, M. J., Gagnon, D. R., LaValley, M. P., & Cupples, L. A. (2016). A comparison of time dependent Cox regression, pooled logistic regression and cross sectional pooling with simulations and an application to the Framingham Heart Study. BMC Medical Research Methodology, 16, 1-12.

EE for generalized gamma

Is your feature request related to a problem? Please describe.

Add the estimating equations for the generalized gamma distribution. They are provided in "A comprehensive toolbox for the gamma distribution: The gammadist package"

Describe the solution you'd like

Add a new ee function for implementation.

Describe alternatives you've considered

None

Additional context

None

Regression Proximal Causal Inference

Is your feature request related to a problem? Please describe.

Add a general implementation of proximal CI to causal.py.

Describe the solution you'd like

There is a general implementation of proximal CI using GLMs. This would be good to include as an option.

Describe alternatives you've considered

None.

Additional context

Reference: https://arxiv.org/abs/2402.00335

Improved errors for `init` output mismatches

Is your feature request related to a problem? Please describe.
Right now, we don't check the shape of the returned values against the inits. The length of the inits should match one of the dimensions. This means that something like SciPy ends up handling those errors. However, I think we can be more informative (error-wise) about badly shaped objects returned by psi.

Describe the solution you'd like
A check on the shape of the returned object. We already extract n from one of the two dimensions, so just check the other dimension and assert that it matches len(init).

I will also need to add a quick test to make sure this functions as intended error-wise.
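A sketch of the kind of check intended, assuming psi returns a v-by-n array with one row per parameter (the error message wording is illustrative):

import numpy as np

evaluated = np.asarray(psi(init))
# (A 1-dimensional return for a single parameter would need to be special-cased here)
if evaluated.ndim == 2 and evaluated.shape[0] != len(init):
    raise ValueError("psi(init) returned a " + str(evaluated.shape) + " array, but "
                     + str(len(init)) + " rows (one per parameter in init) were expected.")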

Describe alternatives you've considered
None.

Additional context
None.

Improve np.nan error handling in estimating function

Is your feature request related to a problem? Please describe.

I'm always frustrated when an np.nan occurs but I don't know which row it is happening in.

Describe the solution you'd like

Have the error identify which rows are the problematic ones. This will speed up debugging the estimating functions.
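A sketch of the kind of check that could produce this message, assuming psi returns a v-by-n array so that columns index observations (the message text is illustrative):

import numpy as np

evaluated = np.asarray(psi(theta))
if np.any(np.isnan(evaluated)):
    bad_rows = np.unique(np.nonzero(np.isnan(evaluated))[-1])
    raise ValueError("np.nan detected in the estimating functions for rows (observations): "
                     + str(bad_rows))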

Describe alternatives you've considered

Manually running the estimating function and checking. Very annoying

Additional context

None

Generalized Additive Models [EE]

Is your feature request related to a problem? Please describe.
No, this is only an enhancement.

Describe the solution you'd like
Add estimating equations for generalized additive models. This is fairly straightforward, as I already have L2-penalized regression implemented. All that really needs to be added is functionality for different basis functions (like splines). There are some options I need to think through further though (e.g., applying different basis functions to different elements, and generating predicted values from the GAM given that the predictors will be internally transformed).

Describe alternatives you've considered
Use outside GAM implementations, but this would be a nice thing to have. It would also put delicatessen closer to flexible ML models (a benefit for uptake in bioinformatics).

Additional context
None

Add functionality to compute the gradient for the bread matrix

Is your feature request related to a problem? Please describe.
Not related to a problem.

Describe the solution you'd like
Add internal functions in delicatessen.utilities for computing the derivatives. Ideally, I would support both the central difference method (approximations) and forward mode automatic differentiation (exact).
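As a sketch of the central difference piece, assuming the function returns a 1-dimensional array (e.g., the summed estimating functions); the forward-mode automatic differentiation piece is not shown:

import numpy as np

def central_difference_jacobian(f, x, epsilon=1e-6):
    # Approximate the Jacobian of f at x, column by column, via central differences
    x = np.asarray(x, dtype=float)
    f0 = np.asarray(f(x))
    jacobian = np.empty((f0.shape[0], x.shape[0]))
    for j in range(x.shape[0]):
        step = np.zeros_like(x)
        step[j] = epsilon
        jacobian[:, j] = (np.asarray(f(x + step)) - np.asarray(f(x - step))) / (2 * epsilon)
    return jacobian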

Describe alternatives you've considered
Right now, we rely on SciPy's approx_fprime to compute the derivatives. This is fine, but it restricted the versions I supported in the v1 release. In the future, I would like to avoid having to update based on the functionality of other libraries; I want to use the minimal amount of outside functionality, and right now the derivative piece is the main one.

The automatic differentiation would be a nice feature, as it gives the exact derivatives rather than an approximation.

Additional context
I have working implementations of central difference and automatic differentiation written. I need to verify them further and ensure they work as intended. Plan would be to add at v2 release.

Finite Sample Corrected Sandwich

Is your feature request related to a problem? Please describe.

There is a finite sample correction to the sandwich variance estimator. Some of these corrections are built into geex. This feature could be beneficial for pharmacokinetic models or other settings where sample sizes might be relatively small.

Describe the solution you'd like

An extension that incorporates proposed finite-sample corrections to the sandwich variance estimator. Some of these are detailed in the geex documentation.

Describe alternatives you've considered

Don't support this feature

Additional context

https://bsaul.github.io/geex/articles/v05_finite_sample_corrections.html

[WEB] Explain how to handle np.nan

Is your feature request related to a problem? Please describe.

It isn't always clear how to turn an estimating equation involving np.nan into a working estimating equation. np.nan's will 'fail' silently, with the root-finder not doing anything (and the variance giving back nan's).

Describe the solution you'd like

Update the website vignettes to discuss how to handle this. It comes up in fusion examples (unless you do other data processing).

I will also want to create a warning system to detect this behavior.

Describe alternatives you've considered

NA

Additional context

NA

Error in robust mean estimating equation [BUG]

Expected behavior
The robust mean estimating equation is incorrect.

Describe the bug
The np.clip utility occurs in the wrong place: the clip is called before the subtraction (not after). This is generally okay when mu=0, but not for more general cases. This problem only affects ee_mean_robust (i.e., it does not occur for ee_robust_regression).
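To illustrate the described difference (a sketch only; k is a symmetric tuning constant here, y and mu are assumed in scope, and the exact bounds in the implementation may differ):

import numpy as np

# Clip applied before the subtraction: only coincidentally correct when mu = 0
psi_incorrect = np.clip(y, -k, k) - mu
# Clip applied after the subtraction, to the deviations from mu
psi_correct = np.clip(y - mu, -k, k)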

Relevant error output
N/A

Versions:

  • OS: all
  • Python version: all
  • delicatessen: up to v0.5
  • NumPy: all
  • SciPy: all

Outlier-Robust Dose-Response PL models

Is your feature request related to a problem? Please describe.

It is related to an analytic problem (not a software problem).

Describe the solution you'd like

Implement a robust version of the PL models, where the models are robust to response outliers.

Describe alternatives you've considered

NA

Additional context

This problem came up in discussions with colleagues. They were excited about the opportunity for PL models that are robust to the presence of outliers, since this is a common issue.

Support for multinomial logistic regression

Is your feature request related to a problem? Please describe.
Unrelated to problem.

Describe the solution you'd like
Addition of estimating equations for multinomial logistic regression. This is currently a major missing component for the regression support of deli.

Ideally, I would integrate this into ee_regression as an option. However, y is a bit of a unique matter, since it takes a set of values and predicted probabilities are produced for each level. Processing y from the categories into dummy variables behind the scenes would be opaque to users. Instead, I am going to require users to dummy code y outside of the built-in estimating equation. This aligns the predictions a bit better. So, multinomial logit models are going to be separate ee_... objects.

This addition will also require reconsidering how the prediction function works. A vector of predictions is no longer sufficient. We will need a matrix instead.

Describe alternatives you've considered
Composing separate logistic models together (this doesn't work, because it is a different model). We need new estimating functions for this.

Additional context
Below is some rough code to implement. Note that y has a different data structure from the other regression models.

import numpy as np

def ee_mlogit(theta, X, y):
    # Rough sketch: y is a dummy-coded (n, J) array with the reference category in column 0
    X = np.asarray(X)
    y = np.asarray(y)
    theta = np.asarray(theta)
    n_y_vals = y.shape[1]
    n_x_vals = X.shape[1]

    # Compute the denominator of the predicted probabilities
    start_index = 0
    exp_pred_y = []
    denom = 1
    for i in range(1, n_y_vals):
        end_index = start_index + n_x_vals
        beta_i = theta[start_index: end_index]
        pred_y = np.exp(np.dot(X, beta_i))
        denom = denom + pred_y
        exp_pred_y.append(pred_y)
        start_index = end_index

    # Score contributions for each non-reference category: (y_i - p_i) * X
    efuncs = []
    for i in range(1, n_y_vals):
        residual_i = y[:, i] - exp_pred_y[i-1] / denom
        efuncs.append((residual_i[:, None] * X).T)

    return np.vstack(efuncs)

Robins Sensitivity Analysis

Is your feature request related to a problem? Please describe.
Not related to problem.

Describe the solution you'd like
Provide a built-in version of the Robins sensitivity analysis. I have already programmed the estimating equation for the Cole et al. (2023) Epidemiology paper. This would involve generalizing that estimating equation, adding some documentation, and then adding tests.

Describe alternatives you've considered
Have people go to the paper and code from scratch, or find my code to do it. Neither of these is as stable as adopting the estimating equation into the family of built-in ones.

Additional context
Code is here: https://github.com/pzivich/publications-code/tree/master/RobinsSensitivityAnalysis
