
trelawney's Introduction

trelawney

Documentation Status

MIT License

Trelawney is a general interpretability package that aims to provide a common API for most modern interpretability methods, shedding light on sklearn-compatible models (support for Keras and XGBoost is tested).

Trelawney will try to provide you with two kinds of explanations when possible:

  • a global explanation of the model that highlights the most important features the model uses to make its predictions globally
  • a local explanation of the model that tries to shed light on why the model made a specific prediction

The Trelawney package is built around:

  • some model-specific explainers that use the inner workings of certain types of models to explain them:
    • LogRegExplainer, which uses the weights of your logistic regression to produce global and local explanations of your model
    • TreeExplainer, which uses the paths of your tree (single-tree models only) to produce explanations of the model
  • some model-agnostic explainers that should work with all models:
    • LimeExplainer, which uses the Lime package to create local explanations only (the local nature of Lime prohibits it from generating global explanations of a model)
    • ShapExplainer, which uses the SHAP package to create local and global explanations of your model
    • SurrogateExplainer, which creates a general surrogate of your model (fitted on the output of your model) using an explainable model (DecisionTreeClassifier and LogisticRegression for now). The explainer then uses the internals of the surrogate model to explain your black-box model, and also tells you how well the surrogate approximates the black-box model (see the sketch below)
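
For instance, a surrogate explanation of a black-box model could look like the following minimal sketch. The way the surrogate estimator is passed to SurrogateExplainer here is an assumption made for illustration only; check the package documentation for the actual constructor signature.

>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.tree import DecisionTreeClassifier
>>> black_box = RandomForestClassifier().fit(X, y)
>>> # assumed signature: the surrogate estimator is given to the constructor
>>> explainer = SurrogateExplainer(DecisionTreeClassifier(max_depth=3))
>>> explainer.fit(black_box, X, y)
>>> # global explanation derived from the surrogate's internals
>>> explainer.feature_importance(X)
{'var_1': 0.4, 'var_2': 0.1, ...}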

Quick Tutorial (30s to Trelawney):

Here is an example of how to use a Trelawney explainer:

>>> from sklearn.linear_model import LogisticRegression
>>> from trelawney.shap_explainer import ShapExplainer
>>> model = LogisticRegression().fit(X, y)
>>> # creating and fitting the explainer
>>> explainer = ShapExplainer()
>>> explainer.fit(model, X, y)
>>> # explaining observations
>>> explanation = explainer.explain_local(X_explain)
[
    {'var_1': 0.1, 'var_2': -0.07, ...},
    ...
    {'var_1': 0.23, 'var_2': -0.15, ...},
]
>>> explanation = explainer.graph_local_explanation(X_explain.iloc[:1, :])

Local Explanation Graph

>>> explanation = explainer.feature_importance(X_explain)
{'var_1': 0.5, 'var_2': 0.2, ...}
>>> explanation = explainer.graph_feature_importance(X_explain)

Local Explanation Graph

FAQ

Why should I use Trelawney rather than Lime and SHAP?

While you can definitely use the Lime and SHAP packages directly (this will give you more control over how you use them), they are very specialized packages with different APIs, graphs and vocabularies. Trelawney offers you a unified API, representation and vocabulary for all state-of-the-art explanation methods, so you don't lose time adapting to each new method: just change a class and Trelawney adapts to you.
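
For instance, switching from SHAP to Lime explanations only requires changing the class you instantiate. This is a minimal sketch: LimeExplainer is constructed without arguments here, mirroring ShapExplainer in the tutorial above, which may not match its actual constructor.

>>> shap_explainer = ShapExplainer()
>>> shap_explainer.fit(model, X, y)
>>> shap_explainer.explain_local(X_explain)
>>> # swap the class: the rest of the workflow stays the same
>>> lime_explainer = LimeExplainer()
>>> lime_explainer.fit(model, X, y)
>>> lime_explainer.explain_local(X_explain)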

How do I implement my own interpretation method in the Trelawney framework?

To implement your own explainer you will need to inherit from the BaseExplainer class and override its three abstract methods, as follows:

>>> from typing import Dict, List, Optional, Union
>>> import numpy as np
>>> import pandas as pd
>>> import sklearn
>>> from trelawney.base_explainer import BaseExplainer  # import path assumed, see the docs
>>> class MyOwnInterpreter(BaseExplainer):
...     def fit(self, model: sklearn.base.BaseEstimator, x_train: Union[pd.Series, pd.DataFrame, np.ndarray],
...             y_train: pd.DataFrame):
...         # fit your interpreter with some training data if needed
...         pass
...     def explain_local(self, x_explain: Union[pd.Series, pd.DataFrame, np.ndarray],
...                       n_cols: Optional[int] = None) -> List[Dict[str, float]]:
...         # interpret individual predictions of the model
...         pass
...     def feature_importance(self, x_explain: Union[pd.Series, pd.DataFrame, np.ndarray],
...                            n_cols: Optional[int] = None) -> Dict[str, float]:
...         # interpret the global importance of (at most) the n_cols most important
...         # features on the predictions over x_explain
...         pass

You can find more information in the documentation of the BaseExplainer class. If possible, don't hesitate to contribute your explainer to trelawney and create a PR.
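
Once implemented, your explainer plugs into the same workflow as the built-in ones (a sketch reusing the MyOwnInterpreter class defined above):

>>> explainer = MyOwnInterpreter()
>>> explainer.fit(model, X, y)
>>> explainer.explain_local(X_explain)
>>> explainer.feature_importance(X_explain)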

Coming Soon

  • Regressor Support (PR welcome)
  • Image and text Support (PR welcome)

Credits

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.


trelawney's Issues

SHAP explainer research

Research phase on SHAP:

  • can it explain locally?
  • can it explain globally?
  • what models can it explain?

Lime research

We need to know whether Lime is a suitable explainer:

  • can it explain locally?
  • can it explain globally?
  • what models can it explain?

SHAP explainer

Add a shap_explainer.py to do local and global SHAP explanations.

Regressor Support

Today we only support classifiers; we should be able to use the same explainers for regressors as well.
This includes thinking about the code architecture needed to achieve this without creating more and more classes.

Improvements

Thanks for the work, this is a good step forward.
To make the integration of new interpretation frameworks easier, can you detail in the docs the API for including new interpretation models?

For API building, you can refer to this framework:
https://github.com/arita37/mlmodels

local explanation of Tree

TreeExplainer does not have a local explanation yet, although there are some methods we can use.
This includes a little bit of research on how to do this mathematically, and then implementing it.
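
One candidate approach (used by the treeinterpreter package, sometimes called the Saabas method) attributes a single prediction to features by walking the decision path and recording how the predicted probability shifts at each split. The sketch below illustrates the idea on a fitted sklearn DecisionTreeClassifier for a binary problem; it is an illustration of the math, not trelawney's implementation.

>>> import numpy as np
>>> def tree_local_contributions(model, x_row):
...     # per-node class counts -> per-node class probabilities
...     tree = model.tree_
...     value = tree.value[:, 0, :]
...     proba = value / value.sum(axis=1, keepdims=True)
...     # node ids along the root-to-leaf path taken by x_row (sklearn numbers nodes
...     # in depth-first order, so the sorted indices follow the path order)
...     path = model.decision_path(np.asarray(x_row).reshape(1, -1)).indices
...     contributions = {}
...     for parent, child in zip(path[:-1], path[1:]):
...         feature = tree.feature[parent]              # feature split on at `parent`
...         shift = proba[child, 1] - proba[parent, 1]  # change in P(class 1) at this split
...         contributions[feature] = contributions.get(feature, 0.0) + shift
...     # bias (root probability) plus the contributions telescopes to the leaf probability
...     return proba[path[0], 1], contributions

Because the per-split shifts telescope, the returned bias plus the sum of contributions equals the probability predicted at the leaf, which makes this a faithful local explanation of a single tree.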
