
rehmanzafar / dlime_experiments


In this work, we propose a deterministic version of Local Interpretable Model-Agnostic Explanations (LIME); experimental results on three different medical datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME).

Home Page: https://doi.org/10.3390/make3030027

License: MIT License

Languages: Python 12.55%, Jupyter Notebook 87.45%
Topics: breast-cancer-dataset, python3, random-forest, classifiers, neural-networks, scikit-learn, clusters, xai, explainable-ai, explainable-ml


Deterministic Local Interpretable Model-Agnostic Explanations for Stable Explainability

Experiments

Setup Environment

The following Python environment and packages are used to conduct the experiments:

  • python==3.6
  • Boruta==0.1.5
  • numpy==1.16.1
  • pandas==0.24.2
  • scikit-learn==0.20.2
  • scipy==1.2.1

These packages can be installed by executing the following command:

  pip3.6 install -r requirements.txt
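If the requirements.txt file is missing from your checkout, the versions listed above suggest it would contain something like the following (reconstructed from the list, not copied from the repository):

  Boruta==0.1.5
  numpy==1.16.1
  pandas==0.24.2
  scikit-learn==0.20.2
  scipy==1.2.1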

Datasets

To conduct the experiments, we used the following three healthcare datasets from the UCI repository:

  • Breast Cancer Wisconsin (Diagnostic)
  • Indian Liver Patient
  • Hepatitis

The breast cancer dataset ships with the scikit-learn package, so there is no need to download it; the other two datasets are already downloaded and available in the "data" folder.
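For example, the breast cancer data can be loaded directly in Python with the standard scikit-learn loader (a minimal sketch):

  from sklearn.datasets import load_breast_cancer

  # Load the Breast Cancer Wisconsin (Diagnostic) dataset bundled with scikit-learn.
  data = load_breast_cancer()
  X, y = data.data, data.target  # 569 samples, 30 numeric features
  print(X.shape, list(data.target_names))  # (569, 30) ['malignant', 'benign']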

Algorithms

The following classifiers and algorithms are used in this study (a sketch of how the clustering and regression pieces combine in DLIME follows the list):

  • Random Forest
  • Neural Networks
  • Linear Regression
  • Logistic Regression
  • K-Nearest Neighbours
  • K-Means Clustering
  • Agglomerative Hierarchical Clustering
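As described in the paper, DLIME replaces LIME's random perturbations with a deterministic neighbourhood: the training data is partitioned with agglomerative hierarchical clustering, KNN picks the cluster closest to the instance being explained, and a linear model fitted on that cluster yields the feature weights. The sketch below illustrates the idea only; names such as dlime_style_explanation and predict_fn are illustrative, n_clusters=2 is an assumption, and the repository's own explainer_tabular module implements the full method:

  import numpy as np
  from sklearn.cluster import AgglomerativeClustering
  from sklearn.linear_model import LinearRegression
  from sklearn.neighbors import KNeighborsClassifier

  def dlime_style_explanation(X_train, predict_fn, instance, n_clusters=2):
      # 1) Deterministically partition the training data with hierarchical clustering.
      labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(X_train)
      # 2) Use 1-NN to decide which cluster the instance to be explained falls into.
      knn = KNeighborsClassifier(n_neighbors=1).fit(X_train, labels)
      cluster = knn.predict(np.asarray(instance).reshape(1, -1))[0]
      neighbourhood = X_train[labels == cluster]
      # 3) Fit a linear surrogate on the black-box predictions over that cluster.
      surrogate = LinearRegression().fit(neighbourhood, predict_fn(neighbourhood))
      return surrogate.coef_  # per-feature weights serve as the explanation

Here predict_fn could be, for example, lambda X: model.predict_proba(X)[:, 1] for a trained classifier; because nothing is sampled, repeated calls return identical weights.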

Execute Code

Run the following files to reproduce the results. Note that LIME is not deterministic, so it may produce different results on different runs.

Experiments on Breast Cancer Dataset:
  • python3.6 experiments_bc_nn.py
  • python3.6 experiments_bc_rf.py
Experiments on Indian Liver Patient Dataset:
  • python3.6 experiments_ildp_nn.py
  • python3.6 experiments_ildp_rf.py
Experiments on Hepatitis Dataset:
  • python3.6 experiments_hp_nn.py
  • python3.6 experiments_hp_rf.py
For the quality of the explanations:
  • python3.6 experiments_bc_lgr_fidelity_v2p0-mc-v2.py
  • python3.6 evaluate_quality_v0.py

Results

The results are saved in the "results" directory in PDF and CSV formats. The quality of the explanations is shown in the figure below.

[Figure: Quality of Explanations]

Citation

Please consider citing our work if you use this code for your research.

Initial Results

@InProceedings{zafar2019dlime,
  author    = {Muhammad Rehman Zafar and Naimul Mefraz Khan},
  title     = {DLIME: A Deterministic Local Interpretable Model-Agnostic Explanations Approach for Computer-Aided Diagnosis Systems},
  booktitle = {Proceedings of the ACM SIGKDD Workshop on Explainable AI/ML (XAI) for Accountability, Fairness, and Transparency},
  year      = {2019},
  publisher = {ACM},
  address   = {Anchorage, Alaska}
}

Extended Version

@article{zafar2021deterministic,
  title={Deterministic Local Interpretable Model-Agnostic Explanations for Stable Explainability},
  author={Zafar, Muhammad Rehman and Khan, Naimul},
  journal={Machine Learning and Knowledge Extraction},
  volume={3},
  number={3},
  pages={525--541},
  year={2021},
  publisher={Multidisciplinary Digital Publishing Institute}
}


dlime_experiments's Issues

Support for Python 3.6

Dear Sir,

Currently there is no support for Python 3.6, so how can I use this code? I am interested in using it for project work, and it is urgently needed.

error when reproducing the results

Thank you for the great paper.

When I ran the code in experiments_bc_rf.py, I got an error at line 80.

The method as_pyplot_to_figure belongs to the Explanation object, not to explainer_tabular.LimeTabularExplainer; the two objects seem to be independent of each other. Have I missed anything? Could you please advise how to reproduce the results correctly?

Here is the Colab I used.
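For reference, here is the call pattern I expected, based on upstream lime (X_train, X_test, model, and feature_names stand for my own data and classifier; method names in this repository's fork may differ):

  # Expected pattern: plotting is a method of the Explanation returned by the explainer.
  from lime.lime_tabular import LimeTabularExplainer

  explainer = LimeTabularExplainer(X_train, feature_names=feature_names)
  exp = explainer.explain_instance(X_test[0], model.predict_proba, num_features=10)
  fig = exp.as_pyplot_figure()  # this repo's fork seems to name it as_pyplot_to_figure
  fig.savefig("explanation.pdf")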

Ask questions

Hi, thank you very much for sharing the code.
I have a question: after the improvement, the contribution of every feature in the resulting explanation is the same. I am not sure whether such a result still has interpretive significance.
