Name: Jonathan Crabbé
Type: User
Company: University of Cambridge
Bio: 🎓PhD in ML Interpretability @ Cambridge ⚙️ Ex-Research Scientist Intern @ Apple & @ MSFTResearch 💡 Interested in Interpretability, Robust ML & GenAI
Twitter: JonathanICrabbe
Location: London, United Kingdom
Blog: https://jonathancrabbe.github.io/
Jonathan Crabbé's Projects
This repository contains the implementation of Concept Activation Regions, a new framework to explain deep neural networks with human concepts. For more details, please read our NeurIPS 2022 paper: 'Concept Activation Regions: a Generalized Framework for Concept-Based Explanations'.
This repository contains the implementation of Dynamask, a method to identify the features that are salient for a model's prediction when the data is represented as time series. For more details on the theoretical side, please read our ICML 2021 paper: 'Explaining Time Series Predictions with Dynamic Masks'.
This repository implements time series diffusion in the frequency domain.
This repository contains the implementation of ITErpretability, a new framework to benchmark deep neural network treatment effect estimators through interpretability. For more details, please read our NeurIPS 2022 paper: 'Benchmarking Heterogeneous Treatment Effect Models through the Lens of Interpretability'.
This repository contains the implementation of Label-Free XAI, a new framework to adapt explanation methods to unsupervised models. For more details, please read our ICML 2022 paper: 'Label-Free Explainability for Unsupervised Models'.
Files related to the Rational Mechanics 2 project
GitHub repository for files used in the project
This repository contains the implementation of the explanation invariance and equivariance metrics, a framework to evaluate the robustness of interpretability methods.
This repository contains the implementation of SimplEx, a method to explain the latent representations of black-box models with the help of a corpus of examples. For more details, please read our NeurIPS 2021 paper: 'Explaining Latent Representations with a Corpus of Examples'.
GitHub repository for the NeurIPS 2020 paper "Learning outside the black-box: the pursuit of interpretable models"
Course summaries from the École Polytechnique de Bruxelles