SinaMohseni / Awesome-XAI-Evaluation
Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems
☆75 · Updated 3 years ago
Alternatives and similar repositories for Awesome-XAI-Evaluation
Users interested in Awesome-XAI-Evaluation are comparing it to the repositories listed below.
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆248 · Updated last year
- Papers and code of Explainable AI, esp. w.r.t. image classification ☆218 · Updated 3 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆84 · Updated 2 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆128 · Updated 4 years ago
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ☆53 · Updated 3 years ago
- Implementation of Adversarial Debiasing in PyTorch to address Gender Bias ☆31 · Updated 5 years ago
- Towards Automatic Concept-based Explanations ☆161 · Updated last year
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) ☆129 · Updated 4 years ago
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" in NeurIPS 2019 for… ☆25 · Updated 3 years ago
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… ☆74 · Updated 3 years ago
- List of relevant resources for machine learning from explanatory supervision ☆160 · Updated 2 months ago
- A PyTorch 1.6 implementation of Layer-Wise Relevance Propagation (LRP) ☆138 · Updated 4 years ago
- A PyTorch implementation of the Explainable AI work "Contrastive layerwise relevance propagation (CLRP)" ☆17 · Updated 3 years ago
- ☆122 · Updated 3 years ago
- Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI ☆54 · Updated 3 years ago
- Official code implementation of the paper "XAI for Transformers: Better Explanations through Conservative Propagation" ☆65 · Updated 3 years ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆70 · Updated 2 years ago
- NumPy library for calibration metrics ☆73 · Updated last week
- A curated list of awesome Fairness in AI resources ☆328 · Updated 2 years ago
- Reliability diagrams visualize whether a classifier model needs calibration ☆158 · Updated 3 years ago
- 💡 Adversarial attacks on explanations and how to defend them ☆328 · Updated 10 months ago
- Reference implementation for "Explanations can be manipulated and geometry is to blame" ☆37 · Updated 3 years ago
- Detect model's attention ☆168 · Updated 5 years ago
- Model Agnostic Counterfactual Explanations ☆88 · Updated 3 years ago
- General fair regression subject to demographic parity constraint; paper appeared in ICML 2019 ☆16 · Updated 5 years ago
- Towards Robust Interpretability with Self-Explaining Neural Networks, Alvarez-Melis et al. 2018 ☆15 · Updated 5 years ago
- Concept Bottleneck Models, ICML 2020 ☆215 · Updated 2 years ago
- A lightweight implementation of removal-based explanations for ML models ☆58 · Updated 4 years ago
- Official repository for CMU Machine Learning Department's 10732: Robustness and Adaptivity in Shifting Environments ☆74 · Updated 2 years ago
- Toolkit for building machine learning models that generalize to unseen domains and are robust to privacy and other attacks ☆175 · Updated 2 years ago
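Several entries above concern confidence calibration (the NumPy calibration-metrics library and the reliability-diagram repository). As a minimal sketch of the idea those tools implement — not code taken from any of the listed repositories — expected calibration error (ECE) bins predictions by confidence and averages the gap between mean confidence and accuracy per bin:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Minimal ECE sketch (illustrative, not from any listed library):
    bin predictions by confidence, then take the bin-weighted average of
    |mean confidence - accuracy| across bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

# Two bins, each 5 points off perfect calibration: ECE = 0.05
print(expected_calibration_error([0.95, 0.95, 0.55, 0.55],
                                 [1.0, 1.0, 1.0, 0.0]))
```

A perfectly calibrated model (confidence equals empirical accuracy in every bin) scores zero; reliability diagrams plot the same per-bin gaps instead of summarizing them into one number.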