stefanoteso / awesome-explanatory-supervision
List of relevant resources for machine learning from explanatory supervision
☆155 · Updated 6 months ago
Alternatives and similar repositories for awesome-explanatory-supervision:
Users interested in awesome-explanatory-supervision are comparing it to the libraries listed below.
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) ☆128 · Updated 3 years ago
- Towards Automatic Concept-based Explanations ☆157 · Updated 8 months ago
- Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model. ☆130 · Updated 4 years ago
- Model Agnostic Counterfactual Explanations ☆87 · Updated 2 years ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆74 · Updated 2 years ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆60 · Updated last year
- All about explainable AI, algorithmic fairness and more ☆107 · Updated last year
- Local explanations with uncertainty! ☆39 · Updated last year
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆237 · Updated 5 months ago
- A repository for explaining feature attributions and feature interactions in deep neural networks. ☆185 · Updated 3 years ago
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… ☆72 · Updated 2 years ago
- Official repository for CMU Machine Learning Department's 10732: Robustness and Adaptivity in Shifting Environments ☆73 · Updated 2 years ago
- ☆32 · Updated 3 years ago
- LOcal Rule-based Explanations ☆50 · Updated last year
- Code to reproduce our paper on probabilistic algorithmic recourse: https://arxiv.org/abs/2006.06831 ☆36 · Updated 2 years ago
- 💡 Adversarial attacks on explanations and how to defend them ☆304 · Updated last month
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆81 · Updated 2 years ago
- This repository contains the implementation of SimplEx, a method to explain the latent representations of black-box models with the help … ☆24 · Updated last year
- Codebase for "Deep Learning for Case-based Reasoning through Prototypes: A Neural Network that Explains Its Predictions" (to appear in AA… ☆73 · Updated 7 years ago
- References for Papers at the Intersection of Causality and Fairness ☆18 · Updated 6 years ago
- ☆124 · Updated 3 years ago
- Code for "Generative causal explanations of black-box classifiers" ☆33 · Updated 4 years ago
- Code accompanying the paper "Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers" ☆30 · Updated last year
- A lightweight implementation of removal-based explanations for ML models. ☆57 · Updated 3 years ago
- Code/figures in Right for the Right Reasons ☆55 · Updated 4 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆128 · Updated 3 years ago
- Code and data for the experiments in "On Fairness and Calibration" ☆50 · Updated 2 years ago
- ☆264 · Updated 5 years ago
- This is a collection of papers and other resources related to fairness. ☆91 · Updated last year
- Python tools to check recourse in linear classification ☆74 · Updated 4 years ago