stefanoteso / awesome-explanatory-supervision
List of relevant resources for machine learning from explanatory supervision
☆162 · Updated 6 months ago
Alternatives and similar repositories for awesome-explanatory-supervision
Users that are interested in awesome-explanatory-supervision are comparing it to the libraries listed below
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" (ICLR 2019) ☆129 · Updated 4 years ago
- Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model. ☆132 · Updated 5 years ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆76 · Updated 3 years ago
- Official repository for CMU Machine Learning Department's 10732: Robustness and Adaptivity in Shifting Environments ☆77 · Updated 3 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆128 · Updated 4 years ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆70 · Updated 3 years ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆252 · Updated last year
- Implementation of the paper "Shapley Explanation Networks" ☆88 · Updated 5 years ago
- Model Agnostic Counterfactual Explanations ☆88 · Updated 3 years ago
- [NeurIPS 2021] WRENCH: Weak supeRvision bENCHmark ☆226 · Updated last year
- A curated list of awesome Fairness in AI resources ☆330 · Updated 2 years ago
- ☆33 · Updated 4 years ago
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… ☆75 · Updated 3 years ago
- Optimal Transport Dataset Distance ☆174 · Updated 3 years ago
- ☆125 · Updated 4 years ago
- A repository for explaining feature attributions and feature interactions in deep neural networks. ☆192 · Updated 4 years ago
- Code to reproduce our paper on probabilistic algorithmic recourse: https://arxiv.org/abs/2006.06831 ☆37 · Updated 3 years ago
- Adversarial attacks on explanations and how to defend them ☆332 · Updated last year
- References for Papers at the Intersection of Causality and Fairness ☆18 · Updated 7 years ago
- PyTorch Explain: Interpretable Deep Learning in Python. ☆168 · Updated last year
- Calibration library and code for the paper: Verified Uncertainty Calibration. Ananya Kumar, Percy Liang, Tengyu Ma. NeurIPS 2019 (Spotlig… ☆152 · Updated 3 years ago
- Code accompanying the paper "Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers" ☆31 · Updated 2 years ago
- A curated list of programmatic weak supervision papers and resources ☆189 · Updated 2 years ago
- Code repository for our paper "Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift": https://arxiv.org/abs/1810.119… ☆107 · Updated last year
- Beta Shapley: a Unified and Noise-reduced Data Valuation Framework for Machine Learning (AISTATS 2022 Oral) ☆43 · Updated 3 years ago
- All about explainable AI, algorithmic fairness and more ☆110 · Updated 2 years ago
- Local explanations with uncertainty! ☆42 · Updated 2 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆85 · Updated 3 years ago
- This is a collection of papers and other resources related to fairness. ☆95 · Updated 2 months ago
- Reusable BatchBALD implementation ☆78 · Updated last year