feifeife / All-about-XAI
This repository collects papers and tools on Explainable AI (XAI).
☆36 · Updated 4 years ago
Related projects
Alternatives and complementary repositories for All-about-XAI
- ☆109 · Updated 2 years ago
- Self-Explaining Neural Networks ☆39 · Updated 4 years ago
- Code for reproducing the contrastive explanation in "Explanations based on the Missing: Towards Contrastive Explanations with Pertinent…" ☆54 · Updated 6 years ago
- Quantitative Testing with Concept Activation Vectors in PyTorch ☆41 · Updated 5 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆127 · Updated 3 years ago
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… ☆72 · Updated 2 years ago
- PyTorch implementation of SmoothTaylor ☆15 · Updated 3 years ago
- Work on Evidential Deep Learning to Quantify Classification Uncertainty ☆56 · Updated 5 years ago
- A PyTorch implementation of the explainable-AI work "Contrastive Layerwise Relevance Propagation (CLRP)" ☆17 · Updated 2 years ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆73 · Updated 2 years ago
- Towards Automatic Concept-based Explanations ☆157 · Updated 6 months ago
- Code for the CVPR 2021 paper "Understanding Failures of Deep Networks via Robust Feature Extraction" ☆35 · Updated 2 years ago
- Library implementing state-of-the-art concept-based and disentanglement learning methods for explainable AI ☆52 · Updated 2 years ago
- STEEX: Steering Counterfactual Explanations with Semantics ☆18 · Updated last year
- A list of papers on Active Learning and Uncertainty Estimation for Neural Networks ☆65 · Updated 4 years ago
- B-LRP is the repository for the paper "How Much Can I Trust You? — Quantifying Uncertainties in Explaining Neural Networks" ☆18 · Updated 2 years ago
- NoiseGrad (and its extension NoiseGrad++) is a method to enhance explanations of artificial neural networks by adding noise to model weig… ☆21 · Updated last year
- Introduces and experiments with ways to interpret and evaluate models in the image domain (PyTorch) ☆40 · Updated 4 years ago
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) ☆125 · Updated 3 years ago
- ☆48 · Updated 4 years ago
- Model Patching: Closing the Subgroup Performance Gap with Data Augmentation ☆42 · Updated 4 years ago
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" (NeurIPS 2019) for… ☆25 · Updated 2 years ago
- Detect a model's attention ☆156 · Updated 4 years ago
- Tools for training explainable models using attribution priors ☆121 · Updated 3 years ago
- A lightweight implementation of removal-based explanations for ML models ☆57 · Updated 3 years ago
- Papers and code on Explainable AI, especially for image classification ☆196 · Updated 2 years ago
- PyTorch implementation of various neural network interpretability methods ☆112 · Updated 2 years ago
- Morpho-MNIST: Quantitative Assessment and Diagnostics for Representation Learning (http://jmlr.org/papers/v20/19-033.html) ☆84 · Updated 4 months ago
- Visual Explanation using Uncertainty based Class Activation Maps ☆21 · Updated 4 years ago
- Interpretable Explanations of Black Boxes by Meaningful Perturbation, in PyTorch ☆12 · Updated 2 months ago
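Several of the repositories above (e.g. SmoothTaylor, NoiseGrad) build on the idea of stabilizing gradient-based attributions by averaging over noise. A minimal NumPy sketch of SmoothGrad-style averaging on a toy differentiable model — the toy model, sample count, and noise scale here are illustrative assumptions, not code from any of the listed repos:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": f(x) = sum(w * x^2), whose input gradient is 2 * w * x.
# (Illustrative stand-in for a neural network's input gradient.)
w = np.array([0.5, -1.0, 2.0])

def grad(x):
    """Analytic input gradient of the toy model at x."""
    return 2.0 * w * x

def smoothgrad(x, n_samples=50, sigma=0.1):
    """Average input gradients over Gaussian-perturbed copies of x
    (SmoothGrad-style smoothing; hyperparameters are illustrative)."""
    noise = rng.normal(0.0, sigma, size=(n_samples, x.size))
    return np.mean([grad(x + eps) for eps in noise], axis=0)

x = np.array([1.0, 1.0, 1.0])
print(grad(x))        # raw attribution at x
print(smoothgrad(x))  # noise-averaged attribution
```

On this smooth toy model the averaged attribution stays close to the raw gradient; the smoothing matters for real networks, whose raw input gradients are noisy and locally unstable.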