anguyen8 / XAI-papers
☆592 · Updated last year
Alternatives and similar repositories for XAI-papers:
Users interested in XAI-papers are comparing it to the libraries listed below.
- Interesting resources related to XAI (Explainable Artificial Intelligence) ☆827 · Updated 2 years ago
- ☆916 · Updated 2 years ago
- Papers and code of Explainable AI, esp. w.r.t. image classification ☆208 · Updated 2 years ago
- A curated list of awesome Fairness in AI resources ☆320 · Updated last year
- A collection of research materials on explainable AI/ML ☆1,494 · Updated last month
- Attributing predictions made by the Inception network using the Integrated Gradients method (a minimal sketch of this method follows the list) ☆624 · Updated 3 years ago
- Code for the TCAV ML interpretability project ☆639 · Updated 9 months ago
- Towards Automatic Concept-based Explanations ☆159 · Updated last year
- A unified framework of perturbation and gradient-based attribution methods for Deep Neural Networks interpretability. DeepExplain also in… ☆750 · Updated 4 years ago
- Related papers for robust machine learning ☆568 · Updated last year
- 💡 Adversarial attacks on explanations and how to defend them ☆314 · Updated 5 months ago
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… ☆73 · Updated 2 years ago
- XAI - An eXplainability toolbox for machine learning ☆1,168 · Updated 3 years ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆245 · Updated 8 months ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆74 · Updated 3 years ago
- This code package implements the prototypical part network (ProtoPNet) from the paper "This Looks Like That: Deep Learning for Interpreta… ☆363 · Updated 2 years ago
- PyTorch implementation of the paper "Axiomatic Attribution for Deep Networks" ☆183 · Updated 3 years ago
- TensorFlow tutorial for various Deep Neural Network visualization techniques ☆347 · Updated 4 years ago
- Understanding Deep Networks via Extremal Perturbations and Smooth Masks ☆345 · Updated 4 years ago
- Full-gradient saliency maps ☆210 · Updated 2 years ago
- A machine learning benchmark of in-the-wild distribution shifts, with data loaders, evaluators, and default models ☆564 · Updated last year
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ☆598 · Updated 3 months ago
- List of relevant resources for machine learning from explanatory supervision ☆157 · Updated 3 months ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆127 · Updated 4 years ago
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) ☆128 · Updated 3 years ago
- A library for experimenting with, training and evaluating neural networks, with a focus on adversarial robustness ☆934 · Updated last year
- PyTorch implementation of various neural network interpretability methods ☆117 · Updated 3 years ago
- Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImagenet200, Imagenet ☆620 · Updated 2 years ago
- All about explainable AI, algorithmic fairness and more ☆107 · Updated last year
- Literature survey, paper reviews, experimental setups and a collection of implementations for baseline methods for predictive uncertaint… ☆624 · Updated 2 years ago
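
Two of the repositories above (the Integrated Gradients attribution code and the "Axiomatic Attribution for Deep Networks" implementation) center on the same attribution method. As a rough orientation, here is a minimal PyTorch sketch of Integrated Gradients; the function name, arguments, and the all-zero baseline are illustrative assumptions and do not reflect any listed repository's actual API.

```python
import torch

def integrated_gradients(model, x, target_class, baseline=None, steps=50):
    """Approximate IG_i(x) = (x_i - x'_i) * integral_0^1 dF/dx_i(x' + a(x - x')) da
    with a Riemann sum over `steps` points along the straight-line path.

    `model`, `x` (a single input without batch dimension), and `target_class`
    are placeholders, not any listed repository's interface.
    """
    if baseline is None:
        baseline = torch.zeros_like(x)  # assumed baseline x': all-zero input

    # Interpolation coefficients a in [0, 1], shaped to broadcast against x.
    alphas = torch.linspace(0.0, 1.0, steps, device=x.device, dtype=x.dtype)
    alphas = alphas.view(-1, *([1] * x.dim()))

    # Points x' + a(x - x') along the path, treated as one batch.
    interpolated = (baseline + alphas * (x - baseline)).detach().requires_grad_(True)

    # Gradient of the target-class score at every interpolated point.
    score = model(interpolated)[:, target_class].sum()
    grads = torch.autograd.grad(score, interpolated)[0]

    # Average the path gradients and scale by the input-baseline difference.
    return (x - baseline) * grads.mean(dim=0)
```

This is only an orientation sketch; the repositories above (and general-purpose libraries such as Captum) typically provide batched, tested implementations with configurable baselines and step counts.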