anguyen8 / XAI-papers
☆565 · Updated last year
Related projects
Alternatives and complementary repositories for XAI-papers
- Interesting resources related to XAI (Explainable Artificial Intelligence) ☆822 · Updated 2 years ago
- Attributing predictions made by the Inception network using the Integrated Gradients method ☆598 · Updated 2 years ago
- A collection of research materials on explainable AI/ML ☆1,422 · Updated 3 weeks ago
- 💡 Adversarial attacks on explanations and how to defend them ☆299 · Updated 8 months ago
- A unified framework of perturbation and gradient-based attribution methods for Deep Neural Networks interpretability. DeepExplain also in… ☆734 · Updated 4 years ago
- This code package implements the prototypical part network (ProtoPNet) from the paper "This Looks Like That: Deep Learning for Interpreta… ☆341 · Updated 2 years ago
- Papers and code on Explainable AI, especially w.r.t. image classification ☆196 · Updated 2 years ago
- Tensorflow tutorial for various Deep Neural Network visualization techniques ☆344 · Updated 4 years ago
- A curated list of awesome Fairness in AI resources ☆314 · Updated last year
- Related papers for robust machine learning ☆564 · Updated last year
- Detect model's attention ☆155 · Updated 4 years ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆73 · Updated 2 years ago
- A library for experimenting with, training and evaluating neural networks, with a focus on adversarial robustness ☆918 · Updated 10 months ago
- Understanding Deep Networks via Extremal Perturbations and Smooth Masks ☆344 · Updated 4 years ago
- This is the PyTorch implementation of the paper "Axiomatic Attribution for Deep Networks" ☆181 · Updated 2 years ago
- Towards Automatic Concept-based Explanations ☆157 · Updated 6 months ago
- PyTorch implementation of various neural network interpretability methods ☆111 · Updated 2 years ago
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… ☆72 · Updated 2 years ago
- Full-gradient saliency maps ☆203 · Updated last year
- A simple and effective method for detecting out-of-distribution images in neural networks ☆531 · Updated 3 years ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ☆558 · Updated last week
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆127 · Updated 3 years ago
- This is a PyTorch reimplementation of Influence Functions from the ICML 2017 best paper "Understanding Black-box Predictions via Influence… ☆318 · Updated last year
- The LRP Toolbox provides simple and accessible stand-alone implementations of LRP for artificial neural networks supporting Matlab and Py… ☆330 · Updated 2 years ago
- All about explainable AI, algorithmic fairness and more ☆107 · Updated last year
- A toolbox to iNNvestigate neural networks' predictions! ☆1,268 · Updated 11 months ago
- Reading list for the Advanced Machine Learning Course ☆367 · Updated last year
- Official implementation of Score-CAM in PyTorch ☆405 · Updated 2 years ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆232 · Updated 3 months ago