albermax / interpretable_ai_book__sw_chapter
The code snippets for the SW chapter of the "Interpretable AI" book.
☆17 · Updated 5 years ago
Related projects
Alternatives and complementary repositories for interpretable_ai_book__sw_chapter
- A collection of papers and tools on Explainable AI ☆36 · Updated 4 years ago
- A benchmark for evaluating the quality of local explanations, produced by any explainer, for text and image data ☆30 · Updated 3 years ago
- A lightweight implementation of removal-based explanations for ML models (a minimal sketch of the idea appears after this list) ☆57 · Updated 3 years ago
- TensorFlow implementation of integrated gradients presented in "Axiomatic Attribution for Deep Networks". It explains connections between… (see the integrated-gradients sketch after this list) ☆16 · Updated 5 years ago
- Code for our AAMAS 2020 paper: "A Story of Two Streams: Reinforcement Learning Models from Human Behavior and Neuropsychiatry". ☆27 · Updated last year
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆73 · Updated 2 years ago
- General-purpose library for BNNs, and an implementation of OC-BNNs from our NeurIPS 2020 paper ☆38 · Updated 2 years ago
- Repository for the paper "An Adversarial Approach for the Robust Classification of Pneumonia from Chest Radiographs" ☆18 · Updated 4 years ago
- This repository contains code to explain one-dimensional convolutional neural networks (1D-CNNs) using Layer-wise Relevance Propagation ☆12 · Updated 3 years ago
- Code for our ICML '19 paper: Neural Network Attributions: A Causal Perspective ☆50 · Updated 2 years ago
- Code to study the generalisability of benchmark models on non-stationary EHRs ☆14 · Updated 5 years ago
- Quantitative Testing with Concept Activation Vectors (TCAV) in PyTorch (see the TCAV sketch after this list) ☆41 · Updated 5 years ago
- How Can I Explain This to You? An Empirical Study of Deep Neural Network Explanation Methods ☆23 · Updated 4 years ago
- The Medkit-Learn(ing) Environment: Medical Decision Modelling through Simulation (NeurIPS 2021) by Alex J. Chan, Ioana Bica, Alihan Huyuk… ☆28 · Updated 2 years ago
- Learning representations for RL in Healthcare under a POMDP assumption ☆51 · Updated 3 years ago
- Codebase for INVASE: Instance-wise Variable Selection (ICLR 2019) ☆60 · Updated 4 years ago
- Self-Explaining Neural Networks ☆39 · Updated 4 years ago
- Tools for training explainable models using attribution priors ☆121 · Updated 3 years ago
- Gifsplanation - Explaining neural networks with gifs! ☆24 · Updated 9 months ago
- Autoencoder network for imputing missing values ☆26 · Updated 5 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆80 · Updated last year
- Code for the paper "Adversarial Attacks Against Medical Deep Learning Systems" ☆67 · Updated 5 years ago
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆75 · Updated last year
- Code accompanying the paper "Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers" ☆30 · Updated last year
- Code for reproducing the contrastive explanation in “Explanations based on the Missing: Towards Contrastive Explanations with Pertinent… ☆54 · Updated 6 years ago
- Neural Additive Models (Google Research) ☆67 · Updated 3 years ago
- Explaining Anomalies Detected by Autoencoders Using SHAP ☆28 · Updated 4 years ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆57 · Updated last year
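
Several of the repositories above center on a handful of core attribution techniques. The sketches below are minimal, self-contained illustrations of three of them, not code taken from any of the listed projects; every function name, toy model, and data array is made up for the example.

Removal-based explanations score a feature by how much the model's output changes when that feature is "removed", i.e. replaced with a baseline value. A minimal NumPy sketch, assuming a black-box `model_fn` that maps a feature vector to a scalar:

```python
import numpy as np

def removal_importance(model_fn, x, baseline):
    """Score each feature by the drop in model output when that
    feature is replaced with its baseline value ("removed")."""
    x = np.asarray(x, dtype=float)
    baseline = np.asarray(baseline, dtype=float)
    base_out = model_fn(x)
    scores = np.zeros_like(x)
    for i in range(x.size):
        x_masked = x.copy()
        x_masked[i] = baseline[i]              # "remove" feature i
        scores[i] = base_out - model_fn(x_masked)
    return scores

# Toy usage: for a linear model with a zero baseline, the scores
# recover w_i * x_i exactly.
w = np.array([2.0, -1.0, 0.5])
model = lambda v: float(w @ v)
print(removal_importance(model, [1.0, 2.0, 3.0], baseline=np.zeros(3)))
# -> [ 2.  -2.   1.5]
```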
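Integrated gradients (from "Axiomatic Attribution for Deep Networks") attributes F(x) - F(x') to input features by integrating the gradient of F along the straight-line path from a baseline x' to the input x. A midpoint Riemann-sum sketch, assuming a user-supplied `grad_fn` that returns dF/dx; in practice the gradients would come from TensorFlow or PyTorch autodiff:

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=50):
    """Midpoint Riemann-sum approximation of integrated gradients:
    IG_i = (x_i - x'_i) * mean_k dF/dx_i, with the gradient evaluated
    at points along the straight path from the baseline x' to x."""
    x = np.asarray(x, dtype=float)
    baseline = np.asarray(baseline, dtype=float)
    alphas = (np.arange(steps) + 0.5) / steps          # path midpoints
    grads = np.stack([grad_fn(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

# Toy usage: F(x) = sum(x^2), so dF/dx = 2x. The attributions sum to
# F(x) - F(baseline), the completeness axiom from the paper.
grad = lambda v: 2.0 * v
ig = integrated_gradients(grad, x=np.array([1.0, 2.0]), baseline=np.zeros(2))
print(ig, ig.sum())  # -> [1. 4.] 5.0
```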
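Quantitative Testing with Concept Activation Vectors (TCAV) fits a linear probe that separates a concept's layer activations from random activations, takes the probe's normal vector as the concept activation vector (CAV), and reports the fraction of examples whose class-logit gradients point along the CAV. A sketch using scikit-learn; the activation and gradient arrays here are random stand-ins for what would be extracted from a trained network:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def tcav_score(concept_acts, random_acts, logit_grads):
    """Fit a linear probe separating concept activations from random
    activations, take its normal vector as the CAV, and report the
    fraction of examples whose class-logit gradient has a positive
    component along the CAV."""
    X = np.vstack([concept_acts, random_acts])
    y = np.r_[np.ones(len(concept_acts)), np.zeros(len(random_acts))]
    cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
    cav /= np.linalg.norm(cav)
    return float(np.mean(logit_grads @ cav > 0))

# Toy usage: random stand-ins for layer activations and logit gradients.
rng = np.random.default_rng(0)
print(tcav_score(concept_acts=rng.normal(1.0, 1.0, (50, 8)),
                 random_acts=rng.normal(0.0, 1.0, (50, 8)),
                 logit_grads=rng.normal(0.5, 1.0, (200, 8))))
```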