anguyen8 / generative-attribution-methods
Code for the paper "Explaining image classifiers by removing input features using generative models" (ACCV 2020): https://arxiv.org/abs/1910.04256
☆15 · Updated 3 years ago
Alternatives and similar repositories for generative-attribution-methods
Users interested in generative-attribution-methods are comparing it to the repositories listed below.
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) ☆129 · Updated 4 years ago
- Code for "Testing Robustness Against Unforeseen Adversaries" ☆80 · Updated last year
- A benchmark for evaluating the quality of local machine learning explanations produced by any explainer, for text and image data ☆30 · Updated 4 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆128 · Updated 4 years ago
- Code/figures in Right for the Right Reasons ☆57 · Updated 5 years ago
- ☆135 · Updated 6 years ago
- ☆125 · Updated 4 years ago
- Tools for training explainable models using attribution priors. ☆125 · Updated 4 years ago
- code release for Representer point Selection for Explaining Deep Neural Network in NeurIPS 2018 ☆67 · Updated 4 years ago
- ☆15 · Updated last year
- Interpretation of Neural Network is Fragile ☆36 · Updated last year
- Supervised Local Modeling for Interpretability ☆29 · Updated 7 years ago
- Code for Fong and Vedaldi 2017, "Interpretable Explanations of Black Boxes by Meaningful Perturbation" ☆32 · Updated 6 years ago
- ☆13 · Updated 5 years ago
- Explaining Image Classifiers by Counterfactual Generation ☆28 · Updated 3 years ago
- Code and data for the experiments in "On Fairness and Calibration" ☆51 · Updated 3 years ago
- This repository contains the full code for the "Towards fairness in machine learning with adversarial networks" blog post. ☆119 · Updated 4 years ago
- A community-run reference for state-of-the-art adversarial example defenses. ☆51 · Updated last year
- Code for "Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?" ☆45 · Updated 2 years ago
- Comparing fairness-aware machine learning techniques. ☆161 · Updated 3 years ago
- ☆51 · Updated 5 years ago
- To Trust Or Not To Trust A Classifier. A measure of uncertainty for any trained (possibly black-box) classifier which is more effective t… ☆178 · Updated 2 years ago
- LaTeX source for the paper "On Evaluating Adversarial Robustness" ☆259 · Updated 4 years ago
- Codebase for "Deep Learning for Case-based Reasoning through Prototypes: A Neural Network that Explains Its Predictions" (to appear in AA… ☆77 · Updated 8 years ago
- code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ☆54 · Updated 3 years ago
- Computing various norms/measures on over-parametrized neural networks ☆50 · Updated 7 years ago
- ☆62 · Updated 4 years ago
- Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks [NeurIPS 2019] ☆50 · Updated 5 years ago
- This is a public collection of papers related to machine learning model interpretability. ☆26 · Updated 4 years ago
- ☆113 · Updated 3 years ago