marcoancona / DASP
Keras implementation for DASP: Deep Approximate Shapley Propagation (ICML 2019)
⭐61 · Updated 5 years ago
Alternatives and similar repositories for DASP:
Users interested in DASP are comparing it to the libraries listed below:
- Codebase for "Deep Learning for Case-based Reasoning through Prototypes: A Neural Network that Explains Its Predictions" (to appear in AA… ⭐75 · Updated 7 years ago
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) ⭐128 · Updated 3 years ago
- Interpretation of Neural Network is Fragile ⭐36 · Updated last year
- Implementation of the paper "Shapley Explanation Networks" ⭐88 · Updated 4 years ago
- ⭐125 · Updated 3 years ago
- Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks [NeurIPS 2019] ⭐50 · Updated 5 years ago
- ⭐44 · Updated 4 years ago
- Code for AAAI 2018 accepted paper: "Beyond Sparsity: Tree Regularization of Deep Models for Interpretability" ⭐78 · Updated 7 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ⭐127 · Updated 4 years ago
- PyTorch code for KDD 18 paper: Towards Explanation of DNN-based Prediction with Guided Feature Inversion ⭐21 · Updated 6 years ago
- Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model. ⭐131 · Updated 4 years ago
- ⭐134 · Updated 5 years ago
- Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network ⭐62 · Updated 5 years ago
- ⭐50 · Updated 2 years ago
- This is the pytorch implementation of the paper - Axiomatic Attribution for Deep Networks. ⭐182 · Updated 3 years ago
- [ICML 2019, 20 min long talk] Robust Decision Trees Against Adversarial Examples ⭐67 · Updated 2 years ago
- The Ultimate Reference for Out of Distribution Detection with Deep Neural Networks ⭐118 · Updated 5 years ago
- Fair Empirical Risk Minimization (FERM) ⭐37 · Updated 4 years ago
- ⭐51 · Updated 4 years ago
- Rethinking Bias-Variance Trade-off for Generalization of Neural Networks ⭐49 · Updated 4 years ago
- CVPR'19 experiments with (on-manifold) adversarial examples. ⭐44 · Updated 5 years ago
- Self-Explaining Neural Networks ⭐13 · Updated last year
- Code for AAAI 2018 accepted paper: "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing the… ⭐55 · Updated 2 years ago
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" in NeurIPS 2019 for… ⭐25 · Updated 3 years ago
- Adversarial Defense for Ensemble Models (ICML 2019) ⭐61 · Updated 4 years ago
- Interfaces for defining Robust ML models and precisely specifying the threat models under which they claim to be secure. ⭐62 · Updated 5 years ago
- Gold Loss Correction ⭐87 · Updated 6 years ago
- A python implementation of the kernel two-samples test as in Gretton et al 2012 (JMLR). ⭐33 · Updated 9 years ago
- Implementation of Invariant Risk Minimization https://arxiv.org/abs/1907.02893 ⭐86 · Updated 5 years ago
- Explaining a black-box using Deep Variational Information Bottleneck Approach ⭐46 · Updated 2 years ago