nesl / Explainability-Study
How Can I Explain This to You? An Empirical Study of Deep Neural Network Explanation Methods
☆23 · Updated 4 years ago
Alternatives and similar repositories for Explainability-Study:
Users interested in Explainability-Study are comparing it to the libraries listed below:
- Code for "Interpretable Image Recognition with Hierarchical Prototypes" — ☆18 · Updated 5 years ago
- Model Patching: Closing the Subgroup Performance Gap with Data Augmentation — ☆42 · Updated 4 years ago
- Official repository for "Bridging Adversarial Robustness and Gradient Interpretability" — ☆30 · Updated 5 years ago
- Active and Sample-Efficient Model Evaluation — ☆24 · Updated 3 years ago
- Quantitative Testing with Concept Activation Vectors in PyTorch — ☆42 · Updated 5 years ago
- Geometric Certifications of Neural Nets — ☆41 · Updated 2 years ago
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" in NeurIPS 2019 for… — ☆25 · Updated 2 years ago
- Code for Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks — ☆31 · Updated 6 years ago
- Library implementing state-of-the-art concept-based and disentanglement learning methods for explainable AI — ☆52 · Updated 2 years ago
- An uncertainty-based random sampling algorithm for data augmentation — ☆30 · Updated 4 years ago
- Code for Fong and Vedaldi 2017, "Interpretable Explanations of Black Boxes by Meaningful Perturbation" — ☆30 · Updated 5 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… — ☆128 · Updated 3 years ago
- This code reproduces the results of the paper "Measuring Data Leakage in Machine-Learning Models with Fisher Information" — ☆49 · Updated 3 years ago
- Fine-grained ImageNet annotations — ☆29 · Updated 4 years ago
- ICML 2020, Estimating Generalization under Distribution Shifts via Domain-Invariant Representations — ☆22 · Updated 4 years ago
- Parameter-Space Saliency Maps for Explainability — ☆23 · Updated last year
- Code/figures in Right for the Right Reasons — ☆55 · Updated 4 years ago
- Interpretation of Neural Network is Fragile — ☆36 · Updated 8 months ago
- Exemplar VAE: Linking Generative Models, Nearest Neighbor Retrieval, and Data Augmentation — ☆68 · Updated 4 years ago
- Implementation of the models and datasets used in "An Information-theoretic Approach to Distribution Shifts" — ☆25 · Updated 3 years ago
- Deep Structured Energy Based Model — ☆11 · Updated 7 years ago
- Codebase for "Deep Learning for Case-based Reasoning through Prototypes: A Neural Network that Explains Its Predictions" (to appear in AA… — ☆73 · Updated 7 years ago
- Explaining Image Classifiers by Counterfactual Generation — ☆28 · Updated 2 years ago
- Self-Explaining Neural Networks — ☆39 · Updated 4 years ago
- Robust Out-of-distribution Detection in Neural Networks — ☆72 · Updated 2 years ago
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) — ☆128 · Updated 3 years ago
- (no description) — ☆21 · Updated 4 years ago
- Tools for training explainable models using attribution priors — ☆120 · Updated 3 years ago
- Code to study the generalisability of benchmark models on non-stationary EHRs — ☆14 · Updated 5 years ago
- MODALS: Modality-agnostic Automated Data Augmentation in the Latent Space — ☆40 · Updated 4 years ago