nesl / Explainability-Study
How Can I Explain This to You? An Empirical Study of Deep Neural Network Explanation Methods
☆24 · Updated 5 years ago
Alternatives and similar repositories for Explainability-Study
Users interested in Explainability-Study are comparing it to the libraries listed below. A minimal saliency-map sketch illustrating the theme these repositories share follows the list.
- Fine-grained ImageNet annotations ☆30 · Updated 5 years ago
- Official repository for "Bridging Adversarial Robustness and Gradient Interpretability" ☆30 · Updated 6 years ago
- Model Patching: Closing the Subgroup Performance Gap with Data Augmentation ☆42 · Updated 5 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ☆128 · Updated 4 years ago
- In this paper, we show that the performance of a learnt generative model is closely related to the model's ability to accurately represen… ☆41 · Updated 4 years ago
- Parameter-Space Saliency Maps for Explainability ☆23 · Updated 2 years ago
- Tools for training explainable models using attribution priors ☆125 · Updated 4 years ago
- Figures & code from the paper "Shortcut Learning in Deep Neural Networks" (Nature Machine Intelligence 2020) ☆101 · Updated 3 years ago
- Code for reproducing the contrastive explanations in "Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives" ☆54 · Updated 7 years ago
- This code reproduces the results of the paper "Measuring Data Leakage in Machine-Learning Models with Fisher Information" ☆49 · Updated 4 years ago
- Exemplar VAE: Linking Generative Models, Nearest Neighbor Retrieval, and Data Augmentation ☆69 · Updated 5 years ago
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) ☆129 · Updated 4 years ago
- Implementation of Information Dropout ☆39 · Updated 8 years ago
- ☆46 · Updated 6 years ago
- Learning perturbation sets for robust machine learning ☆65 · Updated 4 years ago
- Code for "Supermasks in Superposition" ☆125 · Updated 2 years ago
- Python implementation for evaluating the explanations presented in "On the (In)fidelity and Sensitivity for Explanations" (NeurIPS 2019) for… ☆25 · Updated 3 years ago
- ☆26 · Updated 5 years ago
- Code for Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks ☆30 · Updated 7 years ago
- Geometric Certifications of Neural Nets ☆42 · Updated 3 years ago
- Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI ☆55 · Updated 3 years ago
- Gradient Starvation: A Learning Proclivity in Neural Networks ☆61 · Updated 5 years ago
- [NeurIPS'19] [PyTorch] Adaptive Regularization in NN ☆68 · Updated 6 years ago
- Adversarial Lipschitz Regularization ☆10 · Updated 4 years ago
- Implementation of the models and datasets used in "An Information-theoretic Approach to Distribution Shifts" ☆25 · Updated 4 years ago
- Randomized Smoothing of All Shapes and Sizes (ICML 2020) ☆51 · Updated 5 years ago
- PyTorch implementation of the ICML 2020 paper "Latent Bernoulli Autoencoder" ☆24 · Updated 4 years ago
- Code for Fong and Vedaldi (2017), "Interpretable Explanations of Black Boxes by Meaningful Perturbation" ☆32 · Updated 6 years ago
- PyTorch implementation of "The Surprising Positive Knowledge Transfer in Continual 3D Object Shape Reconstruction" ☆33 · Updated 3 years ago
- ☆34 · Updated 7 years ago
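
Many of the repositories above build on gradient-based attribution in one form or another. As a point of reference for that shared theme, here is a minimal sketch of a vanilla-gradient saliency map in PyTorch. The ResNet-18 backbone, the random input tensor, and the target class are illustrative placeholder assumptions; this is not code from Explainability-Study or from any repository listed above.

```python
# Minimal vanilla-gradient saliency sketch (illustrative only; the model,
# input, and target class are placeholder assumptions, not code from any
# of the repositories listed above).
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # placeholder backbone
model.eval()

# Placeholder input; in practice this would be a normalized image tensor.
image = torch.rand(1, 3, 224, 224, requires_grad=True)
target_class = 0  # placeholder class index

# Gradient of the target-class logit with respect to the input pixels.
logits = model(image)
logits[0, target_class].backward()

# Saliency map: absolute gradient magnitude, reduced over color channels.
saliency = image.grad.abs().max(dim=1).values  # shape (1, 224, 224)
```

Several entries in the list refine this basic recipe, for example by regularizing attributions during training (attribution priors, CDEP) or by replacing the raw gradient with an optimized input perturbation (Fong and Vedaldi's meaningful-perturbation method).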