dais-ita / interpretability-papers
Papers on interpretable deep learning, for review
☆29 · Updated 7 years ago
Alternatives and similar repositories for interpretability-papers:
Users interested in interpretability-papers are comparing it to the libraries listed below.
- Code for "Testing Robustness Against Unforeseen Adversaries"☆80Updated 6 months ago
- Computing various norms/measures on over-parametrized neural networks☆49Updated 6 years ago
- OD-test: A Less Biased Evaluation of Out-of-Distribution (Outlier) Detectors (PyTorch)☆62Updated last year
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019]☆32Updated 4 years ago
- Learning perturbation sets for robust machine learning (☆64 · Updated 3 years ago)
- Code/figures in Right for the Right Reasons (☆55 · Updated 4 years ago)
- Gradient Starvation: A Learning Proclivity in Neural Networks (☆61 · Updated 4 years ago)
- Interpretation of Neural Networks is Fragile (☆36 · Updated 9 months ago)
- Implementation of Information Dropout (☆39 · Updated 7 years ago)
- Data, code & materials from the paper "Generalisation in humans and deep neural networks" (NeurIPS 2018) (☆95 · Updated last year)
- Geometric Certifications of Neural Nets (☆41 · Updated 2 years ago)
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) (☆128 · Updated 3 years ago)
- Source code for the paper Mroueh, Sercu, Rigotti, Padhi, dos Santos, "Sobolev Independence Criterion", NeurIPS 2019 (☆14 · Updated 8 months ago)
- A way to achieve uniform confidence far away from the training data (☆37 · Updated 3 years ago)
- Code for the paper "Understanding Measures of Uncertainty for Adversarial Example Detection" (☆59 · Updated 6 years ago)
- Code for Fong and Vedaldi 2017, "Interpretable Explanations of Black Boxes by Meaningful Perturbation" (☆30 · Updated 5 years ago)
- Adversarially Robust Neural Network on MNIST (☆64 · Updated 3 years ago)
- A PyTorch implementation of the authors' Jacobian regularizer to encourage learning representations more robust to input perturbations (☆125 · Updated last year); see the sketch after this list.
- Explaining Image Classifiers by Counterfactual Generation (☆28 · Updated 2 years ago)
- PyTorch implementation of recent visual attribution methods for model interpretability (☆145 · Updated 4 years ago)
- SGD and Ordered SGD codes for deep learning, SVM, and logistic regression (☆35 · Updated 4 years ago)
- Codebase for Learning Invariances in Neural Networks (☆93 · Updated 2 years ago)
- Code for the experiments in the ICLR submission "An Empirical Investigation of Catastrophic Forgetting in Gradient-Based N…" (☆67 · Updated 10 years ago)
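
The Jacobian-regularizer entry above amounts to adding the (squared) Frobenius norm of the input-output Jacobian to the training loss, so that small input perturbations move the model's outputs less. Below is a minimal sketch of that idea, assuming a standard PyTorch classifier; the function name `jacobian_frobenius_penalty`, the `n_proj` argument, and the random-projection estimator are illustrative assumptions rather than the linked repository's API.

```python
# Minimal sketch (assumed helper, not the linked repository's API):
# estimate ||d f(x)/dx||_F^2 with random projections and use it as a penalty.
import torch


def jacobian_frobenius_penalty(model, x, n_proj=1):
    """Estimate the squared Frobenius norm of the input-output Jacobian of model at x."""
    x = x.clone().requires_grad_(True)
    out = model(x)  # shape: (batch, n_classes)
    penalty = x.new_zeros(())
    for _ in range(n_proj):
        # random unit vector in output space
        v = torch.randn_like(out)
        v = v / v.norm(dim=1, keepdim=True)
        # vector-Jacobian product d(v . f(x))/dx, kept in the graph so the
        # penalty itself can be backpropagated through during training
        (grad_x,) = torch.autograd.grad(out, x, grad_outputs=v, create_graph=True)
        penalty = penalty + grad_x.pow(2).flatten(1).sum(dim=1).mean()
    # E[||J^T v||^2] over random unit v is ||J||_F^2 / n_classes, so rescale
    return out.shape[1] * penalty / n_proj


# hypothetical usage inside a training step:
# loss = criterion(model(x), y) + 0.01 * jacobian_frobenius_penalty(model, x)
```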