datamllab / xdeep
☆44 · Updated 5 years ago
Alternatives and similar repositories for xdeep
Users interested in xdeep are comparing it to the libraries listed below.
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) · ☆129 · Updated 4 years ago
- Codes for reproducing the contrastive explanation in "Explanations based on the Missing: Towards Contrastive Explanations with Pertinent…" · ☆54 · Updated 7 years ago
- Active and Sample-Efficient Model Evaluation · ☆26 · Updated 6 months ago
- [ICML 2019, 20 min long talk] Robust Decision Trees Against Adversarial Examples · ☆67 · Updated 4 months ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… · ☆128 · Updated 4 years ago
- Keras implementation for DASP: Deep Approximate Shapley Propagation (ICML 2019) · ☆62 · Updated 6 years ago
- (no description) · ☆32 · Updated 4 years ago
- Official Repo for "Efficient task-specific data valuation for nearest neighbor algorithms" · ☆26 · Updated 5 years ago
- The Ultimate Reference for Out of Distribution Detection with Deep Neural Networks · ☆118 · Updated 5 years ago
- Interpretable Explanations of Black Boxes by Meaningful Perturbation Pytorch · ☆12 · Updated last year
- Interpretation of Neural Network is Fragile · ☆36 · Updated last year
- Official repository for "Bridging Adversarial Robustness and Gradient Interpretability" · ☆30 · Updated 6 years ago
- A pytorch implementation for the LSTM experiments in the paper: Why Gradient Clipping Accelerates Training: A Theoretical Justification f… · ☆46 · Updated 5 years ago
- (no description) · ☆125 · Updated 4 years ago
- Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks [NeurIPS 2019] · ☆50 · Updated 5 years ago
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" in NeurIPS 2019 for… · ☆25 · Updated 3 years ago
- (ICML 2021) Mandoline: Model Evaluation under Distribution Shift · ☆30 · Updated 4 years ago
- Towards Automatic Concept-based Explanations · ☆161 · Updated last year
- A simple algorithm to identify and correct for label shift · ☆22 · Updated 7 years ago
- Learning perturbation sets for robust machine learning · ☆65 · Updated 4 years ago
- This code reproduces the results of the paper "Measuring Data Leakage in Machine-Learning Models with Fisher Information" · ☆50 · Updated 4 years ago
- [NeurIPS 2020] Coresets for Robust Training of Neural Networks against Noisy Labels · ☆35 · Updated 4 years ago
- Code repository for our paper "Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift": https://arxiv.org/abs/1810.119… · ☆107 · Updated last year
- Rethinking Bias-Variance Trade-off for Generalization of Neural Networks · ☆50 · Updated 4 years ago
- Implementation of the paper "Understanding anomaly detection with deep invertible networks through hierarchies of distributions and featu…" · ☆42 · Updated 5 years ago
- Distributional Shapley: A Distributional Framework for Data Valuation · ☆30 · Updated last year
- Beta Shapley: a Unified and Noise-reduced Data Valuation Framework for Machine Learning (AISTATS 2022 Oral) · ☆42 · Updated 3 years ago
- Python codes for influential instance estimation · ☆56 · Updated 2 years ago
- To Trust Or Not To Trust A Classifier. A measure of uncertainty for any trained (possibly black-box) classifier which is more effective t… · ☆177 · Updated 2 years ago
- Calibration of Convolutional Neural Networks · ☆170 · Updated 2 years ago