datamllab / xdeep
☆44 · Updated 5 years ago
Alternatives and similar repositories for xdeep
Users interested in xdeep are comparing it to the libraries listed below.
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) ☆129 · Updated 4 years ago
- Official Repo for "Efficient task-specific data valuation for nearest neighbor algorithms" ☆26 · Updated 5 years ago
- Keras implementation for DASP: Deep Approximate Shapley Propagation (ICML 2019) ☆62 · Updated 6 years ago
- [ICML 2019, 20 min long talk] Robust Decision Trees Against Adversarial Examples ☆67 · Updated 4 months ago
- Official repository for "Bridging Adversarial Robustness and Gradient Interpretability". ☆30 · Updated 6 years ago
- Codes for reproducing the contrastive explanation in “Explanations based on the Missing: Towards Contrastive Explanations with Pertinent… ☆54 · Updated 7 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆128 · Updated 4 years ago
- This code reproduces the results of the paper, "Measuring Data Leakage in Machine-Learning Models with Fisher Information" ☆50 · Updated 4 years ago
- Interpretation of Neural Networks is Fragile ☆36 · Updated last year
- ☆124 · Updated 4 years ago
- Code repository for our paper "Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift": https://arxiv.org/abs/1810.119… ☆107 · Updated last year
- [NeurIPS 2020] Coresets for Robust Training of Neural Networks against Noisy Labels ☆35 · Updated 4 years ago
- Active and Sample-Efficient Model Evaluation ☆25 · Updated 5 months ago
- Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model. ☆132 · Updated 5 years ago
- A pytorch implementation for the LSTM experiments in the paper: Why Gradient Clipping Accelerates Training: A Theoretical Justification f… ☆46 · Updated 5 years ago
- ☆32 · Updated 4 years ago
- Rethinking Bias-Variance Trade-off for Generalization of Neural Networks ☆50 · Updated 4 years ago
- To Trust Or Not To Trust A Classifier. A measure of uncertainty for any trained (possibly black-box) classifier which is more effective t… ☆177 · Updated 2 years ago
- Implementation of the paper "Shapley Explanation Networks" ☆88 · Updated 4 years ago
- A lightweight implementation of removal-based explanations for ML models. ☆59 · Updated 4 years ago
- Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks [NeurIPS 2019] ☆50 · Updated 5 years ago
- Towards Automatic Concept-based Explanations ☆161 · Updated last year
- Tools for training explainable models using attribution priors. ☆124 · Updated 4 years ago
- Calibration of Convolutional Neural Networks ☆170 · Updated 2 years ago
- Ἀνατομή is a PyTorch library to analyze representation of neural networks ☆66 · Updated 4 months ago
- The Ultimate Reference for Out of Distribution Detection with Deep Neural Networks ☆118 · Updated 5 years ago
- Explores the ideas presented in Deep Ensembles: A Loss Landscape Perspective (https://arxiv.org/abs/1912.02757) by Stanislav Fort, Huiyi … ☆66 · Updated 5 years ago
- (ICML 2021) Mandoline: Model Evaluation under Distribution Shift ☆30 · Updated 4 years ago
- ☆38 · Updated 4 years ago
- PyTorch Implementation of "Distilling a Neural Network Into a Soft Decision Tree." Nicholas Frosst, Geoffrey Hinton, 2017. ☆105 · Updated last year