datamllab / xdeep
☆44 · Updated 5 years ago
Alternatives and similar repositories for xdeep
Users interested in xdeep are comparing it to the libraries listed below.
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) ☆129 · Updated 4 years ago
- [ICML 2019, 20 min long talk] Robust Decision Trees Against Adversarial Examples ☆69 · Updated 6 months ago
- Official repository for "Bridging Adversarial Robustness and Gradient Interpretability" ☆30 · Updated 6 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆128 · Updated 4 years ago
- Codes for reproducing the contrastive explanation in "Explanations based on the Missing: Towards Contrastive Explanations with Pertinent… ☆54 · Updated 7 years ago
- ☆125 · Updated 4 years ago
- This code reproduces the results of the paper "Measuring Data Leakage in Machine-Learning Models with Fisher Information" ☆49 · Updated 4 years ago
- A PyTorch implementation of the LSTM experiments in the paper: Why Gradient Clipping Accelerates Training: A Theoretical Justification f… ☆47 · Updated 6 years ago
- Interpretation of Neural Network is Fragile ☆36 · Updated last year
- Calibration of Convolutional Neural Networks ☆171 · Updated 2 years ago
- Causal Explanation (CXPlain) is a method for explaining the predictions of any machine-learning model ☆132 · Updated 5 years ago
- Rethinking Bias-Variance Trade-off for Generalization of Neural Networks ☆50 · Updated 4 years ago
- Official repo for "Efficient task-specific data valuation for nearest neighbor algorithms" ☆26 · Updated 5 years ago
- Active and Sample-Efficient Model Evaluation ☆26 · Updated 8 months ago
- The Ultimate Reference for Out of Distribution Detection with Deep Neural Networks ☆118 · Updated 6 years ago
- Keras implementation for DASP: Deep Approximate Shapley Propagation (ICML 2019) ☆62 · Updated 6 years ago
- Code for our ICML '19 oral paper: Neural Network Attributions: A Causal Perspective ☆51 · Updated 4 years ago
- [NeurIPS'19] [PyTorch] Adaptive Regularization in NN ☆68 · Updated 6 years ago
- To Trust Or Not To Trust A Classifier: a measure of uncertainty for any trained (possibly black-box) classifier which is more effective t… ☆177 · Updated 2 years ago
- Python code for influential instance estimation ☆56 · Updated 3 years ago
- [NeurIPS 2020] Coresets for Robust Training of Neural Networks against Noisy Labels ☆35 · Updated 4 years ago
- Distance Metric Learning Algorithms for Python ☆175 · Updated 4 years ago
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity for Explanations" (NeurIPS 2019) for… ☆25 · Updated 3 years ago
- Learning perturbation sets for robust machine learning ☆65 · Updated 4 years ago
- Implementation of the paper "Shapley Explanation Networks" ☆88 · Updated 5 years ago
- (ICML 2021) Mandoline: Model Evaluation under Distribution Shift ☆30 · Updated 4 years ago
- Code for "Supermasks in Superposition" ☆125 · Updated 2 years ago
- Self-Explaining Neural Networks ☆13 · Updated 2 years ago
- Towards Automatic Concept-based Explanations ☆161 · Updated last year
- Guarantees on the behavior of neural networks don't always have to come at the cost of performance ☆30 · Updated 3 years ago