marcotcr / anchor-experiments
Experiments for AAAI anchor paper
☆62 · Updated 7 years ago
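For context, this repository accompanies the AAAI-18 paper on anchors: if-then rules that "anchor" a black-box prediction so that perturbed inputs satisfying the rule keep the same prediction with high probability. The sketch below is a minimal, illustrative greedy version of that idea, not the authors' implementation (see marcotcr/anchor for that); the paper itself uses feature discretization and a bandit-based beam search, and the scikit-learn dataset, model, sampling scheme, and function names here are placeholder assumptions.

```python
# Illustrative sketch of the anchor idea: greedily fix features of an instance
# until perturbed samples that satisfy the rule keep the model's prediction
# with high estimated precision. Placeholder dataset/model, not the repo's code.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)
rng = np.random.default_rng(0)

def estimate_precision(instance, anchor, X_ref, predict, n=500):
    """Fraction of perturbed samples (anchor features fixed to the instance's
    values, remaining features resampled from X_ref) that keep the prediction."""
    samples = X_ref[rng.integers(0, len(X_ref), size=n)].copy()
    idx = list(anchor)
    samples[:, idx] = instance[idx]
    return float(np.mean(predict(samples) == predict(instance[None])[0]))

def greedy_anchor(instance, X_ref, predict, threshold=0.95):
    """Greedily add the feature whose fixing most increases estimated precision,
    stopping once the precision threshold is reached."""
    anchor, remaining = set(), set(range(instance.shape[0]))
    while remaining:
        best = max(remaining, key=lambda f: estimate_precision(
            instance, anchor | {f}, X_ref, predict))
        anchor.add(best)
        remaining.discard(best)
        if estimate_precision(instance, anchor, X_ref, predict) >= threshold:
            break
    return sorted(anchor)

print("anchor (feature indices):", greedy_anchor(X[0], X, model.predict))
```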
Alternatives and similar repositories for anchor-experiments:
Users interested in anchor-experiments are comparing it to the repositories listed below
- ☆133 · Updated 5 years ago
- ☆125 · Updated 3 years ago
- Supervised Local Modeling for Interpretability · ☆28 · Updated 6 years ago
- Code/figures in Right for the Right Reasons · ☆55 · Updated 4 years ago
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) · ☆128 · Updated 3 years ago
- Code and data for the experiments in "On Fairness and Calibration" · ☆50 · Updated 2 years ago
- Code for "Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?" · ☆46 · Updated last year
- Bayesian or-of-and · ☆34 · Updated 3 years ago
- Summaries and minimal implementations of ML / statistics research articles. · ☆39 · Updated 4 years ago
- Repository for the R library "sbrlmod" · ☆25 · Updated 10 months ago
- Demo for the method introduced in "Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs" · ☆56 · Updated 4 years ago
- Train a simple convnet on the MNIST dataset and evaluate the BALD acquisition function · ☆16 · Updated 7 years ago
- Using / reproducing DAC from the paper "Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees" · ☆27 · Updated 4 years ago
- Interpretable ML package designed to explain any machine learning model. · ☆61 · Updated 6 years ago
- Python tools to check recourse in linear classification · ☆75 · Updated 4 years ago
- A lightweight implementation of removal-based explanations for ML models. · ☆59 · Updated 3 years ago
- Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks [NeurIPS 2019] · ☆50 · Updated 4 years ago
- A simple TensorFlow implementation of https://arxiv.org/abs/1906.04985 · ☆13 · Updated 5 years ago
- Codebase for "Deep Learning for Case-based Reasoning through Prototypes: A Neural Network that Explains Its Predictions" (to appear in AA… · ☆74 · Updated 7 years ago
- NeurIPS 2016. Linear-time interpretable nonparametric two-sample test. · ☆63 · Updated 6 years ago
- Explaining a black-box using Deep Variational Information Bottleneck Approach · ☆46 · Updated 2 years ago
- Code for paper EDDI: Efficient Dynamic Discovery of High-Value Information with Partial VAE · ☆38 · Updated last year
- A generic Monte Carlo method based on the Gumbel-Max trick. · ☆32 · Updated 8 years ago
- The Synbols dataset generator is a ServiceNow Research project that was started at Element AI. · ☆45 · Updated last year
- Code to reproduce experiments appearing in the academic paper Lost Relatives of the Gumbel Trick · ☆17 · Updated 7 years ago
- This is a benchmark for evaluating the quality of local machine learning explanations generated by any explainer for text and image data · ☆30 · Updated 3 years ago
- Pip-installable differentiable stacks in PyTorch! · ☆65 · Updated 4 years ago
- Code for SPINE - Sparse Interpretable Neural Embeddings. Jhamtani H.*, Pruthi D.*, Subramanian A.*, Berg-Kirkpatrick T., Hovy E. AAAI 20… · ☆52 · Updated 5 years ago
- Optimization and Regularization variants of Non-negative Matrix Factorization (NMF) · ☆33 · Updated 6 years ago
- Official Code Repo for the Paper: "How does This Interaction Affect Me? Interpretable Attribution for Feature Interactions", In NeurIPS 2… · ☆39 · Updated 2 years ago