dtak / rrr
Code/figures in Right for the Right Reasons
☆55 · Updated 4 years ago
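The repository accompanies Ross, Hughes, and Doshi-Velez, "Right for the Right Reasons" (IJCAI 2017), which trains a classifier with a penalty on input gradients over features that an annotation mask marks as irrelevant. Below is a minimal PyTorch sketch of that loss for orientation only; the repository's own implementation differs, and `model`, `mask`, and `lam` are illustrative names rather than its API.

```python
import torch
import torch.nn.functional as F

def rrr_loss(model, x, y, mask, lam=10.0):
    """Cross-entropy ("right answers") plus a penalty on squared input
    gradients of the summed log-probabilities, restricted to features
    that `mask` (same shape as `x`, 1 = should be irrelevant) flags."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    right_answers = F.cross_entropy(logits, y)
    # The input gradient of sum_k log p_k serves as the model's "explanation".
    log_prob_sum = F.log_softmax(logits, dim=-1).sum()
    input_grads, = torch.autograd.grad(log_prob_sum, x, create_graph=True)
    right_reasons = lam * (mask * input_grads).pow(2).sum()
    return right_answers + right_reasons
```

Because `create_graph=True` keeps the gradient graph, the penalty itself is differentiable and can be minimized with an ordinary optimizer step.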
Alternatives and similar repositories for rrr:
Users who are interested in rrr are comparing it to the libraries listed below.
- ☆132 · Updated 5 years ago
- Supervised Local Modeling for Interpretability ☆28 · Updated 6 years ago
- ☆124 · Updated 3 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆127 · Updated 3 years ago
- Combating hidden stratification with GEORGE ☆62 · Updated 3 years ago
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) ☆128 · Updated 3 years ago
- The Synbols dataset generator is a ServiceNow Research project that was started at Element AI. ☆43 · Updated last year
- A benchmark to evaluate the quality of local explanations generated by any explainer for text and image data ☆30 · Updated 3 years ago
- Code and data for the experiments in "On Fairness and Calibration" ☆50 · Updated 2 years ago
- A collection of implementations of fair ML algorithms ☆12 · Updated 7 years ago
- Toy datasets to evaluate algorithms for domain generalization and invariance learning. ☆42 · Updated 3 years ago
- Algorithms for abstention, calibration and domain adaptation to label shift. ☆36 · Updated 4 years ago
- ☆14 · Updated 10 months ago
- Interpretation of Neural Networks is Fragile ☆36 · Updated 8 months ago
- Fair Empirical Risk Minimization (FERM) ☆37 · Updated 4 years ago
- A Python library to discover and mitigate biases in machine learning models and datasets ☆20 · Updated last year
- Code for the CVPR 2021 paper: Understanding Failures of Deep Networks via Robust Feature Extraction ☆35 · Updated 2 years ago
- Experiments for AAAI anchor paper ☆61 · Updated 6 years ago
- Codebase for "Deep Learning for Case-based Reasoning through Prototypes: A Neural Network that Explains Its Predictions" (to appear in AA… ☆73 · Updated 7 years ago
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019] ☆32 · Updated 4 years ago
- ☆50 · Updated last year
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ☆52 · Updated 2 years ago
- Tools for training explainable models using attribution priors. ☆120 · Updated 3 years ago
- Implicit generative models and related stuff based on the MMD, in PyTorch ☆16 · Updated 4 years ago
- Code for the paper 'Understanding Measures of Uncertainty for Adversarial Example Detection' ☆58 · Updated 6 years ago
- ☆62 · Updated 3 years ago
- A lightweight implementation of removal-based explanations for ML models. ☆57 · Updated 3 years ago
- Python tools to check recourse in linear classification ☆74 · Updated 4 years ago
- Computing various norms/measures on over-parametrized neural networks ☆49 · Updated 6 years ago
- Code repository for our paper "Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift": https://arxiv.org/abs/1810.119… ☆102 · Updated 10 months ago