marcotcr / anchor
Code for "High-Precision Model-Agnostic Explanations" paper
☆801 · Updated 2 years ago
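A minimal usage sketch of the Anchors tabular explainer, assuming the `anchor` package's `AnchorTabularExplainer` / `explain_instance` API described in the project README; the iris dataset and random-forest classifier below are illustrative placeholders, not part of the repository.

```python
# Minimal sketch (assumes the anchor package's AnchorTabularExplainer API;
# the dataset and classifier are placeholders for illustration).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from anchor import anchor_tabular

data = load_iris()
X_train, y_train = data.data, data.target

clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

explainer = anchor_tabular.AnchorTabularExplainer(
    class_names=list(data.target_names),
    feature_names=list(data.feature_names),
    train_data=X_train,
)

# Explain one prediction: the anchor is an if-then rule that holds the
# prediction fixed with at least the requested precision (here 95%).
exp = explainer.explain_instance(X_train[0], clf.predict, threshold=0.95)
print("Anchor: %s" % " AND ".join(exp.names()))
print("Precision: %.2f" % exp.precision())
print("Coverage: %.2f" % exp.coverage())
```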
Alternatives and similar repositories for anchor:
Users interested in anchor are also comparing it to the libraries listed below:
- ☆913 · Updated 2 years ago
- python partial dependence plot toolbox · ☆855 · Updated 7 months ago
- Generate Diverse Counterfactual Explanations for any machine learning model. · ☆1,399 · Updated 4 months ago
- H2O.ai Machine Learning Interpretability Resources · ☆488 · Updated 4 years ago
- Python implementation of the rulefit algorithm · ☆420 · Updated last year
- machine learning with logical rules in Python · ☆633 · Updated last year
- ML-Ensemble – high performance ensemble learning · ☆853 · Updated last year
- Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, … · ☆676 · Updated 9 months ago
- Interpret Community extends Interpret repository with additional interpretability techniques and utility functions to handle real-world d… · ☆428 · Updated 2 months ago
- ☆757 · Updated last year
- Neural Oblivious Decision Ensembles for Deep Learning on Tabular Data · ☆485 · Updated 4 years ago
- apricot implements submodular optimization for the purpose of selecting subsets of massive data sets to train machine learning models qui… · ☆504 · Updated 7 months ago
- Tuning hyperparams fast with Hyperband · ☆593 · Updated 6 years ago
- ⬛ Python Individual Conditional Expectation Plot Toolbox · ☆165 · Updated 4 years ago
- Interesting resources related to XAI (Explainable Artificial Intelligence) · ☆825 · Updated 2 years ago
- ☆264 · Updated 5 years ago
- Code for all experiments. · ☆316 · Updated 4 years ago
- Bias Auditing & Fair ML Toolkit · ☆713 · Updated 3 weeks ago
- Algorithms for explaining machine learning models · ☆2,485 · Updated last week
- A unified framework of perturbation and gradient-based attribution methods for Deep Neural Networks interpretability. DeepExplain also in… · ☆746 · Updated 4 years ago
- Hyper-parameter optimization for sklearn · ☆1,621 · Updated 3 weeks ago
- Attributing predictions made by the Inception network using the Integrated Gradients method · ☆617 · Updated 3 years ago
- Natural Gradient Boosting for Probabilistic Prediction · ☆1,694 · Updated last week
- moDel Agnostic Language for Exploration and eXplanation · ☆1,418 · Updated 2 months ago
- XAI - An eXplainability toolbox for machine learning · ☆1,164 · Updated 3 years ago
- Interpretability and explainability of data and machine learning models · ☆1,678 · Updated last month
- A library that implements fairness-aware machine learning algorithms · ☆124 · Updated 4 years ago
- A library for debugging/inspecting machine learning classifiers and explaining their predictions · ☆2,770 · Updated 2 years ago
- [HELP REQUESTED] Generalized Additive Models in Python · ☆892 · Updated 9 months ago
- Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible). · ☆1,442 · Updated last month