Trusted-AI / AIX360
Interpretability and explainability of data and machine learning models
☆1,655 · Updated 7 months ago
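AIX360 is distributed on PyPI as `aix360`. The snippet below is a minimal sketch of one of its example-based explainers, Protodash, which selects a small set of prototype rows that summarize a dataset. The import path, the `explain()` signature, and its return values are recalled from memory and may vary across versions, so treat them as assumptions to verify against the project's documentation.

```python
# Minimal sketch (assumed AIX360 API): prototype selection with ProtodashExplainer.
# pip install aix360
import numpy as np
from aix360.algorithms.protodash import ProtodashExplainer  # assumed import path

# Toy data: 200 samples, 5 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))

explainer = ProtodashExplainer()
# explain(target_set, source_set, m) is assumed to return the prototype weights,
# the indices of the m selected prototype rows, and the optimization objective values.
weights, indices, _ = explainer.explain(X, X, m=5)
print("Prototype rows:", indices)
```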
Alternatives and similar repositories for AIX360:
Users interested in AIX360 are comparing it to the libraries listed below.
- Algorithms for explaining machine learning models ☆2,447 · Updated 2 months ago
- XAI - An eXplainability toolbox for machine learning ☆1,153 · Updated 3 years ago
- Generate Diverse Counterfactual Explanations for any machine learning model. ☆1,383 · Updated 2 months ago
- A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitig… ☆2,520 · Updated 2 months ago
- Code for "High-Precision Model-Agnostic Explanations" paper ☆798 · Updated 2 years ago
- Interesting resources related to XAI (Explainable Artificial Intelligence) ☆820 · Updated 2 years ago
- Source code/webpage/demos for the What-If Tool ☆935 · Updated 5 months ago
- A Python package to assess and improve fairness of machine learning models. ☆2,008 · Updated this week
- moDel Agnostic Language for Exploration and eXplanation ☆1,403 · Updated last week
- Algorithms for outlier, adversarial and drift detection ☆2,302 · Updated last month
- Bias Auditing & Fair ML Toolkit ☆706 · Updated 5 months ago
- A collection of research materials on explainable AI/ML ☆1,458 · Updated 3 months ago
- Explainable AI framework for data scientists. Explain & debug any blackbox machine learning model with a single line of code. We are look… ☆425 · Updated 6 months ago
- Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible). ☆1,425 · Updated this week
- Python partial dependence plot toolbox ☆850 · Updated 5 months ago
- Interpret Community extends Interpret repository with additional interpretability techniques and utility functions to handle real-world d… ☆424 · Updated last week
- OmniXAI: A Library for eXplainable AI ☆898 · Updated 6 months ago
- Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you estimate, communicate and use uncertainty i… ☆259 · Updated 6 months ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ☆584 · Updated 2 weeks ago
- Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, … ☆673 · Updated 8 months ago
- A Python library that helps data scientists to infer causation rather than observing correlation. ☆2,281 · Updated 7 months ago
- Library for Semi-Automated Data Science ☆335 · Updated 5 months ago
- A library for debugging/inspecting machine learning classifiers and explaining their predictions ☆2,763 · Updated 2 years ago
- A curated list of awesome responsible machine learning resources. ☆3,720 · Updated this week
- CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms ☆286 · Updated last year
- H2O.ai Machine Learning Interpretability Resources ☆485 · Updated 4 years ago
- Automatic architecture search and hyperparameter optimization for PyTorch ☆2,420 · Updated 10 months ago
- Machine learning with logical rules in Python ☆630 · Updated last year
- Natural Gradient Boosting for Probabilistic Prediction ☆1,682 · Updated last week
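Many of the toolkits listed above are model-agnostic: they probe a trained model only through its prediction function. As a neutral frame of reference (this is plain scikit-learn, not the API of any library in the list), the sketch below computes a simple permutation-importance explanation of that kind.

```python
# Illustrative model-agnostic explanation: permutation importance with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature column and measure the drop in test accuracy:
# features whose shuffling hurts the score most matter most to the model.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```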