KaryFramling / py-ciu
Explainable Artificial Intelligence through Contextual Importance and Utility
☆25 · Updated 2 months ago
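The repository implements Contextual Importance and Utility (CIU). As a rough illustration of the idea only (this is not py-ciu's actual API, and the normalisation shown assumes model outputs already lie in [0, 1], e.g. class probabilities), contextual importance measures how much the output can change when one feature is varied over its range with the others held fixed, and contextual utility measures where the current value sits within that range:

```python
import numpy as np

def ciu_for_feature(predict, x, j, lo, hi, n_samples=100):
    """Estimate contextual importance (CI) and utility (CU) of feature j
    for instance x, by varying x[j] over [lo, hi] while holding the
    other features fixed. Hypothetical helper, not the py-ciu API."""
    x = np.asarray(x, dtype=float)
    grid = np.linspace(lo, hi, n_samples)
    variants = np.tile(x, (n_samples, 1))
    variants[:, j] = grid
    outputs = np.array([predict(v) for v in variants])
    cmin, cmax = outputs.min(), outputs.max()
    out = predict(x)
    # CI: span of the output attributable to feature j alone
    # (assumes predict() returns values in [0, 1]).
    ci = cmax - cmin
    # CU: position of the current output within that contextual range.
    cu = (out - cmin) / (cmax - cmin) if cmax > cmin else 0.5
    return ci, cu

# Toy model: output rises with feature 0 and ignores feature 1,
# so CI should be large for feature 0 and zero for feature 1.
model = lambda v: 1.0 / (1.0 + np.exp(-v[0]))
ci0, cu0 = ciu_for_feature(model, [0.0, 5.0], 0, -3.0, 3.0)
ci1, cu1 = ciu_for_feature(model, [0.0, 5.0], 1, -3.0, 3.0)
```

For the toy model, feature 0 gets a high importance (its variation sweeps most of the sigmoid's range) while feature 1 gets zero, which is the behaviour the CIU decomposition is meant to surface.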
Related projects
Alternatives and complementary repositories for py-ciu
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics (☆75, updated last year)
- Unified slicing for all Python data structures (☆36, updated 8 months ago)
- Practical ideas on securing machine learning models (☆36, updated 3 years ago)
- Model-Agnostic Counterfactual Explanations (☆87, updated 2 years ago)
- this repo might get accepted (☆29, updated 3 years ago)
- Editing machine learning models to reflect human knowledge and values (☆123, updated last year)
- Multi-Objective Counterfactuals (☆40, updated 2 years ago)
- Public home of pycorels, the Python binding to CORELS (☆75, updated 4 years ago)
- Python tools to check recourse in linear classification (☆74, updated 3 years ago)
- Experimental library integrating LLM capabilities to support causal analyses (☆82, updated 2 months ago)
- stratx: a library implementing "A Stratification Approach to Partial Dependence for Codependent Variables" (☆64, updated 6 months ago)
- Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University (☆45, updated last year)
- A visual analytics system for fair data-driven decision making (☆25, updated last year)
- The official implementation of "The Shapley Value of Classifiers in Ensemble Games" (CIKM 2021) (☆219, updated last year)
- Causing: CAUsal INterpretation using Graphs (☆55, updated 3 weeks ago)
- List of Python packages for causal inference (☆17, updated 3 years ago)
- [Experimental] Causal graphs that are networkx-compliant for the py-why ecosystem (☆47, updated this week)
- A toolbox for fair and explainable machine learning (☆53, updated 4 months ago)
- CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms (☆281, updated last year)
- CEML: Counterfactuals for Explaining Machine Learning models, a Python toolbox (☆42, updated 3 months ago)
- All about explainable AI, algorithmic fairness and more (☆107, updated last year)
- A Python package providing two algorithms, DAME and FLAME, for fast and interpretable treatment-control matching of categorical data (☆57, updated 5 months ago)
- Paper and talk from the KDD 2019 XAI Workshop (☆20, updated 4 years ago)
- FairVis: Visual Analytics for Discovering Intersectional Bias in Machine Learning (☆35, updated 6 months ago)
- Privacy-preserving synthetic data generation workflows (☆20, updated 2 years ago)
- Prune your sklearn models (☆19, updated 2 weeks ago)
- A new framework to generate interpretable classification rules (☆17, updated last year)