KaryFramling / py-ciu
Explainable Artificial Intelligence through Contextual Importance and Utility
☆28 · Updated 2 months ago
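For context, Contextual Importance and Utility (CIU) explains a single prediction by varying one feature over its allowed range while keeping the rest of the instance fixed: Contextual Importance (CI) measures how much the output can move in that context, and Contextual Utility (CU) measures where the current output sits within that contextual range. The sketch below is a minimal, generic illustration of those two quantities only; it is not py-ciu's actual API, and the function name `ci_cu`, the `predict` callable, and the assumption that the model output lies in [0, 1] are placeholders.

```python
import numpy as np

def ci_cu(predict, instance, feature, feature_range, n_samples=100):
    """Rough sketch of Contextual Importance (CI) and Contextual Utility (CU).

    predict:       callable mapping a 1-D feature vector to a scalar in [0, 1] (assumed).
    instance:      1-D numpy array, the input being explained.
    feature:       index of the feature whose CI/CU we estimate.
    feature_range: (min, max) values that feature may take.
    """
    lo, hi = feature_range

    # Vary only the chosen feature over its range, holding the rest of the
    # instance fixed -- this is the "contextual" perturbation.
    perturbed = np.tile(instance, (n_samples, 1))
    perturbed[:, feature] = np.linspace(lo, hi, n_samples)
    outputs = np.array([predict(x) for x in perturbed])

    cmin, cmax = outputs.min(), outputs.max()
    y = predict(instance)

    # CI: spread of the output caused by this feature, relative to the
    # output's absolute range (assumed here to be [0, 1]).
    ci = cmax - cmin
    # CU: position of the current output within that contextual range.
    cu = (y - cmin) / (cmax - cmin) if cmax > cmin else 0.5
    return ci, cu
```

A high CI with a low CU, for example, would mean the feature matters a lot in this context but its current value pushes the prediction toward the unfavourable end of what it can achieve.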
Alternatives and similar repositories for py-ciu
Users interested in py-ciu are comparing it to the libraries listed below.
- Evaluate uncertainty, calibration, accuracy, and fairness of LLMs on real-world survey data! ☆26 · Updated this week
- The official implementation of "The Shapley Value of Classifiers in Ensemble Games" (CIKM 2021). ☆222 · Updated 2 years ago
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆77 · Updated 2 years ago
- CEML - Counterfactuals for Explaining Machine Learning models - A Python toolbox ☆46 · Updated 6 months ago
- Editing machine learning models to reflect human knowledge and values ☆128 · Updated 2 years ago
- A Natural Language Interface to Explainable Boosting Machines ☆68 · Updated last year
- CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms ☆298 · Updated 2 years ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆70 · Updated 2 years ago
- Interpret text data with LLMs (sklearn compatible). ☆172 · Updated 2 months ago
- ACV is a Python library that provides explanations for any machine learning model or data. It gives local rule-based explanations for any… ☆102 · Updated 3 years ago
- Experimental library integrating LLM capabilities to support causal analyses ☆268 · Updated 2 months ago
- A curated list of awesome academic research, books, code of ethics, courses, databases, data sets, frameworks, institutes, maturity mode… ☆98 · Updated this week
- Model Agnostic Counterfactual Explanations ☆88 · Updated 3 years ago
- A lightweight implementation of removal-based explanations for ML models. ☆59 · Updated 4 years ago
- Public home of pycorels, the Python binding to CORELS ☆80 · Updated 5 years ago
- Practical ideas on securing machine learning models ☆36 · Updated 4 years ago
- Automatic data slicing ☆34 · Updated 4 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆85 · Updated 3 years ago
- Weakly Supervised End-to-End Learning (NeurIPS 2021) ☆156 · Updated 2 years ago
- All about explainable AI, algorithmic fairness and more ☆110 · Updated 2 years ago
- [Experimental] Causal graphs that are networkx-compliant for the py-why ecosystem. ☆62 · Updated this week
- A Python package for benchmarking interpretability techniques on Transformers. ☆214 · Updated last year
- causal-falsify: A Python library with algorithms for falsifying the unconfoundedness assumption in a composite dataset from multiple sources. ☆36 · Updated last week
- A Python package providing two algorithms, DAME and FLAME, for fast and interpretable treatment-control matches of categorical data ☆62 · Updated 4 months ago
- This repo accompanies the FF22 research cycle focused on unsupervised methods for detecting concept drift ☆30 · Updated 4 years ago
- Explore/examine/explain/expose your model with the explabox! ☆18 · Updated 2 months ago
- List of Python packages for causal inference ☆17 · Updated 4 years ago
- GAM (Global Attribution Mapping) explains the landscape of neural network predictions across subpopulations ☆35 · Updated 2 months ago
- Interpret machine learning predictions using agnostic local feature importance based on Shapley Values. ☆20 · Updated 4 months ago
- SPEAR: Programmatically label and build training data quickly. ☆109 · Updated last year