ClearExplanationsAI / CLEAR
Counterfactual Local Explanations of AI systems
☆29 · Updated 3 years ago
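As a rough illustration of the counterfactual-explanation idea behind CLEAR (and several libraries below): given a black-box classifier and an input, search for a small change to the input that flips the prediction. This is a minimal brute-force sketch, not CLEAR's actual algorithm; `predict` is a hypothetical toy model chosen only for the demo.

```python
def predict(x):
    # Hypothetical black-box: a toy linear decision rule, stand-in for any model.
    return 1 if 2.0 * x[0] - 1.0 * x[1] + 0.5 > 0 else 0

def counterfactual(x, step=0.05, max_steps=200):
    """Find the smallest tried single-feature change that flips predict(x).

    Returns (feature_index, new_value), or None if no flip is found.
    Grows the perturbation size outward so the first hit is minimal
    on the grid of tried values.
    """
    base = predict(x)
    for k in range(1, max_steps + 1):
        for i in range(len(x)):
            for sign in (+1, -1):
                cand = list(x)
                cand[i] += sign * step * k
                if predict(cand) != base:
                    return i, cand[i]
    return None

x = [0.2, 1.5]              # predicted class 0 under the toy model
i, v = counterfactual(x)
print(f"flip feature {i} to about {v:.2f} to change the prediction")
```

Real counterfactual libraries (e.g. CEML or the model-agnostic entries below) add distance metrics, plausibility constraints, and multi-feature search on top of this basic loop.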
Alternatives and similar repositories for CLEAR:
Users interested in CLEAR are comparing it to the libraries listed below.
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆63 · Updated 2 years ago
- Python tools to check recourse in linear classification ☆75 · Updated 4 years ago
- Interpretable and efficient predictors using pre-trained language models. Scikit-learn compatible. ☆41 · Updated 3 weeks ago
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆77 · Updated last year
- Codebase for VAEL: Bridging Variational Autoencoders and Probabilistic Logic Programming ☆20 · Updated last year
- PyTorch package to train and audit ML models for Individual Fairness ☆66 · Updated last year
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆82 · Updated 2 years ago
- Evaluate uncertainty, calibration, accuracy, and fairness of LLMs on real-world survey data! ☆20 · Updated last month
- CAIPI turns LIMEs into trust! ☆12 · Updated 4 years ago
- Lightweight implementations of generative label models for weakly supervised machine learning ☆21 · Updated 11 months ago
- Logic Explained Networks is a Python repository implementing explainable-by-design deep learning models. ☆49 · Updated last year
- Testing Language Models for Memorization of Tabular Datasets. ☆33 · Updated last month
- CEML - Counterfactuals for Explaining Machine Learning models - a Python toolbox ☆43 · Updated 8 months ago
- Model Agnostic Counterfactual Explanations ☆87 · Updated 2 years ago
- Explainable Artificial Intelligence through Contextual Importance and Utility ☆27 · Updated 7 months ago
- Code for the paper "Search Methods for Sufficient, Socially-Aligned Feature Importance Explanations with In-Distribution Counterfactuals" ☆17 · Updated 2 years ago
- Code for the paper "Rule induction for global explanation of trained models" ☆21 · Updated 8 months ago
- Supervised Local Modeling for Interpretability ☆28 · Updated 6 years ago
- A Natural Language Interface to Explainable Boosting Machines ☆65 · Updated 8 months ago
- Official code repo for the paper "How does This Interaction Affect Me? Interpretable Attribution for Feature Interactions", in NeurIPS 2… ☆39 · Updated 2 years ago
- BoostSRL: "Boosting for Statistical Relational Learning." A gradient-boosting based approach for learning different types of SRL models. ☆32 · Updated last year
- A new framework to generate interpretable classification rules ☆17 · Updated 2 years ago
- A HOL-based framework for reasoning over knowledge graphs ☆24 · Updated 5 months ago
- ☆37 · Updated 3 years ago
- Interpretable ML package designed to explain any machine learning model. ☆61 · Updated 6 years ago
- Experimental library integrating LLM capabilities to support causal analyses ☆120 · Updated 2 weeks ago
- LOcal Rule-based Explanations ☆53 · Updated last year
- A PyTorch-based open-source framework that provides methods for improving the weakly annotated data and allows researchers to efficiently… ☆108 · Updated 6 months ago
- ☆29 · Updated last year
- Neuro-symbolic approaches to reasoning problems from abstract argumentation ☆21 · Updated 2 years ago