ClearExplanationsAI / CLEAR
Counterfactual Local Explanations of AI systems
☆28 · Updated 4 years ago
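To make the idea concrete, here is a minimal, hypothetical sketch of a counterfactual local explanation: given an input the model rejects, find the smallest change to one feature that flips the decision. The scoring rule, feature names, and one-feature line search below are illustrative assumptions, not CLEAR's actual algorithm.

```python
# Illustrative sketch only: a toy "loan approval" model and a brute-force
# search for the minimal income increase that flips a rejection.

def model(income, debt):
    """Hypothetical linear scoring model: approve (1) if score > 0."""
    return 1 if (0.6 * income - 0.8 * debt - 30) > 0 else 0

def counterfactual(income, debt, step=0.5, max_steps=1000):
    """Search along the income axis for the smallest change that
    flips the model's decision; returns that delta, or None."""
    base = model(income, debt)
    for k in range(1, max_steps + 1):
        delta = k * step
        if model(income + delta, debt) != base:
            return delta  # smallest flip found at this step size
    return None

# Rejected applicant (score = 0.6*60 - 0.8*20 - 30 = -10):
print(counterfactual(60, 20))  # -> 17.0, i.e. "approved if income were 77"
```

The returned delta is the explanation: "your application would have been approved had your income been 17 units higher." Real libraries search over all features jointly and penalize implausible or distant counterfactuals.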
Alternatives and similar repositories for CLEAR
Users interested in CLEAR are comparing it to the libraries listed below:
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆252 · Updated last year
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆77 · Updated 2 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆85 · Updated 3 years ago
- CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms ☆298 · Updated 2 years ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆70 · Updated 3 years ago
- Library for Semi-Automated Data Science ☆345 · Updated 3 months ago
- Explainable Artificial Intelligence through Contextual Importance and Utility ☆27 · Updated 4 months ago
- XAI framework for interpreting Link Predictions on Knowledge Graphs ☆43 · Updated last year
- All about explainable AI, algorithmic fairness and more ☆110 · Updated 2 years ago
- Jenga is an experimentation library that allows data science practitioners and researchers to study the effect of common data corruptio… ☆42 · Updated 2 years ago
- LOcal Rule-based Explanations ☆54 · Updated 2 years ago
- A library that incorporates state-of-the-art explainers for text-based machine learning models and visualizes the result with a built-in … ☆431 · Updated 2 years ago
- Model Agnostic Counterfactual Explanations ☆88 · Updated 3 years ago
- A Natural Language Interface to Explainable Boosting Machines ☆69 · Updated last year
- 🐍 Python Implementation and Extension of RDF2Vec ☆265 · Updated last week
- Code for the paper "Search Methods for Sufficient, Socially-Aligned Feature Importance Explanations with In-Distribution Counterfactuals" ☆18 · Updated 3 years ago
- CEML - Counterfactuals for Explaining Machine Learning models - A Python toolbox ☆47 · Updated 8 months ago
- Interpretable and efficient predictors using pre-trained language models. Scikit-learn compatible. ☆44 · Updated 2 months ago
- ☆88 · Updated 7 months ago
- Official Code Repo for the Paper: "How does This Interaction Affect Me? Interpretable Attribution for Feature Interactions", In NeurIPS 2… ☆42 · Updated 3 years ago
- Python package to compute interaction indices that extend the Shapley Value. AISTATS 2023. ☆19 · Updated 2 years ago
- Evaluate uncertainty, calibration, accuracy, and fairness of LLMs on real-world survey data! ☆26 · Updated last month
- Python tools to check recourse in linear classification ☆76 · Updated 5 years ago
- List of relevant resources for machine learning from explanatory supervision ☆162 · Updated 6 months ago
- Fairness toolkit for PyTorch, scikit-learn and AutoGluon ☆33 · Updated 2 months ago
- Interpret-Community extends the Interpret repository with additional interpretability techniques and utility functions to handle real-world d… ☆438 · Updated last year
- [NeurIPS 2021] WRENCH: Weak supeRvision bENCHmark ☆226 · Updated last year
- DeepProbLog is an extension of ProbLog that integrates Probabilistic Logic Programming with deep learning by introducing the neural predi… ☆295 · Updated last year
- Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you estimate, communicate and use uncertainty i… ☆268 · Updated 4 months ago
- Python library that classifies content from scientific papers with the topics of the Computer Science Ontology (CSO). ☆93 · Updated last week