IBM / inFairness
PyTorch package to train and audit ML models for Individual Fairness
☆66 · Updated this week
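Individual fairness asks that a model give similar outputs to similar individuals. As a rough illustration of what auditing for this property can look like, here is a minimal, generic PyTorch sketch that measures prediction stability under small input perturbations; it assumes a plain Euclidean similarity metric and is not inFairness's actual API (the toy model and the `audit_consistency` helper are hypothetical).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A toy classifier standing in for any trained model under audit.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

def audit_consistency(model, x, noise_scale=0.05, n_perturb=10):
    """Mean prediction gap under small Euclidean perturbations.

    A crude proxy for individual fairness: if "similar" inputs (here,
    nearby in Euclidean distance) receive very different predictions,
    the model is unstable under this similarity metric. Real fair
    metrics are task-specific and often learned.
    """
    model.eval()
    with torch.no_grad():
        base = torch.softmax(model(x), dim=-1)
        gaps = []
        for _ in range(n_perturb):
            x_near = x + noise_scale * torch.randn_like(x)
            pred = torch.softmax(model(x_near), dim=-1)
            gaps.append((pred - base).abs().sum(dim=-1))
        return torch.stack(gaps).mean().item()

x = torch.randn(32, 4)  # a batch of "individuals"
print(f"mean prediction gap: {audit_consistency(model, x):.4f}")
```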
Alternatives and similar repositories for inFairness
Users interested in inFairness are comparing it to the libraries listed below.
- The official implementation of "The Shapley Value of Classifiers in Ensemble Games" (CIKM 2021). ☆221 · Updated 2 years ago
- A collection of implementations of fair ML algorithms ☆12 · Updated 7 years ago
- AI Assistant for Building Reliable, High-performing and Fair Multilingual NLP Systems ☆47 · Updated 3 years ago
- A Natural Language Interface to Explainable Boosting Machines ☆68 · Updated last year
- Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you estimate, communicate and use uncertainty in machine learning model predictions ☆265 · Updated this week
- A lightweight implementation of removal-based explanations for ML models. ☆58 · Updated 4 years ago
- Reading list for "The Shapley Value in Machine Learning" (IJCAI 2022) ☆153 · Updated 3 years ago
- TalkToModel gives anyone the power of XAI through natural language conversations 💬! ☆124 · Updated 2 years ago
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆77 · Updated 2 years ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆70 · Updated 2 years ago
- Interpretable and efficient predictors using pre-trained language models. Scikit-learn compatible. ☆43 · Updated 6 months ago
- Extending Conformal Prediction to LLMs ☆67 · Updated last year
- Code repository for the NAACL 2022 paper "ExSum: From Local Explanations to Model Understanding" ☆64 · Updated 3 years ago
- Testing Language Models for Memorization of Tabular Datasets. ☆35 · Updated 7 months ago
- Evaluate uncertainty, calibration, accuracy, and fairness of LLMs on real-world survey data! ☆24 · Updated 5 months ago
- ☆56 · Updated last week
- CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms ☆294 · Updated last year
- Model Agnostic Counterfactual Explanations ☆87 · Updated 2 years ago
- Code repository for our paper "Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift": https://arxiv.org/abs/1810.119… ☆106 · Updated last year
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆247 · Updated last year
- Measuring data importance over ML pipelines using the Shapley value. ☆43 · Updated 3 weeks ago
- Fairness toolkit for PyTorch, scikit-learn and AutoGluon ☆32 · Updated 9 months ago
- Multi-Objective Counterfactuals