IBM / inFairness
PyTorch package to train and audit ML models for Individual Fairness
☆66 · Updated 2 months ago
Alternatives and similar repositories for inFairness
Users interested in inFairness are comparing it to the libraries listed below.
- A Natural Language Interface to Explainable Boosting Machines ☆67 · Updated last year
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆69 · Updated 2 years ago
- A collection of implementations of fair ML algorithms ☆12 · Updated 7 years ago
- TalkToModel gives anyone the power of XAI through natural language conversations 💬! ☆121 · Updated last year
- Testing Language Models for Memorization of Tabular Datasets. ☆34 · Updated 5 months ago
- ModelDiff: A Framework for Comparing Learning Algorithms ☆59 · Updated last year
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆77 · Updated 2 years ago
- This repository contains a JAX implementation of conformal training corresponding to the ICLR'22 paper "Learning optimal conformal classi… ☆130 · Updated 2 years ago
- The official implementation of "The Shapley Value of Classifiers in Ensemble Games" (CIKM 2021). ☆220 · Updated 2 years ago
- A lightweight implementation of removal-based explanations for ML models. ☆59 · Updated 3 years ago
- Code repository for our paper "Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift": https://arxiv.org/abs/1810.119… ☆105 · Updated last year
- Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you estimate, communicate and use uncertainty i… ☆267 · Updated 2 months ago
- Fairness toolkit for PyTorch, scikit-learn and AutoGluon ☆32 · Updated 7 months ago
- Reading list for "The Shapley Value in Machine Learning" (IJCAI 2022) ☆152 · Updated 2 years ago
- ☆32 · Updated 3 years ago
- Model Agnostic Counterfactual Explanations ☆87 · Updated 2 years ago
- The Recognizing, Exploring, and Articulating Limitations in Machine Learning research tool (REAL ML) is a set of guided activities to hel… ☆51 · Updated 3 years ago
- ☆139 · Updated last year
- Solving the causality pairs challenge (does A cause B?) with ChatGPT ☆77 · Updated last year
- Training and evaluating NBM and SPAM for interpretable machine learning. ☆78 · Updated 2 years ago
- Extending Conformal Prediction to LLMs ☆67 · Updated last year
- AI Assistant for Building Reliable, High-performing and Fair Multilingual NLP Systems ☆46 · Updated 2 years ago
- PAIR.withgoogle.com and friends' work on interpretability methods ☆194 · Updated last week
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆247 · Updated 11 months ago
- This repository holds code and other relevant files for the NeurIPS 2022 tutorial: Foundational Robustness of Foundation Models. ☆70 · Updated 2 years ago
- Bayesian Bandits ☆68 · Updated last year
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆82 · Updated 2 years ago
- Measuring data importance over ML pipelines using the Shapley value. ☆43 · Updated 2 months ago
- Achieve error-rate fairness between societal groups for any score-based classifier. ☆19 · Updated last year
- ☆55 · Updated this week