IBM / inFairness
PyTorch package to train and audit ML models for Individual Fairness
☆66 · Updated last month
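For context on what "training and auditing for individual fairness" involves, below is a minimal, hypothetical PyTorch sketch of the idea: a task loss plus a consistency regularizer that keeps predictions close for individuals who differ only along directions assumed to encode sensitive information. The `perturb_sensitive` helper, the regularizer weight, and the toy data are illustrative assumptions, not inFairness's actual API; see the repository's documentation for its real interface.

```python
# Hypothetical sketch of individual-fairness training in plain PyTorch.
# All names here (perturb_sensitive, the 1.0 trade-off weight, the toy data)
# are illustrative assumptions, not the inFairness API.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def perturb_sensitive(x, noise=0.1):
    """Toy stand-in for a fair metric: jitter inputs along directions
    assumed to encode only sensitive information."""
    return x + noise * torch.randn_like(x)

for step in range(100):
    x = torch.randn(64, 10)                 # toy batch
    y = torch.randint(0, 2, (64,))
    logits = model(x)
    task_loss = loss_fn(logits, y)
    # Individual-fairness regularizer: predictions for "similar" individuals
    # (here, sensitive-direction perturbations of x) should stay close.
    logits_pert = model(perturb_sensitive(x))
    fairness_loss = torch.mean((logits - logits_pert) ** 2)
    loss = task_loss + 1.0 * fairness_loss  # assumed trade-off weight
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```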
Alternatives and similar repositories for inFairness
Users interested in inFairness are comparing it to the libraries listed below.
- A collection of implementations of fair ML algorithms · ☆12 · Updated 7 years ago
- AI Assistant for Building Reliable, High-performing and Fair Multilingual NLP Systems · ☆46 · Updated 2 years ago
- ☆10 · Updated 2 years ago
- The Recognizing, Exploring, and Articulating Limitations in Machine Learning research tool (REAL ML) is a set of guided activities to help… · ☆51 · Updated 3 years ago
- Interpretable and efficient predictors using pre-trained language models. Scikit-learn compatible · ☆42 · Updated 3 months ago
- Official implementation of the paper "Interventions, Where and How? Experimental Design for Causal Models at Scale", NeurIPS 2022 · ☆20 · Updated 2 years ago
- Achieve error-rate fairness between societal groups for any score-based classifier · ☆18 · Updated last year
- Code repository for the NAACL 2022 paper "ExSum: From Local Explanations to Model Understanding" · ☆64 · Updated 3 years ago
- A Natural Language Interface to Explainable Boosting Machines · ☆67 · Updated 11 months ago
- Bias Buccaneers Image Recognition Challenge · ☆11 · Updated 2 years ago
- Python tools to check recourse in linear classification · ☆76 · Updated 4 years ago
- Testing Language Models for Memorization of Tabular Datasets · ☆33 · Updated 4 months ago
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics · ☆77 · Updated 2 years ago
- Bayesian Bandits · ☆68 · Updated last year
- Extending Conformal Prediction to LLMs · ☆66 · Updated last year
- This repository contains a Jax implementation of conformal training corresponding to the ICLR'22 paper "Learning Optimal Conformal Classifiers" · ☆130 · Updated 2 years ago
- Fairness toolkit for PyTorch, scikit-learn and AutoGluon · ☆32 · Updated 6 months ago
- ☆32 · Updated 3 years ago
- Logic Explained Networks is a Python repository implementing explainable-by-design deep learning models · ☆50 · Updated 2 years ago
- Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you estimate, communicate and use uncertainty in machine learning model predictions · ☆267 · Updated last month
- A lightweight implementation of removal-based explanations for ML models · ☆59 · Updated 3 years ago
- Codebase for "Demystifying Black-box Models with Symbolic Metamodels", NeurIPS 2019 · ☆50 · Updated 5 years ago
- AutoML Two-Sample Test · ☆19 · Updated 2 years ago
- 😇 A curated list of links and resources for Fair ML and Data Ethics · ☆18 · Updated 3 years ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques · ☆68 · Updated 2 years ago
- ☆31 · Updated 3 years ago
- Dynamic causal Bayesian optimisation · ☆38 · Updated 2 years ago
- Reading list for "The Shapley Value in Machine Learning" (IJCAI 2022) · ☆152 · Updated 2 years ago
- Counterfactual Local Explanations of AI systems · ☆28 · Updated 3 years ago
- Interactive Weak Supervision: Learning Useful Heuristics for Data Labeling · ☆31 · Updated 4 years ago