mever-team / FairBench
Comprehensive AI fairness exploration.
☆16 · Updated this week
Alternatives and similar repositories for FairBench:
Users interested in FairBench are comparing it to the libraries listed below.
- FairGrad is an easy-to-use, general-purpose approach to enforcing fairness in gradient-descent-based methods. ☆14 · Updated last year
- Datasets derived from US census data ☆256 · Updated 10 months ago
- Repository of the paper "Imperceptible Adversarial Attacks on Tabular Data" presented at NeurIPS 2019 Workshop on Robust AI in Financial … ☆15 · Updated 3 years ago
- ☆37 · Updated 2 years ago
- This repository refers to the paper currently under review for the 36th Conference on Neural Information Processing Systems (NeurIPS 2022… ☆10 · Updated 9 months ago
- FR-Train: A Mutual Information-Based Approach to Fair and Robust Training (ICML 2020) ☆13 · Updated 3 years ago
- This repository contains Python code for the paper "Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearni… ☆14 · Updated 11 months ago
- ☆56 · Updated 4 years ago
- ☆12 · Updated 2 years ago
- Analytic calibration for differential privacy with Gaussian perturbations ☆46 · Updated 6 years ago
- PPGANs: Privacy-preserving Generative Adversarial Networks. ☆16 · Updated 2 years ago
- A Python framework for reliably assessing synthetic image detection methods ☆36 · Updated 4 months ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ☆594 · Updated last month
- Zennit is a high-level Python framework built on PyTorch for explaining and exploring neural networks with attribution methods such as LRP. ☆217 · Updated 8 months ago
- Reference implementation for "Explanations Can Be Manipulated and Geometry Is to Blame" ☆36 · Updated 2 years ago
- A fast algorithm to optimally compose privacy guarantees of differentially private (DP) mechanisms to arbitrary accuracy. ☆73 · Updated last year
- Fair Empirical Risk Minimization (FERM) ☆37 · Updated 4 years ago
- Code accompanying the paper "Disparate Impact in Differential Privacy from Gradient Misalignment". ☆11 · Updated last year
- ☆42 · Updated last year
- Example external repository for interacting with armory. ☆11 · Updated 2 years ago
- 💡 Adversarial attacks on explanations and how to defend them ☆314 · Updated 4 months ago
- APBench: A Unified Availability Poisoning Attack and Defenses Benchmark (TMLR 08/2024) ☆30 · Updated 2 months ago
- Methods for removing learned data from neural nets and evaluation of those methods ☆34 · Updated 4 years ago
- Crowdsourcing library for Python ☆13 · Updated 3 months ago
- ☆49 · Updated 3 years ago
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation". ☆31 · Updated 3 years ago
- A Unified Framework for Quantifying Privacy Risk in Synthetic Data according to the GDPR ☆82 · Updated 3 weeks ago
- ☆27 · Updated 3 years ago
- A list of awesome prototype-based papers for explainable artificial intelligence ☆36 · Updated 2 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆82 · Updated 2 years ago
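For context on the kind of assessment these fairness toolkits perform, here is a minimal, library-free sketch of one common group-fairness metric, the demographic parity difference (the gap in positive-prediction rates between groups). This is a generic illustration, not the API of FairBench or any repository listed above; the function name and sample data are hypothetical.

```python
# Generic sketch of a group-fairness metric; not tied to any listed library.
def demographic_parity_difference(y_pred, groups):
    """Largest gap in positive-prediction rates across sensitive groups.

    y_pred: binary predictions (0/1); groups: sensitive attribute per sample.
    """
    rates = {}
    for g in set(groups):
        members = [p for p, m in zip(y_pred, groups) if m == g]
        rates[g] = sum(members) / len(members)  # positive rate within group g
    return max(rates.values()) - min(rates.values())

# Hypothetical example: group "a" gets positives at 0.75, group "b" at 0.25.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

A value of 0 indicates parity; dedicated toolkits add many more metrics (equalized odds, calibration gaps) plus reporting and multi-attribute handling on top of this basic idea.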