mever-team / FairBench
Comprehensive AI fairness exploration.
☆23 · Updated last month
Alternatives and similar repositories for FairBench
Users interested in FairBench are comparing it to the libraries listed below.
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ☆636 · Updated 2 weeks ago
- Xplique is a Neural Networks Explainability Toolbox ☆731 · Updated 2 weeks ago
- Contains modules with MAMMOth's research results, and a lightweight demonstrator for running them ☆20 · Updated last month
- Datasets derived from US census data ☆276 · Updated last year
- Repository of the paper "Imperceptible Adversarial Attacks on Tabular Data", presented at the NeurIPS 2019 Workshop on Robust AI in Financial … ☆16 · Updated 4 years ago
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆140 · Updated 3 weeks ago
- Papers and code on Explainable AI, especially with respect to image classification ☆226 · Updated 3 years ago
- A fairness library in PyTorch. ☆32 · Updated last year
- ☆12 · Updated 2 years ago
- Bias Auditing & Fair ML Toolkit ☆747 · Updated this week
- Zennit is a high-level Python framework, built on PyTorch, for explaining and exploring neural networks with attribution methods such as LRP. ☆239 · Updated last week
- A set of tools to play with deep learning ☆26 · Updated last year
- Reliability diagrams visualize whether a classifier model needs calibration ☆165 · Updated 3 years ago
- Generate Diverse Counterfactual Explanations for any machine learning model. ☆1,492 · Updated 6 months ago
- Build and train Lipschitz-constrained networks: TensorFlow implementation of k-Lipschitz layers ☆102 · Updated 10 months ago
- A library for generating and evaluating synthetic tabular data for privacy, fairness, and data augmentation. ☆636 · Updated 7 months ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆252 · Updated last year
- The core repository to support participants through the UN PET Lab Hackathon 2022. Registration at: https://petlab.officialstatistic… ☆19 · Updated 3 years ago
- Gender Bias Extra Material ☆13 · Updated 2 years ago
- XAI Experiments on an Annotated Dataset of Wild Bee Images ☆20 · Updated 5 months ago
- ☆50 · Updated last year
- A curated list of awesome Fairness in AI resources ☆332 · Updated 2 years ago
- Adversarial attacks on explanations and how to defend against them ☆334 · Updated last year
- A machine learning benchmark of in-the-wild distribution shifts, with data loaders, evaluators, and default models. ☆592 · Updated 2 years ago
- Neural network visualization toolkit for tf.keras ☆337 · Updated 10 months ago
- Adversarial Attacks on Post-hoc Explanation Techniques (LIME/SHAP) ☆85 · Updated 3 years ago
- FairGrad is an easy-to-use, general-purpose approach to enforcing fairness in gradient-descent-based methods. ☆14 · Updated 2 years ago
- Meaningful Local Explanation for Machine Learning Models ☆42 · Updated 2 years ago
- ☆122 · Updated 3 years ago
- Track and predict the energy consumption and carbon footprint of training deep learning models. ☆473 · Updated 3 weeks ago