choprashweta / Adversarial-Debiasing
Implementation of Adversarial Debiasing in PyTorch to address Gender Bias
☆31 · Updated 5 years ago
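The repository's exact training code is not reproduced on this page. As a rough illustration of the adversarial debiasing idea it describes, below is a minimal PyTorch sketch using a gradient-reversal formulation: a classifier predicts the task label while an adversary tries to recover gender from the shared representation, and reversed gradients push the encoder to hide that information. All names here (GradReverse, DebiasedClassifier, the toy X/y/gender tensors, the lambd weight) are illustrative assumptions, not taken from the repository.

```python
# Minimal sketch of adversarial debiasing with gradient reversal (assumed setup,
# not the repository's actual implementation).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates (and scales) gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DebiasedClassifier(nn.Module):
    def __init__(self, n_features, hidden=32, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.classifier = nn.Linear(hidden, 1)  # predicts the task label
        self.adversary = nn.Linear(hidden, 1)   # predicts the protected attribute

    def forward(self, x):
        h = self.encoder(x)
        y_logit = self.classifier(h)
        # The adversary sees the representation through gradient reversal, so
        # minimizing its loss trains the encoder to remove gender information.
        a_logit = self.adversary(GradReverse.apply(h, self.lambd))
        return y_logit, a_logit

# Toy training loop on random placeholder data.
X = torch.randn(256, 10)
y = torch.randint(0, 2, (256, 1)).float()        # task labels
gender = torch.randint(0, 2, (256, 1)).float()   # protected attribute

model = DebiasedClassifier(n_features=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for epoch in range(5):
    opt.zero_grad()
    y_logit, a_logit = model(X)
    loss = bce(y_logit, y) + bce(a_logit, gender)
    loss.backward()
    opt.step()
```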
Alternatives and similar repositories for Adversarial-Debiasing
Users interested in Adversarial-Debiasing are comparing it to the libraries listed below.
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆75 · Updated 3 years ago
- ☆12 · Updated 4 years ago
- code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ☆54 · Updated 3 years ago
- ☆37 · Updated 2 years ago
- This is a benchmark to evaluate the quality of machine learning local explanations generated by any explainer for text and image data ☆30 · Updated 4 years ago
- This is a collection of papers and other resources related to fairness. ☆94 · Updated last week
- Python implementation for evaluating explanations presented in "On the (In)fidelity and Sensitivity of Explanations" (NeurIPS 2019) for… ☆25 · Updated 3 years ago
- ☆38 · Updated 4 years ago
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) ☆129 · Updated 4 years ago
- Code for Environment Inference for Invariant Learning (ICML 2021 Paper) ☆51 · Updated 4 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆128 · Updated 4 years ago
- General fair regression subject to demographic parity constraint. Paper appeared in ICML 2019. ☆16 · Updated 5 years ago
- ⚖️ Code for the paper "Ethical Adversaries: Towards Mitigating Unfairness with Adversarial Machine Learning". ☆11 · Updated 2 years ago
- ☆14 · Updated 5 years ago
- Code for the CVPR 2021 paper: Understanding Failures of Deep Networks via Robust Feature Extraction ☆36 · Updated 3 years ago
- Influence Analysis and Estimation - Survey, Papers, and Taxonomy ☆83 · Updated last year
- ☆62 · Updated 4 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆84 · Updated 2 years ago
- Interpretation of Neural Network is Fragile ☆36 · Updated last year
- ☆50 · Updated 2 years ago
- https://arxiv.org/abs/2102.12594 ☆14 · Updated 2 years ago
- CVPR'19 experiments with (on-manifold) adversarial examples. ☆45 · Updated 5 years ago
- ☆46 · Updated 2 years ago
- This repository provides a PyTorch implementation of "Fooling Neural Network Interpretations via Adversarial Model Manipulation". Our pap… ☆23 · Updated 4 years ago
- Code and data for the experiments in "On Fairness and Calibration" ☆51 · Updated 3 years ago
- Code for "Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?" ☆45 · Updated last year
- ☆109 · Updated 2 years ago
- Code for "Just Train Twice: Improving Group Robustness without Training Group Information" ☆72 · Updated last year
- Distributional Shapley: A Distributional Framework for Data Valuation ☆30 · Updated last year
- Self-Explaining Neural Networks ☆13 · Updated 2 years ago