axa-rev-research / LowProFool
Repository of the paper "Imperceptible Adversarial Attacks on Tabular Data", presented at the NeurIPS 2019 Workshop on Robust AI in Financial Services (Robust AI in FS 2019).
⭐15 · Updated 3 years ago
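For context, the paper's core idea is to craft an adversarial perturbation whose size is weighted per feature, so that changes concentrate on features a human reviewer is unlikely to notice. Below is a minimal sketch of that idea against a plain logistic-regression model; the function name, the simplified objective, and all parameters are illustrative assumptions, not the repository's actual API:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lowprofool_sketch(x, w, b, v, target=0, lr=0.05, lam=1.0, steps=200):
    """Gradient-descent attack on a logistic-regression model (hypothetical
    simplification of the paper's method).

    Minimizes  BCE(model(x + r), target) + lam * ||v * r||^2,
    where v holds per-feature importance weights: large entries penalize
    perturbing features an analyst would inspect, keeping the attack subtle.
    """
    r = np.zeros_like(x)
    for _ in range(steps):
        p = sigmoid(w @ (x + r) + b)          # model output on perturbed input
        grad_loss = (p - target) * w          # d BCE / d r for target in {0, 1}
        grad_pen = 2.0 * lam * (v ** 2) * r   # gradient of the weighted penalty
        r -= lr * (grad_loss + grad_pen)
    return x + r
```

With a small penalty weight the perturbation is free to flip the prediction; raising `lam` (or the entries of `v`) trades attack success for imperceptibility on the protected features.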
Alternatives and similar repositories for LowProFool:
Users interested in LowProFool are comparing it to the libraries listed below.
- A curated list of data valuation (DV) resources to design your next data marketplace ⭐116 · Updated last week
- ⭐22 · Updated 5 years ago
- Papers and online resources related to machine learning fairness ⭐71 · Updated last year
- FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods ⭐28 · Updated 9 months ago
- Methods for removing learned data from neural nets and evaluation of those methods ⭐35 · Updated 4 years ago
- Adversarial attacks on explanations and how to defend them ⭐311 · Updated 3 months ago
- ⭐37 · Updated last year
- FairBatch: Batch Selection for Model Fairness (ICLR 2021) ⭐19 · Updated last year
- Towards Efficient Shapley Value Estimation via Cross-contribution Maximization ⭐14 · Updated 2 years ago
- Local explanations with uncertainty! ⭐39 · Updated last year
- A list of awesome prototype-based papers for explainable artificial intelligence ⭐36 · Updated 2 years ago
- A reproduced PyTorch implementation of the Adversarially Reweighted Learning (ARL) model, originally presented in "Fairness without Demog…" ⭐20 · Updated 4 years ago
- Code related to the paper "Machine Unlearning of Features and Labels" ⭐68 · Updated last year
- An amortized approach for calculating local Shapley value explanations ⭐95 · Updated last year
- ⭐56 · Updated 4 years ago
- General fair regression subject to a demographic parity constraint; paper appeared in ICML 2019 ⭐15 · Updated 4 years ago
- A PyTorch reimplementation of influence functions from the ICML 2017 best paper "Understanding Black-box Predictions via Influence…" ⭐16 · Updated 4 years ago
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ⭐53 · Updated 2 years ago
- Bayesian LIME ⭐17 · Updated 7 months ago
- ⭐11 · Updated 4 years ago
- Reference implementation for "Explanations can be manipulated and geometry is to blame" ⭐36 · Updated 2 years ago
- ⭐27 · Updated last year
- ⭐53 · Updated 2 years ago
- A repo for transfer learning with deep tabular models ⭐102 · Updated 2 years ago
- ⭐10 · Updated 4 years ago
- ⭐48 · Updated 3 years ago
- ⭐28 · Updated 3 years ago
- A repository of summaries of recent explainable AI / interpretable ML approaches ⭐73 · Updated 5 months ago
- ⭐19 · Updated 5 months ago
- Fair Empirical Risk Minimization (FERM) ⭐37 · Updated 4 years ago