axa-rev-research / LowProFool
Repository of the paper "Imperceptible Adversarial Attacks on Tabular Data", presented at the NeurIPS 2019 Workshop on Robust AI in Financial Services (Robust AI in FS 2019).
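The idea behind imperceptible attacks on tabular data is to flip a classifier's decision while penalizing changes to features that would be noticeable, via per-feature weights. Below is a minimal, self-contained sketch of that idea (this is not the repository's actual API; the linear model, feature weights, and hyperparameters are illustrative assumptions):

```python
import numpy as np

def imperceptible_attack(x, w, b, feat_weights, steps=500, lr=0.1, lam=0.01):
    """Perturb x so a linear classifier sigmoid(w.x + b) flips its decision,
    while a per-feature weighted l2 penalty discourages visible changes.
    feat_weights: larger values mean a feature is costlier to perturb."""
    target = 0.0 if (x @ w + b) > 0 else 1.0  # aim for the opposite class
    r = np.zeros_like(x)                      # the adversarial perturbation
    for _ in range(steps):
        z = (x + r) @ w + b
        p = 1.0 / (1.0 + np.exp(-z))          # predicted probability of class 1
        # gradient of BCE toward the target class + weighted-norm penalty
        grad = (p - target) * w + lam * feat_weights * r
        r -= lr * grad
    return x + r

rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.1
x = rng.normal(size=4)
# illustrative weights: features 2-3 are 10x costlier to change
fw = np.array([1.0, 1.0, 10.0, 10.0])
x_adv = imperceptible_attack(x, w, b, fw)
```

With the weak penalty (`lam=0.01`) the perturbation reliably flips the decision; raising `lam` or the per-feature weights trades attack success for smaller, less noticeable changes, which is the core tension the paper studies.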
Alternatives and similar repositories for LowProFool
Users interested in LowProFool are comparing it to the libraries listed below.
- OpenDataVal: a Unified Benchmark for Data Valuation in Python (NeurIPS 2023)
- Methods for removing learned data from neural nets and evaluation of those methods
- 💡 Adversarial attacks on explanations and how to defend them
- Public implementation of the ICML'19 paper "White-box vs Black-box: Bayes Optimal Strategies for Membership Inference"
- A reproduced PyTorch implementation of the Adversarially Reweighted Learning (ARL) model, originally presented in "Fairness without Demog…
- Code for "On Adaptive Attacks to Adversarial Example Defenses"
- A list of awesome prototype-based papers for explainable artificial intelligence
- A Python Data Valuation Package
- Implementation of Wasserstein adversarial attacks
- Code for auditing DPSGD
- Fair Empirical Risk Minimization (FERM)
- 💱 A curated list of data valuation (DV) resources to help design your next data marketplace
- Distributional Shapley: A Distributional Framework for Data Valuation
- FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods
- Certified Removal from Machine Learning Models
- [NeurIPS 2020] Code for "Boundary thickness and robustness in learning models"
- General fair regression subject to a demographic parity constraint (ICML 2019)
- [NeurIPS 2019] H. Chen*, H. Zhang*, S. Si, Y. Li, D. Boning and C.-J. Hsieh, Robustness Verification of Tree-based Models (*equal contrib…
- Reference implementation for "Explanations can be manipulated and geometry is to blame"
- Code for "Auditing Data Provenance in Text-Generation Models" (KDD 2019)
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching
- Code related to the paper "Machine Unlearning of Features and Labels"
- Data Banzhaf: A Robust Data Valuation Framework for Machine Learning (AISTATS 2023 Oral)
- [NeurIPS 2023 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao…
- Papers and online resources related to machine learning fairness