axa-rev-research / LowProFool
Repository of the paper "Imperceptible Adversarial Attacks on Tabular Data" presented at NeurIPS 2019 Workshop on Robust AI in Financial Services (Robust AI in FS 2019)
⭐16 · Updated 4 years ago
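The LowProFool attack described in the paper perturbs a tabular sample by gradient descent on a loss that combines a misclassification term with a feature-importance-weighted norm of the perturbation, so that changes concentrate on features a domain expert is unlikely to notice. Below is a minimal sketch of that idea, assuming a PyTorch classifier that returns logits and a per-feature importance vector (e.g. absolute Pearson correlations); the function name, hyperparameters, and training loop are illustrative and not the repository's actual API.

```python
import torch

def lowprofool_like(model, x, target, importance, lr=0.01, lam=8.5, steps=200):
    """Sketch of a LowProFool-style attack (assumed interface, not the repo's code).

    x          : 1-D tensor of input features
    target     : desired (adversarial) class index
    importance : per-feature importance weights, e.g. |Pearson correlation|
    lam        : trade-off between misclassification and imperceptibility
    """
    r = torch.zeros_like(x, requires_grad=True)   # perturbation to optimise
    opt = torch.optim.Adam([r], lr=lr)
    ce = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        logits = model((x + r).unsqueeze(0))      # batch of one
        # push the prediction toward the target class while penalising
        # perturbations on important (human-noticeable) features
        loss = ce(logits, torch.tensor([target])) + lam * torch.norm(importance * r, p=2)
        loss.backward()
        opt.step()
    return (x + r).detach()
```

The `lam` weight trades attack success rate against how perceptible the perturbation is; in practice it is tuned per dataset, and feature values are typically clipped back to their valid ranges after the attack.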
Alternatives and similar repositories for LowProFool
Users interested in LowProFool are comparing it to the repositories listed below
- Adversarial attacks on explanations and how to defend them · ⭐331 · Updated last year
- ⭐22 · Updated 6 years ago
- Papers and online resources related to machine learning fairness · ⭐75 · Updated 2 years ago
- FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods. · ⭐31 · Updated last year
- Methods for removing learned data from neural nets and evaluation of those methods · ⭐38 · Updated 5 years ago
- [USENIX Security 2022] Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture · ⭐17 · Updated 3 years ago
- ⭐58 · Updated 3 years ago
- A curated list of data valuation (DV) to design your next data marketplace · ⭐135 · Updated 10 months ago
- FairGrad is an easy-to-use, general-purpose approach to enforcing fairness for gradient-descent-based methods. · ⭐14 · Updated 2 years ago
- This is a list of awesome prototype-based papers for explainable artificial intelligence. · ⭐40 · Updated 3 years ago
- [ICLR 2022] Reliable Adversarial Distillation with Unreliable Teachers · ⭐22 · Updated 3 years ago
- FR-Train: A Mutual Information-Based Approach to Fair and Robust Training (ICML 2020) · ⭐13 · Updated 4 years ago
- Code related to the paper "Machine Unlearning of Features and Labels" · ⭐72 · Updated last year
- A unified benchmark problem for data poisoning attacks · ⭐161 · Updated 2 years ago
- This repository provides simple PyTorch implementations of adversarial training methods on CIFAR-10. · ⭐172 · Updated 4 years ago
- Code for "On Adaptive Attacks to Adversarial Example Defenses" · ⭐87 · Updated 4 years ago
- [arXiv:2411.10023] "Model Inversion Attacks: A Survey of Approaches and Countermeasures" · ⭐211 · Updated 7 months ago
- ⭐32 · Updated last year
- A library for running membership inference attacks against ML models · ⭐152 · Updated 3 years ago
- ⭐367 · Updated last month
- Related papers for robust machine learning · ⭐567 · Updated 2 years ago
- [ICLR 2021] Unlearnable Examples: Making Personal Data Unexploitable · ⭐170 · Updated last year
- ⭐37 · Updated 2 years ago
- [NeurIPS 2021] "G-PATE: Scalable Differentially Private Data Generator via Private Aggregation of Teacher Discriminators" by Yunhui Long*… · ⭐30 · Updated 4 years ago
- This is a collection of papers and other resources related to fairness. · ⭐94 · Updated last month
- This is a PyTorch reimplementation of Influence Functions from the ICML 2017 best paper: Understanding Black-box Predictions via Influence… · ⭐344 · Updated 2 years ago
- RobustBench: a standardized adversarial robustness benchmark [NeurIPS 2021 Benchmarks and Datasets Track] · ⭐756 · Updated 9 months ago
- Membership Inference of Generative Models · ⭐14 · Updated 6 years ago
- [ICLR 2020] A repository for extremely fast adversarial training using FGSM · ⭐450 · Updated last year
- ⭐196 · Updated 2 years ago