axa-rev-research / LowProFool
Repository for the paper "Imperceptible Adversarial Attacks on Tabular Data", presented at the NeurIPS 2019 Workshop on Robust AI in Financial Services (Robust AI in FS 2019)
☆16 · Updated 3 years ago
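At a glance, LowProFool searches for a perturbation of a tabular sample that flips the classifier's prediction while keeping an importance-weighted norm of the perturbation small, so the changes concentrate on features a human reviewer is unlikely to notice. The sketch below illustrates that general idea in PyTorch; the function name, hyperparameters, and the choice of `feature_weights` (e.g. absolute correlation of each feature with the label) are illustrative assumptions, not the repository's actual API.

```python
# Illustrative LowProFool-style attack sketch (not the repository's code).
# Assumes: `model` is a differentiable PyTorch classifier over tabular rows,
# `x` is a 1-D feature tensor, `feature_weights` holds per-feature importance
# (e.g. |correlation| with the label), and `lower`/`upper` bound valid values.
import torch
import torch.nn.functional as F


def lowprofool_style_attack(model, x, target_class, feature_weights,
                            lower, upper, lam=1.0, lr=0.01, steps=500):
    """Find a small, importance-weighted perturbation r so that
    model(x + r) predicts `target_class`."""
    r = torch.zeros_like(x, requires_grad=True)
    optimizer = torch.optim.Adam([r], lr=lr)
    target = torch.tensor([target_class])

    best_delta, best_norm = None, float("inf")
    for _ in range(steps):
        optimizer.zero_grad()
        x_adv = torch.clamp(x + r, lower, upper)          # keep features in valid ranges
        logits = model(x_adv.unsqueeze(0))                # batch of one row
        adv_loss = F.cross_entropy(logits, target)        # push prediction to target_class
        pert_norm = torch.norm(feature_weights * r, p=2)  # penalize "noticeable" features more
        (adv_loss + lam * pert_norm).backward()
        optimizer.step()

        # Track the least noticeable perturbation that actually fools the model.
        with torch.no_grad():
            x_adv = torch.clamp(x + r, lower, upper)
            if model(x_adv.unsqueeze(0)).argmax(dim=1).item() == target_class:
                delta = x_adv - x
                norm_val = torch.norm(feature_weights * delta).item()
                if norm_val < best_norm:
                    best_norm, best_delta = norm_val, delta.clone()

    return best_delta  # None if no successful perturbation was found
```

A typical call would pass the class opposite the model's current prediction as `target_class` and column-wise bounds computed from the training data, so the adversarial row stays within plausible feature ranges.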
Alternatives and similar repositories for LowProFool
Users interested in LowProFool are comparing it to the libraries listed below
- 💡 Adversarial attacks on explanations and how to defend them ☆328 · Updated 10 months ago
- ☆22 · Updated 6 years ago
- ☆37 · Updated 2 years ago
- Methods for removing learned data from neural nets and evaluation of those methods ☆37 · Updated 4 years ago
- This repository provides simple PyTorch implementations for adversarial training methods on CIFAR-10. ☆169 · Updated 4 years ago
- A unified benchmark problem for data poisoning attacks ☆159 · Updated 2 years ago
- [USENIX Security 2022] Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture ☆17 · Updated 3 years ago
- ☆32 · Updated last year
- ☆58 · Updated 5 years ago
- RobustBench: a standardized adversarial robustness benchmark [NeurIPS 2021 Benchmarks and Datasets Track] ☆741 · Updated 6 months ago
- ☆21 · Updated 3 years ago
- [ICLR 2022] Reliable Adversarial Distillation with Unreliable Teachers ☆21 · Updated 3 years ago
- Code for "On Adaptive Attacks to Adversarial Example Defenses" ☆86 · Updated 4 years ago
- Python package to create adversarial agents for membership inference attacks against machine learning models ☆46 · Updated 6 years ago
- Certified Removal from Machine Learning Models ☆69 · Updated 4 years ago
- FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods ☆30 · Updated last year
- ☆54 · Updated 4 years ago
- A curated list of papers on adversarial machine learning (adversarial examples and defense methods) ☆211 · Updated 3 years ago
- Code for Auditing DPSGD ☆37 · Updated 3 years ago
- Reference implementation for "Explanations can be manipulated and geometry is to blame" ☆37 · Updated 3 years ago
- Papers and online resources related to machine learning fairness ☆73 · Updated 2 years ago
- ☆26 · Updated 3 years ago
- Implementation of the Minimax Pareto Fairness framework ☆21 · Updated 5 years ago
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆48 · Updated 3 years ago
- Membership Inference Attacks and Defenses in Neural Network Pruning ☆27 · Updated 3 years ago
- [ICLR 2020] A repository for extremely fast adversarial training using FGSM ☆448 · Updated last year
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation" ☆30 · Updated 3 years ago
- Code related to the paper "Machine Unlearning of Features and Labels" ☆71 · Updated last year
- Systematic Evaluation of Membership Inference Privacy Risks of Machine Learning Models ☆127 · Updated last year
- [arXiv:2411.10023] "Model Inversion Attacks: A Survey of Approaches and Countermeasures" ☆201 · Updated 4 months ago