axa-rev-research / LowProFool
Repository of the paper "Imperceptible Adversarial Attacks on Tabular Data" presented at NeurIPS 2019 Workshop on Robust AI in Financial Services (Robust AI in FS 2019)
☆16 · Updated 4 years ago
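The paper's core idea (LowProFool) is to search for a small perturbation of a tabular sample that flips the classifier's decision while penalizing changes to important features. A minimal NumPy sketch of that idea, assuming a toy logistic classifier and illustrative importance weights (all names and values here are hypothetical, not the repository's reference code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fixed toy binary logistic classifier over 4 tabular features (hypothetical).
w = np.array([1.0, -2.0, 0.5, 1.5])
b = 0.1

x = np.array([1.0, 0.5, -1.0, 2.0])   # clean sample, initially classified as 1
v = np.array([1.0, 0.5, 2.0, 0.1])    # per-feature importance weights (illustrative)

r = np.zeros(4)                       # perturbation to optimize
lam = 0.01                            # trade-off: misclassification vs. visibility
lr = 0.1

for _ in range(500):
    p = sigmoid(w @ (x + r) + b)      # probability of the original class 1
    # Loss = -log(1 - p) + lam * ||v * r||^2: push toward class 0 while
    # keeping the perturbation small on important features.
    grad = p * w + 2 * lam * (v ** 2) * r
    r -= lr * grad

print(int(sigmoid(w @ (x + r) + b) > 0.5))  # → 0: the prediction has been flipped
```

The importance weights `v` make perturbing high-importance features costly, so the optimizer concentrates changes on features a human reviewer is less likely to scrutinize, which is the "imperceptibility" notion for tabular data.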
Alternatives and similar repositories for LowProFool
Users interested in LowProFool are comparing it to the repositories listed below.
- 💡 Adversarial attacks on explanations and how to defend them ☆328 · Updated 11 months ago
- ☆22 · Updated 6 years ago
- A unified benchmark problem for data poisoning attacks ☆160 · Updated 2 years ago
- This repository provides simple PyTorch implementations for adversarial training methods on CIFAR-10. ☆172 · Updated 4 years ago
- Methods for removing learned data from neural nets and evaluation of those methods ☆38 · Updated 4 years ago
- [ICLR 2022] Reliable Adversarial Distillation with Unreliable Teachers ☆22 · Updated 3 years ago
- [USENIX Security 2022] Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture ☆17 · Updated 3 years ago
- Code for "On Adaptive Attacks to Adversarial Example Defenses" ☆87 · Updated 4 years ago
- RobustBench: a standardized adversarial robustness benchmark [NeurIPS 2021 Benchmarks and Datasets Track] ☆752 · Updated 7 months ago
- A curated list of papers on adversarial machine learning (adversarial examples and defense methods). ☆212 · Updated 3 years ago
- Provable adversarial robustness at ImageNet scale ☆402 · Updated 6 years ago
- ☆32 · Updated last year
- [ICLR 2020] A repository for extremely fast adversarial training using FGSM ☆449 · Updated last year
- Related papers for robust machine learning ☆567 · Updated 2 years ago
- PyTorch-1.0 implementation for adversarial training on MNIST/CIFAR-10 and visualization of classifier robustness. ☆253 · Updated 5 years ago
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation". ☆30 · Updated 3 years ago
- Code relative to "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks" ☆725 · Updated last year
- ☆37 · Updated 2 years ago
- Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks (ICLR '20) ☆33 · Updated 5 years ago
- Library containing PyTorch implementations of various adversarial attacks and resources ☆165 · Updated this week
- Keeps track of popular provable training and verification approaches towards robust neural networks, including leaderboards on popular da… ☆20 · Updated last year
- ☆11 · Updated 3 years ago
- ☆160 · Updated 4 years ago
- [NeurIPS 2021] "G-PATE: Scalable Differentially Private Data Generator via Private Aggregation of Teacher Discriminators" by Yunhui Long*… ☆30 · Updated 4 years ago
- Code for "Label-Consistent Backdoor Attacks" ☆56 · Updated 4 years ago
- ☆58 · Updated 5 years ago
- [ICLR 2021] Unlearnable Examples: Making Personal Data Unexploitable ☆170 · Updated last year
- ☆54 · Updated 4 years ago
- CVPR 2021 Official repository for the Data-Free Model Extraction paper. https://arxiv.org/abs/2011.14779 ☆75 · Updated last year
- ☆26 · Updated 3 years ago