axa-rev-research / LowProFool
Repository of the paper "Imperceptible Adversarial Attacks on Tabular Data", presented at the NeurIPS 2019 Workshop on Robust AI in Financial Services (Robust AI in FS 2019).
⭐ 16 · Updated 4 years ago
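The paper's attack perturbs a single tabular record along the classifier's gradient while penalizing changes to features a human analyst (or a feature-importance measure) would notice. The snippet below is a minimal sketch of that idea in PyTorch, not the repository's actual API: the function name, the importance weights, the clipping bounds, and all hyperparameters (`lam`, `lr`, `n_iters`) are illustrative assumptions.

```python
# Minimal sketch of a LowProFool-style attack on tabular data (PyTorch).
# Assumptions: `model` is a differentiable classifier returning logits,
# `x` is a single standardized feature vector, `feat_importance` holds
# per-feature importance weights, and `lower`/`upper` are scalar or
# per-feature valid ranges. All hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def lowprofool_style_attack(model, x, target_class, feat_importance,
                            lower, upper, lam=8.5, lr=0.01, n_iters=200):
    """Search for a small, importance-weighted perturbation r such that
    model(x + r) predicts `target_class`."""
    v = feat_importance / feat_importance.norm()   # normalized importance weights
    r = torch.zeros_like(x, requires_grad=True)    # perturbation to optimize
    optimizer = torch.optim.Adam([r], lr=lr)

    best_r, best_norm = None, float("inf")
    for _ in range(n_iters):
        optimizer.zero_grad()
        x_adv = torch.clamp(x + r, lower, upper)   # keep features in valid range
        logits = model(x_adv.unsqueeze(0))
        # Push the prediction toward the target class while penalizing
        # perturbations on important features (importance-weighted l2 norm).
        adv_loss = F.cross_entropy(logits, torch.tensor([target_class]))
        pert_loss = torch.norm(v * r, p=2)
        loss = adv_loss + lam * pert_loss
        loss.backward()
        optimizer.step()

        # Track the least perceptible perturbation that already fools the model.
        with torch.no_grad():
            x_adv = torch.clamp(x + r, lower, upper)
            pred = model(x_adv.unsqueeze(0)).argmax(dim=1).item()
            norm = torch.norm(v * (x_adv - x)).item()
            if pred == target_class and norm < best_norm:
                best_r, best_norm = (x_adv - x).clone(), norm

    return best_r  # None if no adversarial example was found
```

Weighting the perturbation norm by feature importance is what makes the change "imperceptible" in the tabular setting: heavily scrutinized features are barely touched, while less important features absorb most of the perturbation.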
Alternatives and similar repositories for LowProFool
Users interested in LowProFool are comparing it to the libraries listed below.
- ⭐ 22 · Updated 6 years ago
- 💡 Adversarial attacks on explanations and how to defend them · ⭐ 330 · Updated last year
- ⭐ 37 · Updated 2 years ago
- Methods for removing learned data from neural nets and evaluation of those methods · ⭐ 38 · Updated 5 years ago
- [ICLR2021] Unlearnable Examples: Making Personal Data Unexploitable · ⭐ 170 · Updated last year
- RobustBench: a standardized adversarial robustness benchmark [NeurIPS 2021 Benchmarks and Datasets Track] · ⭐ 754 · Updated 8 months ago
- [USENIX Security 2022] Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture · ⭐ 17 · Updated 3 years ago
- [ICLR 2020] A repository for extremely fast adversarial training using FGSM · ⭐ 449 · Updated last year
- A unified benchmark problem for data poisoning attacks · ⭐ 161 · Updated 2 years ago
- This repository provides simple PyTorch implementations for adversarial training methods on CIFAR-10. · ⭐ 172 · Updated 4 years ago
- [ICLR 2022] Reliable Adversarial Distillation with Unreliable Teachers · ⭐ 22 · Updated 3 years ago
- A curated list of papers on adversarial machine learning (adversarial examples and defense methods). · ⭐ 212 · Updated 3 years ago
- FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods. · ⭐ 30 · Updated last year
- Related papers for robust machine learning · ⭐ 567 · Updated 2 years ago
- Membership Inference of Generative Models · ⭐ 14 · Updated 6 years ago
- Code relative to "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks" · ⭐ 732 · Updated last year
- ⭐ 32 · Updated last year
- Keeps track of popular provable training and verification approaches towards robust neural networks, including leaderboards on popular da… · ⭐ 19 · Updated last year
- ⭐ 196 · Updated 2 years ago
- FR-Train: A Mutual Information-Based Approach to Fair and Robust Training (ICML 2020) · ⭐ 13 · Updated 4 years ago
- This is a PyTorch reimplementation of Influence Functions from the ICML2017 best paper: Understanding Black-box Predictions via Influence… · ⭐ 343 · Updated 2 years ago
- Official implementation of "GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models" (CCS 2020) · ⭐ 47 · Updated 3 years ago
- CVPR 2021 Official repository for the Data-Free Model Extraction paper. https://arxiv.org/abs/2011.14779 · ⭐ 75 · Updated last year
- ⭐ 58 · Updated 5 years ago
- [arXiv:2411.10023] "Model Inversion Attacks: A Survey of Approaches and Countermeasures" · ⭐ 209 · Updated 6 months ago
- Code for "On Adaptive Attacks to Adversarial Example Defenses" · ⭐ 87 · Updated 4 years ago
- General fair regression subject to demographic parity constraint. Paper appeared in ICML 2019. · ⭐ 16 · Updated 5 years ago
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation". · ⭐ 31 · Updated 3 years ago
- A reproduced PyTorch implementation of the Adversarially Reweighted Learning (ARL) model, originally presented in "Fairness without Demog… · ⭐ 20 · Updated 4 years ago
- Papers and online resources related to machine learning fairness · ⭐ 74 · Updated 2 years ago