axa-rev-research / LowProFool
Repository for the paper "Imperceptible Adversarial Attacks on Tabular Data", presented at the NeurIPS 2019 Workshop on Robust AI in Financial Services (Robust AI in FS 2019).
⭐16 · Updated 3 years ago
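For context, LowProFool crafts adversarial examples for tabular data by trading the classification loss toward a target class against a feature-importance-weighted norm of the perturbation, so that features a human reviewer would scrutinize barely move. Below is a minimal PyTorch sketch of that idea, reconstructed from the paper's high-level description rather than taken from this repository's code; the toy classifier, random data, importance vector, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of a LowProFool-style attack (not this repository's code).
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 8                                        # number of tabular features
clf = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 2))

x = torch.randn(d)                           # one clean sample
target = torch.tensor(1)                     # class the attacker wants
importance = torch.rand(d)                   # assumed per-feature importance weights

r = torch.zeros(d, requires_grad=True)       # perturbation to optimize
opt = torch.optim.Adam([r], lr=0.05)
ce = nn.CrossEntropyLoss()
lam = 1.0                                    # trade-off: misclassification vs. imperceptibility

for _ in range(200):
    opt.zero_grad()
    logits = clf((x + r).unsqueeze(0))       # batch of one
    # Push the prediction toward `target` while penalizing changes to
    # important features via the weighted L2 norm of the perturbation.
    loss = ce(logits, target.unsqueeze(0)) + lam * torch.norm(importance * r, p=2)
    loss.backward()
    opt.step()

x_adv = (x + r).detach()
# A full attack would also clip x_adv back into each feature's valid range
# and check that the predicted class actually flipped.
print(clf(x_adv.unsqueeze(0)).argmax(dim=1).item())
```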
Alternatives and similar repositories for LowProFool
Users interested in LowProFool are comparing it to the libraries listed below.
- ⭐22 · Updated 6 years ago
- Adversarial attacks on explanations and how to defend them · ⭐328 · Updated 10 months ago
- Methods for removing learned data from neural nets and evaluation of those methods · ⭐37 · Updated 4 years ago
- ⭐37 · Updated 2 years ago
- A unified benchmark problem for data poisoning attacks · ⭐160 · Updated 2 years ago
- [ICLR 2022] Reliable Adversarial Distillation with Unreliable Teachers · ⭐22 · Updated 3 years ago
- Code for "On Adaptive Attacks to Adversarial Example Defenses" · ⭐87 · Updated 4 years ago
- [USENIX Security 2022] Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture · ⭐17 · Updated 3 years ago
- Simple PyTorch implementations of adversarial training methods on CIFAR-10 · ⭐170 · Updated 4 years ago
- ⭐194 · Updated 2 years ago
- Code related to the paper "Machine Unlearning of Features and Labels" · ⭐71 · Updated last year
- [ICLR 2021] Unlearnable Examples: Making Personal Data Unexploitable · ⭐171 · Updated last year
- [ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning · ⭐31 · Updated last year
- Keeps track of popular provable training and verification approaches towards robust neural networks, including leaderboards on popular datasets… · ⭐19 · Updated last year
- ⭐54 · Updated 4 years ago
- ⭐58 · Updated 5 years ago
- ⭐32 · Updated last year
- Papers and online resources related to machine learning fairness · ⭐73 · Updated 2 years ago
- [arXiv:2411.10023] "Model Inversion Attacks: A Survey of Approaches and Countermeasures" · ⭐207 · Updated 4 months ago
- A curated list of papers on adversarial machine learning (adversarial examples and defense methods) · ⭐211 · Updated 3 years ago
- ⭐19 · Updated last year
- Code for "Label-Consistent Backdoor Attacks" · ⭐58 · Updated 4 years ago
- A reproduced PyTorch implementation of the Adversarially Reweighted Learning (ARL) model, originally presented in "Fairness without Demographics through Adversarially Reweighted Learning" · ⭐20 · Updated 4 years ago
- Reference implementation for "Explanations can be manipulated and geometry is to blame" · ⭐38 · Updated 3 years ago
- ⭐47 · Updated last year
- 💱 A curated list of data valuation (DV) resources to help design your next data marketplace · ⭐128 · Updated 8 months ago
- FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods · ⭐30 · Updated last year
- Implementation of the CURE algorithm from "Robustness via Curvature Regularization, and Vice Versa" · ⭐32 · Updated 2 years ago
- Certified Removal from Machine Learning Models · ⭐69 · Updated 4 years ago
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation" · ⭐30 · Updated 3 years ago