m-kahla / Label-Only-Model-Inversion-Attacks-via-Boundary-Repulsion
☆27 · Updated 3 years ago
Alternatives and similar repositories for Label-Only-Model-Inversion-Attacks-via-Boundary-Repulsion
Users interested in Label-Only-Model-Inversion-Attacks-via-Boundary-Repulsion are comparing it to the repositories listed below.
- ☆45 · Updated 2 years ago
- [KDD 2022] "Bilateral Dependency Optimization: Defending Against Model-inversion Attacks" ☆24 · Updated 2 months ago
- ☆32 · Updated 4 years ago
- Revisiting Transferable Adversarial Images (TPAMI 2025) ☆137 · Updated 2 months ago
- WaNet - Imperceptible Warping-based Backdoor Attack (ICLR 2021) ☆129 · Updated last year
- ☆120 · Updated last month
- Invisible Backdoor Attack with Sample-Specific Triggers ☆102 · Updated 3 years ago
- [ICML 2022 / ICLR 2024] Source code for our papers "Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks" and "Be C…" ☆45 · Updated 3 months ago
- [ICLR 2021] Unlearnable Examples: Making Personal Data Unexploitable ☆170 · Updated last year
- [CVPR 2023] Re-thinking Model Inversion Attacks Against Deep Neural Networks ☆42 · Updated 2 years ago
- Official repository for the AAAI-20 paper "Hidden Trigger Backdoor Attacks" ☆132 · Updated 2 years ago
- ☆54 · Updated 4 years ago
- ☆59 · Updated 2 years ago
- [AAAI 2023] Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network ☆30 · Updated last year
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation" ☆30 · Updated 3 years ago
- Convert a TensorFlow model to a PyTorch model via [MMdnn](https://github.com/microsoft/MMdnn) for adversarial attacks ☆92 · Updated 2 years ago
- Code for "Label-Consistent Backdoor Attacks" ☆56 · Updated 4 years ago
- ☆18 · Updated 3 years ago
- Code for "Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment" (CCS 2019) ☆48 · Updated 5 years ago
- ☆20 · Updated 3 years ago
- Simple PyTorch implementations of adversarial training methods on CIFAR-10 ☆172 · Updated 4 years ago
- [ICLR 2022] Reliable Adversarial Distillation with Unreliable Teachers ☆22 · Updated 3 years ago
- Code for "Transferable Unlearnable Examples" ☆22 · Updated 2 years ago
- [ICCV 2023] Gradient inversion attack, federated learning, generative adversarial network ☆49 · Updated last year
- Input-aware Dynamic Backdoor Attack (NeurIPS 2020) ☆36 · Updated last year
- Official repository for the CVPR 2021 paper "Data-Free Model Extraction" (https://arxiv.org/abs/2011.14779) ☆75 · Updated last year
- Code for "Variational Model Inversion Attacks", Wang et al., NeurIPS 2021 ☆22 · Updated 3 years ago
- ☆25 · Updated 2 years ago
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks ☆18 · Updated 6 years ago
- A curated list of papers & resources on backdoor attacks and defenses in deep learning ☆221 · Updated last year