LetheSec / PLG-MI-Attack
[AAAI 2023] Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network
☆28 · Updated 6 months ago
Alternatives and similar repositories for PLG-MI-Attack:
Users interested in PLG-MI-Attack are comparing it to the repositories listed below.
- ☆42 · Updated last year
- [CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset — ☆48 · Updated last year
- ☆25 · Updated 2 years ago
- [ICCV 2023] Gradient inversion attack, Federated learning, Generative adversarial network. — ☆37 · Updated 9 months ago
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks — ☆17 · Updated 5 years ago
- ☆27 · Updated 3 years ago
- [KDD 2022] "Bilateral Dependency Optimization: Defending Against Model-inversion Attacks" — ☆24 · Updated this week
- [CVPR 2023] The official implementation of our CVPR 2023 paper "Detecting Backdoors During the Inference Stage Based on Corruption Robust… — ☆23 · Updated last year
- [ACM MM 2023] Improving the Transferability of Adversarial Examples with Arbitrary Style Transfer. — ☆18 · Updated last year
- Implementation of IEEE TNNLS 2023 and Elsevier PR 2023 papers on backdoor watermarking for deep classification models with unambiguity an… — ☆16 · Updated last year
- All code and data necessary to replicate experiments in the paper BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Model… — ☆11 · Updated 6 months ago
- [NeurIPS 2023] Boosting Adversarial Transferability by Achieving Flat Local Maxima — ☆30 · Updated last year
- Official implementation of the ICLR 2022 paper "Adversarial Unlearning of Backdoors via Implicit Hypergradient" — ☆53 · Updated 2 years ago
- Invisible Backdoor Attack with Sample-Specific Triggers — ☆94 · Updated 2 years ago
- [ACM MM 2023 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning" — ☆28 · Updated last month
- WaNet: Imperceptible Warping-based Backdoor Attack (ICLR 2021) — ☆121 · Updated 5 months ago
- Stochastic Variance Reduced Ensemble Adversarial Attack for Boosting the Adversarial Transferability — ☆24 · Updated 2 years ago
- ☆19 · Updated 2 years ago
- Official implementation of the ICCV 2023 paper: Enhancing Generalization of Universal Adversarial Perturbation through Gradient Aggregatio… — ☆23 · Updated last year
- Code for "Label-Consistent Backdoor Attacks" — ☆55 · Updated 4 years ago
- A curated list of papers on the transferability of adversarial examples — ☆63 · Updated 9 months ago
- ☆25 · Updated last year
- Defending against Model Stealing via Verifying Embedded External Features — ☆36 · Updated 3 years ago
- [NeurIPS 2023] Code for DiffAttack: Evasion Attacks Against Diffusion-Based Adversarial Purification — ☆29 · Updated last year
- Official code of the IEEE S&P 2024 paper "Why Does Little Robustness Help? A Further Step Towards Understanding Adversarial Transferabili… — ☆19 · Updated 7 months ago
- Spectrum simulation attack (ECCV 2022 Oral) for boosting the transferability of adversarial examples — ☆102 · Updated 2 years ago
- [CVPR 2023] Re-thinking Model Inversion Attacks Against Deep Neural Networks — ☆40 · Updated last year
- ☆12 · Updated 9 months ago
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) — ☆20 · Updated last year
- ☆17 · Updated 3 years ago