ShuchiWu / RDA
☆11 · Updated 7 months ago
Alternatives and similar repositories for RDA
Users interested in RDA are comparing it to the libraries listed below.
- Code for the paper Boosting Accuracy and Robustness of Student Models via Adaptive Adversarial Distillation (CVPR 2023). ☆34 · Updated 2 years ago
- ☆76 · Updated last year
- [ICCV-2025] Universal Adversarial Attack, Multimodal Adversarial Attacks, VLP models, Contrastive Learning, Cross-modal Perturbation Gene… ☆24 · Updated last month
- ☆35 · Updated last year
- Code for Transferable Unlearnable Examples ☆20 · Updated 2 years ago
- ☆10 · Updated 3 years ago
- [CVPR 2023] Official implementation of the Clean Feature Mixup (CFM) method ☆18 · Updated 2 years ago
- Code repository for the CVPR 2024 paper "Pre-trained Model Guided Fine-Tuning for Zero-Shot Adversarial Robustness" ☆21 · Updated last year
- CVPR 2023: Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples ☆22 · Updated 2 years ago
- Official implementation of the ICCV 2023 paper: Enhancing Generalization of Universal Adversarial Perturbation through Gradient Aggregatio… ☆26 · Updated last year
- ☆10 · Updated 3 years ago
- Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models. [ICCV 2023 Oral] ☆64 · Updated last year
- [CVPR 2023] Adversarial Robustness via Random Projection Filters ☆14 · Updated 2 years ago
- ☆18 · Updated 2 years ago
- Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples ☆10 · Updated 10 months ago
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) ☆21 · Updated last year
- Spectrum simulation attack (ECCV 2022 Oral) towards boosting the transferability of adversarial examples ☆109 · Updated 3 years ago
- Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022) ☆20 · Updated 11 months ago
- CVPR 2025 - Anyattack: Towards Large-scale Self-supervised Adversarial Attacks on Vision-language Models ☆41 · Updated last week
- [ACM MM 2023] Improving the Transferability of Adversarial Examples with Arbitrary Style Transfer. ☆20 · Updated last year
- [CVPR 2024] Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers ☆17 · Updated 9 months ago
- [CVPR'25] Chain of Attack: On the Robustness of Vision-Language Models Against Transfer-Based Adversarial Attacks ☆15 · Updated 2 months ago
- ☆60 · Updated 2 years ago
- AdvDiffuser: Natural Adversarial Example Synthesis with Diffusion Models (ICCV 2023) ☆18 · Updated 2 years ago
- [CVPR-25🔥] Test-time Counterattacks (TTC) towards adversarial robustness of CLIP ☆29 · Updated 2 months ago
- Official implementation of "When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture" published at Neur… ☆33 · Updated 10 months ago
- One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models ☆50 · Updated 7 months ago
- ☆23 · Updated 2 years ago
- PyTorch implementation for the pilot study on the robustness of latent diffusion models. ☆12 · Updated 2 years ago
- Code for the CVPR '23 paper, "Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning" ☆10 · Updated 2 years ago