ShuchiWu / RDA
☆11 · Updated last year
Alternatives and similar repositories for RDA
Users interested in RDA are comparing it to the libraries listed below.
- Code for the paper "Boosting Accuracy and Robustness of Student Models via Adaptive Adversarial Distillation" (CVPR 2023) ☆34 · Updated 2 years ago
- Code for the CVPR '23 paper "Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning" ☆10 · Updated 2 years ago
- ☆14 · Updated 2 years ago
- [CVPR 2024] Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers ☆16 · Updated last year
- [ICCV 2025] Universal Adversarial Attack, Multimodal Adversarial Attacks, VLP models, Contrastive Learning, Cross-modal Perturbation Gene… ☆34 · Updated 6 months ago
- ☆12 · Updated 3 years ago
- ☆15 · Updated 2 years ago
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) ☆23 · Updated last year
- [CVPR 2023] Official implementation of the Clean Feature Mixup (CFM) method ☆23 · Updated 2 years ago
- GitHub repo for "One-shot Neural Backdoor Erasing via Adversarial Weight Masking" (NeurIPS 2022) ☆15 · Updated 3 years ago
- ☆19 · Updated 2 years ago
- An Embarrassingly Simple Backdoor Attack on Self-supervised Learning ☆20 · Updated 2 years ago
- ☆10 · Updated 3 years ago
- Code for "Transferable Unlearnable Examples" ☆23 · Updated 2 years ago
- ☆37 · Updated last year
- [CVPR 2023] Adversarial Robustness via Random Projection Filters ☆13 · Updated 2 years ago
- ☆14 · Updated 3 years ago
- One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models ☆57 · Updated last year
- Code for the paper "Autoregressive Perturbations for Data Poisoning" (NeurIPS 2022) ☆20 · Updated last year
- [CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset ☆49 · Updated last year
- Minimizing Maximum Model Discrepancy for Transferable Black-box Targeted Attacks (CVPR 2023) ☆18 · Updated 2 years ago
- Official implementation of "When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture" published at Neur… ☆37 · Updated last year
- ☆28 · Updated 2 years ago
- ☆80 · Updated last year
- ☆13 · Updated last year
- ☆31 · Updated last year
- [ICCV 2023 Oral] Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models ☆71 · Updated 2 years ago
- Official implementation of "Towards Robust Model Watermark via Reducing Parametric Vulnerability" ☆16 · Updated last year
- [ECCV 2024] Adversarial Prompt Tuning for Vision-Language Models ☆30 · Updated last year
- [CVPR 2023] Official implementation of "Detecting Backdoors During the Inference Stage Based on Corruption Robust…" ☆24 · Updated 2 years ago