GodXuxilie / RobustSSL_Benchmark
Benchmark of robust self-supervised learning (RobustSSL) methods & Code for AutoLoRa (ICLR 2024).
☆18 · Updated 11 months ago
Alternatives and similar repositories for RobustSSL_Benchmark
Users interested in RobustSSL_Benchmark are comparing it to the repositories listed below.
- [CVPR23] "Towards Compositional Adversarial Robustness: Generalizing Adversarial Training to Composite Semantic Perturbations" by Lei Hsi…☆23Updated last month
- CVPR 2023 generalist☆15Updated last year
- Official implementation of "When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture" published at Neur…☆33Updated 8 months ago
- Code for the paper Boosting Accuracy and Robustness of Student Models via Adaptive Adversarial Distillation (CVPR 2023).☆35Updated 2 years ago
- This is the repository that introduces research topics related to protecting intellectual property (IP) of AI from a data-centric perspec…☆22Updated last year
- An Embarrassingly Simple Backdoor Attack on Self-supervised Learning☆16Updated last year
- ☆23Updated 2 years ago
- [CVPR 2023] Adversarial Robustness via Random Projection Filters☆14Updated last year
- ☆20Updated 2 months ago
- ☆31Updated 2 years ago
- [CVPR 2022 oral] Subspace Adversarial Training☆26Updated 2 years ago
- Implementation of BadCLIP https://arxiv.org/pdf/2311.16194.pdf☆20Updated last year
- ☆24Updated last year
- [NeurIPS 2024] Fight Back Against Jailbreaking via Prompt Adversarial Tuning☆10Updated 7 months ago
- Implementation for <Robust Weight Perturbation for Adversarial Training> in IJCAI'22.☆14Updated 2 years ago
- [CVPR 2024] Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transfomers☆17Updated 7 months ago
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight)☆26Updated 6 months ago
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022)☆33Updated 2 years ago
- ☆11Updated 2 years ago
- [ICLR 2022] Reliable Adversarial Distillation with Unreliable Teachers☆21Updated 3 years ago
- SEAT☆20Updated last year
- Code repository for CVPR2024 paper 《Pre-trained Model Guided Fine-Tuning for Zero-Shot Adversarial Robustness》☆21Updated last year
- Official Code for Efficient and Effective Augmentation Strategy for Adversarial Training (NeurIPS-2022)☆16Updated 2 years ago
- Towards Defending against Adversarial Examples via Attack-Invariant Features☆10Updated last year
- [ICLR 2022] "Patch-Fool: Are Vision Transformers Always Robust Against Adversarial Perturbations?" by Yonggan Fu, Shunyao Zhang, Shang Wu…☆33Updated 3 years ago
- [NeurIPS23 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao…☆70Updated last year
- A curated list of papers for the transferability of adversarial examples☆69Updated 10 months ago
- ☆35Updated 11 months ago
- Towards understanding modern generative data augmentation techniques.☆27Updated 2 years ago
- Code for the paper "A Light Recipe to Train Robust Vision Transformers" [SaTML 2023]☆52Updated 2 years ago