Sxing2 / CLIP-Test-time-Counterattacks
[CVPR-25🔥] Test-time Counterattacks (TTC) towards adversarial robustness of CLIP
☆29 · Updated 2 months ago
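For context, TTC hardens CLIP at inference time by perturbing the incoming image itself rather than retraining the model. The snippet below is a minimal, hypothetical sketch of that general idea, not the official TTC implementation: it takes a few PGD-style sign-gradient steps that push the image's embedding away from its initial position under a small L-infinity budget, on the intuition that an adversarially perturbed input is easy to displace in embedding space. The `encode_image` argument, the deviation loss, and all hyperparameters are illustrative assumptions.

```python
import torch

def counterattack(encode_image, x, eps=4/255, alpha=1/255, steps=2):
    """Illustrative test-time counterattack sketch (not the official TTC code).

    encode_image: callable mapping an image batch to embeddings
                  (e.g. a frozen CLIP image encoder).
    x:            image batch with values in [0, 1].
    """
    x = x.detach()
    with torch.no_grad():
        z_ref = encode_image(x)                      # embedding of the incoming (possibly attacked) image
    delta = torch.zeros_like(x, requires_grad=True)  # counterattack perturbation, bounded by eps
    for _ in range(steps):
        z = encode_image(x + delta)
        deviation = torch.norm(z - z_ref, dim=-1).mean()  # how far the embedding has moved so far
        deviation.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # ascend: push the embedding away from its start
            delta.clamp_(-eps, eps)              # stay inside the counterattack budget
            delta.grad.zero_()
    return (x + delta).clamp(0, 1).detach()      # counterattacked image, fed to CLIP as usual
```

With the OpenAI `clip` package, `encode_image` could be something like `lambda im: model.encode_image(im)` on an already-preprocessed batch (with the encoder's parameters frozen), and the returned image would replace the original before zero-shot classification; the actual TTC objective and schedule may differ from this sketch.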
Alternatives and similar repositories for CLIP-Test-time-Counterattacks
Users interested in CLIP-Test-time-Counterattacks are comparing it to the repositories listed below.
- One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models (☆50 · Updated 8 months ago)
- CVPR 2025 - R-TPT: Improving Adversarial Robustness of Vision-Language Models through Test-Time Prompt Tuning (☆12 · Updated 4 months ago)
- ☆43 · Updated 2 years ago
- [ICCV-2025] Universal Adversarial Attack, Multimodal Adversarial Attacks, VLP models, Contrastive Learning, Cross-modal Perturbation Gene… (☆24 · Updated last month)
- Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models. [ICCV 2023 Oral] (☆64 · Updated last year)
- ECCV 2024: Adversarial Prompt Tuning for Vision-Language Models (☆27 · Updated 9 months ago)
- Code repository for the CVPR 2024 paper "Pre-trained Model Guided Fine-Tuning for Zero-Shot Adversarial Robustness" (☆21 · Updated last year)
- [CVPR'25] Chain of Attack: On the Robustness of Vision-Language Models Against Transfer-Based Adversarial Attacks (☆16 · Updated 2 months ago)
- Official code for the ICML 2024 paper "Connecting the Dots: Collaborative Fine-tuning for Black-Box Vision-Language Models" (☆18 · Updated last year)
- [TTA-VLM] A Benchmark of Test-Time Adaptation for Vision-Language Models (☆16 · Updated 3 months ago)
- The official repository of the ECCV 2024 paper "Outlier-Aware Test-time Adaptation with Stable Memory Replay" (☆18 · Updated 2 months ago)
- ☆76 · Updated last year
- ☆29 · Updated last year
- ☆27 · Updated 2 years ago
- [CVPR 2024] On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm (☆73 · Updated 6 months ago)
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) (☆21 · Updated last year)
- [ICCVW 2025] Robust-LLaVA: On the Effectiveness of Large-Scale Robust Image Encoders for Multi-modal Large Language Models (☆23 · Updated last week)
- ICCV 2023 - AdaptGuard: Defending Against Universal Attacks for Model Adaptation (☆11 · Updated last year)
- [CVPR 2023] Adversarial Robustness via Random Projection Filters (☆14 · Updated 2 years ago)
- [CVPR 2025] Official Repository for IMMUNE: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment (☆22 · Updated 2 months ago)
- [ICCV 2023] Towards Building More Robust Models with Frequency Bias (☆17 · Updated last year)
- ☆18 · Updated 2 years ago
- (NeurIPS 2024) Text-Guided Attention is All You Need for Zero-Shot Robustness in Vision-Language Models (☆13 · Updated last month)
- [CVPR23] "Understanding and Improving Visual Prompting: A Label-Mapping Perspective" by Aochuan Chen, Yuguang Yao, Pin-Yu Chen, Yihua Zha… (☆53 · Updated last year)
- [ECCV-2024] Transferable Targeted Adversarial Attack, CLIP models, Generative adversarial network, Multi-target attacks (☆36 · Updated 4 months ago)
- Official implementation of the NeurIPS'24 paper "Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Model… (☆46 · Updated 9 months ago)
- [ICML 2024] Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models