TreeLLi / APT
One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models
☆51, updated 6 months ago
Alternatives and similar repositories for APT
Users interested in APT are comparing it to the repositories listed below.
- [ECCV 2024] Adversarial Prompt Tuning for Vision-Language Models (☆27, updated 7 months ago)
- ☆43, updated 2 years ago
- Code for the paper "Boosting Accuracy and Robustness of Student Models via Adaptive Adversarial Distillation" (CVPR 2023) (☆35, updated 2 years ago)
- [ECCV'24 Oral] The official GitHub page for ''Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking Multimodal Large Language Models'' (☆29, updated 8 months ago)
- [CVPR2024 Highlight] Official implementation of Transferable Visual Prompting, from the paper "Exploring the Transferability of Visual Prompting for Multimodal Large Language Models" (☆44, updated 6 months ago)
- ☆44, updated 6 months ago
- [CVPR23] "Understanding and Improving Visual Prompting: A Label-Mapping Perspective" by Aochuan Chen, Yuguang Yao, Pin-Yu Chen, Yihua Zhang, Sijia Liu (☆53, updated last year)
- Universal Adversarial Attack, Multimodal Adversarial Attacks, VLP models, Contrastive Learning, Cross-modal Perturbation Generator, Gener… (☆17, updated 8 months ago)
- Official PyTorch implementation of "CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning" @ ICCV 2023 (☆36, updated last year)
- ☆22, updated 9 months ago
- [ICLR 2024 Oral] Less is More: Fewer Interpretable Region via Submodular Subset Selection (☆78, updated 3 weeks ago)
- The official repository of the ECCV 2024 paper "Outlier-Aware Test-time Adaptation with Stable Memory Replay" (☆19, updated 2 weeks ago)
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) (☆20, updated last year)
- The official repository of ''VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models'' (NeurIPS 2023) (☆55, updated 3 months ago)
- [ICCV 2023 Oral] Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models (☆62, updated last year)
- ECSO: make MLLMs safe with neither training nor any external models (https://arxiv.org/abs/2403.09572) (☆25, updated 7 months ago)
- ☆35, updated 11 months ago
- [CVPR'25 🔥] Test-time Counterattacks (TTC) towards adversarial robustness of CLIP (☆26, updated 3 weeks ago)
- ☆18, updated last year
- [ICML 2024] Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models (☆72, updated 5 months ago)
- [CVPR 2025] AnyAttack: Towards Large-scale Self-supervised Adversarial Attacks on Vision-Language Models (☆35, updated 3 months ago)
- [CVPR2025] Official repository for "IMMUNE: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment" (☆18, updated 2 weeks ago)
- ☆46, updated last year
- [ECCV2024] Boosting Transferability in Vision-Language Attacks via Diversification along the Intersection Region of Adversarial Trajectory (☆27, updated 7 months ago)
- [ECCV 2024] Official PyTorch implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" (☆81, updated last year)
- A package that achieves a 95%+ transfer attack success rate against GPT-4 (☆20, updated 8 months ago)
- Evaluate robustness of adaptation methods on large vision-language models (☆19, updated last year)
- [CVPR 2025] R-TPT: Improving Adversarial Robustness of Vision-Language Models through Test-Time Prompt Tuning (☆11, updated 2 months ago)
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models (☆55, updated last year)