xandery-geek / BadCM
[IEEE TIP] Official implementation of the work "BadCM: Invisible Backdoor Attack against Cross-Modal Learning".
☆14 · Updated last year
Alternatives and similar repositories for BadCM
Users that are interested in BadCM are comparing it to the libraries listed below
- [NeurIPS'25] Backdoor Cleaning without External Guidance in MLLM Fine-tuning ☆17 · Updated 3 months ago
- ☆30 · Updated last year
- ☆14 · Updated last year
- Code for paper "Membership Inference Attacks Against Vision-Language Models" ☆25 · Updated last year
- [CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset ☆49 · Updated last year
- One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models ☆57 · Updated last year
- ECCV2024: Adversarial Prompt Tuning for Vision-Language Models ☆30 · Updated last year
- Implementation of BadCLIP https://arxiv.org/pdf/2311.16194.pdf ☆23 · Updated last year
- [ICCV-2025] Universal Adversarial Attack, Multimodal Adversarial Attacks, VLP models, Contrastive Learning, Cross-modal Perturbation Gene… ☆34 · Updated 6 months ago
- This is an official repository of ``VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models'' (NeurIPS 2… ☆66 · Updated 10 months ago
- Unlearnable Examples Give a False Sense of Security: Piercing through Unexploitable Data with Learnable Examples ☆11 · Updated last year
- ☆10 · Updated 3 years ago
- Code for the CVPR '23 paper, "Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning" ☆10 · Updated 2 years ago
- ☆13 · Updated last year
- Code for paper: PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models, IEEE ICASSP 2024. Demo//124.220.228.133:11107 ☆19 · Updated last year
- [BMVC 2023] Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning ☆17 · Updated 2 years ago
- ☆54 · Updated last year
- [ECCV-2024] Transferable Targeted Adversarial Attack, CLIP models, Generative adversarial network, Multi-target attacks ☆38 · Updated 9 months ago
- [MM '24] EvilEdit: Backdooring Text-to-Image Diffusion Models in One Second ☆27 · Updated last year
- [CVPR'25] Chain of Attack: On the Robustness of Vision-Language Models Against Transfer-Based Adversarial Attacks ☆29 · Updated 7 months ago
- This is an official repository for Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study (ICCV2023… ☆24 · Updated 2 years ago
- [ECCV'24 Oral] The official GitHub page for ''Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking … ☆38 · Updated last year
- Official implementation of Towards Robust Model Watermark via Reducing Parametric Vulnerability ☆16 · Updated last year
- [ICLR 2025] MMFakeBench: A Mixed-Source Multimodal Misinformation Detection Benchmark for LVLMs ☆43 · Updated 10 months ago
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models ☆60 · Updated last year
- ☆46 · Updated last year
- ☆12 · Updated last year
- Set-level Guidance Attack: Boosting Adversarial Transferability of Vision-Language Pre-training Models. [ICCV 2023 Oral] ☆70 · Updated 2 years ago
- The code implementation of MuScleLoRA (Accepted in ACL 2024) ☆10 · Updated last year
- [NeurIPS 2024] "Self-Calibrated Tuning of Vision-Language Models for Out-of-Distribution Detection" ☆13 · Updated last year