XuankunRong / BYE
[NeurIPS'25] Backdoor Cleaning without External Guidance in MLLM Fine-tuning
☆17 · Updated 3 months ago
Alternatives and similar repositories for BYE
Users interested in BYE are comparing it to the repositories listed below.
- [ECCV 2024] Adversarial Prompt Tuning for Vision-Language Models ☆30 · Updated last year
- ☆30 · Updated last year
- Official implementation of the CVPR 2025 paper "Invisible Backdoor Attack against Self-supervised Learning" ☆17 · Updated 6 months ago
- [IEEE TIP] Official implementation of "BadCM: Invisible Backdoor Attack against Cross-Modal Learning" ☆14 · Updated last year
- One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models ☆57 · Updated last year
- [ECCV 2024] Transferable Targeted Adversarial Attack, CLIP models, Generative adversarial network, Multi-target attacks ☆38 · Updated 9 months ago
- [NeurIPS 2024] "What makes unlearning hard and what to do about it" and "Scalability of memorization-based machine unlearning" ☆20 · Updated 8 months ago
- Code for the paper "Membership Inference Attacks Against Vision-Language Models" ☆25 · Updated last year
- Code for the paper "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models" (IEEE ICASSP 2024). Demo//124.220.228.133:11107 ☆19 · Updated last year
- [BMVC 2023] Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning ☆17 · Updated 2 years ago
- ☆15 · Updated last year
- [CVPR 2023] Adversarial Robustness via Random Projection Filters ☆13 · Updated 2 years ago
- ☆28 · Updated 2 years ago
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) ☆23 · Updated last year
- [ICCV 2025] Universal Adversarial Attack, Multimodal Adversarial Attacks, VLP models, Contrastive Learning, Cross-modal Perturbation Gene… ☆34 · Updated 6 months ago
- Code for the CVPR 2023 paper "Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning" ☆10 · Updated 2 years ago
- [CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset ☆49 · Updated last year
- [CVPR 2025 🔥] Test-time Counterattacks (TTC) towards adversarial robustness of CLIP ☆39 · Updated 7 months ago
- Official implementation of "Towards Robust Model Watermark via Reducing Parametric Vulnerability" ☆16 · Updated last year
- [CVPR 2025] R-TPT: Improving Adversarial Robustness of Vision-Language Models through Test-Time Prompt Tuning ☆19 · Updated 5 months ago
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models ☆60 · Updated last year
- Official PyTorch implementation of "CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning" (ICCV 2023) ☆40 · Updated 3 months ago
- ECSO: making MLLMs safe with neither training nor any external models (https://arxiv.org/abs/2403.09572) ☆36 · Updated last year
- [CVPR 2025] Official repository for "IMMUNE: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment" ☆26 · Updated 7 months ago
- [NeurIPS 2023] Black-box Backdoor Defense via Zero-shot Image Purification ☆16 · Updated 2 years ago
- [CVPR 2025] Chain of Attack: On the Robustness of Vision-Language Models Against Transfer-Based Adversarial Attacks ☆29 · Updated 7 months ago
- Official repository of the ECCV 2024 paper "Outlier-Aware Test-time Adaptation with Stable Memory Replay" ☆19 · Updated 7 months ago
- Official code for the ICML 2024 paper "Connecting the Dots: Collaborative Fine-tuning for Black-Box Vision-Language Models" ☆19 · Updated last year
- Official repository of "VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models" (NeurIPS 2…) ☆66 · Updated 10 months ago
- [ECCV 2024 Oral] Official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking …" ☆38 · Updated last year