jzhang538 / BadMerging
[CCS 2024] "BadMerging: Backdoor Attacks Against Model Merging": official code implementation.
☆35 · Updated last year
Alternatives and similar repositories for BadMerging
Users interested in BadMerging are comparing it to the repositories listed below.
- [NeurIPS 2025 & ICML 2025 Workshop on Reliable and Responsible Foundation Models] A Simple Baseline Achieving Over 90% Success Rate Against the… ☆86 · Updated this week
- [ICML 2024] Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models ☆154 · Updated 8 months ago
- Code for NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" ☆59 · Updated last year
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models ☆60 · Updated last year
- ☆37 · Updated this week
- [CVPR 2025] Official implementation of DELT: A Simple Diversity-driven EarlyLate Training for Dataset Distillation, which outperforms SOTA… ☆26 · Updated 5 months ago
- The official implementation of the ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Uns… ☆86 · Updated 11 months ago
- [ICML 2024] Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast ☆118 · Updated last year
- [CCS-LAMPS'24] LLM IP Protection Against Model Merging ☆16 · Updated last year
- ☆27 · Updated 2 years ago
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images ☆42 · Updated 2 years ago
- ☆109 · Updated last year
- [ICML 2025] Improving Your Model Ranking on Chatbot Arena by Vote Rigging ☆26 · Updated 11 months ago
- ☆47 · Updated this week
- ☆16 · Updated 3 years ago
- [ICLR 2025] Dissecting adversarial robustness of multimodal language model agents ☆123 · Updated 11 months ago
- [CCS 2024] Optimization-based Prompt Injection Attack to LLM-as-a-Judge ☆39 · Updated 4 months ago
- ☆33 · Updated 9 months ago
- To Think or Not to Think: Exploring the Unthinking Vulnerability in Large Reasoning Models ☆33 · Updated 8 months ago
- [ICLR 2024 (Spotlight)] "SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation… ☆141 · Updated 8 months ago
- ☆16 · Updated 10 months ago
- Official implementation of "Towards Robust Model Watermark via Reducing Parametric Vulnerability" ☆16 · Updated last year
- Evaluation code for "A Safety Report on GPT-5.2, Gemini 3 Pro, Qwen3-VL, Grok 4.1 Fast, Nano Banana Pro, and Seedream 4.5" ☆49 · Updated 3 weeks ago
- Code to conduct an embedding attack on LLMs ☆31 · Updated last year
- Official code for the ICCV 2023 paper "One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training" ☆20 · Updated 2 years ago
- ☆30 · Updated last year
- Official code for the paper "Virus: Harmful Fine-tuning Attack for Large Language Models Bypassing Guardrail Moderation" ☆53 · Updated last year
- [NeurIPS'24] Protecting Your LLMs with Information Bottleneck ☆25 · Updated last year
- ☆34 · Updated 3 years ago
- ☆24 · Updated last year