jzhang538 / BadMerging
Official implementation of the CCS 2024 paper "BadMerging: Backdoor Attacks Against Model Merging".
☆28 · Updated 10 months ago
Alternatives and similar repositories for BadMerging
Users interested in BadMerging are comparing it to the repositories listed below.
- [ICML 2024] Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models ☆135 · Updated last month
- A Simple Baseline Achieving Over 90% Success Rate Against the Strong Black-box Models of GPT-4.5/4o/o1. Paper at: https://arxiv.org/abs/2… ☆65 · Updated 3 months ago
- ☆27 · Updated last year
- (CVPR 2025) Official implementation of DELT: A Simple Diversity-driven EarlyLate Training for Dataset Distillation which outperforms SOTA… ☆23 · Updated 2 months ago
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models ☆55 · Updated last year
- [ICLR24 (Spotlight)] "SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation… ☆126 · Updated last month
- Improving Your Model Ranking on Chatbot Arena by Vote Rigging (ICML 2025) ☆21 · Updated 4 months ago
- [TMLR'24] This repository includes the official implementation of our paper "FedConv: Enhancing Convolutional Neural Networks for Handling D… ☆25 · Updated last year
- [ICML 2024] Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast ☆107 · Updated last year
- [CCS 2024] Optimization-based Prompt Injection Attack to LLM-as-a-Judge ☆25 · Updated 8 months ago
- Code for the NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" ☆50 · Updated 6 months ago
- [ICLR 2025] Dissecting adversarial robustness of multimodal language model agents ☆97 · Updated 4 months ago
- ☆27 · Updated 2 months ago
- [CCS-LAMPS'24] LLM IP Protection Against Model Merging ☆15 · Updated 9 months ago
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images ☆36 · Updated last year
- Official implementation of the ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Uns… ☆78 · Updated 4 months ago
- [NeurIPS24] "What makes unlearning hard and what to do about it" and [NeurIPS24] "Scalability of memorization-based machine unlearning" ☆17 · Updated last month
- ☆31 · Updated 2 years ago
- ☆27 · Updated 4 months ago
- ☆153 · Updated 3 months ago
- This repository contains the source code, datasets, and scripts for the paper "GenderCARE: A Comprehensive Framework for Assessing and Re… ☆23 · Updated 10 months ago
- Code and data for the paper "Can Watermarked LLMs be Identified by Users via Crafted Prompts?", accepted by ICLR 2025 (Spotlight) ☆23 · Updated 6 months ago
- [NeurIPS 2024] Official implementation of the paper "Ferrari: Federated Feature Unlearning via Optimizing Feature Sensitivity" ☆18 · Updated 4 months ago
- ☆14 · Updated 2 years ago
- A novel approach to improving the safety of large language models, enabling them to transition effectively from an unsafe to a safe state ☆61 · Updated last month
- [ACL2025] Unsolvable Problem Detection: Robust Understanding Evaluation for Large Multimodal Models ☆77 · Updated last month
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking … ☆29 · Updated 9 months ago
- The repository of the paper "REEF: Representation Encoding Fingerprints for Large Language Models", which aims to protect the IP of open-source… ☆57 · Updated 6 months ago
- A toolbox for benchmarking trustworthiness of multimodal large language models (MultiTrust, NeurIPS 2024 Track Datasets and Benchmarks) ☆154 · Updated 3 weeks ago
- ☆29 · Updated last month