[CCS-LAMPS'24] LLM IP Protection Against Model Merging
☆16 · Updated Oct 14, 2024
Alternatives and similar repositories for MergeGuard
Users interested in MergeGuard are comparing it to the repositories listed below.
- The code implementation of MuScleLoRA (Accepted in ACL 2024) · ☆10 · Updated Dec 1, 2024
- Fingerprint large language models · ☆49 · Updated Jul 11, 2024
- [CCS'22] SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders · ☆18 · Updated Jul 12, 2022
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models · ☆19 · Updated Feb 18, 2025
- Code and data for the paper "Can Watermarked LLMs be Identified by Users via Crafted Prompts?" Accepted by ICLR 2025 (Spotlight) · ☆28 · Updated Dec 28, 2024
- Code for the paper "Rethinking Stealthiness of Backdoor Attack against NLP Models" (ACL-IJCNLP 2021) · ☆24 · Updated Dec 9, 2021
- ☆10 · Updated Mar 20, 2023
- ☆14 · Updated Feb 26, 2025
- Code for the paper "Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction" … · ☆12 · Updated Sep 6, 2023
- SaTML 2023, 1st place in CVPR'21 Security AI Challenger: Unrestricted Adversarial Attacks on ImageNet · ☆27 · Updated Dec 29, 2022
- [CCS'24] Official Implementation of "Fisher Information guided Purification against Backdoor Attacks" · ☆14 · Updated Oct 29, 2025
- Evaluating Durability: Benchmark Insights into Multimodal Watermarking · ☆12 · Updated Jun 7, 2024
- ☆14 · Updated Jun 4, 2025
- [NDSS'25] The official implementation of safety misalignment · ☆17 · Updated Jan 8, 2025
- This repository contains the code implementation of the paper "AI-Guardian: Defeating Adversarial Attacks using Backdoors, at IEEE Security a… · ☆14 · Updated Aug 13, 2023
- ☆12 · Updated Mar 25, 2020
- ☆25 · Updated Nov 14, 2022
- ☆20 · Updated Oct 28, 2025
- PyTorch implementation of our ICLR 2023 paper titled "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?" · ☆12 · Updated Mar 13, 2023
- [CCS 2024] "BadMerging: Backdoor Attacks Against Model Merging": official code implementation · ☆35 · Updated Aug 22, 2024
- This is the repository for our paper published at ICML 2024 · ☆11 · Updated Jun 11, 2025
- Code for the paper "Evading Black-box Classifiers Without Breaking Eggs" [SaTML 2024] · ☆21 · Updated Apr 15, 2024
- Official implementation for "HuRef: HUman-REadable Fingerprint for Large Language Models" (NeurIPS 2024) · ☆15 · Updated Jun 17, 2025
- Code and data for the paper "Can LLM Watermarks Robustly Prevent Unauthorized Knowledge Distillation?" (ACL 2025 Main) · ☆20 · Updated Jun 18, 2025
- ReColorAdv and other attacks from the NeurIPS 2019 paper "Functional Adversarial Attacks" · ☆38 · Updated May 31, 2022
- The code for our Updates-Leak paper · ☆17 · Updated Jul 23, 2020
- ☆17 · Updated May 11, 2025
- A Python script to generate a clean BibTeX file for LaTeX · ☆18 · Updated Mar 1, 2020
- [NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training · ☆32 · Updated Jan 9, 2022
- ☆21 · Updated Oct 25, 2023
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models · ☆60 · Updated Apr 8, 2024
- [NDSS'25 Best Technical Poster] A collection of automated evaluators for assessing jailbreak attempts · ☆187 · Updated Apr 1, 2025
- [USENIX Security'24] REMARK-LLM: A robust and efficient watermarking framework for generative large language models · ☆27 · Updated Oct 23, 2024
- [ACL 2024 Main] Data and Code for WaterBench: Towards Holistic Evaluation of LLM Watermarks · ☆30 · Updated Nov 14, 2023
- This repository contains the source code, datasets, and scripts for the paper "GenderCARE: A Comprehensive Framework for Assessing and Re… · ☆27 · Updated Aug 29, 2024
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) · ☆27 · Updated Nov 18, 2024
- This is the source code for MEA-Defender. Our paper is accepted by the IEEE Symposium on Security and Privacy (S&P) 2024 · ☆29 · Updated Nov 19, 2023
- Enterprise AI Security Platform - Real-time firewall protection for LLM applications against prompt injection, data leakage, and function… · ☆23 · Updated Sep 14, 2025
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning · ☆32 · Updated Oct 10, 2022