Fingerprint large language models
☆49 · Updated Jul 11, 2024
Alternatives and similar repositories for Model-Fingerprint
Users interested in Model-Fingerprint are comparing it to the repositories listed below.
- [CCS-LAMPS'24] LLM IP Protection Against Model Merging ☆16 · Updated Oct 14, 2024
- Official implementation for "HuRef: HUman-REadable Fingerprint for Large Language Models" (NeurIPS 2024) ☆15 · Updated Jun 17, 2025
- A curated collection of research and techniques for protecting intellectual property of large language models, including watermarking, fi… ☆46 · Updated Feb 15, 2026
- [USENIX Security'24] REMARK-LLM: A robust and efficient watermarking framework for generative large language models ☆27 · Updated Oct 23, 2024
- Repository for the paper "REEF: Representation Encoding Fingerprints for Large Language Models," which aims to protect the IP of open-source… ☆74 · Updated Jan 16, 2025
- Code for the paper "The Geometry of Refusal in Large Language Models: Concept Cones and Representational Independence" ☆26 · Updated Jul 31, 2025
- [EMNLP 2024] Official implementation of CLEANGEN: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models ☆19 · Updated Mar 9, 2025
- Official repository for "Ten Words Only Still Help: Improving Black-Box AI-Generated Text Detection via Proxy-Guided Efficient Re-Samplin… ☆22 · Updated Aug 15, 2024
- [NDSS 2025] Official code for the paper "Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Wate… ☆45 · Updated Nov 5, 2024
- [USENIX'25] HateBench: Benchmarking Hate Speech Detectors on LLM-Generated Content and Hate Campaigns ☆13 · Updated Mar 1, 2025
- ☆24 · Updated Dec 8, 2024
- Code for the paper "Rethinking Stealthiness of Backdoor Attack against NLP Models" (ACL-IJCNLP 2021) ☆24 · Updated Dec 9, 2021
- Official implementation of the implicit reference attack ☆11 · Updated Oct 16, 2024
- ☆10 · Updated Mar 20, 2023
- Disguising Attacks with Explanation-Aware Backdoors (IEEE S&P 2023) ☆11 · Updated Jan 3, 2026
- Code for the paper "Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction" … ☆12 · Updated Sep 6, 2023
- ☆14 · Updated Feb 26, 2025
- Code implementation of MuScleLoRA (accepted at ACL 2024) ☆10 · Updated Dec 1, 2024
- ☆10 · Updated Oct 31, 2022
- [ICLR 2025] On Evaluating the Durability of Safeguards for Open-Weight LLMs ☆13 · Updated Jun 20, 2025
- Implementation for the IEEE S&P 2022 paper "Model Orthogonalization: Class Distance Hardening in Neural Networks for Better Secur… ☆11 · Updated Aug 24, 2022
- Implementation of "Robust Watermarking of Neural Network with Exponential Weighting" in TensorFlow ☆13 · Updated Dec 2, 2020
- Up-to-date collection of LLM watermarking papers ☆11 · Updated Dec 17, 2023
- ☆14 · Updated Jun 4, 2025
- ☆146 · Updated Jul 24, 2025
- [ECCV 2024] Official PyTorch implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" ☆86 · Updated Nov 28, 2023
- This repository contains the code implementation of the paper "AI-Guardian: Defeating Adversarial Attacks using Backdoors, at IEEE Security a… ☆14 · Updated Aug 13, 2023
- ☆15 · Updated May 28, 2024
- ☆18 · Updated Oct 7, 2022
- Official code for Multi-player Nash Preference Optimization [ICLR 2026] ☆31 · Updated Feb 4, 2026
- ☆21 · Updated Sep 16, 2024
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts ☆191 · Updated Jun 26, 2025
- Repository on LLM safety, covering attacks, defenses, and studies related to reasoning and RL ☆61 · Updated Sep 5, 2025
- [COLM 2024] JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and fur… ☆87 · Updated May 9, 2025
- Official code and dataset for the EMNLP 2022 paper "COPEN: Probing Conceptual Knowledge in Pre-trained Language Models" ☆21 · Updated Mar 9, 2023
- Code repository for the paper "Attacking Vision-Language Computer Agents via Pop-ups" ☆51 · Updated Dec 23, 2024
- ☆15 · Updated Dec 7, 2023
- ☆18 · Updated Aug 15, 2022
- Code and data for the paper "Can LLM Watermarks Robustly Prevent Unauthorized Knowledge Distillation?" (ACL 2025 Main) ☆21 · Updated Jun 18, 2025