Fingerprint large language models
☆49 · Updated Jul 11, 2024
Alternatives and similar repositories for Model-Fingerprint
Users interested in Model-Fingerprint are comparing it to the libraries listed below.
- [CCS-LAMPS'24] LLM IP Protection Against Model Merging · ☆16 · Updated Oct 14, 2024
- Official implementation for "HuRef: HUman-REadable Fingerprint for Large Language Models" (NeurIPS 2024) · ☆15 · Updated Jun 17, 2025
- A curated collection of research and techniques for protecting intellectual property of large language models, including watermarking, fi… · ☆47 · Updated Feb 15, 2026
- [USENIX Security'24] REMARK-LLM: A robust and efficient watermarking framework for generative large language models · ☆27 · Updated Oct 23, 2024
- Code to the paper: The Geometry of Refusal in Large Language Models: Concept Cones and Representational Independence · ☆26 · Updated Jul 31, 2025
- [EMNLP'24] Official Implementation of CLEANGEN: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models · ☆19 · Updated Mar 9, 2025
- Official Repository for "Ten Words Only Still Help: Improving Black-Box AI-Generated Text Detection via Proxy-Guided Efficient Re-Samplin… · ☆22 · Updated Aug 15, 2024
- ☆24 · Updated Dec 8, 2024
- [NDSS 2025] Official code for our paper "Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Wate… · ☆45 · Updated Nov 5, 2024
- ☆10 · Updated Oct 31, 2022
- [ECCV 2024] Official PyTorch Implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" · ☆87 · Updated Nov 28, 2023
- This is the implementation for IEEE S&P 2022 paper "Model Orthogonalization: Class Distance Hardening in Neural Networks for Better Secur… · ☆11 · Updated Aug 24, 2022
- Code and data for the paper "Can Watermarked LLMs be Identified by Users via Crafted Prompts?", accepted by ICLR 2025 (Spotlight) · ☆28 · Updated Dec 28, 2024
- The official code of Multi-player Nash Preference Optimization [ICLR 2026] · ☆33 · Updated Feb 4, 2026
- Disguising Attacks with Explanation-Aware Backdoors (IEEE S&P 2023) · ☆12 · Updated Jan 3, 2026
- [ICLR 2025] On Evaluating the Durability of Safeguards for Open-Weight LLMs · ☆13 · Updated Jun 20, 2025
- ☆212 · Updated Jul 24, 2025
- Code for the paper "Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction" … · ☆12 · Updated Sep 6, 2023
- ☆15 · Updated Dec 7, 2023
- Code for the paper "Rethinking Stealthiness of Backdoor Attack against NLP Models" (ACL-IJCNLP 2021) · ☆24 · Updated Dec 9, 2021
- This repository contains code implementation of the paper "AI-Guardian: Defeating Adversarial Attacks using Backdoors, at IEEE Security a… · ☆14 · Updated Aug 13, 2023
- Official Implementation of implicit reference attack · ☆11 · Updated Oct 16, 2024
- [TMLR'26] UltraEdit: Training-, Subject-, and Memory-Free Lifelong Editing in Large Language Models · ☆54 · Updated Mar 10, 2026
- [TDSC 2024] Official code for our paper "FedTracker: Furnishing Ownership Verification and Traceability for Federated Learning Model" · ☆22 · Updated May 14, 2025
- [USENIX'25] HateBench: Benchmarking Hate Speech Detectors on LLM-Generated Content and Hate Campaigns · ☆13 · Updated Mar 1, 2025
- ☆14 · Updated Feb 26, 2025
- The official code and dataset for the EMNLP 2022 paper "COPEN: Probing Conceptual Knowledge in Pre-trained Language Models" · ☆21 · Updated Mar 9, 2023
- [COLM 2024] JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and fur… · ☆88 · Updated May 9, 2025
- Implementation of "Robust Watermarking of Neural Network with Exponential Weighting" in TensorFlow · ☆13 · Updated Dec 2, 2020
- ☆26 · Updated Oct 6, 2024
- [AAAI'25 (Oral)] Jailbreaking Large Vision-Language Models via Typographic Visual Prompts · ☆196 · Updated Jun 26, 2025
- ☆10 · Updated Mar 20, 2023
- Code repo for the paper: Attacking Vision-Language Computer Agents via Pop-ups · ☆51 · Updated Dec 23, 2024
- This repo covers the safety topic, including attacks, defenses, and studies related to reasoning and RL · ☆61 · Updated Sep 5, 2025
- [ACL 2024 Main] Data and Code for WaterBench: Towards Holistic Evaluation of LLM Watermarks · ☆30 · Updated Nov 14, 2023
- ☆18 · Updated Oct 7, 2022
- ☆17 · Updated May 11, 2025
- ☆22 · Updated Oct 25, 2024
- Source code and scripts for the paper "Is Difficulty Calibration All We Need? Towards More Practical Membership Inference Attacks" · ☆20 · Updated Dec 10, 2024