plll4zzx / Awesome-LLM-Watermark
A curated collection of resources on Large Language Model (LLM) watermarking
★54 · Updated 10 months ago
Alternatives and similar repositories for Awesome-LLM-Watermark
Users interested in Awesome-LLM-Watermark are comparing it to the repositories listed below.
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models ★222 · Updated last month
- Up-to-date list of LLM watermark papers. 🔥🔥🔥 ★370 · Updated last year
- An open-source toolkit for textual backdoor attack and defense (NeurIPS 2022 D&B, Spotlight) ★198 · Updated 2 years ago
- A curated list of papers & resources linked to data poisoning, backdoor attacks and defenses against them (no longer maintained) ★282 · Updated 11 months ago
- A survey on harmful fine-tuning attacks for large language models ★229 · Updated last week
- Repository for Towards Codable Watermarking for Large Language Models ★38 · Updated 2 years ago
- Multi-bit language model watermarking (NAACL 2024) ★17 · Updated last year
- ★37 · Updated last year
- ★223 · Updated 4 months ago
- Repo for SemStamp (NAACL 2024) and k-SemStamp (ACL 2024) ★27 · Updated last year
- ★32 · Updated last month
- [NDSS 2025] Official code for our paper "Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Wate…" ★45 · Updated last year
- [NeurIPS 2025] BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models ★262 · Updated 2 months ago
- Code and data of the ACL-IJCNLP 2021 paper "Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger" ★43 · Updated 3 years ago
- Code for the paper "The Philosopher's Stone: Trojaning Plugins of Large Language Models" ★25 · Updated last year
- Safety at Scale: A Comprehensive Survey of Large Model Safety ★216 · Updated last month
- Code for the paper "The Good and The Bad: Exploring Privacy Issues in Retrieval-Augmented Generation (RAG)", exploring the privacy risk o… ★63 · Updated 11 months ago
- ★71 · Updated 7 months ago
- This is the code repository for "Uncovering Safety Risks of Large Language Models through Concept Activation Vector" ★47 · Updated 2 months ago
- ★26 · Updated last year
- The latest papers on detection of LLM-generated text and code ★281 · Updated 6 months ago
- ★21 · Updated last year
- ★573 · Updated 6 months ago
- [TDSC 2024] Official code for our paper "FedTracker: Furnishing Ownership Verification and Traceability for Federated Learning Model" ★22 · Updated 7 months ago
- A curated list of papers & resources on backdoor attacks and defenses in deep learning. ★231 · Updated last year
- ★17 · Updated 7 months ago
- ★27 · Updated 2 years ago
- [NDSS 2025] "CLIBE: Detecting Dynamic Backdoors in Transformer-based NLP Models" ★23 · Updated 4 months ago
- Composite Backdoor Attacks Against Large Language Models ★21 · Updated last year
- [USENIX Security'24] REMARK-LLM: A robust and efficient watermarking framework for generative large language models ★26 · Updated last year