ruisizhang123 / REMARK-LLM
[USENIX Security'24] REMARK-LLM: A robust and efficient watermarking framework for generative large language models
☆25 · Updated 10 months ago
Alternatives and similar repositories for REMARK-LLM
Users interested in REMARK-LLM are comparing it to the repositories listed below.
- Repository for "Towards Codable Watermarking for Large Language Models" ☆38 · Updated 2 years ago
- Code repository for our submission "Understanding the Dark Side of LLMs’ Intrinsic Self-Correction" ☆63 · Updated 9 months ago
- An up-to-date list of LLM watermarking papers. 🔥🔥🔥 ☆354 · Updated 9 months ago
- ☆39 · Updated last year
- ☆35 · Updated 11 months ago
- A curated list of trustworthy generative AI papers, updated daily ☆74 · Updated last year
- Source code for the paper "An Unforgeable Publicly Verifiable Watermark for Large Language Models", accepted at ICLR 2024 ☆35 · Updated last year
- Code and data for the paper "A Semantic Invariant Robust Watermark for Large Language Models", accepted at ICLR 2024 ☆34 · Updated 10 months ago
- ☆16 · Updated 4 months ago
- ☆20 · Updated last year
- [NeurIPS 2025] BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models ☆213 · Updated this week
- Fingerprint large language models ☆41 · Updated last year
- [NDSS 2025] Official code for our paper "Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Wate… ☆44 · Updated 10 months ago
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) ☆34 · Updated 2 months ago
- Robust natural language watermarking using invariant features ☆26 · Updated last year
- ☆31 · Updated 5 months ago
- Multi-bit language model watermarking (NAACL 2024) ☆15 · Updated last year
- Repo for SemStamp (NAACL 2024) and k-SemStamp (ACL 2024) ☆22 · Updated 9 months ago
- [USENIX'24] Prompt Stealing Attacks Against Text-to-Image Generation Models ☆45 · Updated 8 months ago
- [NDSS'25] Official implementation of our paper on safety misalignment ☆16 · Updated 8 months ago
- ☆26 · Updated last year
- [ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion ☆52 · Updated 10 months ago
- ☆62 · Updated 3 months ago
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models ☆192 · Updated 6 months ago
- ☆20 · Updated last year
- Code for the paper "BadPrompt: Backdoor Attacks on Continuous Prompts" ☆39 · Updated last year
- Code repository for "Uncovering Safety Risks of Large Language Models through Concept Activation Vector" ☆44 · Updated 10 months ago
- Code for the NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" ☆54 · Updated 8 months ago
- A list of recent papers on adversarial learning ☆212 · Updated this week
- Source code for MEA-Defender; the paper was accepted at the IEEE Symposium on Security and Privacy (S&P) 2024 ☆28 · Updated last year