ruisizhang123 / REMARK-LLM
[USENIX Security'24] REMARK-LLM: A robust and efficient watermarking framework for generative large language models
Related projects
Alternatives and complementary repositories for REMARK-LLM
- Repository for "Towards Codable Watermarking for Large Language Models"
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024)
- Source code of the paper "An Unforgeable Publicly Verifiable Watermark for Large Language Models", accepted by ICLR 2024
- [MM'23 Oral] "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning"
- Multi-bit language model watermarking (NAACL 2024)
- Code for the paper "PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification", IEEE S&P 2024
- Code release for DeepJudge (S&P'22)
- Official implementation of the paper "ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning Paradigms"
- Official repository for "Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study" (ICCV 2023)
- Distribution Preserving Backdoor Attack in Self-supervised Learning
- Code for the paper "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models", IEEE ICASSP 2024. Demo//124.220.228.133:11107
- Code for the NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models"
- [ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion
- 😎 Up-to-date & curated list of awesome Attacks on Large-Vision-Language-Models papers, methods & resources
- Source code for MEA-Defender; the paper is accepted by the IEEE Symposium on Security and Privacy (S&P) 2024
- Robust natural language watermarking using invariant features
- Repo for SemStamp (NAACL 2024) and k-SemStamp (ACL 2024)
- Code for the paper "BadPrompt: Backdoor Attacks on Continuous Prompts"
- BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks on Large Language Models