Fish-and-Sheep / Text-Fluoroscopy
☆13 · Updated 2 months ago
Alternatives and similar repositories for Text-Fluoroscopy:
Users interested in Text-Fluoroscopy are comparing it to the repositories listed below.
- ☆79 · Updated last year
- Repository for "Towards Codable Watermarking for Large Language Models" ☆36 · Updated last year
- ☆26 · Updated 2 weeks ago
- Multi-bit language model watermarking (NAACL 24) ☆13 · Updated 7 months ago
- ☆30 · Updated 6 months ago
- [NDSS 2025] Official code for our paper "Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Wate…" ☆33 · Updated 5 months ago
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai… ☆44 · Updated last month
- [EMNLP 24] Official implementation of CLEANGEN: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models ☆14 · Updated last month
- ☆51 · Updated 4 months ago
- Code repository for "Uncovering Safety Risks of Large Language Models through Concept Activation Vector" ☆36 · Updated 5 months ago
- [CCS 2024] Optimization-based Prompt Injection Attack to LLM-as-a-Judge ☆20 · Updated 5 months ago
- ☆25 · Updated 6 months ago
- Code for the Findings of EMNLP 2023 paper "Multi-step Jailbreaking Privacy Attacks on ChatGPT" ☆33 · Updated last year
- The most comprehensive and accurate LLM jailbreak attack benchmark by far ☆19 · Updated last month
- ☆15 · Updated 2 years ago
- [USENIX Security '24] REMARK-LLM: A robust and efficient watermarking framework for generative large language models ☆24 · Updated 6 months ago
- Code for the paper "PromptCARE: Prompt Copyright Protection by Watermark Injection and Verification", IEEE S&P 2024 ☆32 · Updated 8 months ago
- Code for the paper "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models", IEEE ICASSP 2024. Demo: //124.220.228.133:11107 ☆17 · Updated 8 months ago
- Robust natural language watermarking using invariant features ☆25 · Updated last year
- ☆17 · Updated 2 months ago
- Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks (IEEE S&P 2024) ☆33 · Updated last week
- Agent Security Bench (ASB) ☆76 · Updated 3 weeks ago
- [AAAI '25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts ☆134 · Updated 2 months ago
- Accepted by ECCV 2024 ☆125 · Updated 6 months ago
- ☆44 · Updated 8 months ago
- ☆128 · Updated 7 months ago
- ☆20 · Updated last year
- [ECCV '24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking …" ☆19 · Updated 6 months ago
- Chain of Attack: a Semantic-Driven Contextual Multi-Turn Attacker for LLM ☆29 · Updated 3 months ago
- Source code for the paper "An Unforgeable Publicly Verifiable Watermark for Large Language Models", accepted by ICLR 2024 ☆33 · Updated 11 months ago