Fish-and-Sheep / Text-Fluoroscopy
☆13 · Updated 10 months ago
Alternatives and similar repositories for Text-Fluoroscopy
Users interested in Text-Fluoroscopy are comparing it to the repositories listed below.
- ☆84 · Updated 3 months ago
- ☆32 · Updated last month
- Official code for the ACL 2024 paper "GradSafe: Detecting Unsafe Prompts for LLMs via Safety-Critical Gradient Analysis" · ☆60 · Updated last year
- ☆37 · Updated last year
- Repository for "Towards Codable Watermarking for Large Language Models" · ☆38 · Updated 2 years ago
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts · ☆184 · Updated 6 months ago
- ☆71 · Updated 7 months ago
- ☆156 · Updated last year
- Safety at Scale: A Comprehensive Survey of Large Model Safety · ☆216 · Updated last month
- Chain of Attack: a Semantic-Driven Contextual Multi-Turn Attacker for LLM · ☆39 · Updated 11 months ago
- [EMNLP 24] Official implementation of CLEANGEN: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models · ☆20 · Updated 9 months ago
- Code repository for "Uncovering Safety Risks of Large Language Models through Concept Activation Vector" · ☆47 · Updated 2 months ago
- Code for the ACM MM 2024 paper: White-box Multimodal Jailbreaks Against Large Vision-Language Models · ☆31 · Updated 11 months ago
- ☆117 · Updated 10 months ago
- S-Eval: Towards Automated and Comprehensive Safety Evaluation for Large Language Models · ☆106 · Updated 2 months ago
- ☆54 · Updated last year
- Accepted by ECCV 2024 · ☆179 · Updated last year
- ☆25 · Updated last year
- ☆37 · Updated last year
- [EMNLP 2025 Oral] IPIGuard: A Novel Tool Dependency Graph-Based Defense Against Indirect Prompt Injection in LLM Agents · ☆16 · Updated 3 months ago
- ☆155 · Updated last month
- [NAACL 2024] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey · ☆109 · Updated last year
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models · ☆221 · Updated last month
- [ICLR 2024] Official implementation of "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M…" · ☆417 · Updated 11 months ago
- Multi-bit language model watermarking (NAACL 24) · ☆17 · Updated last year
- ☆18 · Updated 3 years ago
- ☆65 · Updated 8 months ago
- ☆26 · Updated 10 months ago
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization · ☆29 · Updated last year
- Official implementation of [USENIX Sec'25] StruQ: Defending Against Prompt Injection with Structured Queries · ☆56 · Updated last month