theshi-1128 / llm-defense
An easy-to-use Python framework to defend against jailbreak prompts.
☆19Updated 5 months ago
Alternatives and similar repositories for llm-defense:
Users that are interested in llm-defense are comparing it to the libraries listed below
- ☆41Updated 2 months ago
- The most comprehensive and accurate LLM jailbreak attack benchmark by far☆14Updated 3 months ago
- An open-source toolkit for textual backdoor attack and defense (NeurIPS 2022 D&B, Spotlight)☆170Updated last year
- Accepted by ECCV 2024☆104Updated 4 months ago
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts☆112Updated last week
- LLMs can be Dangerous Reasoners: Analyzing-based Jailbreak Attack on Large Language Models☆17Updated last week
- ☆43Updated 9 months ago
- Code and data of the ACL-IJCNLP 2021 paper "Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger"☆41Updated 2 years ago
- [USENIX Security'24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a…☆67Updated 4 months ago
- Code for Findings-EMNLP 2023 paper: Multi-step Jailbreaking Privacy Attacks on ChatGPT☆30Updated last year
- [NAACL2024] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey☆87Updated 6 months ago
- ☆25Updated 5 months ago
- Accepted by IJCAI-24 Survey Track☆192Updated 6 months ago
- 😎 up-to-date & curated list of awesome Attacks on Large-Vision-Language-Models papers, methods & resources.☆225Updated this week
- S-Eval: Automatic and Adaptive Test Generation for Benchmarking Safety Evaluation of Large Language Models☆52Updated last week
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models☆121Updated this week
- BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks on Large Language Models☆108Updated last week
- ☆24Updated 4 months ago
- [ICLR 2024]Data for "Multilingual Jailbreak Challenges in Large Language Models"☆67Updated 11 months ago
- ☆78Updated 10 months ago
- ☆74Updated 3 weeks ago
- A curated list of papers & resources linked to data poisoning, backdoor attacks and defenses against them (no longer maintained)☆232Updated last month
- Chain of Attack: a Semantic-Driven Contextual Multi-Turn attacker for LLM☆28Updated last month
- Repo for SemStamp (NAACL2024) and k-SemStamp (ACL2024)☆17Updated 2 months ago
- Repository for the Paper (AAAI 2024, Oral) --- Visual Adversarial Examples Jailbreak Large Language Models☆203Updated 9 months ago
- Red Queen Dataset and data generation template☆12Updated 4 months ago
- [NDSS 2025] Official code for our paper "Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Wate…☆30Updated 3 months ago
- Repository for Towards Codable Watermarking for Large Language Models☆35Updated last year
- Code for ACM MM2024 paper: White-box Multimodal Jailbreaks Against Large Vision-Language Models☆22Updated 2 months ago