ltroin / llm_attack_defense_arena
☆79 · Updated last year
Alternatives and similar repositories for llm_attack_defense_arena:
Users interested in llm_attack_defense_arena are comparing it to the repositories listed below.
- ☆26 · Updated 6 months ago
- [EMNLP 24] Official Implementation of CLEANGEN: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models · ☆15 · Updated last month
- ☆51 · Updated 4 months ago
- ☆31 · Updated 7 months ago
- ☆18 · Updated 10 months ago
- Repository for Towards Codable Watermarking for Large Language Models · ☆36 · Updated last year
- ☆55 · Updated 4 months ago
- ☆15 · Updated 2 years ago
- ☆14 · Updated last year
- Red Queen Dataset and data generation template · ☆15 · Updated 6 months ago
- ☆14 · Updated last year
- ☆17 · Updated 2 months ago
- Official Code for ACL 2024 paper "GradSafe: Detecting Unsafe Prompts for LLMs via Safety-Critical Gradient Analysis" · ☆56 · Updated 6 months ago
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization · ☆22 · Updated 9 months ago
- Distribution Preserving Backdoor Attack in Self-supervised Learning · ☆15 · Updated last year
- Backdooring Neural Code Search · ☆13 · Updated last year
- Code for ACM MM2024 paper: White-box Multimodal Jailbreaks Against Large Vision-Language Models · ☆26 · Updated 4 months ago
- This is the code repository of our submission: Understanding the Dark Side of LLMs’ Intrinsic Self-Correction. · ☆56 · Updated 4 months ago
- This is the code repository for "Uncovering Safety Risks of Large Language Models through Concept Activation Vector" · ☆36 · Updated 5 months ago
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts · ☆135 · Updated 2 months ago
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai… · ☆44 · Updated last month
- Chain of Attack: a Semantic-Driven Contextual Multi-Turn attacker for LLM · ☆30 · Updated 3 months ago
- [NDSS 2025] "CLIBE: Detecting Dynamic Backdoors in Transformer-based NLP Models" · ☆12 · Updated 4 months ago
- ☆21 · Updated last year
- This repository is the official implementation of the paper "ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning… · ☆17 · Updated last year
- Multi-bit language model watermarking (NAACL 24) · ☆13 · Updated 7 months ago
- Comprehensive Assessment of Trustworthiness in Multimodal Foundation Models · ☆18 · Updated last month
- The most comprehensive and accurate LLM jailbreak attack benchmark by far · ☆19 · Updated last month
- ☆20 · Updated last year
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models · ☆144 · Updated 2 months ago