ltroin / llm_attack_defense_arena
☆86 · Sep 5, 2025 · Updated 5 months ago
Alternatives and similar repositories for llm_attack_defense_arena
Users interested in llm_attack_defense_arena are comparing it to the repositories listed below.
- Djinn-Agent: A lightweight CLI tool for seamless interaction with Claude's advanced computer-use capabilities, automating complex tasks f… · ☆27 · Oct 28, 2024 · Updated last year
- Source Code Search · ☆11 · Nov 16, 2023 · Updated 2 years ago
- This repository provides studies on the security of language models for code (CodeLMs). · ☆50 · Feb 26, 2025 · Updated 11 months ago
- ☆20 · Feb 11, 2024 · Updated 2 years ago
- ☆17 · Jan 5, 2026 · Updated last month
- [ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion · ☆58 · Oct 1, 2025 · Updated 4 months ago
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization · ☆29 · Jul 9, 2024 · Updated last year
- ☆145 · Sep 12, 2025 · Updated 5 months ago
- ICL backdoor attack · ☆17 · Nov 4, 2024 · Updated last year
- Official repository for "Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks" · ☆61 · Aug 8, 2024 · Updated last year
- Code for the paper "Defending against LLM Jailbreaking via Backtranslation" · ☆34 · Aug 16, 2024 · Updated last year
- A lightweight library for large language model (LLM) jailbreaking defense. · ☆61 · Sep 11, 2025 · Updated 5 months ago
- Synthesizing realistic and diverse text datasets from augmented LLMs · ☆16 · Jan 26, 2026 · Updated 2 weeks ago
- MCPSecBench: A Systematic Security Benchmark and Playground for Testing Model Context Protocols · ☆27 · Sep 24, 2025 · Updated 4 months ago
- ☆33 · Mar 13, 2025 · Updated 11 months ago
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts · ☆565 · Sep 24, 2024 · Updated last year
- An easy-to-use Python framework to generate adversarial jailbreak prompts. · ☆815 · Mar 27, 2025 · Updated 10 months ago
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM Jailbreaking. (NeurIPS 2024) · ☆162 · Nov 30, 2024 · Updated last year
- ☆122 · Nov 13, 2023 · Updated 2 years ago
- ☆40 · May 21, 2024 · Updated last year
- Whispers in the Machine: Confidentiality in Agentic Systems · ☆41 · Dec 11, 2025 · Updated 2 months ago
- ☆28 · Mar 20, 2024 · Updated last year
- ☆15 · Feb 21, 2024 · Updated last year
- ☆43 · May 23, 2023 · Updated 2 years ago
- Towards Safe LLM with our simple-yet-highly-effective Intention Analysis Prompting · ☆20 · Mar 25, 2024 · Updated last year
- ☆193 · Nov 26, 2023 · Updated 2 years ago
- Improved techniques for optimization-based jailbreaking on large language models (ICLR 2025) · ☆142 · Apr 7, 2025 · Updated 10 months ago
- ☆22 · Sep 17, 2024 · Updated last year
- The official implementation of our NAACL 2024 paper "A Wolf in Sheep’s Clothing: Generalized Nested Jailbreak Prompts can Fool Large Lang… · ☆152 · Sep 2, 2025 · Updated 5 months ago
- Repository for the "StrongREJECT for Empty Jailbreaks" paper · ☆151 · Nov 3, 2024 · Updated last year
- Safety at Scale: A Comprehensive Survey of Large Model Safety · ☆225 · Feb 3, 2026 · Updated last week
- PAL: Proxy-Guided Black-Box Attack on Large Language Models · ☆57 · Aug 17, 2024 · Updated last year
- Data for our paper "Defending ChatGPT against Jailbreak Attack via Self-Reminder" · ☆20 · Oct 26, 2023 · Updated 2 years ago
- ☆21 · Mar 17, 2025 · Updated 10 months ago
- Code for "Variational Model Inversion Attacks", Wang et al., NeurIPS 2021 · ☆22 · Dec 10, 2021 · Updated 4 years ago
- ☆25 · Aug 18, 2023 · Updated 2 years ago
- Using Explanations as a Tool for Advanced LLMs · ☆69 · Sep 11, 2024 · Updated last year
- Official implementation of AdvPrompter https://arxiv.org/abs/2404.16873 · ☆176 · May 6, 2024 · Updated last year
- A collection of prompt injection mitigation techniques. · ☆27 · Aug 19, 2023 · Updated 2 years ago