☆86 (updated Sep 5, 2025)
Alternatives and similar repositories for llm_attack_defense_arena
Users interested in llm_attack_defense_arena are comparing it to the libraries listed below.
- Djinn-Agent: A lightweight CLI tool for seamless interaction with Claude's advanced computer-use capabilities, automating complex tasks f… ☆27 (updated Oct 28, 2024)
- Source Code Search ☆11 (updated Nov 16, 2023)
- This repository provides studies on the security of language models for code (CodeLMs). ☆51 (updated Feb 26, 2025)
- ☆20 (updated Feb 11, 2024)
- [ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion ☆58 (updated Oct 1, 2025)
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization ☆29 (updated Jul 9, 2024)
- ☆146 (updated Sep 12, 2025)
- Official repository for "Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks" ☆61 (updated Aug 8, 2024)
- Code for the paper "Defending against LLM Jailbreaking via Backtranslation" ☆34 (updated Aug 16, 2024)
- Synthesizing realistic and diverse text datasets from augmented LLMs ☆16 (updated Jan 26, 2026)
- ☆32 (updated Mar 13, 2025)
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆571 (updated Feb 27, 2026)
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM jailbreaking. (NeurIPS 2024) ☆163 (updated Nov 30, 2024)
- MCPSecBench: A Systematic Security Benchmark and Playground for Testing Model Context Protocols ☆30 (updated Sep 24, 2025)
- Whispers in the Machine: Confidentiality in Agentic Systems ☆42 (updated Dec 11, 2025)
- ☆39 (updated May 21, 2024)
- ☆128 (updated Nov 13, 2023)
- Code for the ACL 2024 paper "PrivLM-Bench: A Multi-level Privacy Evaluation Benchmark for Language Models" ☆16 (updated Feb 5, 2025)
- ☆15 (updated Feb 21, 2024)
- ☆43 (updated May 23, 2023)
- Towards Safe LLM with our simple-yet-highly-effective Intention Analysis Prompting ☆20 (updated Mar 25, 2024)
- ☆197 (updated Nov 26, 2023)
- Improved techniques for optimization-based jailbreaking on large language models (ICLR 2025) ☆142 (updated Apr 7, 2025)
- ☆22 (updated Sep 17, 2024)
- Safety at Scale: A Comprehensive Survey of Large Model Safety ☆228 (updated Feb 3, 2026)
- Repository for the "StrongREJECT for Empty Jailbreaks" paper ☆152 (updated Nov 3, 2024)
- PAL: Proxy-Guided Black-Box Attack on Large Language Models ☆57 (updated Aug 17, 2024)
- Data for our paper "Defending ChatGPT against Jailbreak Attack via Self-Reminder" ☆20 (updated Oct 26, 2023)
- This repository provides a benchmark for prompt injection attacks and defenses in LLMs ☆396 (updated Oct 29, 2025)
- ☆21 (updated Mar 17, 2025)
- ☆24 (updated Aug 18, 2023)
- Using Explanations as a Tool for Advanced LLMs ☆69 (updated Sep 11, 2024)
- Official implementation of AdvPrompter (https://arxiv.org/abs/2404.16873) ☆179 (updated May 6, 2024)
- Code and datasets for the paper "Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment" ☆108 (updated Mar 8, 2024)
- ☆32 (updated Jan 26, 2025)
- ☆23 (updated Sep 20, 2023)
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆379 (updated Jan 23, 2025)
- A collection of prompt injection mitigation techniques. ☆27 (updated Aug 19, 2023)
- MASTERKEY is a framework designed to explore and exploit vulnerabilities in large language model chatbots by automating jailbreak attacks… ☆33 (updated Sep 12, 2024)