controllability / jailbreak-evaluation
jailbreak-evaluation is an easy-to-use Python package for language model jailbreak evaluation.
☆23 · Updated 7 months ago
Alternatives and similar repositories for jailbreak-evaluation
Users interested in jailbreak-evaluation are comparing it to the libraries listed below.
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks. ☆69 · Updated last year
- Risks and targets for assessing LLMs & LLM vulnerabilities ☆30 · Updated last year
- ☆65 · Updated 4 months ago
- LLM security and privacy ☆49 · Updated 7 months ago
- ☆71 · Updated 6 months ago
- ☆34 · Updated 6 months ago
- Whispers in the Machine: Confidentiality in Agentic Systems ☆37 · Updated 2 weeks ago
- A benchmark for prompt injection detection systems. ☆115 · Updated 3 weeks ago
- A collection of prompt injection mitigation techniques. ☆23 · Updated last year
- [NDSS'25 Best Technical Poster] A collection of automated evaluators for assessing jailbreak attempts. ☆158 · Updated 2 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆110 · Updated last year
- ☆63 · Updated 11 months ago
- A prompt injection game to collect data for robust ML research ☆61 · Updated 4 months ago
- Implementation of the BEAST adversarial attack for language models (ICML 2024) ☆87 · Updated last year
- ☆64 · Updated 3 weeks ago
- Automated Safety Testing of Large Language Models ☆15 · Updated 4 months ago
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a… ☆56 · Updated 3 months ago
- Code repo for the paper: Attacking Vision-Language Computer Agents via Pop-ups ☆32 · Updated 5 months ago
- ☆109 · Updated 2 weeks ago
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives ☆69 · Updated last year
- Ferret: Faster and Effective Automated Red Teaming with Reward-Based Scoring Technique ☆16 · Updated 9 months ago
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models". ☆49 · Updated 7 months ago
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models ☆76 · Updated last month
- PAL: Proxy-Guided Black-Box Attack on Large Language Models ☆51 · Updated 9 months ago
- A re-implementation of the "Red Teaming Language Models with Language Models" paper by Perez et al., 2022 ☆31 · Updated last year
- The official repository of the paper "On the Exploitability of Instruction Tuning". ☆63 · Updated last year
- LLM | Security | Operations in one GitHub repo with good links and pictures. ☆29 · Updated 5 months ago
- ☆9 · Updated last year
- Official repository for the paper "ALERT: A Comprehensive Benchmark for Assessing Large Language Models’ Safety through Red Teaming" ☆42 · Updated 8 months ago
- This repository provides a benchmark for prompt injection attacks and defenses. ☆216 · Updated this week