controllability / jailbreak-evaluation
The jailbreak-evaluation is an easy-to-use Python package for language model jailbreak evaluation.
☆22Updated 5 months ago
Alternatives and similar repositories for jailbreak-evaluation:
Users that are interested in jailbreak-evaluation are comparing it to the libraries listed below
- LLM security and privacy☆48Updated 6 months ago
- Ferret: Faster and Effective Automated Red Teaming with Reward-Based Scoring Technique☆14Updated 8 months ago
- ☆31Updated 5 months ago
- ☆59Updated 5 months ago
- PAL: Proxy-Guided Black-Box Attack on Large Language Models☆50Updated 8 months ago
- ☆67Updated last month
- Weak-to-Strong Jailbreaking on Large Language Models☆73Updated last year
- Risks and targets for assessing LLMs & LLM vulnerabilities☆30Updated 10 months ago
- ☆64Updated 3 months ago
- ☆52Updated 2 months ago
- ☆93Updated last month
- [NDSS'25 Best Technical Poster] A collection of automated evaluators for assessing jailbreak attempts.☆148Updated 3 weeks ago
- ☆44Updated 11 months ago
- Implementation of BEAST adversarial attack for language models (ICML 2024)☆82Updated 11 months ago
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks.☆66Updated last year
- A collection of prompt injection mitigation techniques.☆22Updated last year
- Papers about red teaming LLMs and Multimodal models.☆111Updated 5 months ago
- This repository provides a benchmark for prompt Injection attacks and defenses☆188Updated last week
- ☆23Updated 8 months ago
- An Execution Isolation Architecture for LLM-Based Agentic Systems☆70Updated 2 months ago
- ☆59Updated 9 months ago
- Code repo for the paper: Attacking Vision-Language Computer Agents via Pop-ups☆29Updated 4 months ago
- Whispers in the Machine: Confidentiality in LLM-integrated Systems☆35Updated last month
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models".☆45Updated 6 months ago
- The opensoure repository of FuzzLLM☆25Updated 11 months ago
- Official Repository for ACL 2024 Paper SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding☆129Updated 9 months ago
- Automated Safety Testing of Large Language Models☆14Updated 2 months ago
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives☆68Updated last year
- [NeurIPS'24] Protecting Your LLMs with Information Bottleneck☆14Updated 5 months ago
- Codes and datasets of the paper Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment☆97Updated last year