haizelabs / redteaming-resistance-benchmark
☆42 · Updated 8 months ago
Alternatives and similar repositories for redteaming-resistance-benchmark:
Users interested in redteaming-resistance-benchmark are comparing it to the libraries listed below.
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆130 · Updated 3 weeks ago
- Papers about red teaming LLMs and Multimodal models. ☆111 · Updated 5 months ago
- Finding trojans in aligned LLMs. Official repository for the competition hosted at SaTML 2024. ☆111 · Updated 10 months ago
- Run safety benchmarks against AI models and view detailed reports showing how well they performed. ☆88 · Updated this week
- TAP: An automated jailbreaking method for black-box LLMs ☆165 · Updated 4 months ago
- Improving Alignment and Robustness with Circuit Breakers ☆197 · Updated 7 months ago
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models". ☆45 · Updated 6 months ago
- Code to generate NeuralExecs (prompt injection for LLMs) ☆20 · Updated 5 months ago
- Dataset for the Tensor Trust project ☆39 · Updated last year
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives ☆68 · Updated last year
- Red-Teaming Language Models with DSPy ☆183 · Updated 2 months ago
- A fast + lightweight implementation of the GCG algorithm in PyTorch ☆225 · Updated 2 months ago
- AmpleGCG: Learning a Universal and Transferable Generator of Adversarial Attacks on Both Open and Closed LLMs ☆62 · Updated 5 months ago
- Official implementation of AdvPrompter, https://arxiv.org/abs/2404.16873 ☆152 · Updated 11 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆109 · Updated last year
- WMDP is an LLM proxy benchmark for hazardous knowledge in bio, cyber, and chemical security. We also release code for RMU, an unlearning method. ☆111 · Updated 11 months ago
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". ☆102 · Updated last year
- LLM Self Defense: By Self Examination, LLMs know they are being tricked ☆32 · Updated 11 months ago
- Official repository for the ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" ☆129 · Updated 9 months ago
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆294 · Updated 3 months ago
- [NDSS'25 Best Technical Poster] A collection of automated evaluators for assessing jailbreak attempts. ☆148 · Updated 3 weeks ago
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks. ☆66 · Updated last year
- A Comprehensive Assessment of Trustworthiness in GPT Models ☆284 · Updated 7 months ago
- [ICLR'24 Spotlight] A language model (LM)-based emulation framework for identifying the risks of LM agents with tool use ☆141 · Updated last year
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization" ☆44 · Updated 2 weeks ago
- Code to break Llama Guard ☆31 · Updated last year
- Fluent student-teacher redteaming ☆20 · Updated 9 months ago
- This repository provides a benchmark for prompt injection attacks and defenses ☆188 · Updated last week