haizelabs / redteaming-resistance-benchmark
☆44 · Updated 10 months ago
Alternatives and similar repositories for redteaming-resistance-benchmark
Users interested in redteaming-resistance-benchmark are comparing it to the repositories listed below.
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆175 · Updated this week
- [NDSS'25 Best Technical Poster] A collection of automated evaluators for assessing jailbreak attempts. ☆158 · Updated 2 months ago
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives ☆69 · Updated last year
- WMDP is an LLM proxy benchmark for hazardous knowledge in bio, cyber, and chemical security. We also release code for RMU, an unlearning method. ☆121 · Updated last week
- Papers about red teaming LLMs and Multimodal models. ☆121 · Updated last week
- TAP: An automated jailbreaking method for black-box LLMs ☆171 · Updated 5 months ago
- Collection of evals for Inspect AI ☆144 · Updated this week
- A fast + lightweight implementation of the GCG algorithm in PyTorch (see the sketch after this list) ☆240 · Updated 3 weeks ago
- A Comprehensive Assessment of Trustworthiness in GPT Models ☆294 · Updated 8 months ago
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks. ☆69 · Updated last year
- Improving Alignment and Robustness with Circuit Breakers ☆208 · Updated 8 months ago
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆98 · Updated 3 months ago
- Fluent student-teacher redteaming ☆21 · Updated 10 months ago
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs ☆77 · Updated 6 months ago
- Official Repository for ACL 2024 Paper SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding ☆133 · Updated 10 months ago
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models". ☆49 · Updated 7 months ago
- Run safety benchmarks against AI models and view detailed reports showing how well they performed. ☆92 · Updated this week
- ☆63 · Updated 11 months ago
- Dataset for the Tensor Trust project ☆40 · Updated last year
- Finding trojans in aligned LLMs. Official repository for the competition hosted at SaTML 2024. ☆113 · Updated 11 months ago
- LLM Self Defense: By Self Examination, LLMs know they are being tricked ☆34 · Updated last year
- ☆64 · Updated 3 weeks ago
- Code to generate NeuralExecs (prompt injection for LLMs) ☆22 · Updated 6 months ago
- [ICLR'24 Spotlight] A language model (LM)-based emulation framework for identifying the risks of LM agents with tool use ☆142 · Updated last year
- PAL: Proxy-Guided Black-Box Attack on Large Language Models ☆51 · Updated 9 months ago
- Red-Teaming Language Models with DSPy ☆195 · Updated 3 months ago
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization" ☆48 · Updated 2 months ago
- ☆97 · Updated last year
- AmpleGCG: Learning a Universal and Transferable Generator of Adversarial Attacks on Both Open and Closed LLM ☆64 · Updated 7 months ago
- Python package for measuring memorization in LLMs. ☆154 · Updated 6 months ago
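Several entries above build on the GCG algorithm (greedy coordinate gradient). A minimal sketch of the core idea follows, assuming a Hugging Face-style causal LM that accepts `inputs_embeds`; the function and variable names (`gcg_step`, `suffix_slice`, `target_slice`) are hypothetical illustrations, not the API of any repository listed here.

```python
# Illustrative GCG step (hypothetical code, not the linked repo's API):
# take gradients at a one-hot token encoding to rank candidate suffix-token
# swaps, then keep the single swap that most lowers the target loss.
import torch
import torch.nn.functional as F

def gcg_step(model, embed_matrix, input_ids, suffix_slice, target_slice, top_k=8):
    # One-hot encode the prompt so the loss is differentiable w.r.t. token choice.
    one_hot = F.one_hot(input_ids, num_classes=embed_matrix.size(0))
    one_hot = one_hot.to(embed_matrix.dtype).requires_grad_(True)
    embeds = (one_hot @ embed_matrix).unsqueeze(0)  # differentiable embedding lookup

    def target_loss(emb):
        logits = model(inputs_embeds=emb).logits[0]
        # Logits at position t predict token t+1; assumes the target span
        # does not start at index 0 (a prompt always precedes it).
        return F.cross_entropy(
            logits[target_slice.start - 1 : target_slice.stop - 1],
            input_ids[target_slice],
        )

    target_loss(embeds).backward()
    grad = one_hot.grad[suffix_slice]               # [suffix_len, vocab]
    # The most negative gradient entries are the most promising substitutions.
    candidates = (-grad).topk(top_k, dim=1).indices

    best_ids, best_loss = input_ids, float("inf")
    for pos in range(candidates.size(0)):
        for tok in candidates[pos]:
            trial = input_ids.clone()
            trial[suffix_slice.start + pos] = tok
            with torch.no_grad():
                loss = target_loss(model.get_input_embeddings()(trial).unsqueeze(0))
            if loss.item() < best_loss:
                best_ids, best_loss = trial, loss.item()
    return best_ids, best_loss
```

For simplicity this sketch greedily evaluates every candidate swap; published GCG implementations instead sample a batch of random substitutions from the top-k candidates per position and evaluate only that batch, repeating the step until the target string is produced.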