Papers about red teaming LLMs and multimodal models.
☆160 · May 28, 2025 · Updated 9 months ago
Alternatives and similar repositories for OpenRedTeaming
Users that are interested in OpenRedTeaming are comparing it to the libraries listed below.
- ☆23 · May 20, 2025 · Updated 10 months ago
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal ☆879 · Aug 16, 2024 · Updated last year
- [EMNLP 2024] Holistic Automated Red Teaming for Large Language Models through Top-Down Test Case Generation and Multi-turn Interaction ☆17 · Nov 9, 2024 · Updated last year
- [NAACL 2025] The official implementation of paper "Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language M… ☆28 · Mar 14, 2024 · Updated 2 years ago
- ☆31 · Feb 23, 2025 · Updated last year
- Q&A dataset for many-shot jailbreaking ☆14 · Jul 19, 2024 · Updated last year
- [ICLR 2024] The official implementation of our ICLR2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M… ☆434 · Jan 22, 2025 · Updated last year
- MCPSecBench: A Systematic Security Benchmark and Playground for Testing Model Context Protocols ☆33 · Mar 4, 2026 · Updated 3 weeks ago
- ☆25 · Mar 16, 2025 · Updated last year
- Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs ☆320 · Jun 7, 2024 · Updated last year
- An easy-to-use Python framework to generate adversarial jailbreak prompts. ☆830 · Mar 27, 2025 · Updated last year
- Do you want to learn AI Security but don't know where to start? Take a look at this map. ☆31 · Apr 23, 2024 · Updated last year
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆380 · Jan 23, 2025 · Updated last year
- We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… ☆345 · Feb 23, 2024 · Updated 2 years ago
- AIBOM Workshop RSA 2024 ☆15 · May 20, 2024 · Updated last year
- An implementation for MLLM oversensitivity evaluation ☆18 · Nov 16, 2024 · Updated last year
- FeatureAlignment = Alignment + Mechanistic Interpretability ☆35 · Mar 8, 2025 · Updated last year
- Official implementation of ICLR'24 paper, "Curiosity-driven Red Teaming for Large Language Models" (https://openreview.net/pdf?id=4KqkizX… ☆88 · Mar 15, 2024 · Updated 2 years ago
- [NDSS'25 Best Technical Poster] A collection of automated evaluators for assessing jailbreak attempts. ☆189 · Apr 1, 2025 · Updated 11 months ago
- ☆18 · Apr 7, 2025 · Updated 11 months ago
- LLM red teaming datasets from the paper 'Student-Teacher Prompting for Red Teaming to Improve Guardrails' for the ART of Safety Workshop … ☆24 · Oct 12, 2023 · Updated 2 years ago
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions. ☆26 · May 16, 2024 · Updated last year
- ☆711 · Jul 2, 2025 · Updated 8 months ago
- ☆14 · May 4, 2024 · Updated last year
- ☆34 · Nov 12, 2024 · Updated last year
- The official dataset of paper "Goal-Oriented Prompt Attack and Safety Evaluation for LLMs". ☆21 · Feb 5, 2024 · Updated 2 years ago
- A library for red-teaming LLM applications with LLMs. ☆29 · Oct 11, 2024 · Updated last year
- ☆165 · Sep 2, 2024 · Updated last year
- Applying SAEs for fine-grained control ☆26 · Dec 15, 2024 · Updated last year
- Official Repository for ACL 2024 Paper SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding ☆152 · Jul 19, 2024 · Updated last year
- ArxivDaily ☆13 · Updated this week
- [COLM 2024] JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and fur… ☆88 · May 9, 2025 · Updated 10 months ago
- ☆10 · Oct 31, 2022 · Updated 3 years ago
- Röttger et al. (2025): "MSTS: A Multimodal Safety Test Suite for Vision-Language Models" ☆16 · Mar 31, 2025 · Updated 11 months ago
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM Jailbreaking. (NeurIPS 2024) ☆160 · Nov 30, 2024 · Updated last year
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization ☆29 · Jul 9, 2024 · Updated last year
- A curation of awesome tools, documents and projects about LLM Security. ☆1,554 · Aug 20, 2025 · Updated 7 months ago
- ☆25 · Sep 3, 2025 · Updated 6 months ago
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆130 · Feb 24, 2025 · Updated last year