Papers about red teaming LLMs and Multimodal models.
☆162May 28, 2025Updated 10 months ago
Alternatives and similar repositories for OpenRedTeaming
Users that are interested in OpenRedTeaming are comparing it to the libraries listed below. We may earn a commission when you buy through links labeled 'Ad' on this page.
Sorting:
- ☆23May 20, 2025Updated 11 months ago
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal☆924Aug 16, 2024Updated last year
- [EMNLP 2024] Holistic Automated Red Teaming for Large Language Models through Top-Down Test Case Generation and Multi-turn Interaction☆17Nov 9, 2024Updated last year
- [NAACL 2025] The official implementation of paper "Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language M…☆28Mar 14, 2024Updated 2 years ago
- Q&A dataset for many-shot jailbreaking☆14Jul 19, 2024Updated last year
- Managed Database hosting by DigitalOcean • AdPostgreSQL, MySQL, MongoDB, Kafka, Valkey, and OpenSearch available. Automatically scale up storage and focus on building your apps.
- [ICLR 2024] The official implementation of our ICLR2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M…☆436Jan 22, 2025Updated last year
- MCPSecBench: A Systematic Security Benchmark and Playground for Testing Model Context Protocols☆35Mar 4, 2026Updated last month
- Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs☆323Jun 7, 2024Updated last year
- An easy-to-use Python framework to generate adversarial jailbreak prompts.☆834Mar 30, 2026Updated 3 weeks ago
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025]☆381Jan 23, 2025Updated last year
- We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20…☆348Feb 23, 2024Updated 2 years ago
- local b=game:GetService("Players")local c=game:GetService("ReplicatedStorage")local d=game:GetService("StarterGui")local e=game:GetServic…☆12Jan 27, 2022Updated 4 years ago
- AIBOM Workshop RSA 2024☆15May 20, 2024Updated last year
- FeatureAlignment = Alignment + Mechanistic Interpretability☆35Mar 8, 2025Updated last year
- Managed hosting for WordPress and PHP on Cloudways • AdManaged hosting for WordPress, Magento, Laravel, or PHP apps, on multiple cloud providers. Deploy in minutes on Cloudways by DigitalOcean.
- A reading list for large models safety, security, and privacy (including Awesome LLM Security, Safety, etc.).☆1,937Apr 2, 2026Updated 2 weeks ago
- [NDSS'25 Best Technical Poster] A collection of automated evaluators for assessing jailbreak attempts.☆190Apr 1, 2025Updated last year
- Official implementation of ICLR'24 paper, "Curiosity-driven Red Teaming for Large Language Models" (https://openreview.net/pdf?id=4KqkizX…☆89Mar 15, 2024Updated 2 years ago
- ☆18Apr 7, 2025Updated last year
- LLM red teaming datasets from the paper 'Student-Teacher Prompting for Red Teaming to Improve Guardrails' for the ART of Safety Workshop …☆24Oct 12, 2023Updated 2 years ago
- ☆23Jun 13, 2024Updated last year
- Papers and resources related to the security and privacy of LLMs 🤖☆571Jun 8, 2025Updated 10 months ago
- Official github repo for ACLUE, an evaluation benchmark focused on ancient Chinese language comprehension☆33Mar 20, 2024Updated 2 years ago
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions.☆25May 16, 2024Updated last year
- Bare Metal GPUs on DigitalOcean Gradient AI • AdPurpose-built for serious AI teams training foundational models, running large-scale inference, and pushing the boundaries of what's possible.
- ☆14May 4, 2024Updated last year
- ☆59Jun 7, 2024Updated last year
- ☆34Nov 12, 2024Updated last year
- ☆719Jul 2, 2025Updated 9 months ago
- The official dataset of paper "Goal-Oriented Prompt Attack and Safety Evaluation for LLMs".☆21Feb 5, 2024Updated 2 years ago
- A library for red-teaming LLM applications with LLMs.☆29Oct 11, 2024Updated last year
- A curated list of awesome AI Red Teaming resources and tools.☆32May 12, 2023Updated 2 years ago
- Applying SAEs for fine-grained control☆26Dec 15, 2024Updated last year
- Official Repository for ACL 2024 Paper SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding☆152Jul 19, 2024Updated last year
- Wordpress hosting with auto-scaling - Free Trial • AdFully Managed hosting for WordPress and WooCommerce businesses that need reliable, auto-scalable performance. Cloudways SafeUpdates now available.
- [EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences"☆20Oct 2, 2024Updated last year
- ☆10Oct 31, 2022Updated 3 years ago
- [COLM 2024] JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and fur…☆88May 9, 2025Updated 11 months ago
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization☆29Jul 9, 2024Updated last year
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM Jailbreaking. (NeurIPS 2024)☆160Nov 30, 2024Updated last year
- A curation of awesome tools, documents and projects about LLM Security.☆1,565Aug 20, 2025Updated 8 months ago
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models"☆132Feb 24, 2025Updated last year