allenai / wildguard
Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs
☆106 · Dec 2, 2024 · Updated last year
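The repository ships a small Python package for running the WildGuard moderation classifier over prompt/response pairs. Below is a minimal sketch, assuming the `load_wildguard()` / `classify()` interface described in the project README; check the repository for the current API, model requirements, and output fields before relying on it.

```python
# Minimal usage sketch for the WildGuard moderation classifier.
# Assumes the interface described in the allenai/wildguard README
# (load_wildguard() returning a classifier with a .classify() method);
# verify against the repository before depending on it.
from wildguard import load_wildguard

# Load the classifier (downloads model weights on first use).
guard = load_wildguard()

# Each item is a user prompt plus an optional model response to be judged.
items = [
    {
        "prompt": "How can I hotwire a car?",
        "response": "Sorry, I can't help with that.",
    },
]

# Returns one result per item with harmfulness and refusal labels.
results = guard.classify(items)
for item, result in zip(items, results):
    print(item["prompt"], "->", result)
```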
Alternatives and similar repositories for wildguard
Users interested in wildguard are comparing it to the repositories listed below.
- A simple evaluation of generative language models and safety classifiers. ☆85 · Dec 11, 2025 · Updated 2 months ago
- Q&A dataset for many-shot jailbreaking ☆14 · Jul 19, 2024 · Updated last year
- An official codebase for "NormLens: Reading Books is Great, But Not if You Are Driving! Visually Grounded Reasoning about Defeasible Comm…" ☆10 · May 9, 2024 · Updated last year
- Audio Jailbreak: An Open Comprehensive Benchmark for Jailbreaking Large Audio-Language Models ☆28 · Oct 6, 2025 · Updated 4 months ago
- ☆18 · Mar 30, 2025 · Updated 10 months ago
- Chain of Attack: a Semantic-Driven Contextual Multi-Turn attacker for LLM ☆39 · Jan 17, 2025 · Updated last year
- Towards Safe LLM with our simple-yet-highly-effective Intention Analysis Prompting ☆20 · Mar 25, 2024 · Updated last year
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] ☆527 · Apr 4, 2025 · Updated 10 months ago
- Official repository for "Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks" ☆61 · Aug 8, 2024 · Updated last year
- 【ACL 2024】 SALAD benchmark & MD-Judge ☆170 · Mar 8, 2025 · Updated 11 months ago
- ☆23 · Jun 13, 2024 · Updated last year
- ☆115 · Jul 2, 2024 · Updated last year
- Official repository for ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆106 · May 20, 2025 · Updated 8 months ago
- ☆193 · Nov 26, 2023 · Updated 2 years ago
- [ACL 2025] Data and Code for Paper VLSBench: Unveiling Visual Leakage in Multimodal Safety ☆53 · Jul 21, 2025 · Updated 6 months ago
- ☆121 · Feb 3, 2025 · Updated last year
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal ☆847 · Aug 16, 2024 · Updated last year
- Dialogue Action Tokens: Steering Language Models in Goal-Directed Dialogue with a Multi-Turn Planner ☆30 · Jun 27, 2024 · Updated last year
- ☆40 · Aug 10, 2024 · Updated last year
- [ACL24] Official Repo of Paper `ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs` ☆93 · Aug 15, 2025 · Updated 5 months ago
- ☆20 · Oct 5, 2025 · Updated 4 months ago
- Persuasive Jailbreaker: we can persuade LLMs to jailbreak them! ☆349 · Oct 17, 2025 · Updated 3 months ago
- Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs ☆314 · Jun 7, 2024 · Updated last year
- Automated, schema-based JSON unpacking to Polars objects ☆13 · Sep 14, 2025 · Updated 5 months ago
- Code & Data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] ☆109 · Sep 27, 2024 · Updated last year
- Official implementation of paper: DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers ☆65 · Aug 25, 2024 · Updated last year
- Repository for "StrongREJECT for Empty Jailbreaks" paper ☆151 · Nov 3, 2024 · Updated last year
- ☆57 · Oct 5, 2022 · Updated 3 years ago
- ☆18 · Mar 25, 2024 · Updated last year
- A novel jailbreak attack unveiling an overlooked attack surface inherent in the chain-of-thought reasoning trajectory of LLMs ☆22 · Sep 18, 2025 · Updated 4 months ago
- Our research proposes a novel MoGU framework that improves LLMs' safety while preserving their usability. ☆18 · Jan 14, 2025 · Updated last year
- Official GitHub repo for SafetyBench, a comprehensive benchmark to evaluate LLMs' safety. [ACL 2024] ☆272 · Jul 28, 2025 · Updated 6 months ago
- Measuring and Controlling Persona Drift in Language Model Dialogs ☆21 · Feb 26, 2024 · Updated last year
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆65 · Jan 11, 2025 · Updated last year
- ☆164 · Sep 2, 2024 · Updated last year
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization" ☆84 · Jul 24, 2025 · Updated 6 months ago
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆377 · Jan 23, 2025 · Updated last year
- ☆691 · Jul 2, 2025 · Updated 7 months ago
- Code and datasets for the paper "Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment" ☆108 · Mar 8, 2024 · Updated last year