Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs
☆314, updated Jun 7, 2024
Alternatives and similar repositories for do-not-answer
Users interested in do-not-answer are comparing it to the repositories listed below.
- We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… (☆339, updated Feb 23, 2024)
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" (☆127, updated Feb 24, 2025)
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety (☆93, updated May 9, 2024)
- Code and datasets for the paper "Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment" (☆108, updated Mar 8, 2024)
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models" (☆98, updated Mar 7, 2024)
- Official GitHub repo for SafetyBench, a comprehensive benchmark to evaluate LLMs' safety [ACL 2024] (☆272, updated Jul 28, 2025)
- ☆39, updated May 21, 2024
- ☆32, updated Aug 9, 2024
- Chinese safety prompts for evaluating and improving the safety of LLMs (☆1,129, updated Feb 27, 2024)
- Persuasive Jailbreaker: we can persuade LLMs to jailbreak them! (☆350, updated Oct 17, 2025)
- ☆48, updated May 9, 2024
- Red Queen Dataset and data generation template (☆26, updated Dec 26, 2025)
- ☆24, updated Dec 15, 2023
- [NAACL 2024] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey (☆109, updated Aug 7, 2024)
- [ICML 2024] COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability (☆176, updated Dec 18, 2024)
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following (☆137, updated Jul 8, 2024)
- Official implementation of "Learning to Refuse: Towards Mitigating Privacy Risks in LLMs" (☆10, updated Dec 13, 2024)
- ☆19, updated Jun 21, 2025
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs (☆107, updated Dec 2, 2024)
- S-Eval: Towards Automated and Comprehensive Safety Evaluation for Large Language Models (☆109, updated Feb 13, 2026)
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal (☆858, updated Aug 16, 2024)
- ☆44, updated Mar 3, 2023
- ☆10, updated Sep 13, 2022
- Understanding the correlation between different LLM benchmarks (☆29, updated Jan 11, 2024)
- Official repository for the ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" (☆107, updated May 20, 2025)
- Official repository for the ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" (☆151, updated Jul 19, 2024)
- ☆20, updated Feb 11, 2024
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" (☆1,816, updated Jun 17, 2025)
- Research on evaluating and aligning the values of Chinese large language models (☆553, updated Jul 20, 2023)
- ☆164, updated Sep 2, 2024
- [NeurIPS 2024 D&B] Evaluating Copyright Takedown Methods for Language Models (☆17, updated Jul 17, 2024)
- ☆229, updated Feb 23, 2021
- Improving Alignment and Robustness with Circuit Breakers (☆258, updated Sep 24, 2024)
- [ICML 2025] Official repository for the paper "OR-Bench: An Over-Refusal Benchmark for Large Language Models" (☆23, updated Mar 4, 2025)
- [NDSS'25 Best Technical Poster] A collection of automated evaluators for assessing jailbreak attempts (☆187, updated Apr 1, 2025)
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] (☆531, updated Apr 4, 2025)
- Code and data for the FACTOR paper (☆53, updated Nov 15, 2023)
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models (☆19, updated Feb 18, 2025)
- WMDP is an LLM proxy benchmark for hazardous knowledge in bio, cyber, and chemical security. We also release code for RMU, an unlearning m… (☆159, updated May 29, 2025)
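
Most of the benchmarks listed above are used the same way as Do-Not-Answer itself: a set of risky or borderline prompts is sent to a model and the responses are collected for refusal/harmfulness scoring. The sketch below illustrates that loop only under stated assumptions; the Hugging Face dataset id `LibrAI/do-not-answer`, its `question`/`risk_area` columns, and the example model are assumptions rather than details taken from this page.

```python
# Minimal sketch: collect model responses to Do-Not-Answer prompts for later
# refusal/harmfulness scoring. Dataset id, column names, and the model are
# assumptions; adjust them to the actual release you are using.
from datasets import load_dataset
from transformers import pipeline

prompts = load_dataset("LibrAI/do-not-answer", split="train")  # assumed dataset id/split
chat = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")  # any chat model

records = []
for row in prompts.select(range(5)):  # small sample for illustration
    messages = [{"role": "user", "content": row["question"]}]
    out = chat(messages, max_new_tokens=128)
    reply = out[0]["generated_text"][-1]["content"]  # assistant turn appended by the pipeline
    records.append({
        "risk_area": row["risk_area"],
        "question": row["question"],
        "response": reply,
    })

# `records` can then be passed to a refusal/harmfulness evaluator, e.g. one of
# the moderation or judge tools in the list above.
print(records[0])
```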