Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs
☆321 · Updated Jun 7, 2024
Alternatives and similar repositories for do-not-answer
Users interested in do-not-answer are also comparing it to the repositories listed below.
- We jailbreak GPT-3.5 Turbo's safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… (☆345, updated Feb 23, 2024)
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" (☆130, updated Feb 24, 2025)
- Code and datasets for the paper "Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment" (☆108, updated Mar 8, 2024)
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety. (☆93, updated May 9, 2024)
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models" (☆101, updated Mar 7, 2024)
- Official GitHub repo for SafetyBench, a comprehensive benchmark to evaluate LLMs' safety. [ACL 2024] (☆274, updated Jul 28, 2025)
- Chinese safety prompts for evaluating and improving the safety of LLMs. (☆1,136, updated Feb 27, 2024)
- Open one-stop moderation tools for safety risks, jailbreaks, and refusals of LLMs (☆113, updated Dec 2, 2024)
- Official implementation of "Learning to Refuse: Towards Mitigating Privacy Risks in LLMs" (☆10, updated Dec 13, 2024)
- Persuasive Jailbreaker: we can persuade LLMs to jailbreak them! (☆352, updated Oct 17, 2025)
- [NAACL 2024] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey (☆110, updated Aug 7, 2024)
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal (☆879, updated Aug 16, 2024)
- Red Queen dataset and data generation template (☆27, updated Dec 26, 2025)
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following (☆137, updated Jul 8, 2024)
- Papers about red-teaming LLMs and multimodal models. (☆160, updated May 28, 2025)
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] (☆546, updated Apr 4, 2025)
- Improving Alignment and Robustness with Circuit Breakers (☆259, updated Sep 24, 2024)
- Official repository for the ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" (☆151, updated Jul 19, 2024)
- An easy-to-use Python framework for generating adversarial jailbreak prompts. (☆826, updated Mar 27, 2025)
- Code and data for the FACTOR paper (☆53, updated Nov 15, 2023)
- SG-Bench: Evaluating LLM Safety Generalization Across Diverse Tasks and Prompt Types (☆24, updated Nov 29, 2024)
- Repository for the paper "Refusing Safe Prompts for Multi-modal Large Language Models" (☆18, updated Oct 16, 2024)
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" (☆1,827, updated Jun 17, 2025)
- [ICML 2024] COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability (☆176, updated Dec 18, 2024)
- Research on evaluating and aligning the values of Chinese large language models (☆555, updated Jul 20, 2023)
- S-Eval: Towards Automated and Comprehensive Safety Evaluation for Large Language Models (☆111, updated Feb 13, 2026)
- [NDSS '25 Best Technical Poster] A collection of automated evaluators for assessing jailbreak attempts. (☆188, updated Apr 1, 2025)
- Towards safe LLMs with our simple-yet-highly-effective Intention Analysis prompting (☆20, updated Mar 25, 2024)
- TAP: An automated jailbreaking method for black-box LLMs (☆224, updated Dec 10, 2024)