Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs
☆323 · Jun 7, 2024 · Updated last year
Alternatives and similar repositories for do-not-answer
Users interested in do-not-answer are comparing it to the libraries listed below.
- We jailbreak GPT-3.5 Turbo's safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… ☆348 · Feb 23, 2024 · Updated 2 years ago
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆132 · Feb 24, 2025 · Updated last year
- Code and datasets for the paper "Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment" ☆111 · Mar 8, 2024 · Updated 2 years ago
- [ICLR 2024] Paper showing properties of safety tuning and exaggerated safety. ☆93 · May 9, 2024 · Updated last year
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models" ☆103 · Mar 7, 2024 · Updated 2 years ago
- Official GitHub repo for SafetyBench, a comprehensive benchmark to evaluate LLMs' safety. [ACL 2024] ☆283 · Jul 28, 2025 · Updated 9 months ago
- ☆39 · May 21, 2024 · Updated last year
- Chinese safety prompts for evaluating and improving the safety of LLMs. ☆1,151 · Feb 27, 2024 · Updated 2 years ago
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs ☆119 · Dec 2, 2024 · Updated last year
- ☆32 · Aug 9, 2024 · Updated last year
- Official implementation of "Learning to Refuse: Towards Mitigating Privacy Risks in LLMs" ☆10 · Dec 13, 2024 · Updated last year
- Persuasive Jailbreaker: we can persuade LLMs to jailbreak them! ☆354 · Oct 17, 2025 · Updated 6 months ago
- ☆19 · Jun 21, 2025 · Updated 10 months ago
- [NAACL 2024] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey ☆111 · Aug 7, 2024 · Updated last year
- ☆24 · Dec 15, 2023 · Updated 2 years ago
- ☆20 · Feb 11, 2024 · Updated 2 years ago
- Red Queen dataset and data generation template ☆26 · Dec 26, 2025 · Updated 4 months ago
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal ☆933 · Aug 16, 2024 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆138 · Jul 8, 2024 · Updated last year
- Papers about red teaming LLMs and multimodal models. ☆163 · May 28, 2025 · Updated 11 months ago
- ☆48 · May 9, 2024 · Updated last year
- ☆45 · Mar 3, 2023 · Updated 3 years ago
- ☆166 · Sep 2, 2024 · Updated last year
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] ☆582 · Apr 4, 2025 · Updated last year
- Improving Alignment and Robustness with Circuit Breakers ☆261 · Sep 24, 2024 · Updated last year
- Official repository for the ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" ☆152 · Jul 19, 2024 · Updated last year
- An easy-to-use Python framework to generate adversarial jailbreak prompts. ☆843 · Mar 30, 2026 · Updated last month
- Code and data for the FACTOR paper ☆53 · Nov 15, 2023 · Updated 2 years ago
- SG-Bench: Evaluating LLM Safety Generalization Across Diverse Tasks and Prompt Types ☆25 · Nov 29, 2024 · Updated last year
- Repository for the paper "Leave My Images Alone: Preventing Multi-Modal Large Language Models from Analyzing Images via Visual Prompt Inj…" ☆19 · Apr 17, 2026 · Updated last week
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,840 · Jun 17, 2025 · Updated 10 months ago
- [ICML 2024] COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability ☆176 · Dec 18, 2024 · Updated last year
- Research on evaluating and aligning the values of Chinese large language models ☆557 · Jul 20, 2023 · Updated 2 years ago
- ☆27 · Oct 6, 2024 · Updated last year
- [NDSS'25 Best Technical Poster] A collection of automated evaluators for assessing jailbreak attempts. ☆191 · Apr 1, 2025 · Updated last year
- TAP: An automated jailbreaking method for black-box LLMs ☆228 · Dec 10, 2024 · Updated last year
- Official repository for the ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆108 · May 20, 2025 · Updated 11 months ago
- [NeurIPS 2024 D&B] Evaluating Copyright Takedown Methods for Language Models ☆17 · Jul 17, 2024 · Updated last year
- ☆136 · Jul 7, 2025 · Updated 9 months ago