Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs
☆321 · Updated last year (Jun 7, 2024)
Alternatives and similar repositories for do-not-answer
Users interested in do-not-answer are comparing it to the repositories listed below.
- We jailbreak GPT-3.5 Turbo's safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… ☆345 · Updated 2 years ago (Feb 23, 2024)
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆131 · Updated last year (Feb 24, 2025)
- Code and datasets for the paper "Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment" ☆108 · Updated 2 years ago (Mar 8, 2024)
- [ICLR 2024] Paper showing properties of safety tuning and exaggerated safety. ☆93 · Updated last year (May 9, 2024)
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models" ☆101 · Updated 2 years ago (Mar 7, 2024)
- Official GitHub repo for SafetyBench, a comprehensive benchmark to evaluate LLMs' safety. [ACL 2024] ☆281 · Updated 8 months ago (Jul 28, 2025)
- ☆39 · Updated last year (May 21, 2024)
- Chinese safety prompts for evaluating and improving the safety of LLMs. ☆1,146 · Updated 2 years ago (Feb 27, 2024)
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs ☆113 · Updated last year (Dec 2, 2024)
- ☆32 · Updated last year (Aug 9, 2024)
- Official implementation of "Learning to Refuse: Towards Mitigating Privacy Risks in LLMs" ☆10 · Updated last year (Dec 13, 2024)
- Persuasive Jailbreaker: we can persuade LLMs to jailbreak them! ☆355 · Updated 5 months ago (Oct 17, 2025)
- ☆19 · Updated 9 months ago (Jun 21, 2025)
- [NAACL 2024] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey ☆111 · Updated last year (Aug 7, 2024)
- ☆24 · Updated 2 years ago (Dec 15, 2023)
- ☆20 · Updated 2 years ago (Feb 11, 2024)
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal ☆894 · Updated last year (Aug 16, 2024)
- Red Queen dataset and data generation template ☆27 · Updated 3 months ago (Dec 26, 2025)
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆137 · Updated last year (Jul 8, 2024)
- Papers about red-teaming LLMs and multimodal models. ☆160 · Updated 10 months ago (May 28, 2025)
- ☆47 · Updated last year (May 9, 2024)
- ☆45 · Updated 3 years ago (Mar 3, 2023)
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] ☆564 · Updated last year (Apr 4, 2025)
- ☆167 · Updated last year (Sep 2, 2024)
- Improving Alignment and Robustness with Circuit Breakers ☆259 · Updated last year (Sep 24, 2024)
- Official repository for the ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" ☆152 · Updated last year (Jul 19, 2024)
- An easy-to-use Python framework to generate adversarial jailbreak prompts. ☆832 · Updated last week (Mar 30, 2026)
- Code and data for the FACTOR paper ☆53 · Updated 2 years ago (Nov 15, 2023)
- SG-Bench: Evaluating LLM Safety Generalization Across Diverse Tasks and Prompt Types ☆24 · Updated last year (Nov 29, 2024)
- Repository for the paper "Refusing Safe Prompts for Multi-modal Large Language Models" ☆19 · Updated last year (Oct 16, 2024)
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,834 · Updated 9 months ago (Jun 17, 2025)
- [ICML 2024] COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability ☆176 · Updated last year (Dec 18, 2024)
- ☆27 · Updated last year (Oct 6, 2024)
- Towards Safe LLM with our simple-yet-highly-effective Intention Analysis Prompting ☆20 · Updated 2 years ago (Mar 25, 2024)
- TAP: An automated jailbreaking method for black-box LLMs ☆226 · Updated last year (Dec 10, 2024)
- Official repository for the ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆108 · Updated 10 months ago (May 20, 2025)
- [NeurIPS 2024 D&B] Evaluating Copyright Takedown Methods for Language Models ☆17 · Updated last year (Jul 17, 2024)
- ☆133 · Updated 9 months ago (Jul 7, 2025)
- Universal and Transferable Attacks on Aligned Language Models ☆4,601 · Updated last year (Aug 2, 2024)