Code and datasets for the paper "Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment"
☆108 · Updated Mar 8, 2024
Alternatives and similar repositories for red-instruct
Users interested in red-instruct are comparing it to the repositories listed below.
- We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… ☆341 · Updated Feb 23, 2024
- ☆48 · Updated May 9, 2024
- Our EMNLP 2022 paper on VIP-Based Prompting for Parameter-Efficient Learning ☆10 · Updated Oct 22, 2022
- ☆28 · Updated Sep 21, 2024
- ☆22 · Updated Mar 16, 2023
- ☆196 · Updated Nov 26, 2023
- This repository contains the implementation of the paper "KNOT: Knowledge Distillation using Optimal Transport for Solving NLP Tasks" ☆15 · Updated Sep 15, 2022
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs). ☆175 · Updated Oct 27, 2023
- ☆698 · Updated Jul 2, 2025
- [WSDM 2026] LookAhead Tuning: Safer Language Models via Partial Answer Previews ☆17 · Updated Dec 14, 2025
- ☆121 · Updated Feb 3, 2025
- ☆24 · Updated Jun 17, 2025
- PAL: Proxy-Guided Black-Box Attack on Large Language Models ☆57 · Updated Aug 17, 2024
- Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs ☆314 · Updated Jun 7, 2024
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety. ☆93 · Updated May 9, 2024
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? (CVPR 2021) ☆14 · Updated Jul 16, 2021
- ☆20 · Updated Nov 15, 2024
- Official repository for the ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" ☆151 · Updated Jul 19, 2024
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆128 · Updated Feb 24, 2025
- TAP: An automated jailbreaking method for black-box LLMs ☆221 · Updated Dec 10, 2024
- [arXiv:2311.03191] "DeepInception: Hypnotize Large Language Model to Be Jailbreaker" ☆172 · Updated Feb 20, 2024
- This repository contains the dataset and the PyTorch implementations of the models from the paper CIDER: Commonsense Inference for Dialog… ☆27 · Updated Oct 30, 2022
- LLM experiments done during SERI MATS, focusing on activation steering / interpreting activation spaces ☆101 · Updated Sep 21, 2023
- Official implementation of AdvPrompter (https://arxiv.org/abs/2404.16873) ☆179 · Updated May 6, 2024
- ☆39 · Updated May 21, 2024
- Benchmark evaluation code for "SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal" (ICLR 2025) ☆76 · Updated Mar 1, 2025
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] ☆535 · Updated Apr 4, 2025
- [ICLR 2024] The official implementation of our paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M…" ☆430 · Updated Jan 22, 2025
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆191 · Updated Jan 16, 2025
- An easy-to-use Python framework to generate adversarial jailbreak prompts. ☆815 · Updated Mar 27, 2025
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization ☆29 · Updated Jul 9, 2024
- Persuasive Jailbreaker: we can persuade LLMs to jailbreak them! ☆350 · Updated Oct 17, 2025
- ☆31 · Updated Jul 14, 2023
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,816 · Updated Jun 17, 2025
- Code to break Llama Guard ☆32 · Updated Dec 7, 2023
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆568 · Updated this week
- SG-Bench: Evaluating LLM Safety Generalization Across Diverse Tasks and Prompt Types ☆25 · Updated Nov 29, 2024
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal ☆864 · Updated Aug 16, 2024
- Test LLMs against jailbreaks and unprecedented harms ☆40 · Updated Oct 19, 2024