[ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning
☆98 · May 23, 2024 · Updated last year
Alternatives and similar repositories for RAIN
Users interested in RAIN are comparing it to the repositories listed below.
- Official Repository for ACL 2024 Paper SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding ☆151 · Jul 19, 2024 · Updated last year
- Towards Safe LLM with our simple-yet-highly-effective Intention Analysis Prompting ☆20 · Mar 25, 2024 · Updated last year
- LLM Self Defense: By Self Examination, LLMs know they are being tricked ☆51 · May 21, 2024 · Updated last year
- [ICLR 2024] The official implementation of our paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models" ☆434 · Jan 22, 2025 · Updated last year
- Code for our paper "Defending ChatGPT against Jailbreak Attack via Self-Reminder" in Nature Machine Intelligence (NMI) ☆57 · Nov 13, 2023 · Updated 2 years ago
- ☆127 · Nov 13, 2023 · Updated 2 years ago
- ☆197 · Nov 26, 2023 · Updated 2 years ago
- Official Code for "Baseline Defenses for Adversarial Attacks Against Aligned Language Models" ☆31 · Oct 26, 2023 · Updated 2 years ago
- Chain of Attack: a Semantic-Driven Contextual Multi-Turn attacker for LLM ☆39 · Jan 17, 2025 · Updated last year
- ☆12 · Sep 29, 2024 · Updated last year
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization ☆29 · Jul 9, 2024 · Updated last year
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking … ☆36 · Oct 23, 2024 · Updated last year
- ☆23 · Oct 25, 2024 · Updated last year
- Implementation of paper 'Defending Large Language Models against Jailbreak Attacks via Semantic Smoothing' ☆23 · Jun 9, 2024 · Updated last year
- ☆30 · Jun 19, 2023 · Updated 2 years ago
- ☆31 · Jul 14, 2023 · Updated 2 years ago
- MASTERKEY is a framework designed to explore and exploit vulnerabilities in large language model chatbots by automating jailbreak attacks… ☆35 · Sep 12, 2024 · Updated last year
- We jailbreak GPT-3.5 Turbo's safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… ☆345 · Feb 23, 2024 · Updated 2 years ago
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal ☆879 · Aug 16, 2024 · Updated last year
- Red Queen Dataset and data generation template ☆27 · Dec 26, 2025 · Updated 2 months ago
- AmpleGCG: Learning a Universal and Transferable Generator of Adversarial Attacks on Both Open and Closed LLM ☆85 · Nov 3, 2024 · Updated last year
- Jailbreak artifacts for JailbreakBench ☆83 · Nov 6, 2024 · Updated last year
- Persuasive Jailbreaker: we can persuade LLMs to jailbreak them! ☆352 · Oct 17, 2025 · Updated 5 months ago
- Code for the paper "Deep Neural Networks with Multi-Branch Architectures Are Less Non-Convex" ☆21 · Jul 25, 2020 · Updated 5 years ago
- Official repository for ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆107 · May 20, 2025 · Updated 10 months ago
- [ICML 2024] COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability ☆176 · Dec 18, 2024 · Updated last year
- [ICLR 2025] BlueSuffix: Reinforced Blue Teaming for Vision-Language Models Against Jailbreak Attacks ☆31 · Nov 2, 2025 · Updated 4 months ago
- [USENIX'24] Prompt Stealing Attacks Against Text-to-Image Generation Models ☆51 · Jan 11, 2025 · Updated last year
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models ☆92 · May 2, 2025 · Updated 10 months ago
- [NDSS'25] The official implementation of safety misalignment. ☆17 · Jan 8, 2025 · Updated last year
- Implementation for "RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content" ☆23 · Jul 28, 2024 · Updated last year
- TAP: An automated jailbreaking method for black-box LLMs ☆224 · Dec 10, 2024 · Updated last year
- Official implementation of the ICLR'24 paper "Curiosity-driven Red Teaming for Large Language Models" (https://openreview.net/pdf?id=4KqkizX… ☆90 · Mar 15, 2024 · Updated 2 years ago
- [TACL] Code for "Red Teaming Language Model Detectors with Language Models" ☆24 · Nov 24, 2023 · Updated 2 years ago
- Official Repo for MageBench: Bridging Large Multimodal Models to Agents ☆22 · Jan 8, 2025 · Updated last year
- Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks ☆32 · Jul 9, 2024 · Updated last year
- ☆60 · Mar 9, 2023 · Updated 3 years ago
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆573 · Feb 27, 2026 · Updated 3 weeks ago
- ☆14 · Oct 6, 2024 · Updated last year