A framework to evaluate the generalization capability of safety alignment for LLMs
☆625 · Updated Oct 9, 2025
Alternatives and similar repositories for CipherChat
Users interested in CipherChat are comparing it to the repositories listed below.
- A novel approach to improving the safety of large language models, enabling them to transition effectively from an unsafe to a safe state. ☆72 · Updated May 22, 2025
- ☆28 · Updated Mar 20, 2024
- [ICLR 2024] The official implementation of our ICLR 2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models" ☆434 · Updated Jan 22, 2025
- [ICSE'25] Aligning the Objective of LLM-based Program Repair ☆23 · Updated Mar 8, 2025
- ☆704 · Updated Jul 2, 2025
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models" ☆101 · Updated Mar 7, 2024
- ☆196 · Updated Nov 26, 2023
- The official implementation of our NAACL 2024 paper "A Wolf in Sheep’s Clothing: Generalized Nested Jailbreak Prompts can Fool Large Language Models Easily" ☆155 · Updated Sep 2, 2025
- Official implementation of our IWSLT 2023 paper "The MineTrans Systems for IWSLT 2023 Offline Speech Translation and Speech-to-Speech Tra… ☆16 · Updated Jul 14, 2023
- SpyGame: An interactive multi-agent framework to evaluate intelligence with large language models :D ☆15 · Updated Nov 9, 2023
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆572 · Updated Feb 27, 2026
- [ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion ☆59 · Updated Oct 1, 2025
- We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… ☆345 · Updated Feb 23, 2024
- [arXiv:2311.03191] "DeepInception: Hypnotize Large Language Model to Be Jailbreaker" ☆172 · Updated Feb 20, 2024
- ☆11 · Updated Jan 19, 2025
- Towards Safe LLM with our simple-yet-highly-effective Intention Analysis Prompting ☆20 · Updated Mar 25, 2024
- ☆124 · Updated Feb 3, 2025
- Universal and Transferable Attacks on Aligned Language Models ☆4,568 · Updated Aug 2, 2024
- An easy-to-use Python framework to generate adversarial jailbreak prompts. ☆826 · Updated Mar 27, 2025
- The repo for the paper: Exploiting the Index Gradients for Optimization-Based Jailbreaking on Large Language Models. ☆14 · Updated Dec 16, 2024
- Fine-tuning base models to build robust task-specific models ☆34 · Updated Apr 11, 2024
- Recent papers on (1) Psychology of LLMs; (2) Biases in LLMs. ☆50 · Updated Nov 3, 2023
- MTTM: Metamorphic Testing for Textual Content Moderation Software ☆32 · Updated Feb 10, 2023
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] ☆546 · Updated Apr 4, 2025
- [NeurIPS'25] Official Implementation of RISE (Reinforcing Reasoning with Self-Verification) ☆32 · Updated Aug 8, 2025
- Multilingual safety benchmark for Large Language Models ☆53 · Updated Sep 1, 2024
- Code and data for the paper: On the Humanity of Conversational AI: Evaluating the Psychological Portrayal of LLMs ☆130 · Updated Jan 24, 2026
- Papers and resources related to the security and privacy of LLMs 🤖 ☆570 · Updated Jun 8, 2025
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆65 · Updated Jan 11, 2025
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆380 · Updated Jan 23, 2025
- Code for "Improving Machine Translation with Human Feedback: An Exploration of Quality Estimation as a Reward Model" ☆23 · Updated Jun 28, 2024
- A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide… ☆1,796 · Updated Mar 13, 2026
- Code and data for the paper: On the Reliability of Psychological Scales on Large Language Models ☆30 · Updated Dec 15, 2025
- Official Repository for ACL 2024 Paper SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding ☆151 · Updated Jul 19, 2024
- ☆48 · Updated May 9, 2024
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆465 · Updated Feb 26, 2024
- A fast + lightweight implementation of the GCG algorithm in PyTorch ☆321 · Updated May 13, 2025
- ☆14 · Updated Feb 26, 2025
- [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts ☆196 · Updated Jun 26, 2025