A framework to evaluate the generalization capability of safety alignment for LLMs
☆627 · Updated Oct 9, 2025
Alternatives and similar repositories for CipherChat
Users who are interested in CipherChat are comparing it to the repositories listed below.
- A novel approach to improving the safety of large language models, enabling them to transition effectively from an unsafe to a safe state. ☆72 · Updated May 22, 2025
- ☆29 · Updated Mar 20, 2024
- [ICLR 2024] The official implementation of our ICLR 2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M… ☆438 · Updated Jan 22, 2025
- [ICSE'25] Aligning the Objective of LLM-based Program Repair ☆23 · Updated Mar 8, 2025
- ☆713 · Updated Jul 2, 2025
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models" ☆101 · Updated Mar 7, 2024
- ☆197 · Updated Nov 26, 2023
- SpyGame: An interactive multi-agent framework to evaluate intelligence with large language models :D ☆15 · Updated Nov 9, 2023
- The official implementation of our NAACL 2024 paper "A Wolf in Sheep’s Clothing: Generalized Nested Jailbreak Prompts can Fool Large Lang… ☆157 · Updated Sep 2, 2025
- Official implementation of our IWSLT 2023 paper "The MineTrans Systems for IWSLT 2023 Offline Speech Translation and Speech-to-Speech Tra… ☆16 · Updated Jul 14, 2023
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆575 · Updated Feb 27, 2026
- [ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion ☆59 · Updated Oct 1, 2025
- ☆127 · Updated Feb 3, 2025
- We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… ☆345 · Updated Feb 23, 2024
- [arXiv:2311.03191] "DeepInception: Hypnotize Large Language Model to Be Jailbreaker" ☆176 · Updated Feb 20, 2024
- ☆11 · Updated Jan 19, 2025
- Towards Safe LLM with our simple-yet-highly-effective Intention Analysis Prompting ☆20 · Updated Mar 25, 2024
- Universal and Transferable Attacks on Aligned Language Models ☆4,595 · Updated Aug 2, 2024
- An easy-to-use Python framework to generate adversarial jailbreak prompts. ☆832 · Updated Mar 30, 2026
- The repo for the paper: Exploiting the Index Gradients for Optimization-Based Jailbreaking on Large Language Models. ☆14 · Updated Dec 16, 2024
- Fine-tuning base models to build robust task-specific models ☆35 · Updated Apr 11, 2024
- Recent papers on (1) Psychology of LLMs; (2) Biases in LLMs. ☆50 · Updated Nov 3, 2023
- MTTM: Metamorphic Testing for Textual Content Moderation Software ☆32 · Updated Feb 10, 2023
- [NeurIPS'25] Official Implementation of RISE (Reinforcing Reasoning with Self-Verification) ☆32 · Updated Aug 8, 2025
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] ☆564 · Updated Apr 4, 2025
- Multilingual safety benchmark for Large Language Models ☆53 · Updated Sep 1, 2024
- Code and data for the paper: On the Humanity of Conversational AI: Evaluating the Psychological Portrayal of LLMs ☆132 · Updated Jan 24, 2026
- Papers and resources related to the security and privacy of LLMs 🤖 ☆568 · Updated Jun 8, 2025
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆65 · Updated Jan 11, 2025
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆380 · Updated Jan 23, 2025
- Code of "Improving Machine Translation with Human Feedback: An Exploration of Quality Estimation as a Reward Model" ☆23 · Updated Jun 28, 2024
- A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide… ☆1,814 · Updated this week
- Code and data for the paper: On the Reliability of Psychological Scales on Large Language Models ☆30 · Updated Dec 15, 2025
- Official Repository for ACL 2024 Paper SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding ☆152 · Updated Jul 19, 2024
- ☆47 · Updated May 9, 2024
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆473 · Updated Feb 26, 2024
- A fast + lightweight implementation of the GCG algorithm in PyTorch ☆324 · Updated May 13, 2025
- ☆14 · Updated Feb 26, 2025
- This is the starter kit for the Trojan Detection Challenge 2023 (LLM Edition), a NeurIPS 2023 competition. ☆90 · Updated May 19, 2024