A framework to evaluate the generalization capability of safety alignment for LLMs
☆629 · Updated Oct 9, 2025
Alternatives and similar repositories for CipherChat
Users interested in CipherChat are comparing it to the repositories listed below.
- A novel approach to improving the safety of large language models, enabling them to transition effectively from an unsafe to a safe state. ☆72 · Updated May 22, 2025
- ☆29 · Updated Mar 20, 2024
- [ICLR 2024] The official implementation of our ICLR 2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M… ☆439 · Updated Jan 22, 2025
- [ICSE'25] Aligning the Objective of LLM-based Program Repair ☆23 · Updated Mar 8, 2025
- ☆728 · Updated Jul 2, 2025
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models" ☆103 · Updated Mar 7, 2024
- ☆199 · Updated Nov 26, 2023
- SpyGame: An interactive multi-agent framework to evaluate intelligence with large language models ☆15 · Updated Nov 9, 2023
- The official implementation of our NAACL 2024 paper "A Wolf in Sheep's Clothing: Generalized Nested Jailbreak Prompts can Fool Large Lang… ☆158 · Updated Sep 2, 2025
- Official implementation of our IWSLT 2023 paper "The MineTrans Systems for IWSLT 2023 Offline Speech Translation and Speech-to-Speech Tra… ☆16 · Updated Jul 14, 2023
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆578 · Updated Feb 27, 2026
- [ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion ☆59 · Updated Oct 1, 2025
- We jailbreak GPT-3.5 Turbo's safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… ☆348 · Updated Feb 23, 2024
- ☆129 · Updated Feb 3, 2025
- ☆11 · Updated Jan 19, 2025
- [arXiv:2311.03191] "DeepInception: Hypnotize Large Language Model to Be Jailbreaker" ☆176 · Updated Feb 20, 2024
- Universal and Transferable Attacks on Aligned Language Models ☆4,638 · Updated Aug 2, 2024
- Towards Safe LLM with our simple-yet-highly-effective Intention Analysis Prompting ☆21 · Updated Mar 25, 2024
- An easy-to-use Python framework to generate adversarial jailbreak prompts. ☆843 · Updated Mar 30, 2026
- The repo for the paper "Exploiting the Index Gradients for Optimization-Based Jailbreaking on Large Language Models" ☆14 · Updated Dec 16, 2024
- Fine-tuning base models to build robust task-specific models ☆35 · Updated Apr 11, 2024
- Recent papers on (1) Psychology of LLMs; (2) Biases in LLMs. ☆50 · Updated Nov 3, 2023
- MTTM: Metamorphic Testing for Textual Content Moderation Software ☆32 · Updated Feb 10, 2023
- [NeurIPS'25] Official implementation of RISE (Reinforcing Reasoning with Self-Verification) ☆32 · Updated Aug 8, 2025
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] ☆582 · Updated Apr 4, 2025
- Multilingual safety benchmark for Large Language Models ☆53 · Updated Sep 1, 2024
- Code and data for the paper "On the Humanity of Conversational AI: Evaluating the Psychological Portrayal of LLMs" ☆132 · Updated Jan 24, 2026
- Papers and resources related to the security and privacy of LLMs 🤖 ☆573 · Updated Jun 8, 2025
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆65 · Updated Jan 11, 2025
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆385 · Updated Jan 23, 2025
- Code for "Improving Machine Translation with Human Feedback: An Exploration of Quality Estimation as a Reward Model" ☆22 · Updated Jun 28, 2024
- Code and data for the paper "On the Reliability of Psychological Scales on Large Language Models" ☆30 · Updated Dec 15, 2025
- A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide… ☆1,833 · Updated Apr 18, 2026
- Official repository for the ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" ☆152 · Updated Jul 19, 2024
- ☆48 · Updated May 9, 2024
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆485 · Updated Feb 26, 2024
- A fast, lightweight implementation of the GCG algorithm in PyTorch ☆330 · Updated May 13, 2025
- ☆14 · Updated Feb 26, 2025
- The starter kit for the Trojan Detection Challenge 2023 (LLM Edition), a NeurIPS 2023 competition. ☆91 · Updated May 19, 2024