☆136 · Jul 7, 2025 · Updated 9 months ago
Alternatives and similar repositories for strong_reject
Users that are interested in strong_reject are comparing it to the libraries listed below.
- Repository for the paper "StrongREJECT for Empty Jailbreaks" · ☆156 · Nov 3, 2024 · Updated last year
- Codebase for "Obfuscated Activations Bypass LLM Latent-Space Defenses" · ☆31 · Feb 11, 2025 · Updated last year
- [ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion · ☆59 · Oct 1, 2025 · Updated 7 months ago
- A fast, lightweight implementation of the GCG algorithm in PyTorch · ☆330 · May 13, 2025 · Updated 11 months ago
- A Python SDK for LLM fine-tuning and inference on RunPod infrastructure · ☆27 · Apr 23, 2026 · Updated last week
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal · ☆936 · Aug 16, 2024 · Updated last year
- Official repository for the paper "Gradient-based Jailbreak Images for Multimodal Fusion Models" (https://arxiv.org/abs/2410.03489) · ☆19 · Oct 22, 2024 · Updated last year
- The official repository for the guided jailbreak benchmark · ☆29 · Jul 28, 2025 · Updated 9 months ago
- ☆65 · May 21, 2025 · Updated 11 months ago
- ☆40 · May 17, 2025 · Updated 11 months ago
- [ICML 2025] Official source code for the paper "FlipAttack: Jailbreak LLMs via Flipping" · ☆172 · May 2, 2025 · Updated last year
- [NDSS'25] The official implementation of safety misalignment · ☆19 · Jan 8, 2025 · Updated last year
- ☆728 · Jul 2, 2025 · Updated 10 months ago
- [NDSS'25 Best Technical Poster] A collection of automated evaluators for assessing jailbreak attempts · ☆191 · Apr 1, 2025 · Updated last year
- ☆129 · Feb 3, 2025 · Updated last year
- Official implementation of the paper "DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers" · ☆66 · Aug 25, 2024 · Updated last year
- Official repository for the ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" · ☆152 · Jul 19, 2024 · Updated last year
- We jailbreak GPT-3.5 Turbo's safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… · ☆348 · Feb 23, 2024 · Updated 2 years ago
- ☆131 · Dec 3, 2025 · Updated 5 months ago
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs) · ☆180 · Oct 27, 2023 · Updated 2 years ago
- Code for the ICCV 2025 paper "IDEATOR: Jailbreaking and Benchmarking Large Vision-Language Models Using Themselves" · ☆17 · Jul 11, 2025 · Updated 9 months ago
- ☆21 · Jul 26, 2025 · Updated 9 months ago
- AmpleGCG: Learning a Universal and Transferable Generator of Adversarial Attacks on Both Open and Closed LLM · ☆86 · Nov 3, 2024 · Updated last year
- A Python library for guardrail model evaluation · ☆35 · Oct 9, 2025 · Updated 6 months ago
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" · ☆133 · Feb 24, 2025 · Updated last year
- ☆31 · Oct 23, 2024 · Updated last year
- [NeurIPS25 & ICML25 Workshop on Reliable and Responsible Foundation Models] A Simple Baseline Achieving Over 90% Success Rate Against the… · ☆94 · Feb 3, 2026 · Updated 3 months ago
- AIR-Bench 2024 is a safety benchmark that aligns with emerging government regulations and company policies · ☆30 · Aug 14, 2024 · Updated last year
- Code and data to go with the Zhu et al. paper "An Objective for Nuanced LLM Jailbreaks" · ☆36 · Apr 8, 2026 · Updated 3 weeks ago
- [ICML 2025] X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP · ☆44 · Feb 3, 2026 · Updated 3 months ago
- ☆18 · Mar 30, 2025 · Updated last year
- Independent robustness evaluation of "Improving Alignment and Robustness with Short Circuiting" · ☆17 · Apr 15, 2025 · Updated last year
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction" · ☆379 · Jun 13, 2025 · Updated 10 months ago
- Does Refusal Training in LLMs Generalize to the Past Tense? [ICLR 2025] · ☆79 · Jan 23, 2025 · Updated last year
- TAP: An automated jailbreaking method for black-box LLMs · ☆228 · Dec 10, 2024 · Updated last year
- Code repository for "Uncovering Safety Risks of Large Language Models through Concept Activation Vector" · ☆48 · Oct 13, 2025 · Updated 6 months ago
- ☆121 · Apr 27, 2025 · Updated last year
- ☆19 · Aug 19, 2025 · Updated 8 months ago
- Official code for "What Makes and Breaks Safety Fine-tuning? A Mechanistic Study" (NeurIPS 2024) · ☆12 · Oct 31, 2024 · Updated last year