☆134 · Jul 7, 2025 · Updated 9 months ago
Alternatives and similar repositories for strong_reject
Users interested in strong_reject are comparing it to the libraries listed below.
- Repository for the "StrongREJECT for Empty Jailbreaks" paper · ☆155 · Nov 3, 2024 · Updated last year
- Codebase for "Obfuscated Activations Bypass LLM Latent-Space Defenses" · ☆31 · Feb 11, 2025 · Updated last year
- [ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion · ☆59 · Oct 1, 2025 · Updated 6 months ago
- A fast, lightweight implementation of the GCG algorithm in PyTorch · ☆324 · May 13, 2025 · Updated 11 months ago
- A Python SDK for LLM fine-tuning and inference on RunPod infrastructure · ☆25 · Apr 6, 2026 · Updated last week
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal · ☆915 · Aug 16, 2024 · Updated last year
- The official repository for the guided jailbreak benchmark · ☆29 · Jul 28, 2025 · Updated 8 months ago
- ☆39 · May 17, 2025 · Updated 10 months ago
- [NDSS'25] The official implementation of safety misalignment. · ☆18 · Jan 8, 2025 · Updated last year
- [ICML 2025] Official source code for the paper "FlipAttack: Jailbreak LLMs via Flipping" · ☆171 · May 2, 2025 · Updated 11 months ago
- ☆716 · Jul 2, 2025 · Updated 9 months ago
- ☆127 · Feb 3, 2025 · Updated last year
- [NDSS'25 Best Technical Poster] A collection of automated evaluators for assessing jailbreak attempts · ☆191 · Apr 1, 2025 · Updated last year
- Official implementation of the paper "DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers" · ☆66 · Aug 25, 2024 · Updated last year
- Official repository for the ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" · ☆152 · Jul 19, 2024 · Updated last year
- We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… · ☆346 · Feb 23, 2024 · Updated 2 years ago
- ☆125 · Dec 3, 2025 · Updated 4 months ago
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs). · ☆178 · Oct 27, 2023 · Updated 2 years ago
- Code for the ICCV 2025 paper "IDEATOR: Jailbreaking and Benchmarking Large Vision-Language Models Using Themselves" · ☆17 · Jul 11, 2025 · Updated 9 months ago
- ☆62 · May 21, 2025 · Updated 10 months ago
- ☆21 · Jul 26, 2025 · Updated 8 months ago
- AmpleGCG: Learning a Universal and Transferable Generator of Adversarial Attacks on Both Open and Closed LLMs · ☆86 · Nov 3, 2024 · Updated last year
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" · ☆132 · Feb 24, 2025 · Updated last year
- A Python library for guardrail model evaluation. · ☆35 · Oct 9, 2025 · Updated 6 months ago
- ☆31 · Oct 23, 2024 · Updated last year
- [NeurIPS 2025 & ICML 2025 Workshop on Reliable and Responsible Foundation Models] A Simple Baseline Achieving Over 90% Success Rate Against the… · ☆91 · Feb 3, 2026 · Updated 2 months ago
- AIR-Bench 2024 is a safety benchmark that aligns with emerging government regulations and company policies · ☆30 · Aug 14, 2024 · Updated last year
- Multi-dimensional analysis of orthogonal safety directions in LLM alignment · ☆21 · Mar 20, 2025 · Updated last year
- [ICML 2025] X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP · ☆43 · Feb 3, 2026 · Updated 2 months ago
- ☆18 · Mar 30, 2025 · Updated last year
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction". · ☆370 · Jun 13, 2025 · Updated 10 months ago
- Independent robustness evaluation of "Improving Alignment and Robustness with Short Circuiting" · ☆17 · Apr 15, 2025 · Updated 11 months ago
- Does Refusal Training in LLMs Generalize to the Past Tense? [ICLR 2025] · ☆79 · Jan 23, 2025 · Updated last year
- TAP: An automated jailbreaking method for black-box LLMs · ☆225 · Dec 10, 2024 · Updated last year
- Code repository for "Uncovering Safety Risks of Large Language Models through Concept Activation Vector" · ☆47 · Oct 13, 2025 · Updated 6 months ago
- ☆120 · Apr 27, 2025 · Updated 11 months ago
- ☆18 · Aug 19, 2025 · Updated 7 months ago
- Official code for "What Makes and Breaks Safety Fine-tuning? A Mechanistic Study" (NeurIPS 2024) · ☆12 · Oct 31, 2024 · Updated last year
- Code repo of our paper "Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis" (https://arxiv.org/abs/2406.10794… · ☆24 · Jul 26, 2024 · Updated last year