Jailbreak artifacts for JailbreakBench
☆91 · Updated Nov 6, 2024
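These artifacts are typically consumed through the companion `jailbreakbench` Python package. Below is a minimal usage sketch, assuming the `read_artifact` helper and the PAIR / vicuna-13b-v1.5 artifact names described in the JailbreakBench documentation; field names may differ in the version you install.

```python
# Hypothetical sketch: load a stored jailbreak artifact and inspect one entry.
# Assumes `pip install jailbreakbench` and the read_artifact API from its docs.
import jailbreakbench as jbb

artifact = jbb.read_artifact(method="PAIR", model_name="vicuna-13b-v1.5")

entry = artifact.jailbreaks[0]   # one jailbreak record (prompt, response, metadata)
print(entry.prompt)              # the adversarial prompt that was submitted
print(entry.jailbroken)          # whether the judge flagged the response as a jailbreak
```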
Alternatives and similar repositories for artifacts
Users interested in artifacts are comparing it to the repositories listed below.
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] · ☆584 · Updated Apr 4, 2025
- ☆728 · Updated Jul 2, 2025
- Code for paper "Defending aginast LLM Jailbreaking via Backtranslation"☆34Aug 16, 2024Updated last year
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] · ☆385 · Updated Jan 23, 2025
- A fast + lightweight implementation of the GCG algorithm in PyTorch · ☆330 · Updated May 13, 2025
- ☆199 · Updated Nov 26, 2023
- [ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion · ☆59 · Updated Oct 1, 2025
- Official Code for "Baseline Defenses for Adversarial Attacks Against Aligned Language Models" · ☆33 · Updated Oct 26, 2023
- TAP: An automated jailbreaking method for black-box LLMs · ☆228 · Updated Dec 10, 2024
- ☆52 · Updated May 24, 2023
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models" · ☆104 · Updated Mar 7, 2024
- [ICML 2024] COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability · ☆176 · Updated Dec 18, 2024
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning · ☆97 · Updated May 23, 2024
- Code for our NeurIPS 2024 paper "Improved Generation of Adversarial Examples Against Safety-aligned LLMs" · ☆12 · Updated Nov 7, 2024
- csl: PyTorch-based Constrained Learning · ☆11 · Updated Jun 1, 2022
- ☆40 · Updated May 17, 2025
- Improving Alignment and Robustness with Circuit Breakers · ☆261 · Updated Sep 24, 2024
- Official repository for the paper "Gradient-based Jailbreak Images for Multimodal Fusion Models" (https//arxiv.org/abs/2410.03489)☆19Oct 22, 2024Updated last year
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) · ☆43 · Updated Jan 18, 2026
- ☆166 · Updated Sep 2, 2024
- Awesome-Multilingual-LLMs-Papers · ☆34 · Updated Jan 21, 2025
- ☆13 · Updated Jan 14, 2025
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts · ☆579 · Updated Feb 27, 2026
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM jailbreaking. (NeurIPS 2024) · ☆161 · Updated Nov 30, 2024
- ☆15 · Updated Jul 24, 2022
- Official repository for "Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks" · ☆62 · Updated Aug 8, 2024
- [ACL 2024] Official repo of the paper "ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs" · ☆97 · Updated Aug 15, 2025
- ☆34 · Updated Jan 25, 2024
- [ICLR 2024] The official implementation of our ICLR 2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models" · ☆439 · Updated Jan 22, 2025
- Repository for the paper "Visual Adversarial Examples Jailbreak Large Language Models" (AAAI 2024, Oral) · ☆270 · Updated May 13, 2024
- Towards Safe LLM with our simple-yet-highly-effective Intention Analysis Prompting · ☆21 · Updated Mar 25, 2024
- ☆35 · Updated Dec 16, 2022
- A Domain-Specific Language, Jailbreak Attack Synthesizer and Dynamic LLM Redteaming Toolkit · ☆27 · Updated Dec 5, 2024
- Official repository for "Boosting Adversarial Transferability using Dynamic Cues" (ICLR 2023) · ☆20 · Updated Aug 24, 2023
- Repository for reproducing `Model-Based Robust Deep Learning` · ☆17 · Updated Jan 22, 2021
- An easy-to-use Python framework to generate adversarial jailbreak prompts. · ☆843 · Updated Mar 30, 2026
- Blogs that I'm actively following. · ☆15 · Updated Sep 17, 2023
- Robustness for Non-Parametric Classification: A Generic Attack and Defense · ☆18 · Updated Nov 21, 2022
- ☆53 · Updated Feb 13, 2026