Jailbreak artifacts for JailbreakBench
☆80 · Nov 6, 2024 · Updated last year
Alternatives and similar repositories for artifacts
Users interested in artifacts are comparing it to the repositories listed below.
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] · ☆535 · Apr 4, 2025 · Updated 10 months ago
- ☆698 · Jul 2, 2025 · Updated 8 months ago
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] · ☆377 · Jan 23, 2025 · Updated last year
- Code for paper "Defending against LLM Jailbreaking via Backtranslation" · ☆34 · Aug 16, 2024 · Updated last year
- ☆196 · Nov 26, 2023 · Updated 2 years ago
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models" · ☆99 · Mar 7, 2024 · Updated last year
- [ICML 2024] COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability · ☆176 · Dec 18, 2024 · Updated last year
- TAP: An automated jailbreaking method for black-box LLMs · ☆221 · Dec 10, 2024 · Updated last year
- ☆53 · May 24, 2023 · Updated 2 years ago
- Code for our NeurIPS 2024 paper "Improved Generation of Adversarial Examples Against Safety-aligned LLMs" · ☆12 · Nov 7, 2024 · Updated last year
- csl: PyTorch-based Constrained Learning · ☆12 · Jun 1, 2022 · Updated 3 years ago
- Awesome-Multilingual-LLMs-Papers · ☆31 · Jan 21, 2025 · Updated last year
- [ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion · ☆58 · Oct 1, 2025 · Updated 5 months ago
- ☆34 · Jan 25, 2024 · Updated 2 years ago
- FuseLIP: Multimodal Embeddings via Early Fusion of Discrete Tokens · ☆17 · Sep 8, 2025 · Updated 5 months ago
- ☆39 · May 17, 2025 · Updated 9 months ago
- Official Code for "Baseline Defenses for Adversarial Attacks Against Aligned Language Models" · ☆31 · Oct 26, 2023 · Updated 2 years ago
- Official repository for "Boosting Adversarial Transferability using Dynamic Cues" (ICLR 2023) · ☆20 · Aug 24, 2023 · Updated 2 years ago
- ☆13 · Jan 14, 2025 · Updated last year
- ☆35 · Dec 16, 2022 · Updated 3 years ago
- ☆15 · Jul 24, 2022 · Updated 3 years ago
- The official repository for CosPGD: a unified white-box adversarial attack for pixel-wise prediction tasks · ☆15 · May 8, 2025 · Updated 9 months ago
- Repository for reproducing `Model-Based Robust Deep Learning` · ☆16 · Jan 22, 2021 · Updated 5 years ago
- ☆34 · Nov 12, 2024 · Updated last year
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) · ☆42 · Jan 18, 2026 · Updated last month
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM jailbreaking. (NeurIPS 2024) · ☆163 · Nov 30, 2024 · Updated last year
- Robustness for Non-Parametric Classification: A Generic Attack and Defense · ☆18 · Nov 21, 2022 · Updated 3 years ago
- Towards Safe LLM with our simple-yet-highly-effective Intention Analysis Prompting · ☆20 · Mar 25, 2024 · Updated last year
- [ECCV 2024] Towards Reliable Evaluation and Fast Training of Robust Semantic Segmentation Models · ☆21 · Jul 17, 2024 · Updated last year
- ☆25 · Feb 20, 2026 · Updated last week
- Repository for the paper (AAAI 2024, Oral) "Visual Adversarial Examples Jailbreak Large Language Models" · ☆266 · May 13, 2024 · Updated last year
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts · ☆568 · Updated this week
- Repository for the "StrongREJECT for Empty Jailbreaks" paper · ☆152 · Nov 3, 2024 · Updated last year
- Official repository for the ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" · ☆151 · Jul 19, 2024 · Updated last year
- Official repo for the paper "Make Some Noise: Reliable and Efficient Single-Step Adversarial Training" (https://arxiv.org/abs/2202.01181) · ☆25 · Oct 17, 2022 · Updated 3 years ago
- [ICLR 2024] The official implementation of "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M…" · ☆430 · Jan 22, 2025 · Updated last year
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal · ☆864 · Aug 16, 2024 · Updated last year
- Code for paper "Universal Jailbreak Backdoors from Poisoned Human Feedback" · ☆66 · Apr 24, 2024 · Updated last year
- Official repository for "Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks" · ☆61 · Aug 8, 2024 · Updated last year