Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM Jailbreaking. (NeurIPS 2024)
☆163 · Updated Nov 30, 2024
Alternatives and similar repositories for JailTrickBench
Users interested in JailTrickBench are also looking at the repositories listed below:
- JAILJUDGE: A comprehensive evaluation benchmark which includes a wide range of risk scenarios with complex malicious prompts (e.g., synth… ☆58 · Updated Dec 13, 2024
- ☆12 · Updated Feb 19, 2024
- Official repository for the ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" ☆151 · Updated Jul 19, 2024
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] ☆535 · Updated Apr 4, 2025
- TAP: An automated jailbreaking method for black-box LLMs ☆221 · Updated Dec 10, 2024
- Official code for the KDD '24 paper "HiFGL: A Hierarchical Framework for Cross-silo Cross-device Federated Graph Learning" ☆10 · Updated Sep 4, 2024
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆377 · Updated Jan 23, 2025
- A lightweight library for large language model (LLM) jailbreaking defense. ☆61 · Updated Sep 11, 2025
- ☆33 · Updated Aug 24, 2023
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆65 · Updated Jan 11, 2025
- Code and data accompanying the Zhu et al. paper "An Objective for Nuanced LLM Jailbreaks" ☆36 · Updated Dec 18, 2024
- [ICML 2024] COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability ☆176 · Updated Dec 18, 2024
- ☆164 · Updated Sep 2, 2024
- ☆59 · Updated Jun 5, 2024
- Repository for the paper "Exploiting the Index Gradients for Optimization-Based Jailbreaking on Large Language Models" ☆13 · Updated Dec 16, 2024
- An easy-to-use Python framework to generate adversarial jailbreak prompts. ☆820 · Updated Mar 27, 2025
- ☆698 · Updated Jul 2, 2025
- [ICML 2025] Official source code for the paper "FlipAttack: Jailbreak LLMs via Flipping" ☆165 · Updated May 2, 2025
- A fast, lightweight implementation of the GCG algorithm in PyTorch ☆319 · Updated May 13, 2025
- All in How You Ask for It: Simple Black-Box Method for Jailbreak Attacks ☆18 · Updated Apr 24, 2024
- Code repository for the paper "Heuristic Induced Multimodal Risk Distribution Jailbreak Attack for Multimodal Large Language Models" ☆15 · Updated Aug 7, 2025
- Improving Alignment and Robustness with Circuit Breakers ☆258 · Updated Sep 24, 2024
- [NDSS '25 Best Technical Poster] A collection of automated evaluators for assessing jailbreak attempts. ☆187 · Updated Apr 1, 2025
- Official repository for the paper "Gradient-based Jailbreak Images for Multimodal Fusion Models" (https://arxiv.org/abs/2410.03489) ☆19 · Updated Oct 22, 2024
- ☆122 · Updated Feb 3, 2025
- ☆196 · Updated Nov 26, 2023
- Official implementation of AdvPrompter (https://arxiv.org/abs/2404.16873) ☆179 · Updated May 6, 2024
- Official implementation for the ICML 2024 paper "Irregular Multivariate Time Series Forecasting: A Transformable Patching Graph Neural Networks … ☆128 · Updated Nov 28, 2025
- Papers and resources related to the security and privacy of LLMs 🤖 ☆568 · Updated Jun 8, 2025
- [arXiv:2311.03191] "DeepInception: Hypnotize Large Language Model to Be Jailbreaker" ☆172 · Updated Feb 20, 2024
- Code for the ICLR 2025 paper "Failures to Find Transferable Image Jailbreaks Between Vision-Language Models" ☆37 · Updated Jun 1, 2025
- [ICLR 2024] Official implementation of the paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M… ☆430 · Updated Jan 22, 2025
- Code and data for the NeurIPS 2024 paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" ☆109 · Updated Sep 27, 2024
- ☆18 · Updated Mar 30, 2025
- ☆28 · Updated Mar 20, 2024
- Persuasive Jailbreaker: we can persuade LLMs to jailbreak them! ☆351 · Updated Oct 17, 2025
- [ECCV '24 Oral] Official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking … ☆35 · Updated Oct 23, 2024
- Awesome-Jailbreak-on-LLMs: a collection of state-of-the-art, novel, exciting jailbreak methods on LLMs. It contains papers, codes, data… ☆1,231 · Updated Feb 6, 2026
- Towards Safe LLM with our simple-yet-highly-effective Intention Analysis Prompting ☆20 · Updated Mar 25, 2024