☆37, updated Sep 30, 2024
Alternatives and similar repositories for LLMJailbreak
Users interested in LLMJailbreak are comparing it to the repositories listed below.
- [COLM 2024] JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and fur… · ☆88, updated May 9, 2025
- ☆14, updated Mar 9, 2025
- [NDSS'25] The official implementation of safety misalignment. · ☆17, updated Jan 8, 2025
- ☆13, updated May 17, 2025
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization · ☆29, updated Jul 9, 2024
- Universal Robustness Evaluation Toolkit (for Evasion) · ☆32, updated Sep 17, 2025
- Consuming Resource via Auto-generation for LLM-DoS Attack under Black-box Settings · ☆18, updated Sep 1, 2025
- Official repo for "ProSec: Fortifying Code LLMs with Proactive Security Alignment"☆17Feb 26, 2026Updated last week
- Red Queen Dataset and data generation template☆26Dec 26, 2025Updated 2 months ago
- Code repository for CVPR2024 paper 《Pre-trained Model Guided Fine-Tuning for Zero-Shot Adversarial Robustness》☆25May 29, 2024Updated last year
- [USENIX'24] Prompt Stealing Attacks Against Text-to-Image Generation Models☆51Jan 11, 2025Updated last year
- ☆20Jan 15, 2024Updated 2 years ago
- ☆22May 28, 2025Updated 9 months ago
- Code release for RobOT (ICSE'21)☆15Dec 5, 2022Updated 3 years ago
- Code for paper "The Philosopher’s Stone: Trojaning Plugins of Large Language Models"☆27Sep 11, 2024Updated last year
- [ICLR 2025] Dissecting adversarial robustness of multimodal language model agents☆130Feb 19, 2025Updated last year
- Code to generate NeuralExecs (prompt injection for LLMs)☆27Oct 5, 2025Updated 5 months ago
- Official repo for GPTFUZZER : Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts☆568Feb 27, 2026Updated last week
- ☆33, updated Sep 13, 2024
- ☆29, updated Aug 31, 2025
- [ICLR 2025 Spotlight] The official implementation of our ICLR 2025 paper "AutoDAN-Turbo: A Lifelong Agent for Strategy Self-Exploration to… · ☆349, updated Oct 8, 2025
- Adversarial Attacks against Closed-Source MLLMs via Feature Optimal Alignment (NeurIPS 2025) · ☆49, updated Nov 5, 2025
- ☆26, updated Aug 21, 2024
- ☆34, updated Aug 28, 2024
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization" · ☆87, updated Jul 24, 2025
- AmpleGCG: Learning a Universal and Transferable Generator of Adversarial Attacks on Both Open and Closed LLMs · ☆83, updated Nov 3, 2024
- [ICLR 2024] The official implementation of our ICLR 2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M… · ☆430, updated Jan 22, 2025
- A data construction and evaluation framework to quantify privacy norm awareness of language models (LMs) and emerging privacy risk of LM … · ☆43, updated Mar 4, 2025
- [CCS 2024] Optimization-based Prompt Injection Attack to LLM-as-a-Judge · ☆39, updated Sep 17, 2025
- AI security · ☆35, updated Dec 28, 2020
- Papers and resources related to the security and privacy of LLMs 🤖 · ☆568, updated Jun 8, 2025
- ☆31, updated Jul 18, 2024
- Clone of the JSAI static analysis framework · ☆13, updated Jul 29, 2017
- A curated list of awesome resources about LLM supply chain security (including papers, security reports, and CVEs) · ☆96, updated Jan 20, 2025
- ☆56, updated May 21, 2025
- An easy-to-use Python framework to generate adversarial jailbreak prompts · ☆820, updated Mar 27, 2025
- The official repository for the paper "MLLM-Protector: Ensuring MLLM’s Safety without Hurting Performance" · ☆44, updated Apr 21, 2024
- ☆44, updated Feb 26, 2025
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models · ☆90, updated May 2, 2025