WUSTL-CSPL / LLMJailbreak
☆37 · Updated Sep 30, 2024
Alternatives and similar repositories for LLMJailbreak
Users interested in LLMJailbreak are comparing it to the libraries listed below.
- ☆31 · Updated Sep 22, 2024
- [NDSS'25] The official implementation of safety misalignment. ☆17 · Updated Jan 8, 2025
- ☆13 · Updated May 17, 2025
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization ☆29 · Updated Jul 9, 2024
- Universal Robustness Evaluation Toolkit (for Evasion) ☆32 · Updated Sep 17, 2025
- Consuming Resource via Auto-generation for LLM-DoS Attack under Black-box Settings ☆18 · Updated Sep 1, 2025
- Official repo for "ProSec: Fortifying Code LLMs with Proactive Security Alignment" ☆17 · Updated Mar 26, 2025
- Code for the paper "Jailbreak Large Vision-Language Models Through Multi-Modal Linkage" ☆26 · Updated Dec 6, 2024
- ☆17 · Updated Sep 4, 2024
- [USENIX'24] Prompt Stealing Attacks Against Text-to-Image Generation Models ☆50 · Updated Jan 11, 2025
- ☆22 · Updated May 28, 2025
- Code for paper "The Philosopher’s Stone: Trojaning Plugins of Large Language Models" ☆27 · Updated Sep 11, 2024
- [ICLR 2025] Dissecting adversarial robustness of multimodal language model agents ☆123 · Updated Feb 19, 2025
- ☆26 · Updated Dec 1, 2022
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆565 · Updated Sep 24, 2024
- ☆32 · Updated Sep 13, 2024
- ☆28 · Updated Aug 31, 2025
- ☆26 · Updated Aug 21, 2024
- ☆34 · Updated Aug 28, 2024
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization" ☆84 · Updated Jul 24, 2025
- Awesome-Jailbreak-on-LLMs is a collection of state-of-the-art, novel, exciting jailbreak methods on LLMs. It contains papers, codes, data… ☆1,205 · Updated Feb 6, 2026
- A data construction and evaluation framework to quantify privacy norm awareness of language models (LMs) and emerging privacy risk of LM … ☆41 · Updated Mar 4, 2025
- CovRL-Fuzz: Fuzzing JavaScript Interpreters with Coverage-Guided Reinforcement Learning for LLM-Based Mutation ☆40 · Updated Nov 10, 2024
- [CCS 2024] Optimization-based Prompt Injection Attack to LLM-as-a-Judge ☆39 · Updated Sep 17, 2025
- AI security ☆35 · Updated Dec 28, 2020
- Papers and resources related to the security and privacy of LLMs 🤖 ☆561 · Updated Jun 8, 2025
- ☆55 · Updated May 21, 2025
- Clone of JSAI static analysis framework ☆13 · Updated Jul 29, 2017
- BPE Tokenizer implementations in C# for Anthropic, OpenAI LLM offerings ☆14 · Updated Oct 5, 2023
- Python package for ML developers and researchers to change certain variables while their code is executing to make the task of training a… ☆11 · Updated Apr 25, 2024
- A curated list of awesome resources about LLM supply chain security (including papers, security reports and CVEs) ☆95 · Updated Jan 20, 2025
- An easy-to-use Python framework to generate adversarial jailbreak prompts. ☆815 · Updated Mar 27, 2025
- The official repository for paper "MLLM-Protector: Ensuring MLLM’s Safety without Hurting Performance" ☆44 · Updated Apr 21, 2024
- ☆44 · Updated Feb 26, 2025
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models ☆92 · Updated May 2, 2025
- ☆46 · Updated Aug 4, 2023
- BiasFinder | IEEE TSE | Metamorphic Test Generation to Uncover Bias for Sentiment Analysis Systems ☆11 · Updated Jan 18, 2022
- Code repo for the paper: Attacking Vision-Language Computer Agents via Pop-ups ☆50 · Updated Dec 23, 2024
- ☆12 · Updated Dec 22, 2025