The automated prompt injection framework for LLM-integrated applications.
☆258 · Sep 12, 2024 · Updated last year
Alternatives and similar repositories for HouYi
Users interested in HouYi are comparing it to the repositories listed below.
- ☆27 · Jul 30, 2024 · Updated last year
- TAP: An automated jailbreaking method for black-box LLMs ☆224 · Dec 10, 2024 · Updated last year
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models". ☆70 · Oct 23, 2024 · Updated last year
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆573 · Feb 27, 2026 · Updated 3 weeks ago
- ☆704 · Jul 2, 2025 · Updated 8 months ago
- ☆13 · Apr 26, 2023 · Updated 2 years ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆465 · Feb 26, 2024 · Updated 2 years ago
- ☆25 · Jan 17, 2025 · Updated last year
- LLM prompt attacks for hacker CTFs via CTFd. ☆15 · Dec 17, 2023 · Updated 2 years ago
- ☆11 · Dec 18, 2024 · Updated last year
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks. ☆112 · Apr 15, 2024 · Updated last year
- Implementation of BEAST adversarial attack for language models (ICML 2024) ☆89 · May 14, 2024 · Updated last year
- A curation of awesome tools, documents and projects about LLM Security. ☆1,548 · Aug 20, 2025 · Updated 7 months ago
- [ICLR 2024] The official implementation of our paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M… ☆434 · Jan 22, 2025 · Updated last year
- Towards Safe LLM with our simple-yet-highly-effective Intention Analysis Prompting ☆20 · Mar 25, 2024 · Updated last year
- [CCS 2024] Optimization-based Prompt Injection Attack to LLM-as-a-Judge ☆39 · Sep 17, 2025 · Updated 6 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆465 · Jan 31, 2024 · Updated 2 years ago
- Make your GenAI Apps Safe & Secure: test & harden your system prompt ☆652 · Feb 16, 2026 · Updated last month
- Code for Voice Jailbreak Attacks Against GPT-4o. ☆37 · May 31, 2024 · Updated last year
- ☆28 · Oct 14, 2021 · Updated 4 years ago
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed … ☆347 · Feb 12, 2024 · Updated 2 years ago
- Code to conduct an embedding attack on LLMs ☆31 · Jan 10, 2025 · Updated last year
- The LLM vulnerability scanner ☆7,312 · Updated this week
- ☆86 · Sep 5, 2025 · Updated 6 months ago
- A reading list for large model safety, security, and privacy (including Awesome LLM Security, Safety, etc.). ☆1,899 · Mar 16, 2026 · Updated last week
- Papers and resources related to the security and privacy of LLMs 🤖 ☆567 · Jun 8, 2025 · Updated 9 months ago
- LLM Prompt Injection Detector ☆1,445 · Aug 7, 2024 · Updated last year
- Industrial Cybersecurity Conference Index ☆13 · Mar 11, 2024 · Updated 2 years ago
- A collection of prompt injection mitigation techniques. ☆28 · Aug 19, 2023 · Updated 2 years ago
- A security scanner for custom LLM applications ☆1,149 · Dec 1, 2025 · Updated 3 months ago
- Universal and Transferable Attacks on Aligned Language Models ☆4,568 · Aug 2, 2024 · Updated last year
- LMAP (large language model mapper) is like NMAP for LLMs: an LLM vulnerability scanner and zero-day vulnerability fuzzer. ☆29 · Oct 16, 2024 · Updated last year
- Explore, Establish, Exploit: Red Teaming Language Models from Scratch ☆13 · Jun 21, 2023 · Updated 2 years ago
- This is the code repository of our submission: Understanding the Dark Side of LLMs’ Intrinsic Self-Correction. ☆62 · Dec 20, 2024 · Updated last year
- Code for Fast Propagation is Better: Accelerating Single-Step Adversarial Training via Sampling Subnetworks (TIFS 2024) ☆13 · Mar 29, 2024 · Updated last year
- The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and eng… ☆3,556 · Mar 16, 2026 · Updated last week
- PAL: Proxy-Guided Black-Box Attack on Large Language Models ☆56 · Aug 17, 2024 · Updated last year
- LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins ☆29 · Jul 29, 2024 · Updated last year
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] ☆553 · Apr 4, 2025 · Updated 11 months ago