LLMSecurity / HouYi
The automated prompt injection framework for LLM-integrated applications.
☆179Updated 4 months ago
Alternatives and similar repositories for HouYi:
Users that are interested in HouYi are comparing it to the libraries listed below
- Official repo for GPTFUZZER : Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts☆438Updated 4 months ago
- ☆77Updated 9 months ago
- ☆101Updated 6 months ago
- This repository provides implementation to formalize and benchmark Prompt Injection attacks and defenses☆167Updated last week
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai…☆34Updated 3 months ago
- ☆16Updated 3 months ago
- SecLLMHolmes is a generalized, fully automated, and scalable framework to systematically evaluate the performance (i.e., accuracy and rea…☆44Updated 2 months ago
- [USENIX Security'24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a…☆62Updated 3 months ago
- A curated list of awesome resources about LLM supply chain security (including papers, security reports and CVEs)☆27Updated last week
- TAP: An automated jailbreaking method for black-box LLMs☆138Updated last month
- CVEfixes: Automated Collection of Vulnerabilities and Their Fixes from Open-Source Software☆219Updated 6 months ago
- Academic Papers about LLM Application on Security☆115Updated 7 months ago
- This is a benchmark for evaluating the vulnerability discovery ability of automated approaches including Large Language Models (LLMs), de…☆65Updated 2 months ago
- ☆23Updated 4 months ago
- ☆24Updated 4 months ago
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed …☆253Updated 11 months ago
- ☆49Updated last month
- Repository for "SecurityEval Dataset: Mining Vulnerability Examples to Evaluate Machine Learning-Based Code Generation Techniques" publis…☆57Updated last year
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models☆111Updated 3 months ago
- A collection of security papers on top-tier publications☆38Updated last week
- ☆33Updated 3 months ago
- ☆470Updated last month
- Repository for PrimeVul Vulnerability Detection Dataset☆95Updated 4 months ago
- ☆27Updated 6 months ago
- [ICLR 2024] The official implementation of our ICLR2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language M…☆282Updated last week
- A collection of automated evaluators for assessing jailbreak attempts.☆102Updated this week
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks.☆55Updated 9 months ago
- ☆47Updated 6 months ago
- AIBugHunter: A Practical Tool for Predicting, Classifying and Repairing Software Vulnerabilities☆38Updated 9 months ago
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track]☆280Updated 4 months ago