sherdencooper / PromptFuzz
☆16 · Updated 2 months ago
Alternatives and similar repositories for PromptFuzz:
Users interested in PromptFuzz are comparing it to the repositories listed below.
- [USENIX Security'24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a… ☆62 · Updated 3 months ago
- ☆23 · Updated 3 months ago
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai… ☆34 · Updated 2 months ago
- The automated prompt injection framework for LLM-integrated applications. ☆177 · Updated 4 months ago
- A curated list of awesome resources about LLM supply chain security (including papers, security reports and CVEs) ☆27 · Updated last week
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models ☆108 · Updated 3 months ago
- Academic Papers about LLM Application on Security ☆115 · Updated 7 months ago
- TensorFlow API analysis tool and malicious model detection tool ☆19 · Updated last month
- A collection of security papers from top-tier publications ☆37 · Updated last month
- ☆48 · Updated 3 weeks ago
- ☆12 · Updated 9 months ago
- SecLLMHolmes is a generalized, fully automated, and scalable framework to systematically evaluate the performance (i.e., accuracy and rea… ☆44 · Updated 2 months ago
- ☆15 · Updated 4 months ago
- ☆78 · Updated 9 months ago
- This is a benchmark for evaluating the vulnerability discovery ability of automated approaches including Large Language Models (LLMs), de… ☆65 · Updated 2 months ago
- ☆34 · Updated this week
- ☆24 · Updated 3 months ago
- ☆101 · Updated 6 months ago
- ☆24 · Updated 3 years ago
- SecGPT: An execution isolation architecture for LLM-based systems ☆57 · Updated last month
- ☆13 · Updated 4 months ago
- Seminar 2022 ☆22 · Updated this week
- Agent Security Bench (ASB) ☆55 · Updated last month
- [USENIX'24] Prompt Stealing Attacks Against Text-to-Image Generation Models ☆31 · Updated last week
- AI Model Security Reading Notes ☆35 · Updated 5 months ago
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks. ☆53 · Updated 9 months ago
- Official implementation of the paper "DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers" ☆41 · Updated 4 months ago
- ☆32 · Updated 6 months ago
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆74 · Updated this week
- ☆31 · Updated 3 months ago