sherdencooper / prompt-injectionLinks
Official repo for Customized but Compromised: Assessing Prompt Injection Risks in User-Designed GPTs
☆29Updated last year
Alternatives and similar repositories for prompt-injection
Users that are interested in prompt-injection are comparing it to the libraries listed below
Sorting:
- Official repo for GPTFUZZER : Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts☆529Updated last year
- Guardrails for secure and robust agent development☆348Updated 2 months ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a…☆422Updated last year
- The fastest Trust Layer for AI Agents☆143Updated 4 months ago
- MCPSafetyScanner - Automated MCP safety auditing and remediation using Agents. More info: https://www.arxiv.org/abs/2504.03767☆135Updated 6 months ago
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed …☆315Updated last year
- The automated prompt injection framework for LLM-integrated applications.☆230Updated last year
- Automated Safety Testing of Large Language Models☆17Updated 8 months ago
- CVE-Bench: A Benchmark for AI Agents’ Ability to Exploit Real-World Web Application Vulnerabilities☆103Updated last month
- An Execution Isolation Architecture for LLM-Based Agentic Systems☆94Updated 8 months ago
- This repository provides a benchmark for prompt Injection attacks and defenses☆292Updated this week
- ☆89Updated 10 months ago
- Code snippets to reproduce MCP tool poisoning attacks.☆181Updated 6 months ago
- This repo contains the codes of the penetration test benchmark for Generative Agents presented in the paper "AutoPenBench: Benchmarking G…☆42Updated 3 months ago
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions.☆26Updated last year
- ☆80Updated last year
- An autonomous LLM-agent for large-scale, repository-level code auditing☆239Updated last week
- Persuasive Jailbreaker: we can persuade LLMs to jailbreak them!☆323Updated last year
- ☆50Updated last year
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a…☆66Updated last month
- 😎 Awesome list of resources about using and building AI software development systems☆111Updated last year
- AutoDefense: Multi-Agent LLM Defense against Jailbreak Attacks☆55Updated 4 months ago
- A benchmark for prompt injection detection systems.☆143Updated last month
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks.☆84Updated last year
- Static Analysis meets Large Language Models☆49Updated last year
- ☆627Updated 3 months ago
- [NDSS'25 Best Technical Poster] A collection of automated evaluators for assessing jailbreak attempts.☆170Updated 6 months ago
- Repilot, a patch generation tool introduced in the ESEC/FSE'23 paper "Copiloting the Copilots: Fusing Large Language Models with Completi…☆133Updated 2 years ago
- A better way of testing, inspecting, and analyzing AI Agent traces.☆40Updated last week
- Can Large Language Models Solve Security Challenges? We test LLMs' ability to interact and break out of shell environments using the Over…☆13Updated 2 years ago