sherdencooper / prompt-injection
Official repo for Customized but Compromised: Assessing Prompt Injection Risks in User-Designed GPTs
☆30 · Updated 2 years ago
Alternatives and similar repositories for prompt-injection
Users interested in prompt-injection are comparing it to the repositories listed below.
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆563 · Updated last year
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆449 · Updated last year
- Guardrails for secure and robust agent development ☆378 · Updated last week
- Static Analysis meets Large Language Models ☆53 · Updated last year
- An Execution Isolation Architecture for LLM-Based Agentic Systems ☆102 · Updated 11 months ago
- The fastest Trust Layer for AI Agents ☆148 · Updated 7 months ago
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed … ☆336 · Updated last year
- MCPSafetyScanner - Automated MCP safety auditing and remediation using Agents. More info: https://www.arxiv.org/abs/2504.03767 ☆160 · Updated 9 months ago
- MCP Server Semgrep is a [Model Context Protocol](https://modelcontextprotocol.io) compliant server that integrates the powerful Semgrep s… ☆25 · Updated 10 months ago
- 🚀 The LLM Automatic Computer Framework: L2MAC ☆145 · Updated last year
- CVE-Bench: A Benchmark for AI Agents' Ability to Exploit Real-World Web Application Vulnerabilities ☆136 · Updated last week
- RepairAgent is an autonomous LLM-based agent for software repair. ☆81 · Updated 5 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆436 · Updated last year
- Make your GenAI Apps Safe & Secure: Test & harden your system prompt ☆609 · Updated 3 months ago
- An autonomous LLM agent for large-scale, repository-level code auditing ☆314 · Updated last month
- autoredteam: code for training models that automatically red team other language models ☆15 · Updated 2 years ago
- This repo contains the codes of the penetration test benchmark for Generative Agents presented in the paper "AutoPenBench: Benchmarking G… ☆59 · Updated 2 months ago
- The automated prompt injection framework for LLM-integrated applications. ☆247 · Updated last year
- Prompt attack/defense, prompt injection, and reverse-engineering notes and examples ☆286 · Updated 10 months ago
- Can Large Language Models Solve Security Challenges? We test LLMs' ability to interact and break out of shell environments using the Over… ☆13 · Updated 2 years ago
- LLM proxy to observe and debug what your AI agents are doing. ☆59 · Updated 2 months ago
- Enhancing AI Software Engineering with Repository-level Code Graph ☆246 · Updated 9 months ago
- ☆110 · Updated last year
- Persuasive Jailbreaker: we can persuade LLMs to jailbreak them! ☆345 · Updated 3 months ago
- Tools and our test data developed for the HackAPrompt 2023 competition ☆46 · Updated 2 years ago
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions. ☆28 · Updated last year
- ☆55 · Updated last year
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆406 · Updated last month
- ☆112 · Updated last month
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a… ☆78 · Updated 4 months ago