sherdencooper / prompt-injection
Official repo for Customized but Compromised: Assessing Prompt Injection Risks in User-Designed GPTs
☆29 · Updated 2 years ago
Alternatives and similar repositories for prompt-injection
Users interested in prompt-injection are comparing it to the libraries listed below.
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆542 · Updated last year
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆435 · Updated last year
- 🧠 LLMFuzzer 🧠 - a fuzzing framework for Large Language Models; the first open-source fuzzing framework specifically designed … ☆327 · Updated last year
- Prompt attack and defense, prompt injection, reverse-engineering notes and examples ☆262 · Updated 8 months ago
- The fastest Trust Layer for AI Agents ☆144 · Updated 5 months ago
- MCPSafetyScanner - Automated MCP safety auditing and remediation using Agents. More info: https://www.arxiv.org/abs/2504.03767 ☆152 · Updated 7 months ago
- An Execution Isolation Architecture for LLM-Based Agentic Systems ☆100 · Updated 9 months ago
- Guardrails for secure and robust agent development ☆364 · Updated 3 months ago
- Tools and our test data developed for the HackAPrompt 2023 competition ☆44 · Updated 2 years ago
- This repo contains the code for the penetration-testing benchmark for Generative Agents presented in the paper "AutoPenBench: Benchmarking G… ☆48 · Updated 3 weeks ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆430 · Updated last year
- An autonomous LLM-agent for large-scale, repository-level code auditing ☆268 · Updated last week
- ☆93 · Updated last year
- Persuasive Jailbreaker: we can persuade LLMs to jailbreak them! ☆333 · Updated last month
- 🔥🔒 Awesome MCP (Model Context Protocol) Security 🖥️ ☆589 · Updated 3 weeks ago
- ☆648 · Updated 4 months ago
- The most comprehensive prompt hacking course available, recording our progress on a prompt engineering and prompt hacking cour… ☆117 · Updated 7 months ago
- CVE-Bench: A Benchmark for AI Agents' Ability to Exploit Real-World Web Application Vulnerabilities ☆113 · Updated last week
- The automated prompt injection framework for LLM-integrated applications ☆237 · Updated last year
- ☆98 · Updated last year
- This repository provides a benchmark for prompt injection attacks and defenses ☆343 · Updated 3 weeks ago
- A powerful MCP (Model Context Protocol) Server that audits npm package dependencies for security vulnerabilities. Built with remote npm r… ☆47 · Updated 4 months ago
- Can Large Language Models Solve Security Challenges? We test LLMs' ability to interact with and break out of shell environments using the Over… ☆13 · Updated 2 years ago
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions ☆27 · Updated last year
- ☆50 · Updated 3 months ago
- AutoDefense: Multi-Agent LLM Defense against Jailbreak Attacks ☆60 · Updated 6 months ago
- ☆168 · Updated 5 months ago
- MCP Server Semgrep is a [Model Context Protocol](https://modelcontextprotocol.io) compliant server that integrates the powerful Semgrep s… ☆23 · Updated 8 months ago
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents ☆348 · Updated 3 weeks ago
- Automated Safety Testing of Large Language Models ☆17 · Updated 9 months ago