sherdencooper / prompt-injection
Official repo for Customized but Compromised: Assessing Prompt Injection Risks in User-Designed GPTs
☆29 · Updated last year
Alternatives and similar repositories for prompt-injection
Users interested in prompt-injection are comparing it to the libraries listed below.
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆401 · Updated last year
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆513 · Updated 10 months ago
- ☆70 · Updated last year
- Guardrails for secure and robust agent development ☆329 · Updated 2 weeks ago
- The fastest Trust Layer for AI Agents ☆141 · Updated 2 months ago
- This repository provides a benchmark for prompt injection attacks and defenses ☆255 · Updated 3 weeks ago
- 😎 Awesome list of resources about using and building AI software development systems ☆111 · Updated last year
- MCP Server Semgrep is a [Model Context Protocol](https://modelcontextprotocol.io) compliant server that integrates the powerful Semgrep s… ☆16 · Updated 4 months ago
- An Execution Isolation Architecture for LLM-Based Agentic Systems ☆86 · Updated 6 months ago
- AutoDefense: Multi-Agent LLM Defense against Jailbreak Attacks ☆51 · Updated 2 months ago
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a… ☆62 · Updated 5 months ago
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed … ☆303 · Updated last year
- Automated Safety Testing of Large Language Models ☆16 · Updated 6 months ago
- A benchmark for prompt injection detection systems. ☆124 · Updated 3 weeks ago
- This repo contains the codes of the penetration test benchmark for Generative Agents presented in the paper "AutoPenBench: Benchmarking G… ☆35 · Updated last month
- The jailbreak-evaluation is an easy-to-use Python package for language model jailbreak evaluation. ☆25 · Updated 9 months ago
- Enhancing AI Software Engineering with Repository-level Code Graph ☆198 · Updated 4 months ago
- [NDSS'25 Best Technical Poster] A collection of automated evaluators for assessing jailbreak attempts. ☆165 · Updated 4 months ago
- Persuasive Jailbreaker: we can persuade LLMs to jailbreak them! ☆313 · Updated 10 months ago
- The automated prompt injection framework for LLM-integrated applications. ☆221 · Updated 11 months ago
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆326 · Updated 6 months ago
- ☆71 · Updated 9 months ago
- Red-Teaming Language Models with DSPy ☆203 · Updated 5 months ago
- An autonomous LLM-agent for large-scale, repository-level code auditing ☆192 · Updated 3 weeks ago
- 🔥🔒 Awesome MCP (Model Context Protocol) Security 🖥️ ☆463 · Updated this week
- Risks and targets for assessing LLMs & LLM vulnerabilities ☆32 · Updated last year
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆230 · Updated last week
- Can Large Language Models Solve Security Challenges? We test LLMs' ability to interact and break out of shell environments using the Over… ☆13 · Updated last year
- ☆591 · Updated last month
- ☆96 · Updated 11 months ago