invariantlabs-ai / mcp-injection-experimentsLinks
Code snippets to reproduce MCP tool poisoning attacks.
β177Updated 4 months ago
Alternatives and similar repositories for mcp-injection-experiments
Users that are interested in mcp-injection-experiments are comparing it to the libraries listed below
Sorting:
- MCPSafetyScanner - Automated MCP safety auditing and remediation using Agents. More info: https://www.arxiv.org/abs/2504.03767β114Updated 4 months ago
- π₯π Awesome MCP (Model Context Protocol) Security π₯οΈβ502Updated 2 weeks ago
- β59Updated last month
- XBOW Validation Benchmarksβ214Updated 2 months ago
- MCP Server for Burpβ246Updated 2 months ago
- β333Updated this week
- Top 10 for Agentic AI (AI Agent Security) serves as the core for OWASP and CSA Red teaming workβ131Updated 2 months ago
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs)β128Updated 8 months ago
- Automated web vulnerability scanning with LLM agentsβ346Updated 2 months ago
- Constrain, log and scan your MCP connections for security vulnerabilities.β1,020Updated this week
- MCP for Security: A collection of Model Context Protocol servers for popular security tools like SQLMap, FFUF, NMAP, Masscan and more. Inβ¦β348Updated this week
- CVE-Bench: A Benchmark for AI Agentsβ Ability to Exploit Real-World Web Application Vulnerabilitiesβ78Updated last month
- An Execution Isolation Architecture for LLM-Based Agentic Systemsβ89Updated 6 months ago
- A Model Context Protocol (MCP) server for querying the CVE-Search APIβ46Updated last month
- π§ LLMFuzzer - Fuzzing Framework for Large Language Models π§ LLMFuzzer is the first open-source fuzzing framework specifically designed β¦β307Updated last year
- Experimental tools to backdoor large language models by re-writing their system prompts at a raw parameter level. This allows you to poteβ¦β182Updated 4 months ago
- Code for the paper "Defeating Prompt Injections by Design"β87Updated 2 months ago
- A very simple open source implementation of Google's Project Naptimeβ165Updated 5 months ago
- https://arxiv.org/abs/2412.02776β59Updated 8 months ago
- Every practical and proposed defense against prompt injection.β528Updated 6 months ago
- LLM proxy to observe and debug what your AI agents are doing.β46Updated last month
- LLM | Security | Operations in one github repo with good links and pictures.β49Updated 7 months ago
- Dropbox LLM Security research code and resultsβ233Updated last year
- DeepTeam is a framework to red team LLMs and LLM systems.β636Updated last week
- All things specific to LLM Red Teaming Generative AIβ28Updated 10 months ago
- The D-CIPHER and NYU CTF baseline LLM Agents built for NYU CTF Benchβ93Updated 3 weeks ago
- A CLI tool for threat modeling and visualizing AI agents built using popular frameworks like LangGraph, AutoGen, CrewAI, and more.β230Updated 3 months ago
- β63Updated 3 weeks ago
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign ups, no cloud fees, run everything locally on your system.β297Updated last year
- Make your GenAI Apps Safe & Secure Test & harden your system promptβ547Updated 3 weeks ago