invariantlabs-ai / mcp-injection-experiments
Code snippets to reproduce MCP tool poisoning attacks.
☆187 · Updated 7 months ago
Alternatives and similar repositories for mcp-injection-experiments
Users interested in mcp-injection-experiments are comparing it to the libraries listed below:
- MCPSafetyScanner - Automated MCP safety auditing and remediation using Agents. More info: https://www.arxiv.org/abs/2504.03767 ☆156 · Updated 7 months ago
- AI agent for autonomous cyber operations ☆428 · Updated last week
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆150 · Updated 11 months ago
- ☆93 · Updated 4 months ago
- Top 10 for Agentic AI (AI Agent Security), serving as the core for OWASP and CSA red-teaming work ☆153 · Updated 2 months ago
- We present MAPTA, a multi-agent system for autonomous web application security assessment that combines large language model orchestratio… ☆80 · Updated 3 months ago
- 🔥🔒 Awesome MCP (Model Context Protocol) Security 🖥️ ☆601 · Updated last week
- ☆110 · Updated last week
- XBOW Validation Benchmarks ☆359 · Updated 5 months ago
- ☆178 · Updated 5 months ago
- Raptor turns Claude Code into a general-purpose AI offensive/defensive security agent. By using Claude.md and creating rules, sub-agents,… ☆433 · Updated this week
- ☆411 · Updated 2 weeks ago
- Constrain, log, and scan your MCP connections for security vulnerabilities. ☆1,312 · Updated this week
- A Model Context Protocol (MCP) server for querying the CVE-Search API ☆70 · Updated 4 months ago
- Experimental tools to backdoor large language models by re-writing their system prompts at a raw parameter level. This allows you to pote… ☆191 · Updated 2 months ago
- Code for the paper "Defeating Prompt Injections by Design" ☆155 · Updated 5 months ago
- Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks ☆92 · Updated 6 months ago
- Automated web vulnerability scanning with LLM agents ☆382 · Updated 5 months ago
- Code repository for "AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models" ☆90 · Updated this week
- Rapidly identify and mitigate container security vulnerabilities with generative AI. ☆174 · Updated 2 weeks ago
- CyberGym is a large-scale, high-quality cybersecurity evaluation framework designed to rigorously assess the capabilities of AI agents on… ☆98 · Updated 2 months ago
- ☆55 · Updated 7 months ago
- LLM | Security | Operations in one GitHub repo with good links and pictures. ☆67 · Updated 11 months ago
- https://arxiv.org/abs/2412.02776 ☆66 · Updated last year
- A very simple open-source implementation of Google's Project Naptime ☆174 · Updated 8 months ago
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed … ☆328 · Updated last year
- CVE-Bench: A Benchmark for AI Agents' Ability to Exploit Real-World Web Application Vulnerabilities ☆124 · Updated 3 weeks ago
- A comprehensive security scanner for Model Context Protocol (MCP) servers that detects vulnerabilities and security issues in your MCP se… ☆110 · Updated last week
- Make your GenAI apps safe & secure: test & harden your system prompt ☆595 · Updated 2 months ago
- MCP Server for Burp ☆383 · Updated last month