AIM-Intelligence / awesome-mcp-security
Security threats related to MCP (Model Context Protocol), MCP servers, and more
☆28 · Updated 2 months ago
Alternatives and similar repositories for awesome-mcp-security
Users interested in awesome-mcp-security are comparing it to the libraries listed below.
- The LLM Red Teaming Framework ☆532 · Updated this week
- A security scanner for your LLM agentic workflows ☆636 · Updated this week
- [Corca / ML] Automatically solved Gandalf AI with LLM ☆50 · Updated 2 years ago
- Red-Teaming Language Models with DSPy ☆202 · Updated 5 months ago
- Guardrails for secure and robust agent development ☆316 · Updated last month
- This project investigates the security of large language models by performing binary classification of a set of input prompts to discover… ☆42 · Updated last year
- Make your GenAI apps safe & secure: test & harden your system prompt ☆519 · Updated last month
- ☆121 · Updated last month
- The fastest Trust Layer for AI Agents ☆138 · Updated last month
- 🔥🔒 Awesome MCP (Model Context Protocol) Security 🖥️ ☆415 · Updated last week
- ☆45 · Updated 11 months ago
- A benchmark for prompt injection detection systems ☆122 · Updated 2 months ago
- Every practical and proposed defense against prompt injection ☆495 · Updated 4 months ago
- Top 10 for Agentic AI (AI Agent Security), which serves as the core for OWASP and CSA red-teaming work ☆119 · Updated last month
- Source for llmsec.net ☆16 · Updated 11 months ago
- LLM security and privacy ☆48 · Updated 9 months ago
- The Granite Guardian models are designed to detect risks in prompts and responses ☆91 · Updated 3 weeks ago
- A plugin-based gateway that orchestrates other MCPs and lets developers build enterprise-grade agents on top of it ☆237 · Updated this week
- A dynamic environment to evaluate attacks and defenses for LLM agents ☆209 · Updated last week
- Testing and evaluation framework for voice agents ☆128 · Updated last month
- A framework for fine-tuning retrieval-augmented generation (RAG) systems ☆122 · Updated this week
- This repository provides a benchmark for prompt injection attacks and defenses ☆246 · Updated this week
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆396 · Updated last year
- AI Verify ☆23 · Updated this week
- ☆71 · Updated 4 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs) ☆112 · Updated last year
- OWASP Top 10 for Large Language Model Apps (part of the GenAI Security Project) ☆798 · Updated last week
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a… ☆60 · Updated 4 months ago
- Moonshot - A simple and modular tool to evaluate and red-team any LLM application ☆258 · Updated this week
- ☆71 · Updated 9 months ago