shaialon / ai-security-demos
🤯 AI Security EXPOSED! Live Demos Showing Hidden Risks of 🤖 Agentic AI Flows: 💉 Prompt Injection, ☣️ Data Poisoning. Watch the recorded session:
☆20 · Updated 11 months ago
Alternatives and similar repositories for ai-security-demos
Users interested in ai-security-demos are comparing it to the libraries listed below.
- LLM Security Platform. ☆17 · Updated 7 months ago
- A collection of prompt injection mitigation techniques. ☆23 · Updated last year
- Top 10 for Agentic AI (AI Agent Security) ☆110 · Updated last week
- HoneyAgents is a PoC demo of an AI-driven system that combines honeypots with autonomous AI agents to detect and mitigate cyber threats. … ☆49 · Updated last year
- Make your GenAI Apps Safe & Secure. Test & harden your system prompt. ☆498 · Updated 7 months ago
- ☆44 · Updated last month
- Curated list of Open Source projects focused on LLM security ☆43 · Updated 7 months ago
- Dropbox LLM Security research code and results ☆228 · Updated last year
- 🧨 LLMFuzzer - Fuzzing Framework for Large Language Models 🧨 LLMFuzzer is the first open-source fuzzing framework specifically designed … ☆280 · Updated last year
- The fastest Trust Layer for AI Agents ☆136 · Updated last week
- ☆48 · Updated last week
- 🤖 A GitHub action that leverages fabric patterns through an agent-based approach ☆27 · Updated 5 months ago
- Every practical and proposed defense against prompt injection. ☆472 · Updated 3 months ago
- Awesome MCP (Model Context Protocol) Security ☆193 · Updated last week
- Secure Jupyter Notebooks and Experimentation Environment ☆75 · Updated 4 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆389 · Updated last year
- Codebase of https://arxiv.org/abs/2410.14923 ☆47 · Updated 7 months ago
- A powerful tool that leverages AI to automatically generate comprehensive security documentation for your projects ☆80 · Updated last month
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆110 · Updated last year
- Red-Teaming Language Models with DSPy ☆195 · Updated 3 months ago
- Protection against Model Serialization Attacks ☆493 · Updated this week
- source for llmsec.net ☆15 · Updated 10 months ago
- Using Agents To Automate Pentesting ☆276 · Updated 4 months ago
- Code snippets to reproduce MCP tool poisoning attacks. ☆135 · Updated last month
- A curated list of large language model tools for cybersecurity research. ☆458 · Updated last year
- ☆247 · Updated 4 months ago
- Guardrails for secure and robust agent development ☆292 · Updated this week
- OWASP Foundation Web Repository ☆263 · Updated last week
- OWASP Machine Learning Security Top 10 Project ☆85 · Updated 4 months ago
- A plugin-based gateway that orchestrates other MCPs and lets developers build enterprise-grade agents on top of it. ☆183 · Updated last month