shaialon / ai-security-demosLinks
π€― AI Security EXPOSED! Live Demos Showing Hidden Risks of π€ Agentic AI Flows: πPrompt Injection, β£οΈ Data Poisoning. Watch the recorded session:
β20Updated last year
Alternatives and similar repositories for ai-security-demos
Users that are interested in ai-security-demos are comparing it to the libraries listed below
Sorting:
- Make your GenAI Apps Safe & Secure Test & harden your system promptβ518Updated 3 weeks ago
- LLM Security Platform.β19Updated 8 months ago
- β‘ Vigil β‘ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputsβ396Updated last year
- The fastest Trust Layer for AI Agentsβ138Updated last month
- Top 10 for Agentic AI (AI Agent Security) serves as the core for OWASP and CSA Red teaming workβ115Updated last month
- β50Updated 2 months ago
- Protection against Model Serialization Attacksβ522Updated this week
- Every practical and proposed defense against prompt injection.β495Updated 4 months ago
- π§ LLMFuzzer - Fuzzing Framework for Large Language Models π§ LLMFuzzer is the first open-source fuzzing framework specifically designed β¦β286Updated last year
- A repository of Language Model Vulnerabilities and Exposures (LVEs).β112Updated last year
- A CLI tool for threat modeling and visualizing AI agents built using popular frameworks like LangGraph, AutoGen, CrewAI, and more.β218Updated 2 months ago
- Dropbox LLM Security research code and resultsβ227Updated last year
- Curated list of Open Source project focused on LLM securityβ49Updated 8 months ago
- A curated list of large language model tools for cybersecurity research.β465Updated last year
- Guardrails for secure and robust agent developmentβ313Updated last month
- OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project)β798Updated this week
- Red-Teaming Language Models with DSPyβ202Updated 5 months ago
- π€ A GitHub action that leverages fabric patterns through an agent-based approachβ28Updated 6 months ago
- A security scanner for your LLM agentic workflowsβ624Updated 3 weeks ago
- HoneyAgents is a PoC demo of an AI-driven system that combines honeypots with autonomous AI agents to detect and mitigate cyber threats. β¦β53Updated last year
- β42Updated last week
- A collection of awesome resources related AI securityβ258Updated 3 weeks ago
- OWASP Foundation Web Respositoryβ282Updated 2 weeks ago
- source for llmsec.netβ16Updated 11 months ago
- LLM proxy to observe and debug what your AI agents are doing.β38Updated this week
- A benchmark for prompt injection detection systems.β122Updated 2 months ago
- The project serves as a strategic advisory tool, capitalizing on the ZySec series of AI models to amplify the capabilities of security prβ¦β52Updated last year
- All things specific to LLM Red Teaming Generative AIβ25Updated 8 months ago
- A plugin-based gateway that orchestrates other MCPs and allows developers to build upon it enterprise-grade agents.β230Updated 2 months ago
- Code snippets to reproduce MCP tool poisoning attacks.β143Updated 3 months ago