shaialon / ai-security-demos
🤯 AI Security EXPOSED! Live Demos Showing Hidden Risks of 🤖 Agentic AI Flows: 💉 Prompt Injection, ☣️ Data Poisoning. Watch the recorded session:
⭐19 · Updated 9 months ago
Alternatives and similar repositories for ai-security-demos: users interested in ai-security-demos are comparing it to the libraries listed below.
- HoneyAgents is a PoC demo of an AI-driven system that combines honeypots with autonomous AI agents to detect and mitigate cyber threats. … ⭐47 · Updated last year
- Top 10 for Agentic AI (AI Agent Security) - Pre-release version ⭐84 · Updated last month
- LLM Security Platform. ⭐14 · Updated 5 months ago
- 🤖 A GitHub action that leverages fabric patterns through an agent-based approach ⭐25 · Updated 3 months ago
- Make your GenAI Apps Safe & Secure: Test & harden your system prompt ⭐469 · Updated 6 months ago
- A powerful tool that leverages AI to automatically generate comprehensive security documentation for your projects ⭐72 · Updated last week
- All things specific to LLM Red Teaming Generative AI ⭐24 · Updated 6 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ⭐109 · Updated last year
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ⭐379 · Updated last year
- Red-Teaming Language Models with DSPy ⭐183 · Updated 2 months ago
- ⭐35 · Updated 2 months ago
- Dropbox LLM Security research code and results ⭐222 · Updated 11 months ago
- A plugin-based gateway that orchestrates other MCPs and allows developers to build enterprise-grade agents on top of it. ⭐130 · Updated this week
- 🧨 LLMFuzzer - Fuzzing Framework for Large Language Models 🧨 LLMFuzzer is the first open-source fuzzing framework specifically designed … ⭐274 · Updated last year
- ⭐97 · Updated last month
- Every practical and proposed defense against prompt injection. ⭐424 · Updated 2 months ago
- Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions. ⭐23 · Updated 11 months ago
- Payloads for Attacking Large Language Models ⭐81 · Updated 9 months ago
- Curated list of Open Source projects focused on LLM security ⭐40 · Updated 5 months ago
- Secure Jupyter Notebooks and Experimentation Environment ⭐74 · Updated 2 months ago
- ⭐367 · Updated last year
- A collection of prompt injection mitigation techniques. ⭐22 · Updated last year
- The fastest Trust Layer for AI Agents ⭐130 · Updated last month
- A curated list of large language model tools for cybersecurity research. ⭐449 · Updated last year
- A collection of awesome resources related to AI security ⭐206 · Updated this week
- A benchmark for prompt injection detection systems. ⭐100 · Updated 2 months ago
- The project serves as a strategic advisory tool, capitalizing on the ZySec series of AI models to amplify the capabilities of security pr… ⭐45 · Updated 11 months ago
- An MCP server for using Semgrep to scan code for security vulnerabilities. ⭐127 · Updated 2 weeks ago
- Official repo for "Customized but Compromised: Assessing Prompt Injection Risks in User-Designed GPTs" ⭐24 · Updated last year
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities. ⭐162 · Updated last year