shaialon / ai-security-demos
π€― AI Security EXPOSED! Live Demos Showing Hidden Risks of π€ Agentic AI Flows: πPrompt Injection, β£οΈ Data Poisoning. Watch the recorded session:
β19Updated 10 months ago
Alternatives and similar repositories for ai-security-demos
Users that are interested in ai-security-demos are comparing it to the libraries listed below
Sorting:
- Curated list of Open Source project focused on LLM securityβ42Updated 6 months ago
- Top 10 for Agentic AI (AI Agent Security)β99Updated 2 months ago
- β40Updated last week
- HoneyAgents is a PoC demo of an AI-driven system that combines honeypots with autonomous AI agents to detect and mitigate cyber threats. β¦β48Updated last year
- Make your GenAI Apps Safe & Secure Test & harden your system promptβ475Updated 7 months ago
- Red-Teaming Language Models with DSPyβ192Updated 3 months ago
- A powerful tool that leverages AI to automatically generate comprehensive security documentation for your projectsβ76Updated last week
- LLM Security Platform.β17Updated 6 months ago
- A Python-based tool that monitors dark web sources for mentions of specific organizations for Threat Monitoring.β17Updated last month
- π€ A GitHub action that leverages fabric patterns through an agent-based approachβ26Updated 4 months ago
- A curated list of large language model tools for cybersecurity research.β454Updated last year
- Rapidly identify and mitigate container security vulnerabilities with generative AI.β120Updated 3 weeks ago
- Secure Jupyter Notebooks and Experimentation Environmentβ74Updated 3 months ago
- The project serves as a strategic advisory tool, capitalizing on the ZySec series of AI models to amplify the capabilities of security prβ¦β48Updated 11 months ago
- The fastest Trust Layer for AI Agentsβ133Updated 2 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs).β109Updated last year
- π§ LLMFuzzer - Fuzzing Framework for Large Language Models π§ LLMFuzzer is the first open-source fuzzing framework specifically designed β¦β278Updated last year
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs)β109Updated 5 months ago
- Use AI to Scan Your Code from the Command Line for security and code smells. Bring your own keys. Supports OpenAI and Geminiβ169Updated 3 weeks ago
- A collection of agents that use Large Language Models (LLMs) to perform tasks common on our day to day jobs in cyber security.β111Updated last year
- All things specific to LLM Red Teaming Generative AIβ24Updated 6 months ago
- A MCP server for using Semgrep to scan code for security vulnerabilities.β148Updated 2 weeks ago
- Dropbox LLM Security research code and resultsβ225Updated 11 months ago
- β‘ Vigil β‘ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputsβ382Updated last year
- some prompt about cyber securityβ209Updated last year
- Protection against Model Serialization Attacksβ478Updated this week
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities.β161Updated last year
- OWASP Machine Learning Security Top 10 Projectβ85Updated 3 months ago
- LLM | Security | Operations in one github repo with good links and pictures.β29Updated 4 months ago
- Every practical and proposed defense against prompt injection.β456Updated 2 months ago