NVIDIA / garak
the LLM vulnerability scanner
☆6,784 · Updated this week
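For context on what garak does in practice, here is a minimal sketch of launching a scan by shelling out to garak's command-line interface from Python, assuming garak is installed (e.g. via `pip install garak`) and the relevant API key is set in the environment. The target model and probe below are illustrative assumptions, not recommendations.

```python
# Minimal sketch: run a garak probe against an OpenAI-hosted model by invoking
# the garak CLI as a subprocess. Flags follow garak's documented interface;
# the model and probe chosen here are assumptions for illustration only.
import subprocess
import sys

cmd = [
    sys.executable, "-m", "garak",
    "--model_type", "openai",         # generator family under test
    "--model_name", "gpt-3.5-turbo",  # assumed target model
    "--probes", "promptinject",       # assumed probe; --list_probes enumerates options
]
subprocess.run(cmd, check=True)       # garak writes its findings to a report file when done
```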
Alternatives and similar repositories for garak
Users who are interested in garak are comparing it to the repositories listed below.
- The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and eng… ☆3,326 · Updated this week
- The Security Toolkit for LLM Interactions ☆2,413 · Updated last month
- Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪 ☆1,746 · Updated 3 weeks ago
- A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jai… ☆1,134 · Updated last month
- LLM Prompt Injection Detector ☆1,396 · Updated last year
- Protection against Model Serialization Attacks ☆632 · Updated last month
- Every practical and proposed defense against prompt injection. ☆614 · Updated 10 months ago
- New ways of breaking app-integrated LLMs ☆2,036 · Updated 6 months ago
- Test your prompts, agents, and RAGs. AI red teaming, pentesting, and vulnerability scanning for LLMs. Compare performance of GPT, Claude,… ☆9,960 · Updated this week
- Zero shot vulnerability discovery using LLMs ☆2,451 · Updated 11 months ago
- OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project) ☆1,038 · Updated 2 weeks ago
- A curation of awesome tools, documents and projects about LLM Security. ☆1,503 · Updated 5 months ago
- DeepTeam is a framework to red team LLMs and LLM systems. ☆1,230 · Updated last week
- Set of tools to assess and improve LLM security. ☆3,976 · Updated last week
- Damn Vulnerable MCP Server ☆1,241 · Updated last month
- A security scanner for custom LLM applications ☆1,089 · Updated last month
- Make your GenAI Apps Safe & Secure. Test & harden your system prompt. ☆609 · Updated 3 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆436 · Updated last year
- Helping Ethical Hackers use LLMs in 50 Lines of Code or less. ☆925 · Updated 4 months ago
- Cybersecurity AI (CAI), the framework for AI Security ☆6,794 · Updated this week
- An overview of LLMs for cybersecurity. ☆1,181 · Updated last month
- AI Red Teaming playground labs, including infrastructure, for running AI red teaming trainings. ☆1,781 · Updated last week
- The Simple Agent Development Kit. ☆1,315 · Updated 4 months ago
- A collection of awesome resources related to AI security ☆397 · Updated last week
- A curated list of large language model tools for cybersecurity research. ☆479 · Updated last year
- Constrain, log and scan your MCP connections for security vulnerabilities. ☆1,392 · Updated last week
- A comprehensive security checklist for MCP-based AI tools. Built by SlowMist to safeguard LLM plugin ecosystems. ☆793 · Updated 8 months ago
- Buttercup finds and patches software vulnerabilities ☆1,434 · Updated this week
- The LLM Evaluation Framework ☆13,118 · Updated this week
- Prompt Injection Primer for Engineers ☆542 · Updated 2 years ago