NVIDIA / garak
the LLM vulnerability scanner
☆6,694 · Updated this week

Alternatives and similar repositories for garak
Users interested in garak are comparing it to the libraries listed below.
- The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and eng… ☆3,250 · Updated this week
- Zero shot vulnerability discovery using LLMs ☆2,439 · Updated 10 months ago
- The Security Toolkit for LLM Interactions ☆2,358 · Updated 2 weeks ago
- Set of tools to assess and improve LLM security ☆3,947 · Updated last week
- Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪 ☆1,726 · Updated last week
- Protection against Model Serialization Attacks ☆622 · Updated last month
- A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jai… ☆1,088 · Updated last month
- OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project) ☆1,015 · Updated this week
- Every practical and proposed defense against prompt injection ☆598 · Updated 10 months ago
- Make your GenAI Apps Safe & Secure. Test & harden your system prompt ☆602 · Updated 3 months ago
- A security scanner for custom LLM applications ☆1,075 · Updated last month
- Cybersecurity AI (CAI), the framework for AI Security ☆6,559 · Updated last week
- Test your prompts, agents, and RAGs. AI red teaming, pentesting, and vulnerability scanning for LLMs. Compare performance of GPT, Claude,… ☆9,630 · Updated this week
- [CCS'24] A dataset of 15,140 ChatGPT prompts from Reddit, Discord, websites, and open-source datasets (including 1,405 jailbreak… ☆3,491 · Updated last year
- Universal and Transferable Attacks on Aligned Language Models ☆4,420 · Updated last year
- Helping ethical hackers use LLMs in 50 lines of code or less ☆904 · Updated 3 months ago
- Modern CLI for exploring vulnerability data with powerful search, filtering, and analysis capabilities ☆2,243 · Updated this week
- AI Red Teaming playground labs, including infrastructure, for running AI Red Teaming trainings ☆1,760 · Updated last week
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆433 · Updated last year
- CVE cache of the official CVE List in CVE JSON 5 format ☆2,377 · Updated this week
- Adding guardrails to large language models ☆6,198 · Updated 2 weeks ago
- NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems ☆5,448 · Updated last week
- Buttercup finds and patches software vulnerabilities ☆1,422 · Updated 2 weeks ago
- ☆3,054 · Updated last month
- Blazingly fast LLM inference ☆6,310 · Updated 2 weeks ago
- A CLI that provides a generic automation layer for assessing the security of ML models ☆901 · Updated 5 months ago
- An open-source framework for detecting, redacting, masking, and anonymizing sensitive data (PII) across text, images, and structured data… ☆6,517 · Updated this week
- Open-source Machine Learning Research Platform designed for frontier AI/ML workflows. Local, on-prem, or in the cloud ☆4,724 · Updated this week
- Open Adversarial Exposure Validation Platform ☆1,458 · Updated this week
- The recursive internet scanner for hackers 🧡 ☆9,245 · Updated last week