NVIDIA / garak
the LLM vulnerability scanner
☆4,384 · Updated this week
Alternatives and similar repositories for garak:
Users interested in garak are comparing it to the repositories listed below.
- The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and eng… ☆2,462 · Updated this week
- The Security Toolkit for LLM Interactions ☆1,658 · Updated this week
- Set of tools to assess and improve LLM security. ☆3,250 · Updated this week
- LLM Prompt Injection Detector ☆1,269 · Updated 9 months ago
- Zero shot vulnerability discovery using LLMs ☆1,764 · Updated 3 months ago
- A curation of awesome tools, documents and projects about LLM Security. ☆1,208 · Updated 3 weeks ago
- Test your prompts, agents, and RAGs. Red teaming, pentesting, and vulnerability scanning for LLMs. Compare performance of GPT, Claude, Ge… ☆6,478 · Updated this week
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… ☆5,068 · Updated last month
- NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems. ☆4,711 · Updated this week
- Easily use and train state of the art late-interaction retrieval methods (ColBERT) in any RAG pipeline. Designed for modularity and ease-… ☆3,436 · Updated 3 months ago
- OWASP Foundation Web Repository ☆719 · Updated last week
- Adding guardrails to large language models. ☆4,899 · Updated this week
- AdalFlow: The library to build & auto-optimize LLM applications. ☆3,015 · Updated last month
- Every practical and proposed defense against prompt injection. ☆453 · Updated 2 months ago
- Open Source Application for Advanced LLM Engineering: interact, train, fine-tune, and evaluate large language models on your own computer… ☆3,059 · Updated this week
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆380 · Updated last year
- New ways of breaking app-integrated LLMs ☆1,929 · Updated last year
- Protection against Model Serialization Attacks ☆478 · Updated this week
- Helping Ethical Hackers use LLMs in 50 Lines of Code or less. ☆572 · Updated last week
- Make your GenAI Apps Safe & Secure. Test & harden your system prompt. ☆470 · Updated 6 months ago
- a prompt injection scanner for custom LLM applications ☆784 · Updated 2 months ago
- Tools for merging pretrained large language models. ☆5,628 · Updated this week
- Reverse Engineering: Decompiling Binary Code with Large Language Models ☆5,531 · Updated 6 months ago
- DSPy: The framework for programming—not prompting—language models ☆24,061 · Updated this week
- Everything about the SmolLM2 and SmolVLM family of models ☆2,273 · Updated last month
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal ☆638 · Updated 8 months ago
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,671 · Updated last week
- A curated list of large language model tools for cybersecurity research. ☆453 · Updated last year