protectai / llm-guardView external linksLinks
The Security Toolkit for LLM Interactions
☆2,511Dec 15, 2025Updated 2 months ago
Alternatives and similar repositories for llm-guard
Users that are interested in llm-guard are comparing it to the libraries listed below
Sorting:
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆454Jan 31, 2024Updated 2 years ago
- LLM Prompt Injection Detector☆1,415Aug 7, 2024Updated last year
- Protection against Model Serialization Attacks☆645Nov 24, 2025Updated 2 months ago
- Adding guardrails to large language models.☆6,399Updated this week
- NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.☆5,650Updated this week
- the LLM vulnerability scanner☆6,948Feb 5, 2026Updated last week
- Set of tools to assess and improve LLM security.☆4,020Updated this week
- 🐢 Open-Source Evaluation & Testing library for LLM Agents☆5,111Feb 6, 2026Updated last week
- Secure Jupyter Notebooks and Experimentation Environment☆85Feb 6, 2025Updated last year
- The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and eng…☆3,408Updated this week
- A curation of awesome tools, documents and projects about LLM Security.☆1,525Aug 20, 2025Updated 5 months ago
- Test your prompts, agents, and RAGs. AI Red teaming, pentesting, and vulnerability scanning for LLMs. Compare performance of GPT, Claude,…☆10,339Feb 8, 2026Updated last week
- 🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring sa…☆974Nov 22, 2024Updated last year
- Supercharge Your LLM Application Evaluations 🚀☆12,605Jan 31, 2026Updated 2 weeks ago
- Uses the ChatGPT model to determine if a user-supplied question is safe and filter out dangerous questions☆49May 4, 2023Updated 2 years ago
- An open-source framework for detecting, redacting, masking, and anonymizing sensitive data (PII) across text, images, and structured data…☆6,873Updated this week
- The LLM Evaluation Framework☆13,613Updated this week
- Make your GenAI Apps Safe & Secure Test & harden your system prompt☆622Jan 24, 2026Updated 3 weeks ago
- 🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with Open…☆21,640Feb 7, 2026Updated last week
- AI Observability & Evaluation☆8,530Updated this week
- New ways of breaking app-integrated LLMs☆2,052Jul 17, 2025Updated 6 months ago
- Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, loadbalancing a…☆35,968Updated this week
- Every practical and proposed defense against prompt injection.☆630Feb 22, 2025Updated 11 months ago
- DSPy: The framework for programming—not prompting—language models☆32,156Updated this week
- Lambda function that streamlines containment of an AWS account compromise☆344Dec 1, 2023Updated 2 years ago
- The fastest Trust Layer for AI Agents☆151Feb 3, 2026Updated last week
- OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project)☆1,076Feb 3, 2026Updated last week
- A security scanner for your LLM agentic workflows☆905Nov 27, 2025Updated 2 months ago
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed …☆339Feb 12, 2024Updated 2 years ago
- Semantic cache for LLMs. Fully integrated with LangChain and llama_index.☆7,928Jul 11, 2025Updated 7 months ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a…☆454Feb 26, 2024Updated last year
- Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪☆1,765Feb 3, 2026Updated last week
- structured outputs for llms☆12,357Updated this week
- Open-source tools for prompt testing and experimentation, with support for both LLMs (e.g. OpenAI, LLaMA) and vector databases (e.g. Chro…☆3,003Updated this week
- Dropbox LLM Security research code and results☆254May 21, 2024Updated last year
- Structured Outputs☆13,403Feb 6, 2026Updated last week
- a security scanner for custom LLM applications☆1,126Dec 1, 2025Updated 2 months ago
- AI orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file convert…☆24,162Updated this week
- Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work t…☆44,061Updated this week