The Security Toolkit for LLM Interactions
☆2,620 · Updated Dec 15, 2025
Alternatives and similar repositories for llm-guard
Users interested in llm-guard are comparing it to the libraries listed below.
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs (☆459, updated Jan 31, 2024)
- LLM Prompt Injection Detector (☆1,426, updated Aug 7, 2024)
- Protection against Model Serialization Attacks (☆647, updated Feb 18, 2026)
- Adding guardrails to large language models (☆6,492, updated this week)
- NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems (☆5,731, updated this week)
- The LLM vulnerability scanner (☆7,088, updated Feb 25, 2026)
- Set of tools to assess and improve LLM security (☆4,051, updated this week)
- 🐢 Open-Source Evaluation & Testing library for LLM Agents (☆5,141, updated Feb 27, 2026)
- Secure Jupyter Notebooks and Experimentation Environment (☆86, updated Feb 6, 2025)
- The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and eng… (☆3,500, updated this week)
- A curation of awesome tools, documents and projects about LLM Security (☆1,537, updated Aug 20, 2025)
- Test your prompts, agents, and RAGs. AI red teaming, pentesting, and vulnerability scanning for LLMs. Compare performance of GPT, Claude,… (☆10,821, updated this week)
- 🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring sa… (☆977, updated Nov 22, 2024)
- Supercharge Your LLM Application Evaluations 🚀 (☆12,826, updated Feb 24, 2026)
- Uses the ChatGPT model to determine if a user-supplied question is safe and filter out dangerous questions (☆49, updated May 4, 2023)
- An open-source framework for detecting, redacting, masking, and anonymizing sensitive data (PII) across text, images, and structured data… (☆7,068, updated this week)
- The LLM Evaluation Framework (☆13,904, updated this week)
- Make your GenAI apps safe & secure: test & harden your system prompt (☆635, updated Feb 16, 2026)
- AI Observability & Evaluation (☆8,746, updated this week)
- 🪢 Open source LLM engineering platform: LLM observability, metrics, evals, prompt management, playground, datasets. Integrates with Open… (☆22,717, updated this week)
- New ways of breaking app-integrated LLMs (☆2,055, updated Jul 17, 2025)
- Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, loadbalancing a… (☆37,994, updated this week)
- Every practical and proposed defense against prompt injection (☆645, updated Feb 22, 2025)
- Lambda function that streamlines containment of an AWS account compromise (☆344, updated Dec 1, 2023)
- DSPy: The framework for programming, not prompting, language models (☆32,519, updated this week)
- 🧠 LLMFuzzer: a fuzzing framework for Large Language Models. LLMFuzzer is the first open-source fuzzing framework specifically designed … (☆344, updated Feb 12, 2024)
- The fastest Trust Layer for AI Agents (☆152, updated Feb 3, 2026)
- OWASP Top 10 for Large Language Model Apps (part of the GenAI Security Project) (☆1,121, updated Feb 22, 2026)
- Semantic cache for LLMs. Fully integrated with LangChain and llama_index (☆7,951, updated Jul 11, 2025)
- A security scanner for your LLM agentic workflows (☆915, updated Nov 27, 2025)
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… (☆458, updated Feb 26, 2024)
- Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪 (☆1,790, updated Feb 3, 2026)
- Structured outputs for LLMs (☆12,468, updated Feb 25, 2026)
- Open-source tools for prompt testing and experimentation, with support for both LLMs (e.g. OpenAI, LLaMA) and vector databases (e.g. Chro… (☆3,022, updated Feb 11, 2026)
- Structured Outputs (☆13,488, updated this week)
- Dropbox LLM Security research code and results (☆255, updated May 21, 2024)
- Framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work t… (☆45,147, updated this week)
- A security scanner for custom LLM applications (☆1,140, updated Dec 1, 2025)
- Open-source AI orchestration framework for building context-engineered, production-ready LLM applications. Design modular pipelines and a… (☆24,370, updated this week)