The Security Toolkit for LLM Interactions
☆2,832Dec 15, 2025Updated 4 months ago
Alternatives and similar repositories for llm-guard
Users that are interested in llm-guard are comparing it to the libraries listed below. We may earn a commission when you buy through links labeled 'Ad' on this page.
Sorting:
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆470Jan 31, 2024Updated 2 years ago
- LLM Prompt Injection Detector☆1,459Aug 7, 2024Updated last year
- Protection against Model Serialization Attacks☆677Feb 18, 2026Updated 2 months ago
- Adding guardrails to large language models.☆6,675Apr 3, 2026Updated 2 weeks ago
- NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.☆5,986Updated this week
- Managed hosting for WordPress and PHP on Cloudways • AdManaged hosting for WordPress, Magento, Laravel, or PHP apps, on multiple cloud providers. Deploy in minutes on Cloudways by DigitalOcean.
- the LLM vulnerability scanner☆7,559Updated this week
- Secure Jupyter Notebooks and Experimentation Environment☆87Feb 6, 2025Updated last year
- Set of tools to assess and improve LLM security.☆4,121Apr 13, 2026Updated last week
- A curation of awesome tools, documents and projects about LLM Security.☆1,565Aug 20, 2025Updated 8 months ago
- 🐢 Open-Source Evaluation & Testing library for LLM Agents☆5,273Updated this week
- Uses the ChatGPT model to determine if a user-supplied question is safe and filter out dangerous questions☆49May 4, 2023Updated 2 years ago
- The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and eng…☆3,704Updated this week
- 🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring sa…☆981Nov 22, 2024Updated last year
- New ways of breaking app-integrated LLMs☆2,067Jul 17, 2025Updated 9 months ago
- End-to-end encrypted email - Proton Mail • AdSpecial offer: 40% Off Yearly / 80% Off First Month. All Proton services are open source and independently audited for security.
- Test your prompts, agents, and RAGs. Red teaming/pentesting/vulnerability scanning for AI. Compare performance of GPT, Claude, Gemini, Ll…☆20,196Updated this week
- Supercharge Your LLM Application Evaluations 🚀☆13,415Feb 24, 2026Updated last month
- OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project)☆1,194Feb 22, 2026Updated last month
- An open-source framework for detecting, redacting, masking, and anonymizing sensitive data (PII) across text, images, and structured data…☆7,648Updated this week
- The LLM Evaluation Framework☆14,878Updated this week
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed …☆345Feb 12, 2024Updated 2 years ago
- Make your GenAI Apps Safe & Secure Test & harden your system prompt☆674Feb 16, 2026Updated 2 months ago
- 🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with Open…☆25,055Updated this week
- Every practical and proposed defense against prompt injection.☆673Feb 22, 2025Updated last year
- Deploy on Railway without the complexity - Free Credits Offer • AdConnect your repo and Railway handles the rest with instant previews. Quickly provision container image services, databases, and storage volumes.
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a…☆477Feb 26, 2024Updated 2 years ago
- Lambda function that streamlines containment of an AWS account compromise☆344Dec 1, 2023Updated 2 years ago
- Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, loadbalancing a…☆43,478Updated this week
- a security scanner for custom LLM applications☆1,175Dec 1, 2025Updated 4 months ago
- AI Observability & Evaluation☆9,284Updated this week
- DSPy: The framework for programming—not prompting—language models☆33,649Apr 13, 2026Updated last week
- Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪☆1,844Feb 3, 2026Updated 2 months ago
- Semantic cache for LLMs. Fully integrated with LangChain and llama_index.☆7,990Jul 11, 2025Updated 9 months ago
- A security scanner for your LLM agentic workflows☆951Nov 27, 2025Updated 4 months ago
- 1-Click AI Models by DigitalOcean Gradient • AdDeploy popular AI models on DigitalOcean Gradient GPU virtual machines with just a single click. Zero configuration with optimized deployments.
- structured outputs for llms☆12,749Apr 13, 2026Updated last week
- Universal and Transferable Attacks on Aligned Language Models☆4,613Aug 2, 2024Updated last year
- Dropbox LLM Security research code and results☆256May 21, 2024Updated last year
- The fastest Trust Layer for AI Agents☆151Feb 3, 2026Updated 2 months ago
- Official repo for GPTFUZZER : Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts☆577Feb 27, 2026Updated last month
- Structured Outputs☆13,694Updated this week
- An easy-to-use Python framework to generate adversarial jailbreak prompts.☆834Mar 30, 2026Updated 3 weeks ago