The Security Toolkit for LLM Interactions
☆2,737 · Dec 15, 2025 · Updated 3 months ago
Alternatives and similar repositories for llm-guard
Users interested in llm-guard are comparing it to the libraries listed below.
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆467 · Jan 31, 2024 · Updated 2 years ago
- LLM Prompt Injection Detector ☆1,451 · Aug 7, 2024 · Updated last year
- Protection against Model Serialization Attacks ☆667 · Feb 18, 2026 · Updated last month
- Adding guardrails to large language models. ☆6,585 · Updated this week
- NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems. ☆5,854 · Updated this week
- the LLM vulnerability scanner ☆7,312 · Mar 19, 2026 · Updated last week
- Secure Jupyter Notebooks and Experimentation Environment ☆87 · Feb 6, 2025 · Updated last year
- Set of tools to assess and improve LLM security. ☆4,084 · Mar 18, 2026 · Updated last week
- A curation of awesome tools, documents and projects about LLM Security. ☆1,554 · Aug 20, 2025 · Updated 7 months ago
- 🐢 Open-Source Evaluation & Testing library for LLM Agents ☆5,184 · Mar 20, 2026 · Updated last week
- Uses the ChatGPT model to determine whether a user-supplied question is safe and filters out dangerous questions ☆49 · May 4, 2023 · Updated 2 years ago
- The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and eng… ☆3,593 · Updated this week
- 🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring sa… ☆979 · Nov 22, 2024 · Updated last year
- Test your prompts, agents, and RAGs. Red teaming/pentesting/vulnerability scanning for AI. Compare performance of GPT, Claude, Gemini, Ll… ☆18,597 · Updated this week
- New ways of breaking app-integrated LLMs ☆2,066 · Jul 17, 2025 · Updated 8 months ago
- Supercharge Your LLM Application Evaluations 🚀 ☆13,106 · Feb 24, 2026 · Updated last month
- The LLM Evaluation Framework ☆14,227 · Mar 20, 2026 · Updated last week
- OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project) ☆1,164 · Feb 22, 2026 · Updated last month
- An open-source framework for detecting, redacting, masking, and anonymizing sensitive data (PII) across text, images, and structured data… ☆7,314 · Mar 19, 2026 · Updated last week
- Make your GenAI Apps Safe & Secure: test & harden your system prompt ☆652 · Feb 16, 2026 · Updated last month
- 🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with Open… ☆23,868 · Updated this week
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed … ☆347 · Feb 12, 2024 · Updated 2 years ago
- Every practical and proposed defense against prompt injection. ☆662 · Feb 22, 2025 · Updated last year
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆465 · Feb 26, 2024 · Updated 2 years ago
- Lambda function that streamlines containment of an AWS account compromise ☆344 · Dec 1, 2023 · Updated 2 years ago
- Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, loadbalancing a… ☆40,834 · Updated this week
- a security scanner for custom LLM applications ☆1,152 · Dec 1, 2025 · Updated 3 months ago
- AI Observability & Evaluation ☆9,020 · Updated this week
- DSPy: The framework for programming—not prompting—language models ☆33,038 · Updated this week
- Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪 ☆1,818 · Feb 3, 2026 · Updated last month
- Semantic cache for LLMs. Fully integrated with LangChain and llama_index. ☆7,964 · Jul 11, 2025 · Updated 8 months ago
- A security scanner for your LLM agentic workflows ☆935 · Nov 27, 2025 · Updated 4 months ago
- structured outputs for llms ☆12,589 · Updated this week
- Universal and Transferable Attacks on Aligned Language Models ☆4,583 · Aug 2, 2024 · Updated last year
- Dropbox LLM Security research code and results ☆256 · May 21, 2024 · Updated last year
- The fastest Trust Layer for AI Agents ☆153 · Feb 3, 2026 · Updated last month
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆576 · Feb 27, 2026 · Updated last month
- Structured Outputs ☆13,588 · Updated this week
- An easy-to-use Python framework to generate adversarial jailbreak prompts. ☆830 · Mar 27, 2025 · Updated last year