protectai / llm-guardLinks
The Security Toolkit for LLM Interactions
β2,120Updated last week
Alternatives and similar repositories for llm-guard
Users that are interested in llm-guard are comparing it to the libraries listed below
Sorting:
- LLM Prompt Injection Detectorβ1,356Updated last year
- π LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). π Extracts signals from prompts & responses, ensuring saβ¦β948Updated 10 months ago
- Protection against Model Serialization Attacksβ577Updated 2 weeks ago
- β‘ Vigil β‘ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputsβ417Updated last year
- OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project)β914Updated last week
- Make your GenAI Apps Safe & Secure Test & harden your system promptβ572Updated 2 weeks ago
- NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.β5,115Updated this week
- Adding guardrails to large language models.β5,739Updated last week
- Every practical and proposed defense against prompt injection.β556Updated 7 months ago
- Evaluation and Tracking for LLM Experiments and AI Agentsβ2,826Updated this week
- The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and engβ¦β2,942Updated last week
- New ways of breaking app-integrated LLMsβ1,994Updated 2 months ago
- The production toolkit for LLMs. Observability, prompt management and evaluations.β1,419Updated 2 weeks ago
- A tool for evaluating LLMsβ423Updated last year
- Moonshot - A simple and modular tool to evaluate and red-team any LLM application.β275Updated last month
- the LLM vulnerability scannerβ6,107Updated last week
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to aβ¦β422Updated last year
- Open-source tools for prompt testing and experimentation, with support for both LLMs (e.g. OpenAI, LLaMA) and vector databases (e.g. Chroβ¦β2,938Updated last year
- Set of tools to assess and improve LLM security.β3,807Updated last week
- A curation of awesome tools, documents and projects about LLM Security.β1,410Updated last month
- Superfast AI decision making and intelligent processing of multi-modal data.β2,822Updated last week
- DeepTeam is a framework to red team LLMs and LLM systems.β756Updated this week
- A unified evaluation framework for large language modelsβ2,717Updated 2 months ago
- Dropbox LLM Security research code and resultsβ235Updated last year
- LangServe π¦οΈπβ2,170Updated last week
- Langtrace π is an open-source, Open Telemetry based end-to-end observability tool for LLM applications, providing real-time tracing, evβ¦β1,038Updated 5 months ago
- π§ LLMFuzzer - Fuzzing Framework for Large Language Models π§ LLMFuzzer is the first open-source fuzzing framework specifically designed β¦β315Updated last year
- OpenTelemetry Instrumentation for AI Observabilityβ632Updated last week
- Harness LLMs with Multi-Agent Programmingβ3,720Updated last week
- A benchmark for prompt injection detection systems.β142Updated last month