protectai / llm-guardLinks
The Security Toolkit for LLM Interactions
โ2,193Updated this week
Alternatives and similar repositories for llm-guard
Users that are interested in llm-guard are comparing it to the libraries listed below
Sorting:
- LLM Prompt Injection Detectorโ1,362Updated last year
- ๐ LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). ๐ Extracts signals from prompts & responses, ensuring saโฆโ951Updated 11 months ago
- OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project)โ930Updated last week
- Adding guardrails to large language models.โ5,842Updated last week
- Make your GenAI Apps Safe & Secure Test & harden your system promptโ579Updated last month
- โก Vigil โก Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputsโ421Updated last year
- NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.โ5,177Updated this week
- Protection against Model Serialization Attacksโ594Updated last week
- Superfast AI decision making and intelligent processing of multi-modal data.โ2,861Updated 3 weeks ago
- New ways of breaking app-integrated LLMsโ1,995Updated 3 months ago
- Every practical and proposed defense against prompt injection.โ570Updated 8 months ago
- Evaluation and Tracking for LLM Experiments and AI Agentsโ2,876Updated this week
- The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and engโฆโ3,001Updated last week
- The production toolkit for LLMs. Observability, prompt management and evaluations.โ1,432Updated last week
- Open-source tool to visualise your RAG ๐ฎโ1,171Updated 9 months ago
- Langtrace ๐ is an open-source, Open Telemetry based end-to-end observability tool for LLM applications, providing real-time tracing, evโฆโ1,040Updated 5 months ago
- A curation of awesome tools, documents and projects about LLM Security.โ1,431Updated 2 months ago
- Moonshot - A simple and modular tool to evaluate and red-team any LLM application.โ278Updated last month
- Open-source tools for prompt testing and experimentation, with support for both LLMs (e.g. OpenAI, LLaMA) and vector databases (e.g. Chroโฆโ2,944Updated last year
- Test your prompts, agents, and RAGs. AI Red teaming, pentesting, and vulnerability scanning for LLMs. Compare performance of GPT, Claude,โฆโ8,834Updated this week
- A tool for evaluating LLMsโ425Updated last year
- DeepTeam is a framework to red team LLMs and LLM systems.โ799Updated last week
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to aโฆโ427Updated last year
- Harness LLMs with Multi-Agent Programmingโ3,730Updated last week
- OpenTelemetry Instrumentation for AI Observabilityโ675Updated last week
- โ898Updated last year
- Lite & Super-fast re-ranking for your search & retrieval pipelines. Supports SoTA Listwise and Pairwise reranking based on LLMs and croโฆโ877Updated last month
- Enforce the output format (JSON Schema, Regex etc) of a language modelโ1,942Updated 2 months ago
- Developer APIs to Accelerate LLM Projectsโ1,731Updated last year
- the LLM vulnerability scannerโ6,260Updated this week