protectai / llm-guard
The Security Toolkit for LLM Interactions
★ 1,593 · Updated last week
Alternatives and similar repositories for llm-guard:
Users interested in llm-guard are comparing it to the libraries listed below:
- LLM Prompt Injection Detector ★ 1,247 · Updated 8 months ago
- LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). Extracts signals from prompts & responses, ensuring sa… ★ 900 · Updated 4 months ago
- Vigil: Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ★ 377 · Updated last year
- Protection against Model Serialization Attacks ★ 462 · Updated this week
- Superfast AI decision making and intelligent processing of multi-modal data. ★ 2,545 · Updated this week
- Make your GenAI Apps Safe & Secure: Test & harden your system prompt ★ 461 · Updated 6 months ago
- A tool for evaluating LLMs ★ 414 · Updated 11 months ago
- Adding guardrails to large language models. ★ 4,808 · Updated last week
- Every practical and proposed defense against prompt injection. ★ 423 · Updated last month
- Deploy your agentic workflows to production ★ 1,995 · Updated 3 weeks ago
- NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems. ★ 4,639 · Updated this week
- The production toolkit for LLMs. Observability, prompt management and evaluations. ★ 1,284 · Updated this week
- Easily use and train state of the art late-interaction retrieval methods (ColBERT) in any RAG pipeline. Designed for modularity and ease-… ★ 3,396 · Updated 2 months ago
- Open-source tool to visualise your RAG ★ 1,121 · Updated 3 months ago
- A lightweight, low-dependency, unified API to use all common reranking and cross-encoder models. ★ 1,375 · Updated 2 weeks ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ★ 361 · Updated last year
- Automated Evaluation of RAG Systems ★ 576 · Updated 3 weeks ago
- the LLM vulnerability scanner ★ 4,286 · Updated this week
- Build applications that make decisions (chatbots, agents, simulations, etc.). Monitor, trace, persist, and execute on your own infrastr… ★ 1,575 · Updated 2 weeks ago
- Test your prompts, agents, and RAGs. Red teaming, pentesting, and vulnerability scanning for LLMs. Compare performance of GPT, Claude, Ge… ★ 6,182 · Updated this week
- The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and eng… ★ 2,407 · Updated this week
- AI Observability & Evaluation ★ 5,405 · Updated this week
- Evaluation and Tracking for LLM Experiments ★ 2,434 · Updated this week
- New ways of breaking app-integrated LLMs ★ 1,918 · Updated last year
- Developer APIs to Accelerate LLM Projects ★ 1,632 · Updated 6 months ago
- Open-source tools for prompt testing and experimentation, with support for both LLMs (e.g. OpenAI, LLaMA) and vector databases (e.g. Chro… ★ 2,836 · Updated 8 months ago
- Langtrace is an open-source, Open Telemetry based end-to-end observability tool for LLM applications, providing real-time tracing, ev… ★ 892 · Updated this week
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ★ 2,629 · Updated this week
- OpenTelemetry Instrumentation for AI Observability ★ 377 · Updated this week
- The LLM Evaluation Framework ★ 5,972 · Updated this week