protectai / llm-guard
The Security Toolkit for LLM Interactions
★2,261 · Updated 2 weeks ago
Alternatives and similar repositories for llm-guard
Users interested in llm-guard are comparing it to the libraries listed below.
- LLM Prompt Injection Detector ★1,375 · Updated last year
- LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). Extracts signals from prompts & responses, ensuring sa… ★959 · Updated 11 months ago
- NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems. ★5,280 · Updated this week
- OWASP Top 10 for Large Language Model Apps (part of the GenAI Security Project) ★952 · Updated this week
- Protection against Model Serialization Attacks ★601 · Updated 3 weeks ago
- Make your GenAI apps safe and secure: test and harden your system prompt. ★587 · Updated last month
- Adding guardrails to large language models. ★5,970 · Updated 2 weeks ago
- Evaluation and tracking for LLM experiments and AI agents ★2,922 · Updated this week
- Every practical and proposed defense against prompt injection. ★579 · Updated 8 months ago
- Superfast AI decision making and intelligent processing of multi-modal data. ★2,887 · Updated last week
- Langtrace is an open-source, OpenTelemetry-based end-to-end observability tool for LLM applications, providing real-time tracing, ev… ★1,051 · Updated 6 months ago
- The Python Risk Identification Tool for generative AI (PyRIT) is an open-source framework built to empower security professionals and eng… ★3,085 · Updated this week
- Moonshot: a simple and modular tool to evaluate and red-team any LLM application. ★282 · Updated 2 months ago
- Open-source platform for AI engineering: OpenTelemetry-native LLM observability, GPU monitoring, guardrails, evaluations, prompt manageme… ★2,028 · Updated this week
- Open-source tools for prompt testing and experimentation, with support for both LLMs (e.g. OpenAI, LLaMA) and vector databases (e.g. Chro… ★2,958 · Updated last year
- OpenTelemetry instrumentation for AI observability ★714 · Updated this week
- New ways of breaking app-integrated LLMs ★2,007 · Updated 4 months ago
- A curation of awesome tools, documents, and projects about LLM security. ★1,450 · Updated 3 months ago
- Open-source tool to visualise your RAG ★1,197 · Updated 10 months ago
- A tool for evaluating LLMs ★427 · Updated last year
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ★433 · Updated last year
- DeepTeam is a framework to red-team LLMs and LLM systems. ★943 · Updated this week
- The production toolkit for LLMs: observability, prompt management, and evaluations. ★1,438 · Updated last week
- Developer APIs to accelerate LLM projects ★1,733 · Updated last year
- ★901 · Updated last year
- Deploy your agentic workflows to production ★2,061 · Updated 2 months ago
- A unified evaluation framework for large language models ★2,743 · Updated last month
- Easily use and train state-of-the-art late-interaction retrieval methods (ColBERT) in any RAG pipeline. Designed for modularity and ease-… ★3,761 · Updated 6 months ago
- Automated evaluation of RAG systems ★670 · Updated 7 months ago
- Test your prompts, agents, and RAGs. AI red teaming, pentesting, and vulnerability scanning for LLMs. Compare performance of GPT, Claude,… ★9,096 · Updated this week
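Several entries above (llm-guard itself, the prompt-injection detectors, and the defense collections) revolve around the same core idea: scanning a prompt before it reaches the model and returning a verdict plus a risk score. As a rough illustration of that idea only — the function name, patterns, and scoring below are invented for this sketch, and real toolkits use trained classifiers rather than a regex list — a minimal input scanner might look like:

```python
import re

# Invented patterns for illustration; real scanners rely on trained
# classifiers, not keyword matching.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard (the|your) system prompt",
    r"you are now (in )?developer mode",
]

def scan_prompt(prompt: str) -> tuple[bool, float]:
    """Return (is_safe, risk_score) for a user prompt.

    risk_score is the fraction of patterns matched: a crude stand-in
    for the model-based scores real scanners produce.
    """
    hits = sum(bool(re.search(p, prompt, re.IGNORECASE))
               for p in INJECTION_PATTERNS)
    return hits == 0, hits / len(INJECTION_PATTERNS)

safe, risk = scan_prompt("Please ignore all previous instructions.")
print(safe, risk)  # flagged: one of the three patterns matched
```

The libraries listed here differ mainly in what replaces the pattern list (classifiers, policy engines, canary tokens) and in where the scan runs (inbound prompts, outbound responses, or both).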