protectai / llm-guardLinks
The Security Toolkit for LLM Interactions
☆1,757Updated this week
Alternatives and similar repositories for llm-guard
Users that are interested in llm-guard are comparing it to the libraries listed below
Sorting:
- LLM Prompt Injection Detector☆1,296Updated 10 months ago
- 🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring sa…☆918Updated 6 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆394Updated last year
- OWASP Foundation Web Respository☆765Updated this week
- Adding guardrails to large language models.☆5,104Updated 2 weeks ago
- Protection against Model Serialization Attacks☆500Updated last week
- Evaluation and Tracking for LLM Experiments☆2,570Updated this week
- Make your GenAI Apps Safe & Secure Test & harden your system prompt☆504Updated this week
- NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.☆4,806Updated this week
- Superfast AI decision making and intelligent processing of multi-modal data.☆2,634Updated last month
- Every practical and proposed defense against prompt injection.☆481Updated 3 months ago
- The production toolkit for LLMs. Observability, prompt management and evaluations.☆1,342Updated this week
- Chainlit's cookbook repo☆1,159Updated last month
- An awesome & curated list of best LLMOps tools for developers☆4,981Updated last month
- the LLM vulnerability scanner☆4,596Updated this week
- A curation of awesome tools, documents and projects about LLM Security.☆1,248Updated 2 months ago
- The open-source LLMOps platform: prompt playground, prompt management, LLM evaluation, and LLM observability all in one place.☆2,831Updated last week
- Evaluate your LLM's response with Prometheus and GPT4 💯☆954Updated last month
- The LLM Evaluation Framework☆8,370Updated this week
- A tool for evaluating LLMs☆419Updated last year
- ☆917Updated 6 months ago
- Automatically evaluate your LLMs in Google Colab☆641Updated last year
- A language for constraint-guided and efficient LLM programming.☆3,961Updated 3 weeks ago
- New ways of breaking app-integrated LLMs☆1,937Updated 2 years ago
- A unified evaluation framework for large language models☆2,636Updated 3 weeks ago
- LangServe 🦜️🏓☆2,106Updated last week
- Efficient Retrieval Augmentation and Generation Framework☆1,575Updated 5 months ago
- Open-source tool to visualise your RAG 🔮☆1,136Updated 5 months ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a…☆379Updated last year
- Sharing both practical insights and theoretical knowledge about LLM evaluation that we gathered while managing the Open LLM Leaderboard a…☆1,423Updated 5 months ago