protectai / rebuff
LLM Prompt Injection Detector
☆1,166 · Updated 5 months ago
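Rebuff is generally described as layering several defenses: simple heuristics, an LLM-based judge, a vector store of known attacks, and canary tokens. As a rough illustration of what the heuristic layer of such a detector might look like (the patterns and function names below are invented for illustration and are not rebuff's actual API or logic):

```python
import re

# Illustrative heuristic only -- NOT rebuff's real detection logic.
# A production detector would combine this with model-based checks.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior|above) (instructions|prompts?)",
    r"disregard (the )?(system|previous) (prompt|instructions)",
    r"you are now\b",
    r"reveal (your|the) (system )?prompt",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrase."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)
```

Pattern matching alone is easy to evade with paraphrasing, which is why tools in this space pair it with semantic similarity search and an LLM judge.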
Alternatives and similar repositories for rebuff:
Users interested in rebuff are comparing it to the libraries listed below.
- The Security Toolkit for LLM Interactions ☆1,373 · Updated 2 weeks ago
- 🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring sa… ☆870 · Updated 2 months ago
- New ways of breaking app-integrated LLMs ☆1,877 · Updated last year
- A tool for evaluating LLMs ☆400 · Updated 8 months ago
- ☆756 · Updated last year
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆326 · Updated 11 months ago
- 🍰 PromptLayer - Maintain a log of your prompts and OpenAI API requests. Track, debug, and replay old completions. ☆543 · Updated this week
- Evaluation tool for LLM QA chains ☆1,067 · Updated last year
- Adding guardrails to large language models. ☆4,410 · Updated this week
- An LLM-powered advanced RAG pipeline built from scratch ☆820 · Updated last year
- OWASP Foundation Web Repository ☆631 · Updated this week
- Open-source tools for prompt testing and experimentation, with support for both LLMs (e.g. OpenAI, LLaMA) and vector databases (e.g. Chro… ☆2,773 · Updated 5 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆341 · Updated 11 months ago
- ⛓️ Serving LangChain LLM apps and agents automagically with FastApi. LLMOps ☆914 · Updated 6 months ago
- Modular Python framework for AI agents and workflows with chain-of-thought reasoning, tools, and memory. ☆2,152 · Updated this week
- ☆832 · Updated last month
- A tiny library for coding with large language models. ☆1,219 · Updated 6 months ago
- LLM(😽) ☆1,647 · Updated 3 weeks ago
- Retrieval Augmented Generation (RAG) framework and context engine powered by Pinecone ☆998 · Updated 2 months ago
- Get 100% uptime and reliability from OpenAI. Handles rate limit, timeout, and API key errors. ☆638 · Updated last year
- Make your GenAI apps safe & secure. Test & harden your system prompt. ☆430 · Updated 3 months ago
- Promptimize is a prompt engineering evaluation and testing toolkit. ☆444 · Updated 3 months ago
- ☆1,438 · Updated last year
- The web framework for building LLM microservices ☆985 · Updated 6 months ago
- Scale LLM Engine public repository ☆789 · Updated this week
- Open-source tool to visualise your RAG 🔮 ☆1,101 · Updated 3 weeks ago
- Protection against Model Serialization Attacks ☆375 · Updated this week
- LangSmith Client SDK Implementations ☆473 · Updated this week
- The production toolkit for LLMs. Observability, prompt management and evaluations. ☆1,147 · Updated this week
- NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems. ☆4,365 · Updated this week