protectai / rebuff
LLM Prompt Injection Detector
☆1,391 · Updated last year
Alternatives and similar repositories for rebuff
Users interested in rebuff are comparing it to the libraries listed below.
- 🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring sa… ☆971 · Updated last year
- The Security Toolkit for LLM Interactions ☆2,358 · Updated 2 weeks ago
- New ways of breaking app-integrated LLMs ☆2,029 · Updated 5 months ago
- Make your GenAI Apps Safe & Secure. Test & harden your system prompt ☆602 · Updated 3 months ago
- OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project) ☆1,015 · Updated this week
- A tool for evaluating LLMs ☆428 · Updated last year
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆445 · Updated last year
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆433 · Updated last year
- Protection against Model Serialization Attacks ☆622 · Updated last month
- Adding guardrails to large language models. ☆6,198 · Updated 2 weeks ago
- Every practical and proposed defense against prompt injection. ☆598 · Updated 10 months ago
- Open-source tools for prompt testing and experimentation, with support for both LLMs (e.g. OpenAI, LLaMA) and vector databases (e.g. Chro… ☆2,985 · Updated last year
- NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems. ☆5,448 · Updated last week
- ☆779 · Updated 6 months ago
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed … ☆333 · Updated last year
- 🍰 PromptLayer - Maintain a log of your prompts and OpenAI API requests. Track, debug, and replay old completions. ☆717 · Updated last week
- Toolkit for fine-tuning, ablating and unit-testing open-source LLMs. ☆865 · Updated last year
- Visualization and debugging tool for LangChain workflows ☆741 · Updated last year
- Moonshot - A simple and modular tool to evaluate and red-team any LLM application. ☆294 · Updated 3 weeks ago
- Retrieval Augmented Generation (RAG) framework and context engine powered by Pinecone ☆1,029 · Updated last year
- ⛓️ Serving LangChain LLM apps and agents automagically with FastApi. LLMops ☆936 · Updated last year
- Dropbox LLM Security research code and results ☆250 · Updated last year
- Open-source tool to visualise your RAG 🔮 ☆1,201 · Updated 11 months ago
- Evaluation tool for LLM QA chains ☆1,093 · Updated 2 years ago
- ☆986 · Updated last month
- The fastest Trust Layer for AI Agents ☆144 · Updated 7 months ago
- An LLM-powered advanced RAG pipeline built from scratch ☆854 · Updated last year
- Get 100% uptime, reliability from OpenAI. Handle Rate Limit, Timeout, API, Keys Errors ☆689 · Updated 2 years ago
- LLM(😽) ☆1,697 · Updated 10 months ago
- ☆1,515 · Updated 2 years ago