protectai / rebuff
LLM Prompt Injection Detector
☆1,130Updated 3 months ago
Related projects ⓘ
Alternatives and complementary repositories for rebuff
- 🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring sa…☆851Updated 2 weeks ago
- The Security Toolkit for LLM Interactions☆1,249Updated last month
- OWASP Foundation Web Respository☆578Updated this week
- Adding guardrails to large language models.☆4,139Updated this week
- A tool for evaluating LLMs☆392Updated 6 months ago
- New ways of breaking app-integrated LLMs☆1,829Updated last year
- Get 100% uptime, reliability from OpenAI. Handle Rate Limit, Timeout, API, Keys Errors☆632Updated last year
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a…☆313Updated 8 months ago
- The production toolkit for LLMs. Observability, prompt management and evaluations.☆1,085Updated this week
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆315Updated 9 months ago
- A language for constraint-guided and efficient LLM programming.☆3,704Updated 5 months ago
- NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.☆4,190Updated this week
- Evaluation tool for LLM QA chains☆1,063Updated last year
- ☆744Updated 10 months ago
- 🍰 PromptLayer - Maintain a log of your prompts and OpenAI API requests. Track, debug, and replay old completions.☆522Updated this week
- Inspect: A framework for large language model evaluations☆624Updated this week
- Dropbox LLM Security research code and results☆217Updated 6 months ago
- Exact structure out of any language model completion.☆502Updated last year
- LLM(😽)☆1,628Updated 2 months ago
- An open-source visual programming environment for battle-testing prompts to LLMs.☆2,346Updated 3 weeks ago
- Scale LLM Engine public repository☆783Updated this week
- Every practical and proposed defense against prompt injection.☆347Updated 5 months ago
- Retrieval Augmented Generation (RAG) framework and context engine powered by Pinecone☆974Updated last week
- Make your GenAI Apps Safe & Secure Test & harden your system prompt☆402Updated last month
- The web framework for building LLM microservices☆976Updated 4 months ago
- automatically tests prompt injection attacks on ChatGPT instances☆648Updated 11 months ago
- Modular Python framework for AI agents and workflows with chain-of-thought reasoning, tools, and memory.☆2,018Updated this week
- Guide for fine-tuning Llama/Mistral/CodeLlama models and more☆534Updated 2 months ago
- Build robust LLM applications with true composability 🔗☆416Updated 10 months ago