lakeraai / chainguard
Guard your LangChain applications against prompt injection with Lakera ChainGuard.
☆21 · Updated last month
Alternatives and similar repositories for chainguard:
Users interested in chainguard are comparing it to the libraries listed below.
- Official repo for "Customized but Compromised: Assessing Prompt Injection Risks in User-Designed GPTs" ☆24 · Updated last year
- Lakera - ChatGPT Data Leak Protection ☆22 · Updated 9 months ago
- ☆72 · Updated 6 months ago
- A better way of testing, inspecting, and analyzing AI agent traces. ☆35 · Updated this week
- A benchmark for prompt injection detection systems. ☆100 · Updated 2 months ago
- Security and compliance proxy for LLM APIs ☆46 · Updated last year
- Secure Jupyter Notebooks and Experimentation Environment ☆74 · Updated 2 months ago
- Make your GenAI apps safe and secure: test and harden your system prompt ☆469 · Updated 6 months ago
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 The first open-source fuzzing framework specifically designed … ☆274 · Updated last year
- source for llmsec.net ☆15 · Updated 9 months ago
- A multi-layer defence for protecting your applications against prompt injection attacks. ☆16 · Updated 6 months ago
- The fastest Trust Layer for AI Agents ☆130 · Updated last month
- DevOps AI Assistant CLI. Ask questions about your AWS services, CloudWatch metrics, and billing. ☆69 · Updated 8 months ago
- Generative AI Governance for Enterprises ☆16 · Updated 3 months ago
- Red-Teaming Language Models with DSPy ☆183 · Updated 2 months ago
- Project LLM Verification Standard ☆43 · Updated last year
- A plugin-based gateway that orchestrates other MCPs and lets developers build enterprise-grade agents on top of it. ☆106 · Updated this week
- 😎 Awesome list of resources about using and building AI software development systems ☆110 · Updated 11 months ago
- Rapidly identify and mitigate container security vulnerabilities with generative AI. ☆111 · Updated this week
- A text embedding viewer for the Jupyter environment ☆19 · Updated last year
- 🔥🔒 Awesome MCP (Model Context Protocol) Security 🖥️ ☆95 · Updated this week
- An external version of a pull request for langchain. ☆26 · Updated 2 months ago
- 🤖 A GitHub Action that leverages fabric patterns through an agent-based approach ☆25 · Updated 3 months ago
- [Corca / ML] Automatically solving Gandalf AI with an LLM ☆49 · Updated last year
- Rank LLMs, RAG systems, and prompts using automated head-to-head evaluation ☆103 · Updated 4 months ago
- AI agent with RAG + ReAct on the Indian Constitution & BNS ☆62 · Updated 6 months ago
- A research Python package for detecting, categorizing, and assessing the severity of personally identifiable information (PII) ☆85 · Updated last year
- LLM Security Platform. ☆14 · Updated 5 months ago
- Test Generation for Prompts ☆70 · Updated this week
- The Granite Guardian models are designed to detect risks in prompts and responses. ☆78 · Updated last month