lakeraai / chainguard
Guard your LangChain applications against prompt injection with Lakera ChainGuard.
☆22Updated 2 months ago
Alternatives and similar repositories for chainguard
Users that are interested in chainguard are comparing it to the libraries listed below
Sorting:
- Lakera - ChatGPT Data Leak Protection☆22Updated 10 months ago
- A benchmark for prompt injection detection systems.☆110Updated this week
- Security and compliance proxy for LLM APIs☆47Updated last year
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a…☆55Updated 2 months ago
- Every practical and proposed defense against prompt injection.☆456Updated 2 months ago
- Dropbox LLM Security research code and results☆225Updated 11 months ago
- Risks and targets for assessing LLMs & LLM vulnerabilities☆30Updated 11 months ago
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed …☆278Updated last year
- ☆72Updated 6 months ago
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks.☆66Updated last year
- A better way of testing, inspecting, and analyzing AI Agent traces.☆35Updated this week
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents.☆154Updated last week
- A repository of Language Model Vulnerabilities and Exposures (LVEs).☆109Updated last year
- BlindBox is a tool to isolate and deploy applications inside Trusted Execution Environments for privacy-by-design apps☆56Updated last year
- Turning Gandalf against itself. Use LLMs to automate playing Lakera Gandalf challenge without needing to set up an account with a platfor…☆29Updated last year
- [Corca / ML] Automatically solved Gandalf AI with LLM☆50Updated last year
- Fiddler Auditor is a tool to evaluate language models.☆179Updated last year
- 🤯 AI Security EXPOSED! Live Demos Showing Hidden Risks of 🤖 Agentic AI Flows: 💉Prompt Injection, ☣️ Data Poisoning. Watch the recorded…☆19Updated 10 months ago
- A text embedding viewer for the Jupyter environment☆19Updated last year
- Official repo for Customized but Compromised: Assessing Prompt Injection Risks in User-Designed GPTs☆26Updated last year
- Generative AI Governance for Enterprises☆16Updated 4 months ago
- Project LLM Verification Standard☆43Updated last year
- The fastest Trust Layer for AI Agents☆133Updated 2 months ago
- Zero-trust AI APIs for easy and private consumption of open-source LLMs☆40Updated 9 months ago
- Red-Teaming Language Models with DSPy☆192Updated 3 months ago
- 📚 A curated list of papers & technical articles on AI Quality & Safety☆179Updated last month
- Guardrails for secure and robust agent development☆252Updated this week
- 😎 Awesome list of resources about using and building AI software development systems☆110Updated last year
- Secure Jupyter Notebooks and Experimentation Environment☆74Updated 3 months ago
- LLM model runway server☆13Updated last year