lakeraai / chainguard
Guard your LangChain applications against prompt injection with Lakera ChainGuard.
☆18 · Updated last month
Alternatives and similar repositories for chainguard:
Users interested in chainguard are comparing it to the libraries listed below.
- Lakera - ChatGPT Data Leak Protection ☆22 · Updated 7 months ago
- [Corca / ML] Automatically solved Gandalf AI with LLM ☆48 · Updated last year
- Official repo for Customized but Compromised: Assessing Prompt Injection Risks in User-Designed GPTs ☆23 · Updated last year
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a… ☆43 · Updated 2 months ago
- Turning Gandalf against itself. Use LLMs to automate playing Lakera Gandalf challenge without needing to set up an account with a platfor… ☆29 · Updated last year
- Private ChatGPT/Perplexity. Securely unlocks knowledge from confidential business information. ☆62 · Updated 4 months ago
- A framework-less approach to robust agent development. ☆154 · Updated this week
- Every practical and proposed defense against prompt injection. ☆389 · Updated 8 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆108 · Updated 11 months ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆333 · Updated 11 months ago
- Supply chain security for ML ☆130 · Updated 2 weeks ago
- Dropbox LLM Security research code and results ☆220 · Updated 9 months ago
- Security and compliance proxy for LLM APIs ☆46 · Updated last year
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks. ☆58 · Updated 10 months ago
- ☆70 · Updated 4 months ago
- Project LLM Verification Standard ☆38 · Updated 10 months ago
- Risks and targets for assessing LLMs & LLM vulnerabilities ☆30 · Updated 8 months ago
- A better way of testing, inspecting, and analyzing AI Agent traces. ☆28 · Updated this week
- Agile Agents (A2) is an open-source framework for the creation and deployment of serverless intelligent agents using public and private c… ☆16 · Updated 7 months ago
- A text embedding viewer for the Jupyter environment ☆19 · Updated last year
- Whispers in the Machine: Confidentiality in LLM-integrated Systems ☆33 · Updated 2 weeks ago
- ☆43 · Updated 2 years ago
- A tool that helps you build prompts with lots of code blocks in them. ☆47 · Updated 9 months ago
- Chat with GPT-4 turbo on any AWS page. Share your current screen with AI. ☆23 · Updated last year
- Self-hardening firewall for large language models ☆263 · Updated 11 months ago
- Access the Cohere Command R family of models ☆34 · Updated 10 months ago
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆264 · Updated 3 weeks ago
- LLM model runway server ☆12 · Updated last year
- ☆40 · Updated 6 months ago
- A multi-layer defence for protecting your applications against prompt injection attacks. ☆14 · Updated 4 months ago