lakeraai / chainguardLinks
Guard your LangChain applications against prompt injection with Lakera ChainGuard.
☆22Updated 3 months ago
Alternatives and similar repositories for chainguard
Users that are interested in chainguard are comparing it to the libraries listed below
Sorting:
- A benchmark for prompt injection detection systems.☆115Updated 3 weeks ago
- Security and compliance proxy for LLM APIs☆47Updated last year
- The fastest Trust Layer for AI Agents☆136Updated last week
- ☆72Updated 7 months ago
- Lakera - ChatGPT Data Leak Protection☆22Updated 11 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs).☆110Updated last year
- Risks and targets for assessing LLMs & LLM vulnerabilities☆30Updated last year
- Generative AI Governance for Enterprises☆16Updated 5 months ago
- Official repo for Customized but Compromised: Assessing Prompt Injection Risks in User-Designed GPTs☆27Updated last year
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a…☆56Updated 3 months ago
- AgentFence is an open-source platform for automatically testing AI agent security. It identifies vulnerabilities such as prompt injection…☆12Updated 3 months ago
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents.☆175Updated this week
- Litmus is a comprehensive LLM testing and evaluation tool designed for GenAI Application Development. It provides a robust platform with …☆33Updated last month
- Red-Teaming Language Models with DSPy☆195Updated 3 months ago
- A text embedding viewer for the Jupyter environment☆19Updated last year
- ☆44Updated last month
- A prompt defence is a multi-layer defence that can be used to protect your applications against prompt injection attacks.☆17Updated 7 months ago
- A better way of testing, inspecting, and analyzing AI Agent traces.☆37Updated last week
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks.☆69Updated last year
- Fiddler Auditor is a tool to evaluate language models.☆181Updated last year
- ☆44Updated 10 months ago
- Rank LLMs, RAG systems, and prompts using automated head-to-head evaluation☆104Updated 5 months ago
- LLM model runway server☆13Updated last year
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed …☆280Updated last year
- Turning Gandalf against itself. Use LLMs to automate playing Lakera Gandalf challenge without needing to set up an account with a platfor…☆29Updated last year
- LLM proxy to observe and debug what your AI agents are doing.☆33Updated this week
- Agent Name Service (ANS) Protocol, introduced by the OWASP GenAI Security Project, is a foundational framework designed to facilitate sec…☆23Updated 3 weeks ago
- Every practical and proposed defense against prompt injection.☆472Updated 3 months ago
- Run evals using LLM☆25Updated last year
- Guardrails for secure and robust agent development☆292Updated this week