lakeraai / chainguard
Guard your LangChain applications against prompt injection with Lakera ChainGuard.
☆18Updated 7 months ago
Related projects ⓘ
Alternatives and complementary repositories for chainguard
- Lakera - ChatGPT Data Leak Protection☆23Updated 4 months ago
- [Corca / ML] Automatically solved Gandalf AI with LLM☆46Updated last year
- A benchmark for prompt injection detection systems.☆87Updated 2 months ago
- Turning Gandalf against itself. Use LLMs to automate playing Lakera Gandalf challenge without needing to set up an account with a platfor…☆25Updated last year
- A text embedding viewer for the Jupyter environment☆18Updated 9 months ago
- Official repo for Customized but Compromised: Assessing Prompt Injection Risks in User-Designed GPTs☆21Updated last year
- Security and compliance proxy for LLM APIs☆45Updated last year
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a…☆31Updated last week
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed …☆233Updated 9 months ago
- Supply chain security for ML☆113Updated this week
- ☆44Updated last year
- Project LLM Verification Standard☆36Updated 7 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs).☆107Updated 8 months ago
- BlindBox is a tool to isolate and deploy applications inside Trusted Execution Environments for privacy-by-design apps☆57Updated last year
- Risks and targets for assessing LLMs & LLM vulnerabilities☆25Updated 5 months ago
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks.☆47Updated 7 months ago
- Every practical and proposed defense against prompt injection.☆347Updated 5 months ago
- Make your GenAI Apps Safe & Secure Test & harden your system prompt☆404Updated last month
- Zero-trust AI APIs for easy and private consumption of open-source LLMs☆36Updated 3 months ago
- Static Analysis meets Large Language Models☆46Updated 6 months ago
- Red-Teaming Language Models with DSPy☆142Updated 7 months ago
- ☆34Updated 3 months ago
- LLM Security Platform.☆3Updated 3 weeks ago
- LLM plugin for models hosted by OpenRouter☆68Updated 6 months ago
- A trace analysis tool for AI agents.☆124Updated last month
- Self-hardening firewall for large language models☆258Updated 8 months ago
- OWASP Machine Learning Security Top 10 Project☆76Updated 2 months ago
- Python client for PromptWatch.io - LLM tracking platform☆28Updated 6 months ago
- The jailbreak-evaluation is an easy-to-use Python package for language model jailbreak evaluation.☆19Updated 2 weeks ago
- SecGPT: An execution isolation architecture for LLM-based systems☆49Updated 3 weeks ago