lakeraai / chainguard
Guard your LangChain applications against prompt injection with Lakera ChainGuard.
☆16Updated 5 months ago
Related projects: ⓘ
- Lakera - ChatGPT Data Leak Protection☆23Updated 2 months ago
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a…☆27Updated last month
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a…☆293Updated 6 months ago
- Red-Teaming Language Models with DSPy☆116Updated 5 months ago
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [arXiv, Apr 2024]☆181Updated last month
- A repository of Language Model Vulnerabilities and Exposures (LVEs).☆103Updated 6 months ago
- [Corca / ML] Automatically solved Gandalf AI with LLM☆46Updated last year
- LLM plugin for models hosted by OpenRouter☆69Updated 4 months ago
- Official repo for Customized but Compromised: Assessing Prompt Injection Risks in User-Designed GPTs☆16Updated 9 months ago
- A benchmark for prompt injection detection systems.☆80Updated last week
- Streamlit app for recommending eval functions using prompt diffs☆24Updated 8 months ago
- A trace analysis tool for AI agents.☆97Updated this week
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed …☆218Updated 7 months ago
- A text embedding viewer for the Jupyter environment☆18Updated 7 months ago
- Turning Gandalf against itself. Use LLMs to automate playing Lakera Gandalf challenge without needing to set up an account with a platfor…☆24Updated 11 months ago
- Open Source LLM proxy that transparently captures and logs all interactions with LLM API☆45Updated last week
- 📚 A curated list of papers & technical articles on AI Quality & Safety☆155Updated 11 months ago
- Python SDK for experimenting, testing, evaluating & monitoring LLM-powered applications - Parea AI (YC S23)☆72Updated last week
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite.☆77Updated 3 months ago
- Fiddler Auditor is a tool to evaluate language models.☆163Updated 6 months ago
- AI Verify☆111Updated this week
- A data-centric AI package for ML/AI. Get the best high-quality data for the best results. Discord: https://discord.gg/t6ADqBKrdZ☆63Updated 10 months ago
- A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs).☆116Updated 8 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆299Updated 7 months ago
- Moonshot - A simple and modular tool to evaluate and red-team any LLM application.☆144Updated this week
- Framework for LLM evaluation, guardrails and security☆94Updated last week
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks.☆43Updated 5 months ago
- Zep: Long-Term Memory for AI Assistants (Python Client)☆61Updated last week
- Record and replay LLM interactions for langchain☆76Updated 2 months ago
- Whispers in the Machine: Confidentiality in LLM-integrated Systems☆28Updated last week