lakeraai / chrome-extension
Lakera - ChatGPT Data Leak Protection
☆23 · Updated 2 months ago
Related projects:
- ☆43 · Updated last year
- Red-Teaming Language Models with DSPy ☆116 · Updated 5 months ago
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite. ☆77 · Updated 3 months ago
- Guard your LangChain applications against prompt injection with Lakera ChainGuard. ☆16 · Updated 5 months ago
- A trace analysis tool for AI agents. ☆97 · Updated this week
- Fiddler Auditor is a tool to evaluate language models. ☆163 · Updated 6 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆103 · Updated 6 months ago
- A benchmark for prompt injection detection systems. ☆80 · Updated last week
- The first platform designed to empower organizations by automating and enhancing their employment processes through advanced autonomous a… ☆33 · Updated 2 months ago
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [arXiv, Apr 2024] ☆181 · Updated last month
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆293 · Updated 6 months ago
- Turning Gandalf against itself. Use LLMs to automate playing the Lakera Gandalf challenge without needing to set up an account with a platfor… ☆24 · Updated 11 months ago
- Agile Agents (A2) is an open-source framework for the creation and deployment of serverless intelligent agents using public and private c… ☆10 · Updated 2 months ago
- AI-powered dev using the rUv approach ☆58 · Updated 5 months ago
- A text embedding viewer for the Jupyter environment ☆18 · Updated 7 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆299 · Updated 7 months ago
- Reactive DDD with DSPy ☆20 · Updated 6 months ago
- Framework for LLM evaluation, guardrails, and security ☆94 · Updated last week
- LangChain chat model abstractions for dynamic failover, load balancing, chaos engineering, and more! ☆79 · Updated 7 months ago
- 🤖 Headless IDE for AI agents ☆110 · Updated this week
- Sphynx Hallucination Induction ☆44 · Updated last month
- Masked Python SDK wrapper for the OpenAI API. Use public LLM APIs securely. ☆110 · Updated last year
- Test Software for the Characterization of AI Technologies ☆212 · Updated last week
- Dropbox LLM Security research code and results ☆210 · Updated 3 months ago
- Protection against Model Serialization Attacks ☆273 · Updated this week
- ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications ☆189 · Updated 6 months ago
- ☆86 · Updated last week
- [Corca / ML] Automatically solved Gandalf AI with LLM ☆46 · Updated last year
- ☆29 · Updated 5 months ago
- The Security Toolkit for managing Generative AI (especially LLMs) and Supervised Learning processes (Learning and Inference). ☆19 · Updated last month