lakeraai / chrome-extension
Lakera - ChatGPT Data Leak Protection
☆22Updated 9 months ago
Alternatives and similar repositories for chrome-extension:
Users that are interested in chrome-extension are comparing it to the libraries listed below
- Guard your LangChain applications against prompt injection with Lakera ChainGuard.☆21Updated last month
- ☆45Updated 2 years ago
- A better way of testing, inspecting, and analyzing AI Agent traces.☆35Updated this week
- A text embedding viewer for the Jupyter environment☆19Updated last year
- A benchmark for prompt injection detection systems.☆100Updated 2 months ago
- Guardrails for secure and robust agent development☆237Updated last week
- Red-Teaming Language Models with DSPy☆183Updated 2 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs).☆109Updated last year
- [Corca / ML] Automatically solved Gandalf AI with LLM☆49Updated last year
- Zero-trust AI APIs for easy and private consumption of open-source LLMs☆40Updated 9 months ago
- 🤯 AI Security EXPOSED! Live Demos Showing Hidden Risks of 🤖 Agentic AI Flows: 💉Prompt Injection, ☣️ Data Poisoning. Watch the recorded…☆19Updated 9 months ago
- A prompt defence is a multi-layer defence that can be used to protect your applications against prompt injection attacks.☆16Updated 6 months ago
- Verdict is a library for scaling judge-time compute.☆199Updated last week
- Turning Gandalf against itself. Use LLMs to automate playing Lakera Gandalf challenge without needing to set up an account with a platfor…☆29Updated last year
- Fiddler Auditor is a tool to evaluate language models.☆179Updated last year
- ☆72Updated 6 months ago
- Top 10 for Agentic AI (AI Agent Security) - Pre-release version☆84Updated last month
- Moonshot - A simple and modular tool to evaluate and red-team any LLM application.☆229Updated this week
- ☆42Updated 8 months ago
- source for llmsec.net☆15Updated 9 months ago
- ☆93Updated last month
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite.☆89Updated last week
- This repository stems from our paper, “Cataloguing LLM Evaluations”, and serves as a living, collaborative catalogue of LLM evaluation fr…☆17Updated last year
- An open-source compliance-centered evaluation framework for Generative AI models☆147Updated 4 months ago
- The Security Toolkit for managing Generative AI(especially LLMs) and Supervised Learning processes(Learning and Inference).☆21Updated 8 months ago
- Framework for LLM evaluation, guardrails and security☆111Updated 7 months ago
- 🤖 A GitHub action that leverages fabric patterns through an agent-based approach☆25Updated 3 months ago
- OWASP Machine Learning Security Top 10 Project☆83Updated 2 months ago
- Security and compliance proxy for LLM APIs☆46Updated last year
- ☆35Updated 2 months ago