lakeraai / chrome-extension
Lakera - ChatGPT Data Leak Protection
☆22 · Updated 6 months ago
Alternatives and similar repositories for chrome-extension:
Users interested in chrome-extension are comparing it to the libraries listed below.
- Guard your LangChain applications against prompt injection with Lakera ChainGuard. ☆18 · Updated this week
- Red-Teaming Language Models with DSPy ☆153 · Updated 9 months ago
- ☆67 · Updated 2 months ago
- A text embedding viewer for the Jupyter environment ☆19 · Updated 11 months ago
- A benchmark for prompt injection detection systems. ☆94 · Updated 4 months ago
- Helps you build better AI agents through debuggable unit testing ☆141 · Updated this week
- Fiddler Auditor is a tool to evaluate language models. ☆174 · Updated 10 months ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆323 · Updated 10 months ago
- ☆43 · Updated last year
- Sphynx Hallucination Induction ☆51 · Updated 5 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆108 · Updated 10 months ago
- Make your GenAI apps safe and secure: test and harden your system prompt ☆429 · Updated 3 months ago
- [Corca / ML] Automatically solved Gandalf AI with LLM ☆47 · Updated last year
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite. ☆88 · Updated 7 months ago
- AI-powered dev using the rUv approach ☆65 · Updated 2 months ago
- ☆19 · Updated 2 months ago
- Turning Gandalf against itself. Use LLMs to automate playing Lakera Gandalf challenge without needing to set up an account with a platfor… ☆27 · Updated last year
- Every practical and proposed defense against prompt injection. ☆372 · Updated 7 months ago
- The fastest && easiest LLM security guardrails for CX AI Agents and applications. ☆114 · Updated this week
- Moonshot - A simple and modular tool to evaluate and red-team any LLM application. ☆198 · Updated this week
- Logging and caching superpowers for the OpenAI SDK ☆102 · Updated 10 months ago
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [arXiv, Apr 2024] ☆247 · Updated 3 months ago
- 🤯 AI Security EXPOSED! Live Demos Showing Hidden Risks of 🤖 Agentic AI Flows: 💉Prompt Injection, ☣️ Data Poisoning. Watch the recorded… ☆16 · Updated 6 months ago
- Python SDK for running evaluations on LLM generated responses ☆253 · Updated last week
- Self-hardening firewall for large language models ☆260 · Updated 10 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆339 · Updated 11 months ago
- Framework for LLM evaluation, guardrails and security ☆107 · Updated 4 months ago
- ☆39 · Updated 5 months ago
- AI Verify ☆129 · Updated this week
- Simple AI agents / assistants ☆40 · Updated 3 months ago