lakeraai / chrome-extension
Lakera - ChatGPT Data Leak Protection
☆22 · Updated last year
Alternatives and similar repositories for chrome-extension
Users interested in chrome-extension are comparing it to the libraries listed below.
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆112 · Updated last year
- Red-Teaming Language Models with DSPy ☆202 · Updated 5 months ago
- The fastest Trust Layer for AI Agents ☆138 · Updated last month
- Guardrails for secure and robust agent development ☆316 · Updated last month
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite. ☆92 · Updated 3 months ago
- A better way of testing, inspecting, and analyzing AI Agent traces. ☆39 · Updated last week
- Make your GenAI apps safe & secure: test & harden your system prompt ☆519 · Updated last month
- A benchmark for prompt injection detection systems. ☆122 · Updated 2 months ago
- Fiddler Auditor is a tool to evaluate language models. ☆184 · Updated last year
- Agent Name Service (ANS) Protocol, introduced by the OWASP GenAI Security Project, is a foundational framework designed to facilitate sec… ☆28 · Updated 2 months ago
- Framework for LLM evaluation, guardrails and security ☆112 · Updated 10 months ago
- Python SDK for running evaluations on LLM generated responses ☆289 · Updated last month
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆395 · Updated last year
- Zero-trust AI APIs for easy and private consumption of open-source LLMs ☆40 · Updated 11 months ago
- Dropbox LLM Security research code and results ☆228 · Updated last year
- Moonshot - A simple and modular tool to evaluate and red-team any LLM application. ☆258 · Updated this week
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆396 · Updated last year
- Source for llmsec.net ☆16 · Updated 11 months ago
- Every practical and proposed defense against prompt injection. ☆495 · Updated 4 months ago
- [Corca / ML] Automatically solved Gandalf AI with LLM ☆50 · Updated 2 years ago
- ☆71 · Updated 8 months ago
- Self-hardening firewall for large language models ☆265 · Updated last year
- Open LLM Telemetry package ☆28 · Updated 7 months ago
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a… ☆60 · Updated 4 months ago
- ☆20 · Updated 3 months ago
- ☆24 · Updated 8 months ago
- ☆52 · Updated 2 months ago
- Masked Python SDK wrapper for OpenAI API. Use public LLM APIs securely. ☆119 · Updated 2 years ago
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆209 · Updated this week
- 🤖 Headless IDE for AI agents ☆192 · Updated 3 months ago