lakeraai / chrome-extensionLinks
Lakera - ChatGPT Data Leak Protection
☆25Updated last year
Alternatives and similar repositories for chrome-extension
Users that are interested in chrome-extension are comparing it to the libraries listed below
Sorting:
- Guardrails for secure and robust agent development☆355Updated 3 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs).☆112Updated last year
- Make your GenAI Apps Safe & Secure Test & harden your system prompt☆579Updated last month
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite.☆97Updated 6 months ago
- A better way of testing, inspecting, and analyzing AI Agent traces.☆40Updated last week
- A prompt defence is a multi-layer defence that can be used to protect your applications against prompt injection attacks.☆18Updated last year
- Red-Teaming Language Models with DSPy☆235Updated 8 months ago
- The fastest Trust Layer for AI Agents☆144Updated 5 months ago
- AgentFence is an open-source platform for automatically testing AI agent security. It identifies vulnerabilities such as prompt injection…☆27Updated 7 months ago
- A collection of prompt injection mitigation techniques.☆24Updated 2 years ago
- Moonshot - A simple and modular tool to evaluate and red-team any LLM application.☆278Updated last month
- AI aware proxy☆19Updated last year
- ☆73Updated last year
- LLM proxy to observe and debug what your AI agents are doing.☆51Updated 3 months ago
- Fiddler Auditor is a tool to evaluate language models.☆188Updated last year
- Open LLM Telemetry package☆29Updated 11 months ago
- ☆26Updated last year
- A benchmark for prompt injection detection systems.☆144Updated 2 months ago
- Code for the paper "Defeating Prompt Injections by Design"☆138Updated 4 months ago
- Framework for LLM evaluation, guardrails and security☆113Updated last year
- Train LLMs on private data. Simply make an API request to our training endpoint specifying you data and model. LangDrive will handle the …☆160Updated last year
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆421Updated last year
- A personal assistant for planning and executing on-chain transactions.☆126Updated last year
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a…☆427Updated last year
- A framework for generative software.☆114Updated 3 months ago
- Zero-trust AI APIs for easy and private consumption of open-source LLMs☆39Updated last year
- The Security Toolkit for managing Generative AI(especially LLMs) and Supervised Learning processes(Learning and Inference).☆21Updated 4 months ago
- Every practical and proposed defense against prompt injection.☆570Updated 8 months ago
- Agent Name Service (ANS) Protocol, introduced by the OWASP GenAI Security Project, is a foundational framework designed to facilitate sec…☆40Updated 5 months ago
- AI Verify☆36Updated 3 weeks ago