lakeraai / chrome-extensionLinks
Lakera - ChatGPT Data Leak Protection
☆23Updated last year
Alternatives and similar repositories for chrome-extension
Users that are interested in chrome-extension are comparing it to the libraries listed below
Sorting:
- Guardrails for secure and robust agent development☆342Updated last month
- Red-Teaming Language Models with DSPy☆211Updated 6 months ago
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite.☆96Updated 4 months ago
- Make your GenAI Apps Safe & Secure Test & harden your system prompt☆553Updated last month
- A better way of testing, inspecting, and analyzing AI Agent traces.☆40Updated last month
- Every practical and proposed defense against prompt injection.☆537Updated 6 months ago
- A benchmark for prompt injection detection systems.☆128Updated last week
- Open LLM Telemetry package☆29Updated 9 months ago
- LLM Security Platform.☆22Updated 10 months ago
- A prompt defence is a multi-layer defence that can be used to protect your applications against prompt injection attacks.☆18Updated 10 months ago
- LLM proxy to observe and debug what your AI agents are doing.☆46Updated last month
- Self-hardening firewall for large language models☆265Updated last year
- Python SDK for running evaluations on LLM generated responses☆291Updated 3 months ago
- The fastest Trust Layer for AI Agents☆142Updated 3 months ago
- Masked Python SDK wrapper for OpenAI API. Use public LLM APIs securely.☆119Updated 2 years ago
- ☆72Updated 10 months ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a…☆412Updated last year
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆410Updated last year
- A powerful AI observability framework that provides comprehensive insights into agent interactions across platforms, enabling developers …☆93Updated 3 months ago
- Fiddler Auditor is a tool to evaluate language models.☆187Updated last year
- Moonshot - A simple and modular tool to evaluate and red-team any LLM application.☆267Updated this week
- anonLLM: Anonymize Personally Identifiable Information (PII) for Large Language Model APIs☆66Updated last year
- AI aware proxy☆19Updated 11 months ago
- Security Threats related with MCP (Model Context Protocol), MCP Servers and more☆30Updated 4 months ago
- Let Claude control a web browser on your machine.☆36Updated 3 months ago
- AgentFence is an open-source platform for automatically testing AI agent security. It identifies vulnerabilities such as prompt injection…☆20Updated 6 months ago
- ☆55Updated 4 months ago
- Prompt engineering, automated.☆340Updated 4 months ago
- Framework for LLM evaluation, guardrails and security☆113Updated 11 months ago
- An external version of a pull request for langchain.☆27Updated 6 months ago