lakeraai / chrome-extension
Lakera - ChatGPT Data Leak Protection
☆26 · Updated last year
Alternatives and similar repositories for chrome-extension
Users interested in chrome-extension are comparing it to the libraries listed below.
- Guardrails for secure and robust agent development ☆366 · Updated 4 months ago
- Red-Teaming Language Models with DSPy ☆238 · Updated 9 months ago
- Make your GenAI apps safe & secure. Test & harden your system prompt ☆591 · Updated 2 months ago
- The fastest Trust Layer for AI Agents ☆145 · Updated 6 months ago
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite. ☆99 · Updated 7 months ago
- A better way of testing, inspecting, and analyzing AI Agent traces. ☆40 · Updated last month
- ☆74 · Updated last year
- Open LLM Telemetry package ☆29 · Updated last year
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆112 · Updated last year
- LLM Security Platform. ☆25 · Updated last year
- Python SDK for running evaluations on LLM-generated responses ☆292 · Updated 5 months ago
- Moonshot - A simple and modular tool to evaluate and red-team any LLM application. ☆285 · Updated 2 months ago
- Write YAML, execute Agent Workflows ☆291 · Updated 3 weeks ago
- Every practical and proposed defense against prompt injection. ☆586 · Updated 9 months ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆437 · Updated last year
- Fiddler Auditor is a tool to evaluate language models. ☆188 · Updated last year
- Making LLMs generate entire projects. Go from idea to runnable project in one step. ☆34 · Updated 2 years ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆430 · Updated last year
- ☆26 · Updated last year
- LLM proxy to observe and debug what your AI agents are doing. ☆54 · Updated 3 weeks ago
- Open-source AI Agent evaluation framework for web tasks 🐒🍌 ☆325 · Updated 10 months ago
- AgentFence is an open-source platform for automatically testing AI agent security. It identifies vulnerabilities such as prompt injection… ☆42 · Updated 8 months ago
- Self-hardening firewall for large language models ☆266 · Updated last year
- Dropbox LLM Security research code and results ☆245 · Updated last year
- ☆39 · Updated 8 months ago
- Framework for LLM evaluation, guardrails and security ☆113 · Updated last year
- Masked Python SDK wrapper for OpenAI API. Use public LLM APIs securely. ☆119 · Updated 2 years ago
- Source for llmsec.net ☆16 · Updated last year
- A prompt defence is a multi-layer defence that can be used to protect your applications against prompt injection attacks. ☆19 · Updated last year
- A framework for generative software. ☆114 · Updated 4 months ago