lakeraai / chrome-extension
Lakera - ChatGPT Data Leak Protection
☆27 · Updated last year
Alternatives and similar repositories for chrome-extension
Users interested in chrome-extension are comparing it to the libraries listed below.
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite. ☆100 · Updated 9 months ago
- A better way of testing, inspecting, and analyzing AI Agent traces. ☆46 · Updated 3 weeks ago
- Red-Teaming Language Models with DSPy ☆250 · Updated 11 months ago
- ☆76 · Updated last year
- Guardrails for secure and robust agent development ☆383 · Updated 3 weeks ago
- A multi-layer defence for protecting your applications against prompt injection attacks. ☆21 · Updated last month
- The fastest Trust Layer for AI Agents ☆149 · Updated 8 months ago
- AgentFence is an open-source platform for automatically testing AI agent security. It identifies vulnerabilities such as prompt injection… ☆50 · Updated 10 months ago
- Make your GenAI apps safe and secure. Test and harden your system prompt. ☆610 · Updated last week
- Self-hardening firewall for large language models ☆267 · Updated last year
- LLM Security Platform ☆26 · Updated last year
- Masked Python SDK wrapper for the OpenAI API. Use public LLM APIs securely. ☆120 · Updated 2 years ago
- ☆20 · Updated 9 months ago
- AI-aware proxy ☆19 · Updated last year
- Deploy agents easily ☆102 · Updated 3 months ago
- A collection of prompt injection mitigation techniques. ☆26 · Updated 2 years ago
- Source for llmsec.net ☆16 · Updated last year
- Building self-refined guardrails via DSPy ☆14 · Updated last year
- ☆46 · Updated 10 months ago
- Fiddler Auditor is a tool to evaluate language models. ☆188 · Updated last year
- This library provides an easy way to create and run Hive Agents. ☆19 · Updated last year
- Agent Name Service (ANS) Protocol, introduced by the OWASP GenAI Security Project, is a foundational framework designed to facilitate sec… ☆51 · Updated 8 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆112 · Updated last year
- Framework for LLM evaluation, guardrails, and security ☆114 · Updated last year
- A benchmark for prompt injection detection systems. ☆156 · Updated last month
- Crews Control is an abstraction layer on top of crewAI, designed to facilitate the creation and execution of AI-driven projects without w… ☆37 · Updated 7 months ago
- Security and compliance proxy for LLM APIs ☆50 · Updated 2 years ago
- Python SDK for running evaluations on LLM-generated responses ☆295 · Updated 7 months ago
- Python SDK for experimenting, testing, evaluating & monitoring LLM-powered applications - Parea AI (YC S23) ☆82 · Updated 11 months ago
- Lyzr SDKs help you build all your favorite GenAI SaaS products as enterprise applications in minutes. ☆192 · Updated last year