lakeraai / lcguard
Guard your LangChain applications against prompt injection with Lakera LCGuard.
☆2 · Updated last month
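LCGuard screens inputs to a LangChain app for prompt injection before they reach the model. The exact lcguard API is not shown on this page, so the sketch below only illustrates the general guard-in-front-of-chain pattern using LangChain's real `RunnableLambda`; the marker list and keyword heuristic are hypothetical stand-ins for a trained detector.

```python
# Minimal sketch of a guard placed in front of a LangChain chain.
# NOT lcguard's actual API; the heuristic below is a hypothetical stand-in.
from langchain_core.runnables import RunnableLambda

SUSPICIOUS_MARKERS = ("ignore previous instructions", "reveal your system prompt")

def guard(user_input: str) -> str:
    # Block input that looks like an injection attempt; pass it through otherwise.
    lowered = user_input.lower()
    if any(marker in lowered for marker in SUSPICIOUS_MARKERS):
        raise ValueError("Possible prompt injection detected; input blocked.")
    return user_input

guarded_input = RunnableLambda(guard)
# Compose the guard in front of any chain, e.g.:
#   chain = guarded_input | prompt | llm
print(guarded_input.invoke("What is the capital of France?"))  # passes through
```

Because the guard is itself a Runnable, it composes with the `|` operator like any other chain step, which is the integration style most LangChain guard tools follow.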
Alternatives and similar repositories for lcguard
Users interested in lcguard are comparing it to the libraries listed below.
- Security and compliance proxy for LLM APIs · ☆47 · Updated 2 years ago
- Lakera - ChatGPT Data Leak Protection · ☆22 · Updated last year
- Crews Control is an abstraction layer on top of crewAI, designed to facilitate the creation and execution of AI-driven projects without w… · ☆33 · Updated last month
- GuardRail: Advanced tool for data analysis and AI content generation using OpenAI GPT models. Features sentiment analysis, content classi… · ☆132 · Updated last year
- Test Software for the Characterization of AI Technologies · ☆260 · Updated this week
- ☆71 · Updated 9 months ago
- 😎 Awesome list of resources about using and building AI software development systems · ☆111 · Updated last year
- Agent Name Service (ANS) Protocol, introduced by the OWASP GenAI Security Project, is a foundational framework designed to facilitate sec… · ☆31 · Updated 2 months ago
- Make your GenAI Apps Safe & Secure: test & harden your system prompt · ☆537 · Updated last week
- Red-Teaming Language Models with DSPy · ☆203 · Updated 5 months ago
- Self-hardening firewall for large language models · ☆265 · Updated last year
- The fastest Trust Layer for AI Agents · ☆141 · Updated 2 months ago
- Masked Python SDK wrapper for the OpenAI API. Use public LLM APIs securely. · ☆119 · Updated 2 years ago
- DevOps AI Assistant CLI. Ask questions about your AWS services, CloudWatch metrics, and billing. · ☆70 · Updated last year
- A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs). · ☆142 · Updated last year
- A plugin-based gateway that orchestrates other MCPs and allows developers to build enterprise-grade agents on top of it. · ☆250 · Updated 3 weeks ago
- Authenticated Knowledge & Trust Architecture for AI Agents · ☆25 · Updated last week
- ⚡ Simplify and optimize the use of LLMs · ☆44 · Updated last year
- A multi-layer defence for protecting your applications against prompt injection attacks (see the sketch after this list) · ☆17 · Updated 9 months ago
- A framework for building large-scale, deterministic, interactive workflows with a fault-tolerant, conversational UX · ☆21 · Updated this week
- Supply chain security for ML · ☆181 · Updated this week
- Sample GitHub Actions reusable workflows and Terraform reusable modules · ☆56 · Updated last year
- Test Generation for Prompts · ☆116 · Updated this week
- LLM Security Platform · ☆21 · Updated 9 months ago
- A research Python package for detecting, categorizing, and assessing the severity of personally identifiable information (PII) · ☆89 · Updated 2 years ago
- Constrain LLM output · ☆113 · Updated last year
- anonLLM: Anonymize Personally Identifiable Information (PII) for Large Language Model APIs · ☆65 · Updated last year
- Tools for the Griptape Framework · ☆28 · Updated last year
- Generative AI Governance for Enterprises · ☆16 · Updated 7 months ago
- The Official Python Client for Together's API · ☆71 · Updated this week
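The multi-layer prompt defence listed above stacks several independent checks rather than relying on a single filter. A minimal sketch of that layering idea follows; every function name and threshold here is illustrative, not any listed project's API.

```python
# Hypothetical sketch of a multi-layer prompt defence: several independent,
# cheap checks run over the input, and it is rejected if any layer flags it.
from typing import Callable, List

Check = Callable[[str], bool]  # returns True when the input looks malicious

def keyword_layer(text: str) -> bool:
    # Layer 1: known jailbreak phrasing.
    return "ignore previous instructions" in text.lower()

def delimiter_layer(text: str) -> bool:
    # Layer 2: smuggled chat-template delimiters.
    return any(tok in text for tok in ("<|system|>", "### system", "[INST]"))

def length_layer(text: str) -> bool:
    # Layer 3: unusually long inputs can hide buried instructions.
    return len(text) > 4_000

LAYERS: List[Check] = [keyword_layer, delimiter_layer, length_layer]

def is_blocked(text: str) -> bool:
    return any(layer(text) for layer in LAYERS)

print(is_blocked("Summarize this article, please."))  # False
```

The point of layering is that each check is weak alone but cheap to run, and an attacker must evade all of them at once; production-grade tools typically replace these heuristics with trained classifiers.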