ZenGuard-AI / fast-llm-security-guardrails
The fastest && easiest LLM security guardrails for AI Agents and applications.
☆102Updated last week
Related projects ⓘ
Alternatives and complementary repositories for fast-llm-security-guardrails
- Red-Teaming Language Models with DSPy☆142Updated 7 months ago
- Framework for LLM evaluation, guardrails and security☆96Updated 2 months ago
- A trace analysis tool for AI agents.☆119Updated last month
- ☆33Updated 3 months ago
- Fiddler Auditor is a tool to evaluate language models.☆171Updated 8 months ago
- The Rule-based Retrieval package is a Python package that enables you to create and manage Retrieval Augmented Generation (RAG) applicati…☆221Updated last month
- A repository of Language Model Vulnerabilities and Exposures (LVEs).☆107Updated 8 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs☆309Updated 9 months ago
- LLM security and privacy☆40Updated 3 weeks ago
- Initiative to evaluate and rank the most popular LLMs across common task types based on their propensity to hallucinate.☆100Updated 2 months ago
- Sphynx Hallucination Induction☆47Updated 3 months ago
- Risks and targets for assessing LLMs & LLM vulnerabilities☆25Updated 5 months ago
- A benchmark for prompt injection detection systems.☆86Updated 2 months ago
- AutoDefense: Multi-Agent LLM Defense against Jailbreak Attacks☆27Updated 5 months ago
- Lightweight LLM Interaction Framework☆207Updated last month
- AI agent with RAG+ReAct on Indian Constitution & BNS☆43Updated 3 weeks ago
- 🦜💯 Flex those feathers!☆234Updated 3 weeks ago
- LLM | Security | Operations in one github repo with good links and pictures.☆17Updated 3 weeks ago
- Automated knowledge graph creation SDK☆110Updated 4 months ago
- Security and compliance proxy for LLM APIs☆44Updated last year
- Python SDK for running evaluations on LLM generated responses☆216Updated last week
- RAGElo is a set of tools that helps you selecting the best RAG-based LLM agents by using an Elo ranker☆105Updated last week
- Make your GenAI Apps Safe & Secure Test & harden your system prompt☆399Updated 3 weeks ago
- HoneyAgents is a PoC demo of an AI-driven system that combines honeypots with autonomous AI agents to detect and mitigate cyber threats. …☆38Updated 10 months ago
- ☆31Updated 2 weeks ago
- Tutorial for building LLM router☆159Updated 3 months ago
- CodeSage: Code Representation Learning At Scale (ICLR 2024)☆82Updated 2 weeks ago
- This repository provides implementation to formalize and benchmark Prompt Injection attacks and defenses☆142Updated 2 months ago
- Payloads for Attacking Large Language Models☆63Updated 4 months ago
- Routing on Random Forest (RoRF)☆83Updated last month