invariantlabs-ai / invariant
Guardrails for secure and robust agent development
☆364 · Updated 3 months ago
Alternatives and similar repositories for invariant
Users interested in invariant are comparing it to the libraries listed below.
- Red-Teaming Language Models with DSPy ☆235 · Updated 9 months ago
- A better way of testing, inspecting, and analyzing AI Agent traces. ☆40 · Updated 3 weeks ago
- ☆168 · Updated 5 months ago
- The fastest Trust Layer for AI Agents ☆144 · Updated 5 months ago
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆348 · Updated 2 weeks ago
- An alignment auditing agent capable of quickly exploring alignment hypotheses ☆652 · Updated this week
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆112 · Updated last year
- A plugin-based gateway that orchestrates other MCPs and allows developers to build enterprise-grade agents on top of it. ☆311 · Updated 4 months ago
- Make your GenAI apps safe & secure: test & harden your system prompt. ☆584 · Updated last month
- Enhancing AI Software Engineering with Repository-level Code Graph ☆225 · Updated 7 months ago
- Inference-time scaling for LLMs-as-a-judge. ☆308 · Updated last week
- MCPSafetyScanner - Automated MCP safety auditing and remediation using Agents. More info: https://www.arxiv.org/abs/2504.03767 ☆152 · Updated 7 months ago
- Collection of evals for Inspect AI ☆284 · Updated this week
- LLM proxy to observe and debug what your AI agents are doing. ☆53 · Updated last week
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a… ☆74 · Updated 2 months ago
- Code for the paper "Defeating Prompt Injections by Design" ☆146 · Updated 4 months ago
- A security scanner for your LLM agentic workflows ☆799 · Updated 3 weeks ago
- ☆49 · Updated last year
- 🔥🔒 Awesome MCP (Model Context Protocol) Security 🖥️ ☆589 · Updated 2 weeks ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆433 · Updated last year
- DeepTeam is a framework to red team LLMs and LLM systems. ☆834 · Updated last week
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite. ☆98 · Updated 7 months ago
- A code-graph demo using GraphRAG-SDK and FalkorDB ☆229 · Updated 3 weeks ago
- Sandboxed code execution for AI agents, locally or on the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆358 · Updated last week
- CodeSage: Code Representation Learning At Scale (ICLR 2024) ☆114 · Updated last year
- Python SDK for running evaluations on LLM generated responses ☆293 · Updated 5 months ago
- Code snippets to reproduce MCP tool poisoning attacks. ☆184 · Updated 7 months ago
- Constrain, log and scan your MCP connections for security vulnerabilities. ☆1,268 · Updated this week
- ☆611 · Updated 2 months ago
- An open-source compliance-centered evaluation framework for Generative AI models ☆170 · Updated this week