invariantlabs-ai / invariant
Guardrails for secure and robust agent development
☆313 · Updated last month
Alternatives and similar repositories for invariant
Users interested in invariant are comparing it to the libraries listed below.
- Red-Teaming Language Models with DSPy ☆202 · Updated 5 months ago
- A better way of testing, inspecting, and analyzing AI Agent traces. ☆39 · Updated this week
- The fastest Trust Layer for AI Agents ☆138 · Updated last month
- ☆119 · Updated last month
- Inference-time scaling for LLMs-as-a-judge. ☆250 · Updated last week
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆112 · Updated last year
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆393 · Updated last year
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆202 · Updated last week
- Make your GenAI apps safe & secure. Test & harden your system prompt. ☆518 · Updated 3 weeks ago
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a… ☆59 · Updated 4 months ago
- Sandboxed code execution for AI agents, locally or on the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆242 · Updated last week
- Python SDK for running evaluations on LLM-generated responses ☆289 · Updated last month
- The LLM Red Teaming Framework ☆512 · Updated this week
- LLM proxy to observe and debug what your AI agents are doing. ☆38 · Updated this week
- Enhancing AI Software Engineering with Repository-level Code Graph ☆191 · Updated 3 months ago
- ☆490 · Updated 2 weeks ago
- 🔥🔒 Awesome MCP (Model Context Protocol) Security 🖥️ ☆403 · Updated this week
- Collection of evals for Inspect AI ☆173 · Updated this week
- A security scanner for your LLM agentic workflows ☆624 · Updated 3 weeks ago
- ☆261 · Updated 3 weeks ago
- ☆45 · Updated 11 months ago
- A plugin-based gateway that orchestrates other MCPs and allows developers to build enterprise-grade agents upon it. ☆230 · Updated 2 months ago
- An open-source compliance-centered evaluation framework for Generative AI models ☆158 · Updated this week
- A Text-Based Environment for Interactive Debugging ☆234 · Updated this week
- ☆71 · Updated 8 months ago
- TapeAgents is a framework that facilitates all stages of the LLM Agent development lifecycle. ☆285 · Updated this week
- Static Analysis meets Large Language Models ☆50 · Updated last year
- ⚖️ Awesome LLM Judges ⚖️ ☆107 · Updated 2 months ago
- ☆96 · Updated 2 weeks ago
- CodeSage: Code Representation Learning At Scale (ICLR 2024) ☆109 · Updated 8 months ago