invariantlabs-ai / invariant
Guardrails for secure and robust agent development
☆355 · Updated 3 months ago
Alternatives and similar repositories for invariant
Users interested in invariant are comparing it to the libraries listed below
- Red-Teaming Language Models with DSPy ☆235 · Updated 8 months ago
- ☆165 · Updated 4 months ago
- A better way of testing, inspecting, and analyzing AI Agent traces. ☆40 · Updated last week
- LLM proxy to observe and debug what your AI agents are doing. ☆51 · Updated 3 months ago
- The fastest Trust Layer for AI Agents ☆144 · Updated 5 months ago
- An alignment auditing agent capable of quickly exploring alignment hypotheses ☆609 · Updated last week
- Code for the paper "Defeating Prompt Injections by Design" ☆138 · Updated 4 months ago
- Inference-time scaling for LLMs-as-a-judge. ☆304 · Updated 3 weeks ago
- A security scanner for your LLM agentic workflows ☆772 · Updated this week
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆112 · Updated last year
- DeepTeam is a framework to red team LLMs and LLM systems. ☆799 · Updated last week
- Collection of evals for Inspect AI ☆264 · Updated this week
- Enhancing AI Software Engineering with Repository-level Code Graph ☆217 · Updated 6 months ago
- Sandboxed code execution for AI agents, locally or on the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆349 · Updated this week
- ☆172 · Updated this week
- A plugin-based gateway that orchestrates other MCPs and lets developers build enterprise-grade agents on top of it. ☆299 · Updated 3 months ago
- A Text-Based Environment for Interactive Debugging ☆272 · Updated last week
- Python SDK for running evaluations on LLM-generated responses ☆292 · Updated 4 months ago
- 🔥🔒 Awesome MCP (Model Context Protocol) Security 🖥️ ☆574 · Updated 2 weeks ago
- Constrain, log and scan your MCP connections for security vulnerabilities. ☆1,166 · Updated this week
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. ☆116 · Updated this week
- ⚖️ Awesome LLM Judges ⚖️ ☆132 · Updated 6 months ago
- TapeAgents is a framework that facilitates all stages of the LLM Agent development lifecycle ☆298 · Updated last week
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite. ☆97 · Updated 6 months ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆427 · Updated last year
- Sphynx Hallucination Induction ☆53 · Updated 8 months ago
- Let Claude control a web browser on your machine. ☆39 · Updated 4 months ago
- CodeSage: Code Representation Learning At Scale (ICLR 2024) ☆113 · Updated last year
- [NeurIPS 2025 D&B Spotlight] Scaling Data for SWE-agents ☆432 · Updated last week
- DevQualityEval: An evaluation benchmark 📈 and framework to compare and evolve the quality of code generation of LLMs. ☆182 · Updated 5 months ago