invariantlabs-ai / invariant
Tool suite for secure and robust agent development
☆179 Updated last week
Alternatives and similar repositories for invariant:
Users interested in invariant are comparing it to the libraries listed below.
- Verdict is a library for scaling judge-time compute. ☆195 Updated 3 weeks ago
- Red-Teaming Language Models with DSPy ☆181 Updated 2 months ago
- A better way of testing, inspecting, and analyzing AI agent traces. ☆34 Updated this week
- Sphynx Hallucination Induction ☆53 Updated 2 months ago
- ⚖️ Awesome LLM Judges ⚖️ ☆90 Updated last month
- Sandboxed code execution for AI agents, locally or on the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆149 Updated last week
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. ☆89 Updated this week
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆109 Updated last year
- Let Claude control a web browser on your machine. ☆24 Updated last month
- Python SDK for running evaluations on LLM-generated responses ☆276 Updated last week
- Commit0: Library Generation from Scratch ☆143 Updated last week
- Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" ☆430 Updated 2 weeks ago
- Open-sourced predictions, execution logs, trajectories, and results from model inference + evaluation runs on the SWE-bench task. ☆166 Updated 2 weeks ago
- Contains random samples referenced in the paper "Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training". ☆100 Updated last year
- Enhancing AI Software Engineering with Repository-level Code Graph ☆153 Updated 2 weeks ago
- Collection of evals for Inspect AI ☆114 Updated this week
- A subset of jailbreaks automatically discovered by the Haize Labs haizing suite. ☆89 Updated this week
- DevQualityEval: An evaluation benchmark 📈 and framework to compare and evolve the quality of code generation of LLMs. ☆166 Updated last week
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆124 Updated 2 weeks ago
- Functional Benchmarks and the Reasoning Gap ☆85 Updated 6 months ago
- Prototype advanced LLM algorithms for reasoning and planning. ☆96 Updated 8 months ago
- AWM: Agent Workflow Memory ☆257 Updated 2 months ago
- Fiddler Auditor is a tool to evaluate language models. ☆179 Updated last year
- METR Task Standard ☆147 Updated 2 months ago