anthropic-experimental / agentic-misalignment
☆550Updated 7 months ago
Alternatives and similar repositories for agentic-misalignment
Users interested in agentic-misalignment are comparing it to the libraries listed below.
- Prompts used in the Automated Auditing Blog Post☆134Updated 5 months ago
- An alignment auditing agent capable of quickly exploring alignment hypotheses☆815Updated this week
- ☆252Updated last week
- open source interpretability platform 🧠☆632Updated last week
- Collection of evals for Inspect AI☆337Updated last week
- ☆214Updated 3 weeks ago
- Inference-time scaling for LLMs-as-a-judge.☆325Updated 2 months ago
- Real-Time Detection of Hallucinated Entities in Long-Form Generation☆274Updated 2 months ago
- ☆483Updated 6 months ago
- Curated collection of community environments☆204Updated last week
- ☆189Updated last year
- Open source interpretability artefacts for R1.☆167Updated 9 months ago
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research.☆131Updated last week
- ⚖️ Awesome LLM Judges ⚖️☆148Updated 8 months ago
- Testing baseline LLMs performance across various models☆335Updated last week
- Atropos is a Language Model Reinforcement Learning Environments framework for collecting and evaluating LLM trajectories through diverse …☆831Updated this week
- Harbor is a framework for running agent evaluations and creating and using RL environments.☆438Updated this week
- Public repository containing METR's DVC pipeline for eval data analysis☆183Updated last week
- A Tree Search Library with Flexible API for LLM Inference-Time Scaling☆512Updated last month
- Super basic implementation (gist-like) of RLMs with REPL environments.☆435Updated 2 weeks ago
- [NeurIPS 2025 D&B Spotlight] Scaling Data for SWE-agents☆514Updated last week
- Frontier Models playing the board game Diplomacy.☆620Updated 3 weeks ago
- τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment☆662Updated last month
- An agent benchmark with tasks in a simulated software company.☆626Updated 2 months ago
- ☆116Updated 2 months ago
- A toolkit for describing model features and intervening on those features to steer behavior.☆226Updated last month
- ☆105Updated 5 months ago
- METR Task Standard☆170Updated 11 months ago
- Sandboxed code execution for AI agents, locally or on the cloud. Massively parallel, easy to extend. Powering SWE-agent and more.☆409Updated last week
- Provider-agnostic, open-source evaluation infrastructure for language models☆709Updated 3 weeks ago