anthropic-experimental / agentic-misalignment
☆417 · Updated 3 months ago
Alternatives and similar repositories for agentic-misalignment
Users interested in agentic-misalignment are comparing it to the libraries listed below
- ☆204 · Updated 6 months ago
- Open source interpretability platform 🧠 ☆398 · Updated this week
- Prompts used in the Automated Auditing Blog Post ☆96 · Updated last month
- Collection of evals for Inspect AI ☆230 · Updated last week
- Testing baseline LLM performance across various models ☆307 · Updated last month
- Inference-time scaling for LLMs-as-a-judge. ☆297 · Updated 2 weeks ago
- ☆477 · Updated 2 months ago
- ⚖️ Awesome LLM Judges ⚖️ ☆127 · Updated 4 months ago
- ☆129 · Updated last week
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. ☆111 · Updated this week
- Collection of scripts and notebooks for OpenAI's latest GPT OSS models ☆437 · Updated 3 weeks ago
- A Text-Based Environment for Interactive Debugging ☆265 · Updated this week
- Open source interpretability artefacts for R1. ☆158 · Updated 4 months ago
- A benchmark for LLMs on complicated tasks in the terminal ☆691 · Updated this week
- Red-Teaming Language Models with DSPy ☆213 · Updated 7 months ago
- Scaling Data for SWE-agents ☆399 · Updated last week
- Atropos is a Language Model Reinforcement Learning Environments framework for collecting and evaluating LLM trajectories through diverse … ☆692 · Updated this week
- ☆165 · Updated 8 months ago
- Persona Vectors: Monitoring and Controlling Character Traits in Language Models ☆230 · Updated last month
- Public repository containing METR's DVC pipeline for eval data analysis ☆108 · Updated 5 months ago
- Training teachers with reinforcement learning to teach LLMs how to reason for test-time scaling. ☆340 · Updated 2 months ago
- Real-Time Detection of Hallucinated Entities in Long-Form Generation ☆232 · Updated last week
- MCP-Universe is a comprehensive framework designed for developing, testing, and benchmarking AI agents ☆423 · Updated this week
- A Tree Search Library with Flexible API for LLM Inference-Time Scaling ☆458 · Updated last month
- Sandboxed code execution for AI agents, locally or on the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆315 · Updated this week
- A toolkit for describing model features and intervening on those features to steer behavior. ☆201 · Updated 10 months ago
- Provider-agnostic, open-source evaluation infrastructure for language models ☆531 · Updated this week
- OSS RL environment + evals toolkit ☆176 · Updated this week
- An agent benchmark with tasks in a simulated software company. ☆546 · Updated 3 weeks ago
- ☆101 · Updated 6 months ago