METR / vivaria
Vivaria is METR's tool for running evaluations and conducting agent elicitation research.
☆89 · Updated this week
Alternatives and similar repositories for vivaria:
Users interested in vivaria are comparing it to the libraries listed below.
- METR Task Standard ☆147 · Updated 2 months ago
- ☆21 · Updated 2 weeks ago
- ☆87 · Updated last month
- Collection of evals for Inspect AI ☆112 · Updated this week
- ☆53 · Updated 6 months ago
- ☆71 · Updated 2 months ago
- ☆130 · Updated last month
- A toolkit for describing model features and intervening on those features to steer behavior. ☆172 · Updated 5 months ago
- ☆128 · Updated 2 weeks ago
- ControlArena is a suite of realistic settings, mimicking complex deployment environments, for running control evaluations. This is an alp… ☆48 · Updated this week
- Measuring the situational awareness of language models ☆34 · Updated last year
- ☆10 · Updated 9 months ago
- Mechanistic Interpretability Visualizations using React ☆239 · Updated 3 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆166 · Updated this week
- ☆9 · Updated 8 months ago
- Verdict is a library for scaling judge-time compute. ☆195 · Updated 3 weeks ago
- OpenPipe ART (Agent Reinforcement Trainer): train LLM agents ☆77 · Updated this week
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". ☆100 · Updated last year
- Sphynx Hallucination Induction ☆53 · Updated 2 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆194 · Updated 4 months ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆118 · Updated 2 years ago
- Steering vectors for transformer language models in Pytorch / Huggingface ☆94 · Updated last month
- Machine Learning for Alignment Bootcamp ☆72 · Updated 2 years ago
- (Model-written) LLM evals library ☆18 · Updated 4 months ago
- ☆26 · Updated last year
- Open source interpretability platform 🧠 ☆82 · Updated this week
- An attribution library for LLMs ☆38 · Updated 6 months ago
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆198 · Updated last week
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆167 · Updated last month
- ☆30 · Updated 11 months ago