METR / vivaria
Vivaria is METR's tool for running evaluations and conducting agent elicitation research.
☆111 · Updated this week
Alternatives and similar repositories for vivaria
Users interested in vivaria are comparing it to the repositories listed below.
- METR Task Standard · ☆161 · Updated 8 months ago
- Collection of evals for Inspect AI · ☆241 · Updated this week
- Sphynx Hallucination Induction · ☆53 · Updated 8 months ago
- Open source interpretability artefacts for R1. · ☆160 · Updated 5 months ago
- Inference-time scaling for LLMs-as-a-judge. · ☆299 · Updated last month
- A toolkit for describing model features and intervening on those features to steer behavior. · ☆203 · Updated 10 months ago
- ControlArena is a collection of settings, model organisms, and protocols for running control experiments. · ☆98 · Updated this week
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". · ☆118 · Updated last year
- Red-Teaming Language Models with DSPy · ☆214 · Updated 7 months ago
- Training-Ready RL Environments + Evals · ☆116 · Updated this week
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper · ☆129 · Updated 3 years ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. · ☆184 · Updated 6 months ago
- ⚖️ Awesome LLM Judges ⚖️ · ☆128 · Updated 5 months ago
- Draw more samples · ☆194 · Updated last year
- An attribution library for LLMs · ☆42 · Updated last year
- Extract full next-token probabilities via language model APIs · ☆248 · Updated last year
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). · ☆221 · Updated 9 months ago
- Explore token trajectory trees on instruct and base models · ☆133 · Updated 4 months ago
- Keeping language models honest by directly eliciting knowledge encoded in their activations. · ☆209 · Updated last week