METR / task-standard
METR Task Standard
☆147 · Updated 4 months ago
Alternatives and similar repositories for task-standard
Users interested in task-standard are comparing it to the repositories listed below.
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. ☆94 · Updated this week
- Collection of evals for Inspect AI. ☆139 · Updated this week
- ControlArena is a suite of realistic settings, mimicking complex deployment environments, for running control evaluations. This is an alp… ☆60 · Updated this week
- Mechanistic Interpretability Visualizations using React. ☆251 · Updated 5 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆200 · Updated 5 months ago
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆214 · Updated last year
- Inference API for many LLMs and other useful tools for empirical research. ☆47 · Updated last week
- (Model-written) LLM evals library. ☆18 · Updated 5 months ago
- Machine Learning for Alignment Bootcamp. ☆72 · Updated 3 years ago
- Redwood Research's transformer interpretability tools. ☆15 · Updated 3 years ago
- A toolkit for describing model features and intervening on those features to steer behavior. ☆184 · Updated 6 months ago
- Open-source interpretability artefacts for R1. ☆140 · Updated last month
- A Python SDK for LLM finetuning and inference on RunPod infrastructure. ☆11 · Updated last week
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper. ☆126 · Updated 2 years ago
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". ☆105 · Updated last year
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆204 · Updated last week
- The NDIF server, which performs deep inference and serves nnsight requests remotely. ☆28 · Updated this week