UKGovernmentBEIS / control-arena
ControlArena is a collection of settings, model organisms, and protocols for running control experiments.
☆128 · Updated this week
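For a sense of what a control experiment in this space looks like, here is a minimal sketch built on Inspect AI's `eval()` entry point, which ControlArena builds on. The `control_arena` import paths, the setting/protocol names, and the `get_control_task` helper below are hypothetical placeholders for illustration only, not ControlArena's confirmed API; see the repository for the real interface.

```python
# Hedged sketch of a control experiment: an untrusted "attack" policy attempts a
# hidden side task while a trusted monitor reviews its actions.
# NOTE: the control_arena names below are illustrative placeholders, not the
# library's confirmed API; consult the repository for the actual interface.
from inspect_ai import eval

from control_arena.settings import BashSetting          # placeholder name
from control_arena.protocols import trusted_monitoring  # placeholder name
from control_arena.tasks import get_control_task        # placeholder name

task = get_control_task(
    setting=BashSetting(),          # environment with a main task plus a hidden side task
    protocol=trusted_monitoring(),  # trusted model scores the untrusted model's actions
    mode="attack",                  # run the red-team (attack) policy variant
)

# inspect_ai.eval() is Inspect AI's standard entry point for running a task.
eval(task, model="anthropic/claude-3-5-sonnet-latest")
```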
Alternatives and similar repositories for control-arena
Users interested in control-arena are comparing it to the libraries listed below.
- METR Task Standard ☆167 · Updated 9 months ago
- ☆253 · Updated last year
- Inference API for many LLMs and other useful tools for empirical research ☆80 · Updated last week
- Mechanistic Interpretability Visualizations using React ☆301 · Updated 11 months ago
- Collection of evals for Inspect AI ☆289 · Updated this week
- A toolkit for describing model features and intervening on those features to steer behavior. ☆214 · Updated last year
- ☆81 · Updated last month
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆227 · Updated 11 months ago
- Tools for studying developmental interpretability in neural networks. ☆114 · Updated 4 months ago
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". ☆122 · Updated last year
- ☆62 · Updated last month
- Sparse Autoencoder for Mechanistic Interpretability ☆284 · Updated last year
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆232 · Updated 3 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆225 · Updated last week
- ☆63 · Updated last month
- Unified access to Large Language Model modules using NNsight ☆59 · Updated last week
- ☆226 · Updated 3 weeks ago
- Steering vectors for transformer language models in Pytorch / Huggingface ☆129 · Updated 9 months ago
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. ☆120 · Updated last week
- ☆106 · Updated this week
- ☆19 · Updated last week
- The nnsight package enables interpreting and manipulating the internals of deep learned models. ☆701 · Updated this week
- Improving Alignment and Robustness with Circuit Breakers ☆242 · Updated last year
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆212 · Updated last week
- ☆141 · Updated 3 months ago
- ☆188 · Updated last year
- ☆195 · Updated last month
- 🧠 Starter templates for doing interpretability research ☆75 · Updated 2 years ago
- ☆310 · Updated last year
- A library for efficient patching and automatic circuit discovery. ☆80 · Updated 4 months ago