UKGovernmentBEIS / control-arena
ControlArena is a collection of settings, model organisms, and protocols for running AI control experiments.
☆93 · Updated this week
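For readers unfamiliar with the term, a control experiment here means stress-testing a safety protocol (for example, trusted monitoring) against a deliberately misbehaving "model organism." The sketch below is a purely illustrative toy version of that idea in plain Python; none of the names or functions come from ControlArena's actual API.

```python
import random
from dataclasses import dataclass

# Toy stand-ins for an untrusted policy and a trusted monitor.
# Real control experiments wrap actual LLM calls; these hypothetical
# functions exist only to show the shape of the protocol.

@dataclass
class Action:
    command: str
    is_attack: bool  # ground truth, known only to the experiment harness


def untrusted_policy(task: str) -> Action:
    """A 'model organism': usually helpful, occasionally inserts an attack."""
    if random.random() < 0.1:
        return Action(command=f"solve({task!r})  # hidden side effect", is_attack=True)
    return Action(command=f"solve({task!r})", is_attack=False)


def trusted_monitor(action: Action) -> float:
    """A weaker but trusted model scores how suspicious an action looks (0-1)."""
    score = random.uniform(0.0, 0.4)
    if action.is_attack:
        score += random.uniform(0.2, 0.6)  # attacks tend to look more suspicious
    return min(score, 1.0)


def run_protocol(tasks: list[str], audit_threshold: float = 0.6) -> dict:
    """Trusted-monitoring protocol: audit any action scoring above threshold."""
    audited = caught = missed = 0
    for task in tasks:
        action = untrusted_policy(task)
        if trusted_monitor(action) >= audit_threshold:
            audited += 1
            caught += action.is_attack  # auditing reveals ground truth
        else:
            missed += action.is_attack
    return {"audited": audited, "attacks_caught": caught, "attacks_missed": missed}


if __name__ == "__main__":
    random.seed(0)
    print(run_protocol([f"task-{i}" for i in range(200)]))
```

Varying `audit_threshold` trades audit budget against attacks that slip through, which is the kind of safety/usefulness trade-off these experiments are designed to measure.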
Alternatives and similar repositories for control-arena
Users interested in control-arena are comparing it to the libraries listed below.
- METR Task Standard · ☆159 · Updated 7 months ago
- Mechanistic Interpretability Visualizations using React · ☆289 · Updated 9 months ago
- ☆240 · Updated 11 months ago
- Inference API for many LLMs and other useful tools for empirical research · ☆68 · Updated last week
- Collection of evals for Inspect AI · ☆230 · Updated last week
- (Model-written) LLM evals library · ☆18 · Updated 9 months ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper · ☆128 · Updated 3 years ago
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. · ☆226 · Updated last month
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). · ☆219 · Updated 9 months ago
- A toolkit for describing model features and intervening on those features to steer behavior. · ☆201 · Updated 10 months ago
- Sparse Autoencoder for Mechanistic Interpretability · ☆265 · Updated last year
- Tools for studying developmental interpretability in neural networks. · ☆103 · Updated 2 months ago
- ☆186 · Updated 2 months ago
- ☆76 · Updated 3 months ago
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. · ☆111 · Updated this week
- The nnsight package enables interpreting and manipulating the internals of deep learned models. · ☆661 · Updated last week
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … · ☆211 · Updated last week
- Unified access to Large Language Model modules using NNsight · ☆44 · Updated this week
- Steering vectors for transformer language models in PyTorch / Hugging Face · ☆124 · Updated 6 months ago
- ☆103 · Updated 6 months ago
- ☆121 · Updated this week
- ☆127 · Updated last year
- ☆345 · Updated last month
- ☆57 · Updated last month
- Keeping language models honest by directly eliciting knowledge encoded in their activations. · ☆209 · Updated this week
- ☆34 · Updated last year
- Machine Learning for Alignment Bootcamp · ☆78 · Updated 3 years ago
- ☆122 · Updated last year
- ☆298 · Updated last year
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". · ☆115 · Updated last year