UKGovernmentBEIS / control-arena
ControlArena is a suite of realistic settings, mimicking complex deployment environments, for running control evaluations. This is an alpha release; we welcome feedback.
☆28 · Updated this week
Alternatives and similar repositories for control-arena:
Users interested in control-arena are comparing it to the libraries listed below.
- Collection of evals for Inspect AI ☆101 · Updated this week
- ☆10 · Updated 8 months ago
- METR Task Standard ☆146 · Updated last month
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. ☆85 · Updated this week
- Mechanistic Interpretability Visualizations using React ☆235 · Updated 3 months ago
- Tools for studying developmental interpretability in neural networks. ☆86 · Updated 2 months ago
- ☆87 · Updated last week
- Steering vectors for transformer language models in Pytorch / Huggingface ☆90 · Updated last month
- 🧠 Starter templates for doing interpretability research ☆67 · Updated last year
- Inference API for many LLMs and other useful tools for empirical research ☆24 · Updated last week
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆163 · Updated this week
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆117 · Updated 2 years ago
- ☆26 · Updated 11 months ago
- ☆213 · Updated 5 months ago
- ☆53 · Updated 6 months ago
- (Model-written) LLM evals library ☆18 · Updated 3 months ago
- Machine Learning for Alignment Bootcamp ☆72 · Updated 2 years ago
- ☆50 · Updated this week
- ☆124 · Updated this week
- ☆67 · Updated last month
- ☆17 · Updated last month
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆206 · Updated last year
- ☆43 · Updated 2 months ago
- A toolkit for describing model features and intervening on those features to steer behavior. ☆163 · Updated 4 months ago
- we got you bro ☆35 · Updated 7 months ago
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". ☆96 · Updated last year
- Draw more samples ☆186 · Updated 9 months ago
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆46 · Updated 5 months ago
- Improving Alignment and Robustness with Circuit Breakers ☆190 · Updated 6 months ago
- ☆19 · Updated 2 years ago