acsresearch / interlab
☆18 · Updated 8 months ago
Alternatives and similar repositories for interlab:
Users interested in interlab are comparing it to the libraries listed below.
- A dataset of alignment research and code to reproduce it · ☆75 · Updated last year
- ☆12 · Updated this week
- (Model-written) LLM evals library · ☆18 · Updated 3 months ago
- METR Task Standard · ☆146 · Updated last month
- Interpreting how transformers simulate agents performing RL tasks · ☆78 · Updated last year
- General-Sum variant of the game Diplomacy for evaluating AIs · ☆28 · Updated 11 months ago
- Redwood Research's transformer interpretability tools · ☆14 · Updated 2 years ago
- Machine Learning for Alignment Bootcamp · ☆72 · Updated 2 years ago
- ☆53 · Updated 6 months ago
- Mechanistic Interpretability for Transformer Models · ☆50 · Updated 2 years ago
- Machine Learning for Alignment Bootcamp (MLAB) · ☆28 · Updated 3 years ago
- ☆87 · Updated 2 weeks ago
- Tools for studying developmental interpretability in neural networks · ☆86 · Updated 2 months ago
- Tools for running experiments on RL agents in procgen environments · ☆18 · Updated 11 months ago
- A TinyStories LM with SAEs and transcoders · ☆11 · Updated 2 months ago
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research · ☆85 · Updated this week
- ☆262 · Updated 8 months ago
- ☆130 · Updated 4 months ago
- Measuring the situational awareness of language models · ☆34 · Updated last year
- My writings about ARC (Abstraction and Reasoning Corpus) · ☆71 · Updated this week
- Inference API for many LLMs and other useful tools for empirical research · ☆24 · Updated last week
- Code for reproducing the results from the paper Avoiding Side Effects in Complex Environments · ☆12 · Updated 3 years ago
- ☆19 · Updated 2 years ago
- ☆26 · Updated 11 months ago
- we got you bro · ☆35 · Updated 8 months ago
- ☆10 · Updated 8 months ago
- Stampy's copy of Alignment Research Dataset scraper · ☆12 · Updated last month
- ☆19 · Updated 2 years ago
- Keeping language models honest by directly eliciting knowledge encoded in their activations · ☆197 · Updated this week
- Repo for the paper on Escalation Risks of AI systems · ☆37 · Updated 11 months ago