redwoodresearch / mlab
Machine Learning for Alignment Bootcamp
☆71 · Updated 2 years ago
Alternatives and similar repositories for mlab:
Users interested in mlab are comparing it to the repositories listed below.
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆206 · Updated last year
- Machine Learning for Alignment Bootcamp (MLAB). ☆25 · Updated 3 years ago
- METR Task Standard ☆144 · Updated last month
- Tools for studying developmental interpretability in neural networks. ☆86 · Updated last month
- Mechanistic Interpretability Visualizations using React ☆233 · Updated 2 months ago
- (Model-written) LLM evals library ☆18 · Updated 3 months ago
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. ☆80 · Updated this week
- Mechanistic Interpretability for Transformer Models ☆50 · Updated 2 years ago
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆197 · Updated this week
- Redwood Research's transformer interpretability tools ☆14 · Updated 2 years ago
- Tools for running experiments on RL agents in procgen environments ☆18 · Updated 11 months ago
- 🧠 Starter templates for doing interpretability research ☆66 · Updated last year
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆116 · Updated 2 years ago
- The Happy Faces Benchmark ☆14 · Updated last year
- Improving Steering Vectors by Targeting Sparse Autoencoder Features ☆15 · Updated 3 months ago
- we got you bro ☆35 · Updated 7 months ago
- A curated list of awesome resources for Artificial Intelligence Alignment research ☆69 · Updated last year
- A dataset of alignment research and code to reproduce it ☆74 · Updated last year