danielmamay / mlab
Machine Learning for Alignment Bootcamp (MLAB).
☆30 · Updated 3 years ago
Alternatives and similar repositories for mlab
Users interested in mlab are comparing it to the repositories listed below:
- Machine Learning for Alignment Bootcamp · ☆78 · Updated 3 years ago
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. · ☆225 · Updated 3 weeks ago
- Mechanistic Interpretability Visualizations using React · ☆282 · Updated 8 months ago
- Tools for studying developmental interpretability in neural networks. · ☆101 · Updated 2 months ago
- 🧠 Starter templates for doing interpretability research · ☆73 · Updated 2 years ago
- we got you bro · ☆36 · Updated last year
- METR Task Standard · ☆159 · Updated 7 months ago
- Keeping language models honest by directly eliciting knowledge encoded in their activations. · ☆209 · Updated last week
- (Model-written) LLM evals library · ☆18 · Updated 8 months ago
- The nnsight package enables interpreting and manipulating the internals of deep learning models (see the nnsight sketch after this list). · ☆646 · Updated this week
- ControlArena is a collection of settings, model organisms, and protocols for running control experiments. · ☆88 · Updated last week
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). · ☆214 · Updated 8 months ago
- Attribution-based Parameter Decomposition · ☆29 · Updated 2 months ago
- Inference API for many LLMs and other useful tools for empirical research · ☆68 · Updated 2 weeks ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper · ☆129 · Updated 2 years ago
- Machine Learning for Alignment Bootcamp · ☆26 · Updated last year
- Tools for understanding how transformer predictions are built layer-by-layer (see the logit-lens sketch after this list) · ☆521 · Updated 3 weeks ago
- Arrakis is a library to conduct, track, and visualize mechanistic interpretability experiments. · ☆31 · Updated 4 months ago
- A collection of different ways to access and modify internal model activations in LLMs · ☆19 · Updated 10 months ago
- Sparse Autoencoder for Mechanistic Interpretability (see the toy SAE sketch after this list) · ☆260 · Updated last year
- Redwood Research's transformer interpretability tools · ☆14 · Updated 3 years ago
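
The nnsight entry above describes an API for reading and editing model internals during a forward pass. A minimal sketch of that style of use, based on nnsight's documented `LanguageModel`/`trace` interface — the model identifier and module path here are illustrative, so verify the exact names against the current nnsight docs:

```python
# Sketch only: save one intermediate activation during a traced forward pass.
from nnsight import LanguageModel

# Model identifier is illustrative; any HF causal LM nnsight supports works.
model = LanguageModel("openai-community/gpt2", device_map="auto")

with model.trace("The Eiffel Tower is in the city of"):
    # Save the residual-stream output of transformer block 5 for inspection.
    hidden = model.transformer.h[5].output[0].save()

# After the trace exits, the saved proxy holds the tensor
# (recent nnsight releases also allow using `hidden` directly).
print(hidden.value.shape)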
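The "layer-by-layer" entry refers to tuned-/logit-lens-style analysis. As a self-contained illustration of the underlying idea — plain transformers plus PyTorch, not that repository's own API — a basic logit lens decodes each layer's residual stream through the final LayerNorm and the unembedding:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

inputs = tok("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# Decode each layer's residual stream at the last position through the
# final LayerNorm and the unembedding: the classic "logit lens".
for layer, h in enumerate(out.hidden_states):
    logits = model.lm_head(model.transformer.ln_f(h[:, -1]))
    print(f"layer {layer:2d} -> {tok.decode(logits.argmax(-1))!r}")
```

Watching the top prediction converge across layers is the point: early layers decode to generic tokens, while later layers settle on the answer.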
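The sparse-autoencoder entry refers to the standard SAE recipe: train an overcomplete autoencoder on model activations with an L1 penalty so each hidden "feature" fires sparsely. A toy PyTorch sketch, not the listed repository's code — random data stands in for real LLM activations, and the dimensions and coefficients are arbitrary:

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal SAE: overcomplete dictionary with ReLU features."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.enc = nn.Linear(d_model, d_hidden)
        self.dec = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        feats = torch.relu(self.enc(x))   # sparse feature activations
        return self.dec(feats), feats

d_model, d_hidden = 64, 512               # hidden dim deliberately overcomplete
sae = SparseAutoencoder(d_model, d_hidden)
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
l1_coeff = 1e-3

for step in range(200):
    acts = torch.randn(256, d_model)      # real use: activations from an LLM layer
    recon, feats = sae(acts)
    # Reconstruction loss plus L1 sparsity penalty on the features.
    loss = ((recon - acts) ** 2).mean() + l1_coeff * feats.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The L1 coefficient trades reconstruction fidelity against sparsity; production SAE codebases add refinements (decoder-weight normalization, dead-feature resampling) omitted here for brevity.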