danielmamay / mlab
Machine Learning for Alignment Bootcamp (MLAB).
☆30 · Updated 3 years ago
Alternatives and similar repositories for mlab
Users interested in mlab are comparing it to the libraries listed below.
- Machine Learning for Alignment Bootcamp ☆74 · Updated 3 years ago
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆216 · Updated last year
- we got you bro ☆35 · Updated 11 months ago
- Tools for studying developmental interpretability in neural networks. ☆95 · Updated this week
- Mechanistic Interpretability Visualizations using React ☆258 · Updated 6 months ago
- ☆227 · Updated 8 months ago
- 🧠 Starter templates for doing interpretability research ☆71 · Updated last year
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆202 · Updated 6 months ago
- A curated list of awesome resources for Artificial Intelligence Alignment research ☆71 · Updated last year
- ☆121 · Updated last year
- ☆11 · Updated 11 months ago
- ☆28 · Updated last year
- (Model-written) LLM evals library ☆18 · Updated 6 months ago
- METR Task Standard ☆151 · Updated 4 months ago
- ☆270 · Updated last year
- ☆71 · Updated 2 years ago
- ☆19 · Updated 2 years ago
- ☆120 · Updated 10 months ago
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆110 · Updated 4 months ago
- Machine Learning for Alignment Bootcamp ☆25 · Updated last year
- Redwood Research's transformer interpretability tools ☆14 · Updated 3 years ago
- Resources from the EleutherAI Math Reading Group ☆53 · Updated 4 months ago
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆55 · Updated 8 months ago
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆207 · Updated last week
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆127 · Updated 2 years ago
- ☆98 · Updated 3 months ago
- ☆55 · Updated 9 months ago
- A collection of different ways to implement accessing and modifying internal model activations for LLMs ☆18 · Updated 8 months ago
- The nnsight package enables interpreting and manipulating the internals of deep learned models. ☆599 · Updated this week
- ☆44 · Updated 7 months ago