redwoodresearch / mlab
Machine Learning for Alignment Bootcamp
☆74 · Updated 3 years ago
Alternatives and similar repositories for mlab
Users interested in mlab are comparing it to the repositories listed below.
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆216 · Updated last year
- Machine Learning for Alignment Bootcamp (MLAB). ☆30 · Updated 3 years ago
- Mechanistic Interpretability Visualizations using React ☆262 · Updated 6 months ago
- Tools for studying developmental interpretability in neural networks. ☆99 · Updated 3 weeks ago
- METR Task Standard ☆154 · Updated 5 months ago
- (Model-written) LLM evals library ☆18 · Updated 7 months ago
- ☆16 · Updated 3 weeks ago
- ☆19 · Updated 2 years ago
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆207 · Updated this week
- Inference API for many LLMs and other useful tools for empirical research ☆52 · Updated last week
- ☆611 · Updated this week
- ☆231 · Updated 9 months ago
- The nnsight package enables interpreting and manipulating the internals of deep learning models. ☆608 · Updated this week
- ControlArena is a collection of settings, model organisms, and protocols for running control experiments. ☆73 · Updated last week
- 🧠 Starter templates for doing interpretability research ☆72 · Updated 2 years ago
- ☆283 · Updated last year
- ☆122 · Updated last year
- ☆63 · Updated 2 years ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆206 · Updated 7 months ago
- ☆99 · Updated 4 months ago
- Machine Learning for Alignment Bootcamp ☆25 · Updated last year
- Decoder-only transformer, built from scratch with PyTorch ☆30 · Updated last year
- ☆273 · Updated last year
- A library for bridging Python and HTML/JavaScript (via Svelte) for creating interactive visualizations ☆192 · Updated 3 years ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆127 · Updated 2 years ago
- A collection of different ways to implement accessing and modifying internal model activations for LLMs ☆19 · Updated 8 months ago
- we got you bro ☆35 · Updated 11 months ago
- Redwood Research's transformer interpretability tools ☆14 · Updated 3 years ago
- Tools for running experiments on RL agents in procgen environments ☆19 · Updated last year
- A Python SDK for LLM finetuning and inference on RunPod infrastructure ☆11 · Updated last week