EffiSciencesResearch / ML4G
Machine Learning for Alignment Bootcamp
☆26 · Updated last year
Alternatives and similar repositories for ML4G
Users interested in ML4G are comparing it to the repositories listed below.
- 🧠 Starter templates for doing interpretability research ☆74 · Updated 2 years ago
- Tools for studying developmental interpretability in neural networks. ☆105 · Updated 3 months ago
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆208 · Updated this week
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆228 · Updated last month
- Machine Learning for Alignment Bootcamp ☆79 · Updated 3 years ago
- Erasing concepts from neural representations with provable guarantees ☆236 · Updated 8 months ago
- Mechanistic Interpretability Visualizations using React ☆291 · Updated 9 months ago
- we got you bro ☆36 · Updated last year
- ☆19 · Updated 2 years ago
- Machine Learning for Alignment Bootcamp (MLAB). ☆30 · Updated 3 years ago
- A dataset of alignment research and code to reproduce it ☆77 · Updated 2 years ago
- ☆303 · Updated last year
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆129 · Updated 3 years ago
- ☆27 · Updated 2 years ago
- ☆97 · Updated 2 months ago
- A curated list of awesome resources for Artificial Intelligence Alignment research ☆71 · Updated 2 years ago
- Mechanistic Interpretability for Transformer Models ☆53 · Updated 3 years ago
- ☆242 · Updated last year
- A library for bridging Python and HTML/Javascript (via Svelte) for creating interactive visualizations ☆199 · Updated 3 years ago
- A library for bridging Python and HTML/Javascript (via Svelte) for creating interactive visualizations ☆14 · Updated last year
- Arrakis is a library to conduct, track and visualize mechanistic interpretability experiments. ☆31 · Updated 5 months ago
- ☆276 · Updated last year
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆221 · Updated 9 months ago
- A Mechanistic Interpretability Analysis of Grokking ☆22 · Updated 3 years ago
- Tools for understanding how transformer predictions are built layer-by-layer ☆530 · Updated 2 months ago
- Resources from the EleutherAI Math Reading Group ☆54 · Updated 7 months ago
- ☆17 · Updated last year
- ControlArena is a collection of settings, model organisms, and protocols for running control experiments. ☆99 · Updated this week
- ☆83 · Updated last year
- Attribution-based Parameter Decomposition ☆31 · Updated 3 months ago