redwoodresearch / remix_public
☆19 · Updated last year
Related projects
Alternatives and complementary repositories for remix_public
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆186 · Updated this week
- Machine Learning for Alignment Bootcamp ☆25 · Updated 8 months ago
- METR Task Standard ☆127 · Updated 3 weeks ago
- Machine Learning for Alignment Bootcamp ☆64 · Updated 2 years ago
- ☆24 · Updated 7 months ago
- (Model-written) LLM evals library ☆16 · Updated 3 months ago
- Tools for studying developmental interpretability in neural networks. ☆77 · Updated last week
- Mechanistic Interpretability for Transformer Models ☆49 · Updated 2 years ago
- Machine Learning for Alignment Bootcamp (MLAB). ☆22 · Updated 2 years ago
- ☆61 · Updated last year
- Mechanistic Interpretability Visualizations using React ☆200 · Updated 4 months ago
- ☆44 · Updated last month
- Erasing concepts from neural representations with provable guarantees ☆210 · Updated last week
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆200 · Updated 9 months ago
- ☆188 · Updated last month
- Redwood Research's transformer interpretability tools ☆12 · Updated 2 years ago
- ☆9 · Updated 3 months ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆97 · Updated 2 years ago
- ☆20 · Updated this week
- ☆26 · Updated 6 months ago
- The Happy Faces Benchmark ☆14 · Updated last year
- ☆44 · Updated last week
- Measuring the situational awareness of language models ☆33 · Updated 9 months ago
- we got you bro ☆33 · Updated 3 months ago
- ☆240 · Updated 4 months ago
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. ☆64 · Updated this week
- ☆106 · Updated last month
- See the issue board for the current status of active and prospective projects! ☆65 · Updated 2 years ago
- A dataset of alignment research and code to reproduce it ☆69 · Updated last year
- Tools for running experiments on RL agents in procgen environments ☆16 · Updated 7 months ago