redwoodresearch / remix_public
☆20 · Updated 2 years ago
Alternatives and similar repositories for remix_public
Users interested in remix_public are comparing it to the repositories listed below.
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆236 · Updated 5 months ago
- Tools for studying developmental interpretability in neural networks. ☆122 · Updated last week
- Machine Learning for Alignment Bootcamp ☆81 · Updated 3 years ago
- Erasing concepts from neural representations with provable guarantees ☆242 · Updated 11 months ago
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆216 · Updated 2 weeks ago
- Mechanistic Interpretability Visualizations using React ☆307 · Updated last year
- ☆65 · Updated 2 years ago
- ☆29 · Updated last year
- (Model-written) LLM evals library ☆18 · Updated last year
- Mechanistic Interpretability for Transformer Models ☆53 · Updated 3 years ago
- ☆262 · Updated last year
- METR Task Standard ☆169 · Updated 11 months ago
- ControlArena is a collection of settings, model organisms, and protocols for running control experiments. ☆145 · Updated 3 weeks ago
- ☆132 · Updated 2 years ago
- Extract full next-token probabilities via language model APIs ☆248 · Updated last year
- ☆36 · Updated last year
- Utilities for the HuggingFace transformers library ☆73 · Updated 2 years ago
- Machine Learning for Alignment Bootcamp (MLAB). ☆30 · Updated 3 years ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆236 · Updated last year
- Attribution-based Parameter Decomposition ☆33 · Updated 7 months ago
- Redwood Research's transformer interpretability tools ☆14 · Updated 3 years ago
- ☆283 · Updated last year
- 🧠 Starter templates for doing interpretability research ☆76 · Updated 2 years ago
- ☆20 · Updated last year
- ☆77 · Updated 3 weeks ago
- ☆319 · Updated last year
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆132 · Updated 3 years ago
- A Python SDK for LLM finetuning and inference on RunPod infrastructure ☆17 · Updated this week
- A collection of different ways to access and modify internal model activations in LLMs ☆20 · Updated last year
- we got you bro ☆37 · Updated last year