acsresearch / interlab
☆20 · Updated 11 months ago
Alternatives and similar repositories for interlab
Users interested in interlab are comparing it to the repositories listed below.
- ☆134 · Updated 7 months ago
- A dataset of alignment research and code to reproduce it ☆77 · Updated 2 years ago
- Mechanistic Interpretability for Transformer Models ☆51 · Updated 3 years ago
- Interpreting how transformers simulate agents performing RL tasks ☆84 · Updated last year
- Machine Learning for Alignment Bootcamp ☆74 · Updated 3 years ago
- METR Task Standard ☆151 · Updated 4 months ago
- ☆55 · Updated 9 months ago
- Tools for studying developmental interpretability in neural networks. ☆95 · Updated this week
- ☆98 · Updated 3 months ago
- ControlArena is a suite of realistic settings, mimicking complex deployment environments, for running control evaluations. This is an alp… ☆69 · Updated this week
- Redwood Research's transformer interpretability tools ☆14 · Updated 3 years ago
- ☆280 · Updated 11 months ago
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆207 · Updated 2 weeks ago
- (Model-written) LLM evals library ☆18 · Updated 6 months ago
- General-Sum variant of the game Diplomacy for evaluating AIs. ☆29 · Updated last year
- ☆19 · Updated 2 years ago
- ☆99 · Updated 4 months ago
- Code for Columbia University COMS 3997 – LLM Ethics and Foundations ☆14 · Updated 5 months ago
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". ☆108 · Updated last year
- ☆31 · Updated last year
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. ☆96 · Updated this week
- ☆11 · Updated 11 months ago
- Measuring the situational awareness of language models ☆35 · Updated last year
- Awesome Open-ended AI ☆298 · Updated this week
- A collection of different ways to implement accessing and modifying internal model activations for LLMs ☆18 · Updated 8 months ago
- ☆82 · Updated last year
- An attribution library for LLMs ☆41 · Updated 9 months ago
- ☆66 · Updated last month
- Mechanistic Interpretability Visualizations using React ☆257 · Updated 6 months ago
- Psych 290Q S23 @ UC Berkeley: Large Language Models and Cognitive Science ☆18 · Updated last year