safety-research / safety-examples
☆13 · Updated last week
Alternatives and similar repositories for safety-examples:
Users interested in safety-examples are comparing it to the libraries listed below.
- Inference API for many LLMs and other useful tools for empirical research ☆33 · Updated this week
- ☆10 · Updated 9 months ago
- ControlArena is a suite of realistic settings, mimicking complex deployment environments, for running control evaluations. This is an alp… ☆50 · Updated this week
- Machine Learning for Alignment Bootcamp ☆72 · Updated 2 years ago
- Redwood Research's transformer interpretability tools ☆14 · Updated 3 years ago
- METR Task Standard ☆146 · Updated 2 months ago
- ☆31 · Updated 11 months ago
- Improving Steering Vectors by Targeting Sparse Autoencoder Features ☆17 · Updated 5 months ago
- A collection of different ways to implement accessing and modifying internal model activations for LLMs ☆15 · Updated 6 months ago
- Repository with sample code using Apollo's suggested engineering practices ☆8 · Updated 4 months ago
- A TinyStories LM with SAEs and transcoders ☆11 · Updated 3 weeks ago
- Tools for studying developmental interpretability in neural networks ☆88 · Updated 3 months ago
- (Model-written) LLM evals library ☆18 · Updated 4 months ago
- ☆89 · Updated last month
- Applying SAEs for fine-grained control ☆17 · Updated 4 months ago
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆209 · Updated last year
- ☆49 · Updated 3 months ago
- ☆218 · Updated 6 months ago
- Interpreting how transformers simulate agents performing RL tasks ☆80 · Updated last year
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆52 · Updated 5 months ago
- A library for efficient patching and automatic circuit discovery ☆63 · Updated this week
- ☆36 · Updated 5 months ago
- This repo is built to facilitate the training and analysis of autoregressive transformers on maze-solving tasks ☆27 · Updated 7 months ago
- ☆19 · Updated 2 years ago
- Benchmarking Agentic LLM and VLM Reasoning On Games ☆129 · Updated 2 weeks ago
- ☆12 · Updated 2 weeks ago
- Official code for our paper "Language Models Learn to Mislead Humans via RLHF" ☆13 · Updated 6 months ago
- Experiments with representation engineering ☆11 · Updated last year
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research ☆89 · Updated last week
- ☆54 · Updated 6 months ago