safety-research / safety-examples
☆15 · Updated last month
Alternatives and similar repositories for safety-examples
Users interested in safety-examples are comparing it to the libraries listed below.
- Inference API for many LLMs and other useful tools for empirical research ☆48 · Updated last week
- ControlArena is a suite of realistic settings, mimicking complex deployment environments, for running control evaluations. This is an alp… ☆61 · Updated this week
- METR Task Standard ☆148 · Updated 4 months ago
- (Model-written) LLM evals library ☆18 · Updated 5 months ago
- A Python SDK for LLM finetuning and inference on RunPod infrastructure ☆11 · Updated this week
- Machine Learning for Alignment Bootcamp ☆73 · Updated 3 years ago
- ☆10 · Updated 10 months ago
- Repository with sample code using Apollo's suggested engineering practices ☆9 · Updated 5 months ago
- Tools for studying developmental interpretability in neural networks. ☆91 · Updated 4 months ago
- ☆31 · Updated last year
- Mechanistic Interpretability Visualizations using React ☆253 · Updated 5 months ago
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆214 · Updated last year
- Redwood Research's transformer interpretability tools ☆15 · Updated 3 years ago
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆55 · Updated 7 months ago
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. ☆94 · Updated last week
- A TinyStories LM with SAEs and transcoders ☆11 · Updated 2 months ago
- A library for efficient patching and automatic circuit discovery. ☆65 · Updated last month
- ☆223 · Updated 8 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆181 · Updated this week
- Collection of evals for Inspect AI ☆144 · Updated this week
- Applying SAEs for fine-grained control ☆18 · Updated 5 months ago
- ☆43 · Updated 6 months ago
- ☆124 · Updated 6 months ago
- ☆12 · Updated last month
- Official code for our paper: "Language Models Learn to Mislead Humans via RLHF" ☆14 · Updated 7 months ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆126 · Updated 2 years ago
- ☆50 · Updated last month
- Improving Steering Vectors by Targeting Sparse Autoencoder Features ☆20 · Updated 6 months ago
- Attribution-based Parameter Decomposition ☆24 · Updated this week
- A small package implementing some useful wrapping around nnsight ☆13 · Updated this week