safety-research / safety-tooling
Inference API for many LLMs and other useful tools for empirical research
☆84 · Updated this week
Alternatives and similar repositories for safety-tooling
Users interested in safety-tooling are comparing it to the libraries listed below.
- ☆20 · Updated last month
- METR Task Standard ☆168 · Updated 10 months ago
- ☆19 · Updated 8 months ago
- Mechanistic Interpretability Visualizations using React ☆302 · Updated 11 months ago
- ControlArena is a collection of settings, model organisms, and protocols for running control experiments. ☆134 · Updated this week
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆63 · Updated last year
- ☆259 · Updated last year
- Unified access to Large Language Model modules using NNsight ☆68 · Updated 3 weeks ago
- ☆132 · Updated last year
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆232 · Updated 11 months ago
- ☆68 · Updated this week
- A library for efficient patching and automatic circuit discovery. ☆80 · Updated 4 months ago
- ☆121 · Updated last month
- A toolkit for describing model features and intervening on those features to steer behavior. ☆220 · Updated last year
- Sparse Autoencoder Training Library ☆55 · Updated 7 months ago
- ☆81 · Updated 2 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆231 · Updated last week
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆233 · Updated 4 months ago
- ☆196 · Updated last month
- ☆132 · Updated 2 years ago
- ☆36 · Updated last year
- Tools for studying developmental interpretability in neural networks. ☆117 · Updated 5 months ago
- ☆191 · Updated last year
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆130 · Updated 9 months ago
- Open source interpretability artefacts for R1. ☆164 · Updated 7 months ago
- ☆58 · Updated last year
- Collection of evals for Inspect AI ☆305 · Updated this week
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆99 · Updated 2 years ago
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆28 · Updated last year
- Improving Steering Vectors by Targeting Sparse Autoencoder Features ☆24 · Updated last year