dit7ya / awesome-ai-alignment
A curated list of awesome resources for Artificial Intelligence Alignment research
☆71 · Updated 2 years ago
Alternatives and similar repositories for awesome-ai-alignment
Users interested in awesome-ai-alignment are comparing it to the repositories listed below.
- 🧠 Starter templates for doing interpretability research ☆72 · Updated 2 years ago
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆207 · Updated this week
- A dataset of alignment research and code to reproduce it ☆77 · Updated 2 years ago
- Tools for studying developmental interpretability in neural networks. ☆99 · Updated 3 weeks ago
- we got you bro ☆35 · Updated 11 months ago
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆216 · Updated last year
- Mechanistic Interpretability Visualizations using React ☆262 · Updated 6 months ago
- ☆273 · Updated last year
- ☆283 · Updated last year
- ☆72 · Updated 2 years ago
- Machine Learning for Alignment Bootcamp ☆74 · Updated 3 years ago
- Emergent world representations: Exploring a sequence model trained on a synthetic task ☆183 · Updated 2 years ago
- A puzzle to learn about prompting ☆131 · Updated 2 years ago
- ☆26 · Updated 2 years ago
- Resources from the EleutherAI Math Reading Group ☆53 · Updated 4 months ago
- Utilities for the HuggingFace transformers library ☆69 · Updated 2 years ago
- Experiments with representation engineering ☆12 · Updated last year
- Erasing concepts from neural representations with provable guarantees ☆230 · Updated 5 months ago
- RuLES: a benchmark for evaluating rule-following in language models ☆227 · Updated 4 months ago
- ☆231 · Updated 9 months ago
- datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆82 · Updated last year
- ControlArena is a collection of settings, model organisms, and protocols for running control experiments. ☆76 · Updated this week
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". ☆109 · Updated last year
- ☆55 · Updated 9 months ago
- LLM experiments done during SERI MATS - focusing on activation steering / interpreting activation spaces ☆95 · Updated last year
- unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" ☆77 · Updated 3 years ago
- Mechanistic Interpretability for Transformer Models ☆51 · Updated 3 years ago
- ☆137 · Updated 8 months ago
- ☆28 · Updated last year
- Tools for understanding how transformer predictions are built layer-by-layer ☆505 · Updated last year