dit7ya / awesome-ai-alignment
A curated list of awesome resources for Artificial Intelligence Alignment research
★72 · Updated 2 years ago
Alternatives and similar repositories for awesome-ai-alignment
Users interested in awesome-ai-alignment are comparing it to the repositories listed below.
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ★214 · Updated last week
- Starter templates for doing interpretability research ★75 · Updated 2 years ago
- A dataset of alignment research and code to reproduce it ★78 · Updated 2 years ago
- Contains random samples referenced in the paper "Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training". ★122 · Updated last year
- Emergent world representations: Exploring a sequence model trained on a synthetic task ★191 · Updated 2 years ago
- RuLES: a benchmark for evaluating rule-following in language models ★240 · Updated 9 months ago
- Tools for studying developmental interpretability in neural networks. ★116 · Updated 5 months ago
- A puzzle to learn about prompting ★135 · Updated 2 years ago
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" ★97 · Updated 2 years ago
- This repository collects all relevant resources about interpretability in LLMs ★384 · Updated last year
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ★232 · Updated 3 months ago
- Code accompanying the paper "Pretraining Language Models with Human Preferences" ★180 · Updated last year
- Experiments with representation engineering ★13 · Updated last year
- Unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" ★79 · Updated 3 years ago
- we got you bro ★36 · Updated last year
- Resources from the EleutherAI Math Reading Group ★54 · Updated 9 months ago
- Erasing concepts from neural representations with provable guarantees ★239 · Updated 10 months ago
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ★28 · Updated last year
- LLM experiments done during SERI MATS, focusing on activation steering / interpreting activation spaces ★99 · Updated 2 years ago
- Mechanistic Interpretability for Transformer Models ★53 · Updated 3 years ago
- Tools for understanding how transformer predictions are built layer-by-layer ★549 · Updated 3 months ago