dit7ya / awesome-ai-alignment
A curated list of awesome resources for Artificial Intelligence Alignment research
★72 · Updated 2 years ago
Alternatives and similar repositories for awesome-ai-alignment
Users interested in awesome-ai-alignment are comparing it to the repositories listed below.
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ★212 · Updated this week
- 🧠 Starter templates for doing interpretability research ★75 · Updated 2 years ago
- unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" ★79 · Updated 3 years ago
- Tools for studying developmental interpretability in neural networks. ★114 · Updated 4 months ago
- ★139 · Updated 3 months ago
- we got you bro ★36 · Updated last year
- A puzzle to learn about prompting ★135 · Updated 2 years ago
- ★281 · Updated last year
- datasets from the paper "Towards Understanding Sycophancy in Language Models" ★94 · Updated 2 years ago
- Emergent world representations: Exploring a sequence model trained on a synthetic task ★191 · Updated 2 years ago
- RuLES: a benchmark for evaluating rule-following in language models ★239 · Updated 8 months ago
- A dataset of alignment research and code to reproduce it ★78 · Updated 2 years ago
- ★61 · Updated last month
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ★28 · Updated last year
- Machine Learning for Alignment Bootcamp (MLAB). ★30 · Updated 3 years ago
- ★14 · Updated last year
- ★305 · Updated last year
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ★231 · Updated 2 months ago
- ★27 · Updated 2 years ago
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". ★120 · Updated last year
- ★75 · Updated 2 years ago
- ★114 · Updated 3 weeks ago
- Functional local implementations of main model parallelism approaches ★96 · Updated 2 years ago
- ★268 · Updated 9 months ago
- Erasing concepts from neural representations with provable guarantees ★239 · Updated 9 months ago
- Measuring the situational awareness of language models ★39 · Updated last year
- LLM experiments done during SERI MATS - focusing on activation steering / interpreting activation spaces ★98 · Updated 2 years ago
- ★128 · Updated last year
- Adversarial Attacks on GPT-4 via Simple Random Search [Dec 2023] ★43 · Updated last year
- Code accompanying the paper Pretraining Language Models with Human Preferences ★180 · Updated last year