dit7ya / awesome-ai-alignment
A curated list of awesome resources for Artificial Intelligence Alignment research
☆71 · Updated 2 years ago
Alternatives and similar repositories for awesome-ai-alignment
Users interested in awesome-ai-alignment are comparing it to the repositories listed below
- Keeping language models honest by directly eliciting knowledge encoded in their activations (a CCS sketch follows this list). ☆209 · Updated last week
- 🧠 Starter templates for doing interpretability research ☆73 · Updated 2 years ago
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆86 · Updated last year
- A dataset of alignment research and code to reproduce it ☆77 · Updated 2 years ago
- ☆137 · Updated 2 weeks ago
- ☆274 · Updated last year
- ☆73 · Updated 2 years ago
- Emergent world representations: Exploring a sequence model trained on a synthetic task ☆186 · Updated 2 years ago
- ☆287 · Updated last year
- Tools for studying developmental interpretability in neural networks. ☆100 · Updated last month
- Unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" (task setup sketched after this list) ☆78 · Updated 3 years ago
- ☆55 · Updated last week
- RuLES: a benchmark for evaluating rule-following in language models ☆228 · Updated 5 months ago
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆220 · Updated last year
- we got you bro ☆36 · Updated last year
- A puzzle to learn about prompting ☆132 · Updated 2 years ago
- Erasing concepts from neural representations with provable guarantees (a simplified erasure sketch follows this list) ☆232 · Updated 6 months ago
- Tools for understanding how transformer predictions are built layer-by-layer (a logit-lens sketch follows this list) ☆512 · Updated last year
- ☆26 · Updated 2 years ago
- Measuring the situational awareness of language models ☆37 · Updated last year
- Utilities for the HuggingFace transformers library ☆70 · Updated 2 years ago
- ☆234 · Updated 10 months ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆72 · Updated last year
- ☆267 · Updated 6 months ago
- ☆291 · Updated last year
- The Prism Alignment Project ☆79 · Updated last year
- Project 2 (Building Large Language Models) for Stanford CS324: Understanding and Developing Large Language Models (Winter 2022) ☆105 · Updated 2 years ago
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". ☆111 · Updated last year
- Mechanistic Interpretability Visualizations using React ☆272 · Updated 7 months ago
- Code accompanying the paper "Pretraining Language Models with Human Preferences" ☆182 · Updated last year
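For the knowledge-elicitation entry above, here is a minimal sketch of Contrast-Consistent Search (CCS), the probing method from "Discovering Latent Knowledge in Language Models Without Supervision" that this line of work builds on. The probe architecture, tensor shapes, and training loop below are illustrative assumptions, not that repo's actual API.

```python
# Minimal CCS sketch: train a probe so that its outputs on a statement and
# its negation are consistent (sum to 1) and confident (not both ~0.5).
# The real method also mean-normalizes pos/neg activations; omitted here.
import torch

def ccs_loss(p_pos: torch.Tensor, p_neg: torch.Tensor) -> torch.Tensor:
    """p_pos / p_neg: probe outputs in (0, 1) for paired activations."""
    consistency = (p_pos - (1.0 - p_neg)) ** 2
    confidence = torch.minimum(p_pos, p_neg) ** 2
    return (consistency + confidence).mean()

# Hypothetical usage: x_pos / x_neg stand in for hidden states of
# "X is true" / "X is false" prompt pairs.
hidden_dim = 768
probe = torch.nn.Sequential(torch.nn.Linear(hidden_dim, 1), torch.nn.Sigmoid())
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
x_pos, x_neg = torch.randn(256, hidden_dim), torch.randn(256, hidden_dim)
for _ in range(100):
    loss = ccs_loss(probe(x_pos).squeeze(-1), probe(x_neg).squeeze(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```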
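For the grokking re-implementation above, this sketch builds the modular-addition dataset from the original paper; the modulus p=97 and the 30% train fraction are illustrative choices (the paper sweeps both). A small model trained on the train split typically memorizes quickly and only much later generalizes to the held-out pairs.

```python
# Modular-addition grokking task: learn (a + b) mod p from a fraction of
# all (a, b) pairs, holding out the rest to observe delayed generalization.
import itertools
import random
import torch

p = 97  # prime modulus; an assumed choice matching the paper's main runs
pairs = list(itertools.product(range(p), repeat=2))  # all p*p input pairs
random.seed(0)
random.shuffle(pairs)
split = int(0.3 * len(pairs))  # assumed 30% train fraction

def to_tensors(pair_list):
    a = torch.tensor([x for x, _ in pair_list])
    b = torch.tensor([y for _, y in pair_list])
    return torch.stack([a, b], dim=1), (a + b) % p  # inputs, labels

train_x, train_y = to_tensors(pairs[:split])
val_x, val_y = to_tensors(pairs[split:])
```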
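For the concept-erasure entry above, a deliberately naive sketch of linear erasure: project out the class-mean-difference direction. This is not that repo's LEACE estimator, which uses a whitening-aware oblique projection to obtain its provable guarantees; it only conveys the core idea of deleting a concept's linear trace from representations.

```python
# Naive linear concept erasure: remove the component of each representation
# along the direction separating the two concept classes.
import numpy as np

def erase_direction(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """X: (n, d) representations; y: binary concept labels in {0, 1}."""
    v = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
    v = v / np.linalg.norm(v)          # unit concept direction
    return X - np.outer(X @ v, v)      # zero out the component along v

# Hypothetical data where the concept leaks into every coordinate.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=500)
X = rng.normal(size=(500, 64)) + 2.0 * y[:, None]
X_clean = erase_direction(X, y)
# Along the erased direction, X_clean carries exactly zero signal.
```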
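For the layer-by-layer prediction tools above, a logit-lens-style sketch: decode every layer's residual stream through the model's final LayerNorm and unembedding to watch the prediction form. If this is the tuned-lens repo, its method instead trains a learned affine translator per layer; the untrained variant below is the baseline idea. The model (gpt2) and prompt are arbitrary choices for illustration.

```python
# Logit-lens sketch: project each layer's hidden state at the final
# position into vocabulary space and print the top token per layer.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tok("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    out = model(ids, output_hidden_states=True)

for layer, h in enumerate(out.hidden_states):
    # Apply the final LayerNorm and the unembedding to the last position.
    logits = model.lm_head(model.transformer.ln_f(h[:, -1]))
    print(layer, tok.decode(logits.argmax(-1)))
```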