EleutherAI / elk
Keeping language models honest by directly eliciting knowledge encoded in their activations.
☆197 · Updated this week
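For context, elk's headline technique descends from Contrast-Consistent Search (CCS, Burns et al. 2022): fit a probe on a model's hidden states for contrastive statement pairs so that a statement and its negation receive complementary truth probabilities. The sketch below is a minimal illustration of that idea in PyTorch, using random tensors in place of real hidden states; it is not elk's actual training code or API.

```python
# Minimal CCS-style probe sketch (assumptions: PyTorch, synthetic activations).
import torch

def ccs_loss(p_pos, p_neg):
    # Consistency: a statement and its negation should get probabilities
    # that sum to ~1, since they cannot both be true.
    consistency = (p_pos - (1 - p_neg)) ** 2
    # Confidence: penalize the degenerate p = 0.5 solution.
    confidence = torch.minimum(p_pos, p_neg) ** 2
    return (consistency + confidence).mean()

def train_ccs_probe(h_pos, h_neg, epochs=1000, lr=1e-3):
    """h_pos, h_neg: (n_pairs, d_model) hidden states for contrast pairs.
    Note: the original method also normalizes each class's activations
    (subtract mean, divide by std) before probing; omitted here for brevity."""
    d_model = h_pos.shape[1]
    probe = torch.nn.Sequential(torch.nn.Linear(d_model, 1), torch.nn.Sigmoid())
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = ccs_loss(probe(h_pos).squeeze(-1), probe(h_neg).squeeze(-1))
        loss.backward()
        opt.step()
    return probe

# Toy usage: random activations standing in for real model hidden states.
probe = train_ccs_probe(torch.randn(128, 512), torch.randn(128, 512))
```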
Alternatives and similar repositories for elk:
Users interested in elk are comparing it to the repositories listed below.
- ☆258 · Updated 8 months ago
- Erasing concepts from neural representations with provable guarantees ☆225 · Updated last month
- ☆262 · Updated last year
- Mechanistic Interpretability Visualizations using React ☆233 · Updated 2 months ago
- ☆210 · Updated 5 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆184 · Updated 2 months ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆115 · Updated 2 years ago
- ☆112 · Updated 7 months ago
- ☆120 · Updated last year
- Steering Llama 2 with Contrastive Activation Addition (a minimal steering sketch follows this list) ☆125 · Updated 9 months ago
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆72 · Updated last year
- A dataset of alignment research and code to reproduce it ☆74 · Updated last year
- Tools for understanding how transformer predictions are built layer-by-layer ☆478 · Updated 9 months ago
- ☆26 · Updated 11 months ago
- ☆62 · Updated this week
- Using sparse coding to find distributed representations used by neural networks. ☆218 · Updated last year
- Tools for studying developmental interpretability in neural networks. ☆86 · Updated last month
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆206 · Updated last year
- Inspecting and Editing Knowledge Representations in Language Models ☆112 · Updated last year
- ☆60 · Updated 3 months ago
- Mechanistic Interpretability for Transformer Models ☆49 · Updated 2 years ago
- Algebraic value editing in pretrained language models ☆63 · Updated last year
- ☆159 · Updated this week
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆90 · Updated 2 weeks ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆66 · Updated 8 months ago
- ☆88 · Updated last month
- LLM experiments done during SERI MATS, focusing on activation steering / interpreting activation spaces ☆91 · Updated last year
- Measuring the situational awareness of language models ☆34 · Updated last year
- ☆148 · Updated this week
- A library for efficient patching and automatic circuit discovery. ☆56 · Updated 3 weeks ago
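Several repositories in this list (Contrastive Activation Addition, the steering-vectors library, the SERI MATS activation-steering experiments) share one core move: take the mean difference between a layer's activations on contrastive prompt sets as a direction, then add a scaled copy of it back into the forward pass at inference. Below is a minimal PyTorch sketch of that pattern under those assumptions; the hook point and tuple handling are illustrative and do not reproduce any listed library's API.

```python
# Minimal activation-steering sketch (assumptions: PyTorch forward hooks;
# hook point and data are illustrative, not a specific library's interface).
import torch

def steering_vector(h_pos, h_neg):
    # Direction = mean activation on behaviour-exhibiting prompts minus
    # mean activation on contrast prompts, at a single layer.
    return h_pos.mean(dim=0) - h_neg.mean(dim=0)

def add_steering_hook(layer_module, vec, scale=1.0):
    # Adds the scaled steering vector to the layer's output on every
    # forward pass; returns a handle so the hook can be removed later.
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + scale * vec.to(device=hidden.device, dtype=hidden.dtype)
        return (hidden,) + output[1:] if isinstance(output, tuple) else hidden
    return layer_module.register_forward_hook(hook)

# Toy usage: a Linear layer stands in for a transformer block.
layer = torch.nn.Linear(512, 512)
vec = steering_vector(torch.randn(64, 512), torch.randn(64, 512))
handle = add_steering_hook(layer, vec, scale=4.0)
out = layer(torch.randn(1, 512))  # output now includes the steering vector
handle.remove()
```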