ruizheliUOA / Awesome-Interpretability-in-Large-Language-Models
This repository collects resources on interpretability in LLMs.
☆341 · Updated 5 months ago
Alternatives and similar repositories for Awesome-Interpretability-in-Large-Language-Models:
Users interested in Awesome-Interpretability-in-Large-Language-Models are comparing it to the libraries listed below.
- Mechanistic Interpretability Visualizations using React ☆241 · Updated 4 months ago
- ☆144 · Updated 5 months ago
- Training Sparse Autoencoders on Language Models ☆730 · Updated this week
- Using sparse coding to find distributed representations used by neural networks. ☆236 · Updated last year
- Sparsify transformers with SAEs and transcoders ☆519 · Updated this week
- ☆274 · Updated 2 months ago
- Sparse Autoencoder for Mechanistic Interpretability ☆241 · Updated 9 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆194 · Updated 4 months ago
- ☆114 · Updated 8 months ago
- A curated list of LLM interpretability material: tutorials, libraries, surveys, papers, blogs, etc. ☆223 · Updated last month
- ☆218 · Updated 6 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆169 · Updated this week
- ☆104 · Updated 5 months ago
- Steering vectors for transformer language models in Pytorch / Huggingface ☆95 · Updated 2 months ago
- ☆161 · Updated 2 weeks ago
- A resource repository for representation engineering in large language models ☆119 · Updated 5 months ago
- ☆453 · Updated 9 months ago
- The nnsight package enables interpreting and manipulating the internals of deep learned models. ☆548 · Updated this week
- ☆85 · Updated last week
- ☆202 · Updated last year
- Tools for understanding how transformer predictions are built layer-by-layer ☆486 · Updated 10 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆161 · Updated last week
- LLM experiments done during SERI MATS, focusing on activation steering / interpreting activation spaces ☆91 · Updated last year
- For OpenMOSS Mechanistic Interpretability Team's Sparse Autoencoder (SAE) research. ☆109 · Updated 2 weeks ago
- Steering Llama 2 with Contrastive Activation Addition ☆144 · Updated 11 months ago
- ☆71 · Updated 2 months ago
- A toolkit for describing model features and intervening on those features to steer behavior. ☆178 · Updated 5 months ago
- ☆121 · Updated last year
- Codebase for reproducing the experiments of the semantic uncertainty paper (short-phrase and sentence-length experiments). ☆310 · Updated last year
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction". ☆208 · Updated 6 months ago