ruizheliUOA / Awesome-Interpretability-in-Large-Language-Models
This repository collects all relevant resources about interpretability in LLMs.
☆368 · Updated 9 months ago
Alternatives and similar repositories for Awesome-Interpretability-in-Large-Language-Models
Users interested in Awesome-Interpretability-in-Large-Language-Models are comparing it to the repositories listed below:
- ☆185 · Updated 9 months ago
- Using sparse coding to find distributed representations used by neural networks. ☆262 · Updated last year
- ☆162 · Updated 9 months ago
- ☆238 · Updated 10 months ago
- Sparse Autoencoder for Mechanistic Interpretability ☆260 · Updated last year
- ☆184 · Updated last month
- Stanford NLP Python library for understanding and improving PyTorch models via interventions ☆793 · Updated 3 weeks ago
- ☆327 · Updated this week
- ☆114 · Updated last month
- Steering Llama 2 with Contrastive Activation Addition ☆174 · Updated last year
- A curated list of LLM Interpretability-related material - Tutorial, Library, Survey, Paper, Blog, etc. ☆265 · Updated 5 months ago
- Sparsify transformers with SAEs and transcoders ☆609 · Updated last week
- Mechanistic Interpretability Visualizations using React ☆280 · Updated 8 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆212 · Updated 8 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆206 · Updated last week
- A resource repository for representation engineering in large language models ☆130 · Updated 9 months ago
- ☆81 · Updated 6 months ago
- Steering vectors for transformer language models in Pytorch / Huggingface ☆122 · Updated 6 months ago
- ☆508 · Updated last year
- ☆122 · Updated last year
- Tools for understanding how transformer predictions are built layer-by-layer ☆516 · Updated 2 weeks ago
- An Open Source Implementation of Anthropic's Paper: "Towards Monosemanticity: Decomposing Language Models with Dictionary Learning" ☆48 · Updated last year
- For OpenMOSS Mechanistic Interpretability Team's Sparse Autoencoder (SAE) research. ☆145 · Updated this week
- Training Sparse Autoencoders on Language Models ☆925 · Updated this week
- ☆223 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆124 · Updated 2 months ago
- The nnsight package enables interpreting and manipulating the internals of deep learned models. ☆630 · Updated last week
- Materials for EACL2024 tutorial: Transformer-specific Interpretability ☆60 · Updated last year
- ☆53 · Updated 9 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆177 · Updated 4 months ago