ruizheliUOA / Awesome-Interpretability-in-Large-Language-Models
This repository collects all relevant resources about interpretability in LLMs
☆366 · Updated 9 months ago
Alternatives and similar repositories for Awesome-Interpretability-in-Large-Language-Models
Users interested in Awesome-Interpretability-in-Large-Language-Models are comparing it to the libraries listed below.
- ☆177 · Updated 8 months ago
- Using sparse coding to find distributed representations used by neural networks. ☆261 · Updated last year
- ☆154 · Updated 8 months ago
- Sparse Autoencoder for Mechanistic Interpretability ☆257 · Updated last year
- ☆183 · Updated 3 weeks ago
- ☆234 · Updated 10 months ago
- ☆107 · Updated 3 weeks ago
- ☆324 · Updated 3 weeks ago
- Stanford NLP Python library for understanding and improving PyTorch models via interventions ☆783 · Updated last week
- Sparsify transformers with SAEs and transcoders ☆595 · Updated last week
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆200 · Updated this week
- A curated list of LLM interpretability-related material: tutorials, libraries, surveys, papers, blogs, etc. ☆262 · Updated 4 months ago
- ☆505 · Updated last year
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆119 · Updated 5 months ago
- Training Sparse Autoencoders on Language Models ☆895 · Updated this week
- ☆121 · Updated last year
- Steering Llama 2 with Contrastive Activation Addition ☆167 · Updated last year
- An Open Source Implementation of Anthropic's Paper: "Towards Monosemanticity: Decomposing Language Models with Dictionary Learning" ☆48 · Updated last year
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆207 · Updated 7 months ago
- A resource repository for representation engineering in large language models ☆129 · Updated 8 months ago
- ☆222 · Updated last year
- ☆81 · Updated 5 months ago
- Tools for understanding how transformer predictions are built layer-by-layer ☆512 · Updated last year
- For OpenMOSS Mechanistic Interpretability Team's Sparse Autoencoder (SAE) research. ☆141 · Updated this week
- Mechanistic Interpretability Visualizations using React ☆272 · Updated 7 months ago
- The nnsight package enables interpreting and manipulating the internals of deep learning models. ☆619 · Updated this week
- Materials for EACL 2024 tutorial: Transformer-specific Interpretability ☆59 · Updated last year
- Sparse probing paper full code. ☆58 · Updated last year
- ☆51 · Updated 8 months ago
- Codebase for reproducing the experiments of the semantic uncertainty paper (short-phrase and sentence-length experiments). ☆349 · Updated last year