Dakingrai / awesome-mechanistic-interpretability-lm-papers
Related projects
Alternatives and complementary repositories for awesome-mechanistic-interpretability-lm-papers
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers
- [ICLR 2024] Function Vectors in Large Language Models
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity
- Repo accompanying the paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers"
- Inspecting and Editing Knowledge Representations in Language Models
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le…
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision
- Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering
- [ICML 2024] In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors
- AI Logging for Interpretability and Explainability 🔬
- [NeurIPS 2024] Implementation of PaCE: Parsimonious Concept Engineering for Large Language Models
- LoFiT: Localized Fine-tuning on LLM Representations
- A curated list of awesome resources dedicated to Scaling Laws for LLMs
- The Paper List on Data Contamination for Large Language Models Evaluation
- A resource repository for representation engineering in large language models
- Sparse Autoencoder (SAE) research from the OpenMOSS Mechanistic Interpretability Team
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers"
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024)
- Official code of the paper Large Language Models Are Implicitly Topic Models: Explaining and Finding Good Demonstrations for In-Context Le…
- Röttger et al. (2023): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models"
- LLM experiments done during SERI MATS, focusing on activation steering and interpreting activation spaces