msakarvadia / AttentionLens
Interpreting the latent space representations of attention head outputs for LLMs
☆33 Updated 11 months ago
Alternatives and similar repositories for AttentionLens
Users interested in AttentionLens are comparing it to the repositories listed below.
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆75 Updated 8 months ago
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆57 Updated 8 months ago
- A library for efficient patching and automatic circuit discovery. ☆70 Updated 2 months ago
- ☆54 Updated 2 years ago
- ☆45 Updated last year
- This repository includes code for the paper "Does Localization Inform Editing? Surprising Differences in Where Knowledge Is Stored vs. Ca… ☆61 Updated 2 years ago
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆94 Updated 3 years ago
- ☆87 Updated 11 months ago
- This repository contains the code used for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity… ☆27 Updated last year
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆28 Updated last year
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. ☆49 Updated 9 months ago
- A Kernel-Based View of Language Model Fine-Tuning https://arxiv.org/abs/2210.05643 ☆76 Updated last year
- Simple and scalable tools for data-driven pretraining data selection. ☆24 Updated last month
- ☆50 Updated 4 months ago
- ☆23 Updated 5 months ago
- This is the official repository for the "Towards Vision-Language Mechanistic Interpretability: A Causal Tracing Tool for BLIP" paper acce… ☆22 Updated last year
- Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models ☆46 Updated last year
- Sparse Autoencoder Training Library ☆53 Updated 2 months ago
- Code and Data Repo for the CoNLL Paper -- Future Lens: Anticipating Subsequent Tokens from a Single Hidden State ☆18 Updated last year
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆30 Updated 5 months ago
- Codebase for Context-aware Meta-learned Loss Scaling (CaMeLS). https://arxiv.org/abs/2305.15076. ☆25 Updated last year
- Self-Supervised Alignment with Mutual Information ☆20 Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆97 Updated last year
- ☆27 Updated 5 months ago
- ☆87 Updated last year
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆80 Updated last year
- Official repo of the paper "Eliminating Position Bias of Language Models: A Mechanistic Approach" ☆14 Updated last month
- ☆28 Updated last year
- ☆35 Updated 2 years ago
- Official code repo for paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" ☆23 Updated 2 months ago