msakarvadia / AttentionLens
Interpreting the latent space representations of attention head outputs for LLMs
☆30 · Updated 7 months ago
Alternatives and similar repositories for AttentionLens:
Users interested in AttentionLens are comparing it to the libraries listed below.
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆48 · Updated 5 months ago
- ☆33 · Updated last month
- ☆54 · Updated last year
- ☆45 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆71 · Updated 5 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆62 · Updated this week
- Sparse Autoencoder Training Library ☆47 · Updated 5 months ago
- ☆27 · Updated last month
- ☆26 · Updated last year
- ☆26 · Updated 2 months ago
- ☆13 · Updated last year
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆26 · Updated 10 months ago
- ☆33 · Updated last week
- ☆30 · Updated last year
- Code and Data Repo for the CoNLL Paper -- Future Lens: Anticipating Subsequent Tokens from a Single Hidden State ☆18 · Updated last year
- Official code repo for paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" ☆22 · Updated 7 months ago
- Applies ROME and MEMIT on Mamba-S4 models ☆14 · Updated 11 months ago
- A library for efficient patching and automatic circuit discovery. ☆59 · Updated last month
- Exploration of automated dataset selection approaches at large scales. ☆34 · Updated 3 weeks ago
- Code and Configs for Asynchronous RLHF: Faster and More Efficient RL for Language Models ☆38 · Updated this week
- ☆18 · Updated 8 months ago
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning". ☆47 · Updated 3 weeks ago
- This repository contains the code used for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity… ☆25 · Updated last year
- Xmixers: A collection of SOTA efficient token/channel mixers ☆11 · Updated 4 months ago
- ☆82 · Updated 7 months ago
- ☆25 · Updated last year
- Codebase for Context-aware Meta-learned Loss Scaling (CaMeLS). https://arxiv.org/abs/2305.15076. ☆25 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆96 · Updated 11 months ago
- Code for ICLR 2025 Paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆44 · Updated this week
- https://footprints.baulab.info ☆17 · Updated 5 months ago